
Have there been any declarations by various AI companies (e.g. OpenAI, Anthropic, Perplexity) that they are actually relying upon these llms.txt files?

Is there any evidence that the presence of the llms.txt files will lead to increased inclusion in LLM responses?



And if they are, can I put subtly incorrect data in this file to poison LLM responses, while keeping the content meant for human readers at the best quality?


I'm curious, what would be the reason for doing this?


Keep in mind you're asking this question on a site where users regularly defend the Luddites, Ted Kaczynski, and other people who thought they were doing great things for humanity but who actually weren't even doing themselves any favors.


Undermine the usefulness of LLMs in an attempt to force people to visit your site directly.


If one doesn’t want LLMs to scrape their data and knows the LLMs will ignore the robots.txt file.


Anthropic itself publishes a bunch of its own llms.txt files. So I guess that means something.
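For context, the llms.txt proposal (llmstxt.org) describes a plain markdown file served at a site's root, meant to give LLMs a curated index of the site's content. A minimal sketch of the format, with hypothetical page names, looks roughly like:

```markdown
# Example Docs

> Short summary of what this site covers, so an LLM can decide
> whether the linked pages are relevant to a query.

## Docs

- [Getting started](https://example.com/docs/start.md): install and first steps
- [API reference](https://example.com/docs/api.md): endpoints and parameters

## Optional

- [Changelog](https://example.com/changelog.md): release history
```

The "Optional" section is part of the proposal: it marks links an LLM can skip when context space is tight.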



