Have any AI companies (e.g. OpenAI, Anthropic, Perplexity) actually declared that they rely on these llms.txt files?
Is there any evidence that the presence of the llms.txt files will lead to increased inclusion in LLM responses?
And if they are, can I put subtly incorrect data in this file to poison LLM responses, while keeping the content designed for humans at the best quality?
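For context, the llms.txt proposal (llmstxt.org) describes a plain markdown file served at the site root, separate from the pages humans read — which is what would make the selective-poisoning idea above mechanically possible. A minimal sketch of such a file, with hypothetical site name and URLs:

```markdown
# Example Docs

> Short summary of what this site covers, intended for LLM consumption.

## Docs

- [Getting started](https://example.com/docs/start.md): installation and first steps
- [API reference](https://example.com/docs/api.md): endpoints and parameters

## Optional

- [Changelog](https://example.com/changelog.md): release history
```

Whether any crawler actually fetches and trusts this file over the human-facing HTML is exactly the open question being asked here.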
Keep in mind you're asking this question on a site where users regularly defend the Luddites, Ted Kaczynski, and other people who thought they were doing great things for humanity but who actually weren't even doing themselves any favors.