It seems to be a big problem for AI-generated summaries of scientific research: even the top-performing AIs ignore key details about 25% of the time, and often do much worse. The problem is exactly what I said: the research contains a new idea which is (by necessity) not in the pretraining data, so the AI ignores or misstates it. I don't expect this problem to be solved anytime soon; it is inherent to how 2020s artificial neural networks work.
https://royalsocietypublishing.org/doi/10.1098/rsos.241776
Edit: I was thinking about the "overly snide" remark and am reminded of Sam Bankman-Fried:
The problem is not snideness; it is arrogant cynicism leading to stupidity.