It seems to be a big problem for AI-generated summaries of scientific research: the top-performing AIs ignore key details about 25% of the time, and often much more. The problem is exactly what I said: the research has a new idea which is (by necessity) not going to be in the pretraining data, so the AI ignores or misstates it. I don't expect this problem to be solved anytime soon; it is inherent to how 2020s artificial neural networks work.

https://royalsocietypublishing.org/doi/10.1098/rsos.241776

Edit: I was thinking about the "overly snide" remark and am reminded of Sam Bankman-Fried:

  I would never read a book. I think, if you wrote a book, you fucked up, and it should have been a six paragraph blog post.
The problem is not snideness; it is arrogant cynicism leading to stupidity.
