Hacker News
AI therapy bots fuel delusions and give dangerous advice, Stanford study finds (arstechnica.com)
8 points by olyellybelly 8 months ago | 1 comment


Color me unsurprised. It should be common knowledge that LLMs hallucinate and are not suitable for fields requiring accuracy. This is unlikely to change until we drop the LLM and work on real AGI, like Carmack is doing. Neural nets may not be the problem, but this model-stuffing, combined with the mechanism by which it works (token prediction, not understanding), doesn't work in any field requiring accuracy.
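To make the "token prediction, not understanding" point concrete, here is a toy sketch (a hypothetical bigram model, not how any production LLM is actually built): the predictor emits whatever continuation is statistically most frequent in its training data, with no mechanism for checking whether the emitted claim is true.

```python
from collections import Counter, defaultdict

# Toy next-token predictor. The "corpus" is made up; the point is that
# the prediction is driven purely by frequency, not by truth.
corpus = "the dose is safe . the dose is low . the dose is safe".split()

# Count which token follows which (bigram counts).
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict(token):
    # The most frequent follower wins, regardless of factual accuracy.
    return counts[token].most_common(1)[0][0]

print(predict("is"))  # prints "safe": 2 occurrences beat "low" at 1
```

A real transformer replaces bigram counts with a learned distribution over a huge context, but the objective is the same shape: pick a likely next token, which is exactly why a plausible-sounding but false answer is a natural failure mode.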




