
> wrong or misleading explanations

Exactly the same issue occurs with search.

Unfortunately, not everybody knows to mistrust AI responses or has the skills to double-check information.



No, it's not the same. Search results show you one or more specific pages or websites, and each website has its own trust factor. Yes, plenty of people repeat things they "read on the Internet" as truths, but some of those are easy to debunk just based on the site's reputation. With AI responses, the good and the bad answers share a single reputation: the model gives good answers most of the time, but it also hallucinates errors.


Community Notes on X seems to be one of the highest-profile recent experiments trying to address this issue.



> Tools like SourceFinder must be paired with education — teaching people how to trace information themselves, to ask: Where did this come from? Who benefits if I believe it?

These are very important and relevant questions to ask yourself when reading about anything, but we should also keep in mind that even those questions can be misused and can drive you toward conspiracy theories.


If somebody asks a question on Stack Overflow, it is unlikely that a human who does not know the answer will take time out of their day to completely fabricate a plausible-sounding answer.


People are confidently incorrect all the time. It is very likely that people will make up plausible-sounding answers on Stack Overflow.

You and I have both taken time out of our days to write plausible-sounding answers that are essentially opposing hallucinations.


Sites like Stack Overflow are inherently peer-reviewed, though: they have a crowdsourced voting system and comments that accumulate over time. People test the ideas in question.

This whole "people are just as incorrect as LLMs" argument is a poor one, because it compares a single human and a single LLM response in a vacuum. When you put enough humans together on the internet, you usually get a more meaningful result.
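
For the curious, here's a minimal sketch of how that kind of crowd aggregation can work (plain Python; the Wilson lower bound is a standard way to rank crowd-voted items, not any particular site's actual algorithm):

    import math

    def wilson_lower_bound(upvotes, downvotes, z=1.96):
        # Lower bound of the Wilson score interval: ranks answers by how
        # confident we can be in their upvote ratio, so an answer with
        # 80/100 upvotes outranks one with a single lucky upvote.
        n = upvotes + downvotes
        if n == 0:
            return 0.0
        phat = upvotes / n
        centre = phat + z * z / (2 * n)
        margin = z * math.sqrt((phat * (1 - phat) + z * z / (4 * n)) / n)
        return (centre - margin) / (1 + z * z / n)

    print(wilson_lower_bound(80, 20))  # ~0.71: many voters, high confidence
    print(wilson_lower_bound(1, 0))    # ~0.21: one vote tells you little

A single answer carries almost no signal on its own; enough independent votes turn noisy individual judgments into a usable ranking.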


At least, that used to be true.


Have you ever heard of the Dunning-Kruger effect?

There's a reason Stack Overflow has upvotes, accepted solutions, and a third-party edit system: people will spend time writing their "hallucinations" very confidently.


What is it with people making up lies to defend LLMs? In what world is it "exactly the same" as search? They're literally different things: with search, you get information from multiple sources and can do your own filtering.



