
> a lot of people seem to see LLMs as smarter than themselves

I think the anthropomorphizing part is what messes with people. Is the autocomplete in my IDE smarter than I am? What about the search box on Google? What about a hammer or a drill?

Yet I will admit that while I often hear people complain that AI-written code is worse than what developers produce, it just doesn't match my own experience - it's frankly better (with enough guidance and context, say 95% of tokens in and 5% out, across multiple models working on the same project to occasionally validate and improve or fix the output, alongside adequate tooling) than what a lot of the people I know could produce, or frankly do produce in practice.

That's a lot of conditions, but I think it's the same with the chat format - there's a difference between people accepting unvalidated drivel as fact, and someone using web search, parsing documents, and surfacing additional information found as a consequence of the conversation, pulling in external data and making use of the LLM's ability to churn through a lot of it, sometimes better than human reading comprehension would.



I think you're spot on here. It's the same idea as scammers and con artists; people can be convinced of things that they might rationally reject if the language is persuasive enough. This isn't some new exploit in human behavior or an epidemic of people who are less intelligent than before; we've just never had to deal with the supply of plausible-sounding, coherent human language being almost literally unlimited before. If we're lucky, people will manage to adapt and update their mental models to be less trusting of things that they can't verify (like how most of us hopefully don't need to worry that our older relatives will transfer their bank account contents to benevolent foreign royalty with the expectation of being rewarded handsomely). It's hard to feel especially confident in this, though, given how much more open-ended the potential deceptions are (without even getting into the question of "intent" from the models or their creators).


My belief is that the function of a story is to provide social cover for our actions. Other people need to evaluate us (both in the moment and after the dust has settled), and while careful data analysis can do the job, who has time for that crap?

As such, the story can be completely divorced from reality. The important thing is that the story is a good one. A good story extends your social cover to your supervisor. They don't have to understand what you did in order to explain why it's okay that it failed. They just have to understand the story structure that you gave them. Listen to this great story: this failure isn't my report's fault, and it's certainly not mine, just bad luck.

Additionally, the good (and sufficiently original) story is a gift because your supervisor can reuse it for new scenarios.

The good salesman gives you the story you need to excuse the purchase that will enable you to succeed. The bad salesman sells you a story about why you need a frivolous purchase.

And this is why job hopping is "bad". Eventually the incompetent employee uses up all of their good stories and management catches on to their act. It's embedded in our language: "Oh, we've all heard this story before." The job hopper leaves just as their good stories are exhausted and can start over fresh at a new employer.

All of this in response to

> If we're lucky, people will manage to adapt and update their mental models to be less trusting of things that they can't verify

Yes, if we're lucky that is what will happen. But I fear that we're going to have to transition to a very low trust society for that to happen.

Reliance on the story rests on trusting that someone has done the real work. Distrust of the story implies a wider-scale distrust of others and institutions.

Maybe we can add a tradition of annotating our stories with arguments and proofs. Although I've spent a two-decade career desperately trying to give highly technical people arguments and proofs, I've seen stories completely unmoored from reality win out every time.

Optimistically, I'm just really bad at it and it's actually a natural transition. Pessimistically, we're in for a bumpy ride.


I'm not sure I'm quite as pessimistic as you, just because I tend to treat most predictions of how society as a whole will adapt to things as fairly low confidence, but I certainly don't disagree that it at least seems hard to imagine people getting past all this quickly.

The idea of story being how people justify their decisions is interesting. I'm reminded of a couple of anecdotes my father has repeated a few times over the years about two distinct medical circumstances he's had.

When he was first diagnosed with sleep apnea, he was apparently very skeptical that he had any reason to do anything, because the sleep doctor told him things like "this will help you be less sleepy during the day" and "you won't start nodding off as you drive" when he didn't feel like either of those things happened to him. Eventually a different sleep doctor did convince him it was worthwhile to treat, and he's used a CPAP since then, but he still seems not to feel it would have made sense for him to start when he first got the diagnosis. Through the lens you've given, the original doctor didn't give him a compelling enough story to justify the effort on his part.

On the other hand, the first time he talked to a nutritionist about changing his diet, he apparently mentioned that he wanted to at least be able to eat ice cream occasionally, even if less often, rather than never be able to eat it again, and the nutritionist replied "Of course! That would make life not worth living." He ended up being much more open to the nutritionist's advice than I would have expected, and I think it would be reasonable to argue that was because the nutritionist was able to give him a compelling story about what his life would be like with the suggested changes.



