The grifter is a nihilist. Nothing is holy to a grifter, and if you let them, they will rob every last word of its original meaning.

The problem with AI, as clearly outlined in this article, is the same as the problem with blockchain or other esoteric grifts: it drains needed resources from often already crumbling systems¹.

The people falling for the hype, beyond the actual usefulness of the hyped object, are wishing for magical solutions that they imagine will solve all their problems. Those problems can't be fixed by wishful thinking, but by not fooling yourself and by making technological choices that adequately address them.

I am not saying that LLMs are never going to be a good choice for adequately addressing a problem. What I am saying is that people blinded by blockchain/AI/quantum/snake-oil hype are the wrong people to make that choice, because for them every problem needs to be tackled with the current hype.

Meanwhile, a true expert will weigh all available technological choices and carefully test them against the problem. So many things can be optimized and improved through hard, honest work, careful choices, and a group of people trying hard not to fool themselves; that is how humanity managed to reach the moon. The people who stand in the way of our achievements are those who have lost touch with reality while actively making fools of themselves.

Again: it is not about being "against" LLMs, it is about leaders admitting they don't know when they in fact do not know. And a sure way to realize you don't know is to try it yourself and fail.

¹ I had to think of my childhood friend, whose esoterically inclined mother died of a preventable disease because she fooled herself into believing in magical cures and gurus until the fatal end.



That is funny, because of all the problems with LLMs, the biggest one is that they will lie/hallucinate/confabulate to your face before saying "I don't know", much like those leaders.


Is this inherent to LLMs by the way, or is it a training choice? I would love for an LLM to talk more slowly when it is unsure.

This topic needs careful consideration and I should use more brain cycles on it. Please insert another coin.


It's fairly inherent. Talking more slowly wouldn't make it more accurate, since it's a next-token predictor: you'd have to somehow make it produce more tokens before "making up its mind" (i.e., outputting something that's sufficiently correlated with a particular answer that it's a point of no return), and even that is only useful to the extent it has memorised a productive algorithm.

You could make the user interface display the output more slowly "when it is unsure", but that'd show you the wrong thing: a tie between "brilliant" and "excellent" is just as uncertain as a tie between "yes" and "no".
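To make that concrete, here is a minimal sketch (the probability numbers are made up for illustration) of why raw next-token uncertainty can't tell a stylistic tie from a factual one: the Shannon entropy of the two distributions comes out identical.

    import math

    def entropy(probs):
        # Shannon entropy (in bits) of a next-token probability distribution.
        return -sum(p * math.log2(p) for p in probs.values() if p > 0)

    # Toy next-token distributions; the numbers are invented for illustration.
    stylistic_tie = {"brilliant": 0.48, "excellent": 0.47, "great": 0.05}
    factual_tie   = {"yes": 0.48, "no": 0.47, "maybe": 0.05}

    print(entropy(stylistic_tie))  # ~1.24 bits
    print(entropy(factual_tie))    # ~1.24 bits: same uncertainty, but only one tie is about facts

Any interface that slowed down on high entropy would treat both cases the same.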


Telling the LLM to walk through the response step by step is a prompt-engineering thing, though.


It is. It's studied in the literature under the name "chain of thought" (CoT), I believe. It's still subject to the limitations I mentioned. (Though the output is more persuasive to a human even when the answer is the same, so you should be careful.)
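For anyone curious what that looks like in practice, here is a rough sketch of a plain prompt versus a chain-of-thought prompt; the wording and the example question are only illustrative, not any standard formulation.

    # Plain prompt vs. chain-of-thought (CoT) prompt. Any instruction that asks
    # for intermediate reasoning before the final answer plays the same role.
    QUESTION = (
        "A bat and a ball cost $1.10 together. The bat costs $1.00 more "
        "than the ball. How much does the ball cost?"
    )

    PLAIN_PROMPT = QUESTION + "\nAnswer with a single number."

    COT_PROMPT = (
        QUESTION + "\n"
        "Let's think step by step, writing out each intermediate step, "
        "and only then give the final answer on its own line."
    )

The extra tokens give the model room to condition its final answer on its own intermediate output, which is the only sense in which it "thinks longer"; as noted above, it also makes a wrong answer read more convincingly.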



