Yes, but unironically. It may seem obvious now that the LLM is just a word-salad generator with no sentience, but look at the astounding evolution from the first ChatGPT to GPT-5 in a mere three years. I don't think it's at all improbable that a ChatGPT 8 could be prompted to blend seamlessly into almost any online forum and be essentially undetectable. Is the argument essentially that life must be carbon-based? That anything produced from neural network weights inside silicon simply cannot achieve sentience? If that's true, why?
> I think it’s important to highlight at this stage that I am not, in fact, “anti-LLM”. I’m anti-the branding of it as “artificial intelligence”, because it’s not intelligent. It’s a form of machine learning.
It's a bit weird to be against the use of the phrase "artificial intelligence" and not "machine learning". Is it possible to learn without intelligence? Methinks the author is a bit triggered by the term "intelligence" at a base primal level ("machines can't think!").
> “Generative AI” is just a very good Markov chain that people expect far too much from.
The author of this post doesn't know the basics of how LLMs work. The whole reason LLMs work so well is that they condition on the entire preceding context rather than being memoryless, and memorylessness is the defining property of a Markov process.
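To make the contrast concrete, here's a minimal Python sketch. The names (markov_sample, llm_sample, model) are illustrative only, not any real API: the first half is a genuine first-order Markov chain that looks only at the last word, while the second half shows the autoregressive loop an LLM runs, feeding the whole history back in at every step.

    import random
    from collections import defaultdict

    corpus = "the cat sat on the mat and the dog sat on the rug".split()

    # First-order Markov chain: the next word depends only on the current word.
    bigram = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        bigram[prev].append(nxt)

    def markov_sample(start, n=8):
        out = [start]
        for _ in range(n):
            choices = bigram.get(out[-1])
            if not choices:                     # dead end: no observed successor
                break
            out.append(random.choice(choices))  # only out[-1] is consulted
        return " ".join(out)

    # LLM-style generation (schematic): the next token is conditioned on the
    # *entire* preceding sequence at every step. `model` is a hypothetical
    # callable standing in for a real network, not a real library call.
    def llm_sample(model, prompt_tokens, n=8):
        out = list(prompt_tokens)
        for _ in range(n):
            out.append(model(out))              # the whole history goes in each time
        return out

    print(markov_sample("the"))

Whether you call that full-context conditioning "a very high-order Markov chain" or something qualitatively different is exactly the disagreement here, but the generation loops are plainly not the same thing.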
A lot of this discussion is just sort of moot, because the cold hard calculus of economics will dictate the future of AI coding. If it turns out to be just a cognitive burden that makes programmers worse, the bubble will pop and the companies that move away from the technology will eventually come out on top. If it turns out to make software engineering much more efficient, it will become the de facto standard, and you will become obsolete as a professional engineer (at least at the vast majority of employers) regardless of how you feel about it. How you wish to code in your free time is up to you, and it's a choice that doesn't really warrant an argument one way or the other, since there is no wrong answer.
I don't think either of these is the best choice for this. I believe ChatGPT 5.2 Pro and Gemini 3 Pro Deep Thinking are the strongest LLMs at "pure thought", i.e. things like mathematical reasoning.
I don't think it's that serious... it's an interesting experiment that assumes people will take it in good faith. The idea is also, of course, to attach the transcript log and how you prompted the LLM, so that anyone can attempt to reproduce it if they wish.
If this were a competition, some people would try hard to win it. But the goal here is exploration, not exploitation. Once the answers are revealed, it's unlikely a winner will be identified, but a bunch of mathematicians who tried prompting AI with the questions might learn something from the exercise.
I don't think it's that hard. MacKenzie Scott managed to give away nearly half of the wealth she obtained from her divorce (not accounting for appreciation) in a few short years.
She got that wealth from the divorce. She didn't have to convince anyone to pry it loose.
Notably, she played a huge part in how Amazon was structured due to her influence on Bezos.
I do find it very interesting, though, the apparently common pattern here of "woman gives away massive fortune she got from x to make the world better / rehabilitate her image", or something along those lines.
Meanwhile, the men all seem to go on hooker binges. See Bezos, and now Gates (per the Epstein files).
No one, including the people getting screwed at the end, is actually innocent, but some are definitely more guilty than others, eh?
Yes, but whom did we automate out of a job by building crappy software? Accountants are more threatened by AI than by any software we created before, and the same goes for lawyers and teachers. We didn't automate any physical labourers out of a job either.