
For me, one of the most interesting things that have come out of LLMs is the confirmation that humans are very bad at reasoning and, consequently, it's a very bad idea to try and make machines that "think like humans", because that way we'll only make machines with none of the advantages of machines and all the disadvantages of humans.

For instance -I'm not trying to be mean and I'm certainly not blaming you in particular, because I've seen this very often- but the reasoning that, because LLMs can generate language and humans can generate language, not only are LLMs somehow like humans but humans are also like LLMs, is not sound.

For example: walls have ears, cats have ears, therefore walls are like cats and cats are like walls. That doesn't work, because walls' ears are not like cats' ears, and even if they were, that still wouldn't make walls cats and cats walls; it would just make them both entities with ears.



>-I'm not trying to be mean and I'm certainly not blaming you in particular, because I've seen this very often- but the reasoning that, because LLMs can generate language and humans can generate language, not only are LLMs somehow like humans but humans are also like LLMs, is not sound.

Nah. Nobody personifies LLMs like this. What you're laying out here is a fundamental mistake that you'd have to be extremely stupid to make. I think barely anyone makes this mistake, so it's hardly even worth mentioning.

Seriously, who here thinks that LLMs are anything like humans? That is not the claim. The claim is that LLMs understand you. Intelligence and understanding are clearly orthogonal to being "human-like".


>because that way we'll only make machines with none of the advantages of machines and all the disadvantages of humans.

There is no evidence, basically none whatsoever, that general "perfect logical reasoning" is a thing that actually exists in the real world. None.

No animal we've observed does it. Humans certainly don't do it. The only realm where this idea actually works is fiction. And this was not for lack of trying: some of the greatest minds worked on this for decades, and some people still don't seem to get it. Logic doesn't scale. Logical systems break down on real-world relationships.

Logic systems are that guy in the stands yelling that he could've made the shot, while he's not even on the field.


That's a common take but it doesn't really hold any water: computers are logic machines and all of Computer Science is based on logic; and it works just fine.

Besides which, you may not hear about them in the news, but pretty much all the classical, symbolic- and logic-based approaches of Good Old-Fashioned AI are still going strong and doing very well, thank you, in tasks in which statistical machine learning approaches underperform.

To give a few examples: automated planning and scheduling (used e.g. by NASA in its autonomous guidance systems for its spacecraft and rovers), program verification and model checking (the latter has transformed the semiconductor industry and led to several recent Turing awards), SAT-solving and constraint satisfaction (where recent algorithmic advances have made it practical to solve many large instances of NP-complete decision problems efficiently), adversarial search (AlphaGo and friends aren't going anywhere without Monte Carlo Tree Search), program synthesis (you can generate code with LLMs, but good luck if you want it to work correctly), automated theorem proving, heuristic search, rule learning, etc. etc.
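
To make the constraint-satisfaction point concrete, here is a minimal sketch (my own illustration, not from the comment above), assuming the z3-solver Python package; the variables and constraints are made up:

    # Toy constraint problem (illustrative; assumes `pip install z3-solver`).
    from z3 import Ints, Solver, sat

    x, y, z = Ints("x y z")
    s = Solver()
    s.add(x + y + z == 10, x > 0, y > x, z > y)  # made-up integer constraints

    if s.check() == sat:          # the solver searches for a satisfying assignment
        print(s.model())          # e.g. x = 1, y = 2, z = 7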

To clarify, those are all logic-based approaches that remain the state of the art in classical AI tasks where statistical machine learning has made no progress in decades. You may not read about them in the news, and they're not even considered "AI" by many, but that's because they work, and work very well, and the "AI effect" takes hold [1].

Even poor old expert systems are the de facto standard for expressing business logic in the software industry. I guess. Informally, of course.

_________________

[1] https://en.wikipedia.org/wiki/AI_effect


>computers are logic machines and all of Computer Science is based on logic; and it works just fine.

Not what I mean. Logic is part of the real world. Logic is not the real world. The idea that you can use this small subset of the world to model the whole thing is what is incredibly suspect. No one has demonstrated this and there is no real reason to believe it can.

>To clarify, those are all logic-based approaches that remain the state of the art in classical AI tasks where statistical machine learning has made no progress in decades

Logic is good at what logic does. Please don't take this as me calling logic useless. It's not that statistical machine learning has not made progress. But you won't beat logic on problems with clear definitions and unambiguous axioms. That is very cool but that is clearly not all of reality.


>> The idea that you can use this small subset of the world to model the whole thing is what is incredibly suspect. No one has demonstrated this and there is no real reason to believe it can.

I agree and I don't think there's any kind of logic that can do that, but there is also no other formal system that can, so far. I'm not sure if you are suggesting there is?

>> But you won't beat logic on problems with clear definitions and unambiguous axioms. That is very cool but that is clearly not all of reality.

Certainly not. Logic is a set of powerful formalisms that we can use to solve certain kinds of problem - it's a form of maths, like geometry or calculus. I don't think anyone expects that geometry or calculus is going to solve every problem in existence and the same goes for logic.


>so far. I'm not sure if you are suggesting there is?

No, I wasn't. I guess I wasn't very clear in my first reply.

I was mainly getting at this:

>and, consequently, it's a very bad idea to try and make machines that "think like humans", because that way we'll only make machines with none of the advantages of machines and all the disadvantages of humans.

No one is scaling up and pouring millions into compute for LLMs in pursuit of general intelligence because they thought it was an excellent idea before the fact (virtually no one did, not even some of the most vocal proponents).

They're doing it because it seems to be working in a way logic failed to. And logic had the head start, both in research and in public consciousness. Nearly all fictional AI is an envisioning of the hard symbolic-logic general intelligence systems that dominated early AI research. Logic was not the underdog here.

The point I was really driving at is that you say "because that way we'll only make machines with none of the advantages of machines and all the disadvantages of humans" almost like it's a choice, like logic and GPT are both on the field and people are going for the worse player. Logic is not even in consideration, because it couldn't make the cut.


Like I say in my earlier comment, that's not right. Logic-based AI is still dominant in many fields. There is a lot of excitement about statistical machine learning (I know, that's an understatement), but that's only because statistical machine learning is finally working and doing things that couldn't be done with logic, not because logic can't do the things that statistical machine learning can't do (it can), and not because statistical machine learning can do the things that logic can do (it can't).

There are two worlds, if you like. For me it's a mistake to try and keep them separated. The great pioneers of AI were never only-this or only-that people. For example, Shannon's MSc thesis gave us Boolean logic-based circuits (logic gates), and he also introduced information theory. The people who have made real contributions to AI and to computer science were never one-trick ponies.

An analogy I like to make is that we have both airplanes and helicopters. A flying machine is something so useful to have that we're going to use any kind we can make. Obviously a helicopter will not compete with a jet for speed, but a jet isn't anywhere near as manoeuvrable or flexible as a helicopter. So we use both.

>> Logic was not the underdog here.

It wasn't, but there was a bit of a mass extinction event: the last AI winter of the '90s took out the expert systems and basically severed the continuity of logic-based AI research. The story is more complex than that, but logic-based AI was dealt a powerful blow, and progress slowed down. Although, again, like I say in my other comment, it didn't get completely extinguished. Perhaps, like we recognise birds today as the remaining dinosaurs, we'll come to recognise the old-new wave of logic-based AI that is currently hidden by the AI effect.


Would you say that this in itself is due to how incomplete human reasoning is in the first place? That, as a result, our ideas of logic and of what perfect logic looks like are bound to fail? Or are you saying that even the purest mathematical representations of logic cannot scale to a point where they can model and predict real-world relationships successfully?


The second. Mathematical logic thrives on precision, clear definitions, and unambiguous axioms, but real-world systems are often marked by vagueness, uncertainty, and dynamic change.

Gödel’s Incompleteness Theorems also demonstrate that in any consistent, sufficiently powerful formal system, there are true statements that cannot be proven within the system. This implies that no matter how refined a logical system you devise, it will invariably be incomplete or inconsistent when grappling with real-world phenomena.
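
For reference, the usual statement of the first theorem (my paraphrase, not the commenter's wording):

    For any consistent, effectively axiomatised theory $T$ that interprets
    basic arithmetic, there is a sentence $G_T$ such that
    $T \nvdash G_T$ and $T \nvdash \lnot G_T$.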


Gödel didn't say anything about real world phenomena. He was talking about formal languages and mathematics.


Of course. But if you truly cannot model every true statement in any formally devised system, then you are by definition going to have to reject valid rules that your logic cannot verify, if you intend your system to be perfectly logical.


I believe that's right, but only in a deductive setting, and only as long as there's a requirement for soundness. Inductive and abductive logical inference are not sound, yet they are very useful for real-world decision-making. But that is a developing field and there are still many unknowns there.
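
To make the soundness contrast concrete, here is a tiny sketch (my own illustration, with a made-up rule base): deduction from a rule cannot produce a false conclusion when the premises hold, while abduction guesses a plausible cause and can simply be wrong.

    # Hypothetical rule base: a single rule "rain -> wet_grass".
    RULES = {"rain": "wet_grass"}

    def deduce(facts):
        # Sound: the conclusion is true whenever the facts and the rule are true.
        derived = set(facts)
        for cause, effect in RULES.items():
            if cause in derived:
                derived.add(effect)
        return derived

    def abduce(observation):
        # Unsound but useful: guess a cause that would explain the observation.
        return {cause for cause, effect in RULES.items() if effect == observation}

    print(deduce({"rain"}))     # {'rain', 'wet_grass'} -- guaranteed, given the rule
    print(abduce("wet_grass"))  # {'rain'} -- plausible, but a sprinkler would also explain it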


No offense taken. I tried to be clear in my phrasing that I don't personally subscribe to the "LLMs are just like us" mentality. Just making an observation as to why people have such visceral reactions to any implication that they might be.



