
The halting problem is part of asymptotic analysis in this case.

Just like with elementary limits, you never reach it, but you approach it.

The halting problem on what we typically call computers is decidable, just not on practical timelines: a physical machine has finite memory, hence finitely many configurations, so any non-halting run must eventually repeat one.
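
A minimal sketch of that point: for a machine with finitely many configurations, halting is decidable by brute force, since any run longer than the number of configurations must repeat one. The `step`/`halts` names and the toy machines below are my own illustration, not any standard API.

```python
def halts(step, initial_state):
    """Decide halting for a finite-state machine.

    step: maps a state to the next state, or to None when the machine halts.
    Works only because the state space is finite and fully observable.
    """
    seen = set()
    state = initial_state
    while state is not None:
        if state in seen:
            return False  # a configuration repeated -> it loops forever
        seen.add(state)
        state = step(state)
    return True  # reached a halting configuration

# Toy machine that counts down from 3 and halts at 0:
print(halts(lambda s: s - 1 if s > 0 else None, 3))  # True
# Toy machine stuck cycling between 0 and 1:
print(halts(lambda s: 1 - s, 0))  # False
```

The catch, as the comment says, is the timeline: `seen` grows with the configuration count, which for a real computer is astronomically large.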

The fact that a TM is not physically realizable doesn't change that claim.

Logical conjunctions are an example of something that is difficult in PAC learning: we know it is at least super-polynomial, but we don't know if there are tractable special cases, the way Schaefer's dichotomy allows linear-time solutions for Horn-SAT, for example.
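
To make the Horn-SAT aside concrete: Horn clauses have at most one positive literal, so unit propagation (forward chaining) alone decides satisfiability. This is a deliberately naive quadratic sketch of the idea, not the linear-time algorithm itself, and the `(body, head)` clause encoding is my own.

```python
def horn_sat(clauses):
    """Decide satisfiability of a Horn formula by forward chaining.

    clauses: list of (body, head) pairs, where body is a set of atoms
    that must all be true, and head is the single positive atom they
    force, or None for a goal clause (no positive literal).
    """
    true_atoms = set()
    changed = True
    while changed:
        changed = False
        for body, head in clauses:
            if body <= true_atoms:        # every premise already holds
                if head is None:
                    return False           # a goal clause fires -> unsat
                if head not in true_atoms:
                    true_atoms.add(head)   # head is forced to be true
                    changed = True
    return True  # fixpoint reached with no goal clause violated

# (true -> a), (a -> b), (a and b -> false): unsatisfiable
print(horn_sat([(set(), "a"), ({"a"}, "b"), ({"a", "b"}, None)]))  # False
# Drop the goal clause and it becomes satisfiable:
print(horn_sat([(set(), "a"), ({"a"}, "b")]))  # True
```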

The tooling that works for asymptotic analysis on deterministic Turing machines does not transfer to biological neurons, because they simply aren't deterministic Turing machines.

Neurobiologists are fully aware of the limitations of modeling cortical neurons as deterministic systems.

While that distinction may not be popular in pop science, it is the general consensus among experts.

Your claim that cortical neurons are equivalent to deterministic Turing machines is not the best-accepted theory today.

In fact, recent research suggests that qubits are a closer model for cortical neurons.

You can't blindly carry over the properties of deterministic Turing machines to qubits.

As non-deterministic Turing machines are typically identified with the special case of NTM that defines the complexity class NP, I won't complicate the conversation by trying to explain the implications.



There are some real challenges in creating intelligent agents, but none of that has anything to do with the halting problem. The reason you latch on to the strawman that I believe neurons are Turing machines is that you want to win. I, on the other hand, don't care about winning. I want to be right. The best way to be right is to change your mind immediately as you notice the flaws in your thinking.

The halting problem, and other theoretical and esoteric complexities of computer systems, have nothing to do with the fact that current LLMs are not simply stochastic parrots. This doesn't mean they are conscious. They can't be, because they aren't even multi-modal yet, for starters. But that has nothing to do with halting or qubits. I don't even know where that red herring came from. Lay off the Penrose juice. There is no evidence that mammalian brains are room-temperature quantum annealers. Nor is there evidence that we need to model complete biological neurons to do learning. What if it's just a question of scale? We don't know that it's not. If it is, we are in big trouble.

Your argument is akin to saying that Gödel's incompleteness theorems are the reason you can't complete your maths homework. Yes, in the "limit", it's true. But practically, we both know that homework can be solved.



