
To throw two pennies in the ocean of this comment section - I’d argue we still lack schematic-level understanding of what “intelligence” even is or how it works. Not to mention how it interfaces with “consciousness”, and their likely relation to each other. Which kinda invalidates a lot of predictions/discussions of “AGI”, or even of “AI” in general. How can one identify Artificial Intelligence/AGI without a modicum of understanding of what the hell intelligence even is?


It’s so hard to define intelligence or consciousness because we are hopelessly biased, working from a data point of one. We also wrap an unjustified amount of mysticism around it.

https://bower.sh/who-will-understand-consciousness


I don't think we can ever know that we are generally intelligent. We can be unsure, or we can meet something else which possesses a type of intelligence that we don't, and then we'll know that our intelligence is specific and not general.

So to make predictions about general intelligence is just crazy.

And yeah yeah I know that OpenAI defines it as the ability to do all economically relevant tasks, but that's an awful definition. Whoever came up with that one has had their imagination damaged by greed.


All intelligence is specific, as evidenced by the fact that a universal definition regarding the specifics of "common sense" doesn't exist.


Common is not the same as general. A general key would open every lock. Common keys... well they're quite familiar.


My point was that all intelligence is based on an individual's experiences, therefore an individual's intelligence is specific to those experiences.

Even when we "generalize" our intelligence, we can only extend it within the realm of human senses & concepts, so it's still intelligence specific to human concerns.


So if you encounter an unknown intelligence, like I dunno some kind of extra dimensional pen pal with a wildly different biology and environment than our own... Would you be open to the possibilities:

- despite our differences we have the same kind of intelligence

- our intelligences intersect, but there are capacities that each has that the other doesn't

?

It seems like for either to be true there would have to be some place of common ground into which we could both generalize independently of our circumstance. Mathematics is often thought to be such a place for instance, there's plenty of sci fi about beaming prime numbers into space as an attempt to leverage that common ground. Are you saying there aren't such places? That SETI is hopeless?


It's certainly possible that we may encounter other alien lifeforms whose intelligence intersects our own.

It's just not guaranteed.


If we assume this about intelligence:

> Even when we "generalize" our intelligence, we can only extend it within the realm of human senses & concepts, so it's still intelligence specific to human concerns.

...then we might fail to recognize them as intelligent when we meet them. Same goes for emergent artificial doohickeys. A theory that allows for generalization might never find an example of it, but it's still better than a theory which doesn't, because the second sort surely won't.


When you make the term "general intelligence" so broad that it expands beyond the realm of human senses & concepts, statements about it become unfalsifiable because you, a human, can't conceive of a way to test said statement.

Unfalsifiable statements are worthless because they can't be tested.

So, at the very least, there's no point in humans trying to theorize about intelligence so general that it expands beyond human comprehension.

Basically, in the context of universal intelligence, I'm an atheist & you're agnostic.


A universal definition of “chair” is pretty hard to pin down, too…


What are your sources for that claim?


Ontology

https://en.wikipedia.org/wiki/Ontology

Or: just try, then try your best to find ways your definition fails. You should find it challenging, to put it mildly, to create a bulletproof definition, if you’re really looking for angles to attack each definition you can think of. They’ll end up being too broad, or too narrow. Or coming up short on defining when exactly a non-chair becomes a chair, and vice-versa, or what the boundaries of a chair are (where chairness begins and ends).

And if that one is tricky…


How would I know when my definition is too broad?


Exactly. Do exactly what you’re doing now, but to your own definitions of “chair”. You get it.


Hold on. You're the one saying that a definition can be too broad & acting like that actually means something important.

So I'm asking how you define a definition as "too broad".

Because my perspective is that definitions that are in fact too broad are unimportant because no one uses them.


Useful definitions! Yes, easy.

Universal definitions? Extremely hard.


Do you know of a human culture in which a chair is defined as something other than an elevated seat with a back?


This so much this. We don’t even have a good model for how invertebrate minds work or a good theory of mind. We can keep imitating understanding but it’s far from any actual intelligence.


I'm not sure we or evolution needed a theory of mind. Evolution stuck neurons together in various ways and fiddled with them till they worked, without a master plan, and the LLM guys seem to be doing something rather like that.


LLM guys took a very specific layout of neurons and said “if we copy paste this enough times, we’ll get intelligence.”


mmm, no, because unlike biological entities, large models learn by imitation, not by experience


> we still lack schematic-level understanding of what “intelligence” even is or how it works. Not to mention how it interfaces with “consciousness”, and their likely relation to each other

I think you can get pretty far starting from behavior and constraints. The brain needs to act in such a way as to pay for its costs. And not just day-to-day costs, but also the ability to receive and pass on that initial inheritance.

From cost of execution we can derive an imperative for efficiency. Learning is how we avoid making the same mistakes and adapt. Abstractions are how we efficiently carry around past experience to be applied in new situations. Imagination and planning are how we avoid the high cost of catastrophic mistakes.

Consciousness itself falls out of the serial action bottleneck. We can't walk left and right at the same time, or drink coffee before brewing it. Behavior has a natural sequential structure, and this forces the distributed activity in the brain to centralize on a serial output sequence.

My mental model is that of a structure-flow recursion. Flow carves structure, and structure channels flow. Experiences train brains, and brain-generated actions generate experiences. Cutting this loop and analyzing parts of it in isolation does not make sense, like trying to analyze the matter and the motion in a hurricane separately.


I did the math some years ago on how much computing is required to simulate a human brain. A brain has around 90 billion neurons, with each neuron having an average of 7,000 connections to other neurons. Let's assume that's all we need. So what do we need to simulate a neuron, one CPU? Or can we fit more than one in a CPU, let's say 100? Then we're down to about a billion CPUs, with roughly 630 trillion messages flying between them every what, millisecond?

Simulating that is a long way away, so the only possibility is that brains have some sort of redundancy and we can optimise that away. Though computers are faster than brains, so it's possible, maybe. How much faster? Let's say a neuron does its work in a millisecond and we can simulate that work in 1 µs, i.e. a thousand times faster - that's still a lot. Can we get to a million times faster? Even then it's still a lot. Not to mention the power required for this.

Even if we can fit a million neurons in a CPU, that's still 90,000 CPUs; say only 10% are active at any moment and that's 9,000 CPUs. Nearly there, but still a while away.
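
For what it's worth, here is the same back-of-envelope arithmetic spelled out. Every constant is just one of the assumptions above (neuron count, average connections, neurons per CPU, active fraction), none of it a measured figure:

    # Back-of-envelope estimate of the hardware needed to simulate a brain,
    # using only the assumptions stated in the comment above.
    neurons = 90e9           # ~90 billion neurons (assumed)
    connections = 7_000      # average connections per neuron (assumed)
    neurons_per_cpu = 1e6    # optimistic guess: a million neurons per CPU
    active_fraction = 0.10   # assume only 10% of neurons are active at once

    messages_per_step = neurons * connections   # messages flying around per time step
    cpus = neurons / neurons_per_cpu            # CPUs if every neuron is simulated
    cpus_active = cpus * active_fraction        # CPUs if only the active 10% matter

    print(f"messages per step:  {messages_per_step:.2e}")  # ~6.3e14
    print(f"CPUs (all neurons): {cpus:,.0f}")               # 90,000
    print(f"CPUs (10% active):  {cpus_active:,.0f}")        # 9,000

Whether the assumed thousand-fold speedup (1 ms of neuron work simulated in 1 µs) can be used to divide the CPU count down again depends on whether the million-neurons-per-CPU figure already bakes that speed in.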


We don't even have an accurate, convincing model of how the functions of the brain really work, so it's crazy to even think about simulating it like that. I have no doubt that the cost would be tremendous if we could even do it, but I don't even think we know what to do.

The LLM stuff seems most distinctly to not be an emulation of the human brain in any sense, even if it displays human-like characteristics at times.


That would require philosophical work, something that the technicians building this stuff refuse to acknowledge as having value.

Ultimately this comes down to the philosophy of language and to the history of specific concepts like intelligence or consciousness - neither of which exists in the world as a specific quality; they are more just linguistic shorthands for a bundle of various abilities and qualities.

Hence the entire idea of generalized intelligence is a bit nonsensical, other than as another bundle of various abilities and qualities. What those are, specifically, never seems to be clarified before the term AGI is used.


> I shall not today attempt further to define the kinds of material I understand to be embraced within that shorthand description ["<insert general intelligence buzzword>"], and perhaps I could never succeed in intelligibly doing so. But I know it when I see it, and the <insert llm> involved in this case is not that.

https://en.wikipedia.org/wiki/I_know_it_when_I_see_it


Without going too deep into the rabbit hole, one could argue that, to a first order, intelligence is the ability to learn from experience toward a goal. In that sense, LLMs are not intelligent. They are just a (great) tool in the service of human intelligence. And so we’re just extremely far from machine intelligence.



