
When my friends talked about how AGI is just creating a huge enough neural network and feeding it enough data, I have always compared it to this: imagine locking a mouse in a library with all the knowledge in the world and expecting it to come out superintelligent.


The mouse would go mad, because libraries preserve more than just knowledge; they preserve the evolution of it. That evolution is ongoing as we discover more about ourselves and the world we live in, refine our knowledge, disprove old assumptions and theories and, on occasion, admit that we were wrong to dismiss them.

We also, over time, assign different levels of importance to knowledge from the past. For example, an alchemy manual from the Middle Ages recording recipes for a cure for some nasty disease was once important because it helped whoever had access to it quickly prepare some ointment that sometimes worked. Today we know that most of those recipes were random, non-scientific attempts at solving a medical problem, and we have proven that those medicines do not work. The importance of the old alchemist's recipe book as a source of scientific truth has therefore gone to zero, but its historic importance has grown a lot, because it helps us understand how our knowledge of chemistry and its applications in health care has evolved.

LLMs treat all text as equal unless they are given hints. But those hints are provided by humans, so there is an inherent bias, and the best we can hope for is that those hints are correct at the time of training. We are not pursuing AGI; we are pursuing the goal of automating the creation of answers that look like the right answers to a given question, without much attention to factual, logical, or contextual correctness.


No. The mouse would just be a mouse. It wouldn't learn anything, because it's a mouse. It might chew on some of the books. Meanwhile, transformers do learn things, so there is obviously more to it than just the quantity of data.

(Why spend a mouse? Just sit a strawberry in a library, and if the hypothesis that the quantity of data is the only thing that matters holds, you'll have a super intelligent strawberry.)


> Meanwhile, transformers do learn things

That's the question though, do they? One way of looking at gen AI is as highly efficient compression and search. WinRAR doesn't learn, and neither does Google, regardless of the volume of input data. Just because the process of feeding more data into gen AI is named "learning" doesn't mean it's the same process our brains undergo.


To the extent that we know what learning is (not very!), yes, they do.


No need to waste a strawberry. Just test the furniture in the library. Either you have super intelligent chairs and tables or not.


Or a pebble, for a super intelligent pebble.

“God sleeps in the rock, dreams in the plant, stirs in the animal, and awakens in man.” ― Ibn Arabi


GPT-3 is about the same complexity as a mouse brain; it did better than I expected…

… but it's still a brain the size of a mouse's.


Mice are far more impressive though.


And they can cook. I know it, I have seen it in a movie /s


I've yet to see a mouse write even mediocre python, let alone a rap song about life in ancient Athens written in Latin.

Don't get me wrong: organic brains learn from far fewer examples than AI, and there's a lot organic brains can do that AI can't (yet), but I don't really find the intellectual capacity of mice to be particularly interesting.

On the other hand, the question of whether mice have qualia is something I find interesting.


>but I don't really find the intellectual capacity of mice to be particularly interesting.

But you should find their self-direction capacity incredible and their ability to instinctively behave in ways that help them survive and propagate themselves. There isn't a machine or algorithm on earth that can do the same, much less with the same minuscule energy resources that a mouse's brain and nervous system use to achieve all of that.

This isn't to even mention the vast cellular complexity that lets the mouse physically act on all these instructions from its brain and nervous system and continue to do so while self-recharging for up to 3 years and fighting off tiny, lethal external invaders 24/7, among other things it does to stay alive.

All of that in just a mouse.


> But you should find their self-direction capacity incredible

No, why would I?

Depending on what you mean by self-direction, that's either an evolved trait (with evolution rather than the mouse itself as the intelligence) for the bigger picture what-even-is-good, or it's fairly easy to replicate even for a much simpler AI.

The hard part has been getting them to be able to distinguish between different images, not this kind of thing.

> and their ability to instinctively behave in ways that help them survive and propagate themselves. There isn't a machine or algorithm on earth that can do the same,

https://en.wikipedia.org/wiki/Evolutionary_algorithm

> much less with the same minuscule energy resources that a mouse's brain and nervous system use to achieve all of that.

Is nice, but again, this is mixing up the intelligence of the animal with the intelligence of the evolutionary process which created that instance.

I as a human have no knowledge of the evolutionary process which lets me enjoy the flavour of coriander, and my understanding of the Krebs cycle is "something about vitamin C?" rather than anything functional; while my body knows these things, it is a stretch to claim that my body knowing them means that I know them.


I think you're completely missing the wider picture in your insistence on equating the mouse with any modern AI, LLM or machine-learning system.

The evolutionary processes behind the mouse being capable of all that stretch from the long-distant past up to the present, and their results are manifest in the physiology and cognitive abilities (such as they are) of the mouse. This means that these abilities, conscious, instinctive and evolved, exist only in the physical body of that mouse and nowhere else. No man-made algorithm or machine is capable of anything remotely comparable, and its capacity for navigating the world is nowhere near as good. Once again, this especially applies when you consider that the mouse does all it does using absurdly tiny energy resources, far below what any LLM would need for anything similar.


Evolution is an abstract concept, and abstract concepts cannot be “intelligent” (whatever that means). This is like saying that gravity or thermodynamics are “intelligent”.


Evolution is not an abstract concept. It's a concrete algorithm with some abstract steps.


You might be surprised with what we've done by implementing the same thing as software.

https://en.wikipedia.org/wiki/Evolutionary_algorithm
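For anyone unfamiliar with the linked idea, here is a minimal sketch of an evolutionary algorithm in Python. The function names and the toy "OneMax" fitness (count the 1-bits) are my own illustration, not any particular library's API:

```python
import random

def evolve(fitness, genome_len=8, pop_size=30, generations=50, mutation_rate=0.1):
    """Toy evolutionary algorithm: evolve a bit-string that maximizes `fitness`."""
    pop = [[random.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)        # rank by fitness
        survivors = pop[: pop_size // 2]           # selection: keep the fitter half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, genome_len)  # single-point crossover
            child = a[:cut] + b[cut:]
            # mutation: flip each bit with probability mutation_rate
            child = [bit ^ (random.random() < mutation_rate) for bit in child]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# "OneMax": fitness is just the number of 1-bits in the genome.
best = evolve(fitness=sum)
print(best)  # typically converges to all ones
```

No step in the loop "knows" what a good genome looks like; selection pressure alone produces it, which is the sense in which the process, rather than any individual, does the optimizing.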


I have yet to see a machine that would survive a single day in a mouse's natural habitat. And I doubt I'll see one in my lifetime.

Mediocre, or even excellent, Python and rap lyrics in Latin are easy stuff, just like chess and arithmetic. Humans are just really bad at them.


It doesn't say specifically, but I think these lasted more than a day, assuming you'll accept random predator species as a sufficient proof-of-concept substitute for mice which have to do many similar things but smaller:

https://www.televisual.com/news/behind-the-scenes-spy-in-the....


Those robots aren't acquiring their own food, and they're not edible by the creatures that surround them. They're playing on easy mode.


Still passes the "a machine that would survive a single day" test, and given machines run off electricity and we have PV already food isn't a big deal here.


> I've yet to see a mouse write even mediocre python, let alone a rap song about life in ancient Athens written in Latin.

Isn't this distinction more about "language" than "intelligence"? There are some fantastically intelligent animals, but none of them can do the tasks you mention, because they're not built to process human languages.


Prior to LLMs, language was what "proved" humans were "more intelligent" than animals.

But this is beside the point; I have no doubt that if one were to make a mouse immortal and give it 50,000 years' experience of reading the internet via a tokeniser that turned text into sensory nerve stimulation, with rewards depending on how well it guessed the response, it would probably get this good sooner, simply because organic minds seem to be better at learning than AI.

But mice aren't immortal and nobody's actually given one that kind of experience, whereas we can do that for machines.

Machines can do this because they can (in some senses but not all) compensate for the sample-inefficiency by being so much faster than organic synapses.


I agree with the general sentiment but want to add: dogs certainly process human language very well. From anecdotal experience with our dogs:

In terms of spoken language they are limited, but they surprise me all the time with terms they have picked up over the years. They can definitely associate a lot of words correctly (if the words interest them) that we didn't train them on at all, just by mere observation.

A LLM associates bytes with other bytes very well. But it has no notion of emotion, real world actions and reactions and so on in relation to those words.

A thing that dogs are often way better at than even humans is reading body language and communicating through body language. They are hyper aware of the smallest changes in posture, movement and so on. And they are extremely good at communicating intent or manipulating (in a neutral sense) others with their body language.

This is a huge, complex topic that I don't think we really fully understand, in part because every dog also has individual character traits that influence their way of communicating very much.

Here's an example of how complex their communication is. Just from yesterday:

One of our dogs is for some reason afraid of wind. I've observed how she gets spooked by sudden movements (for example curtains at an open window).

Yesterday it was windy and we went outside (off leash in our yard); she was wary, showed subtle fear and hesitated to move around much. The other dog saw that, then calmly got closer to her, posturing towards the same direction she seemed to want to go. He made very small steps forward, waited a bit, let her catch up, and then she let go of the fear and went sniffing around.

This all happened in a very short amount of time, a few seconds; there is a lot more to the communication that would be difficult and wordy to explain. But since I became more aware of these tiny movements (from head to tail!), I started noticing more and more extremely subtle cues of communication, cues that can't even be processed in isolation but typically require the full context of all the movements, the pacing and so on.

Now think about what the above example all entails. What these dogs have to process, know and feel. The specificity of it, the motivations behind it. How quickly they do that and how subtle their ways of communications are.

Body language is a large part of _human_ language as well. More often than not it gives a lot of context to what we speak or write. How often are statements misunderstood because they are consumed only as text? The tone, rhythm and general body language can make all the difference.


I've yet to see ChatGPT run away from a cat.


To be fair, it’s the horsepower of a mouse, but all devoted to a single task, so not 100% comparable to the capabilities of a mouse, and language is too distributed to make a good comparison of what milestone is human-like. But it’s indeed surprising how much that little bit of horsepower can do.



