Cognitive dissonance really is required to keep our "warm fuzzy empathic friendly" self-image while simultaneously being ruthlessly pragmatic, cold-blooded killers when it suits us.
Just wait till these robot maximalists figure out that a pile of oxygen, carbon, hydrogen, and nitrogen is much cheaper than robots made out of steel and carbon fiber.
I mean, they haven't glommed onto the fact that giving a kid a Snickers bar and asking them a question is cheaper than building a nuclear reactor to power GPT-4o levels of LLM...
If we could directly convert the food energy of a Snickers bar to electricity, we could easily power AI. A Snickers bar has 250 kcal, which is about 1000 kJ, or roughly the energy of 250 grams of TNT.[https://www.wolframalpha.com/input?i=250+kcal+in+joule] GPT-4 uses an estimated 3.6 kJ to 36 kJ per query, so you could potentially get tens to hundreds of queries out of a single Snickers bar.
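A quick back-of-envelope sketch in Python, taking the per-query energy figures above at face value (they're the comment's estimates, not measured values):

    KCAL_TO_KJ = 4.184
    bar_kj = 250 * KCAL_TO_KJ              # ~1046 kJ per Snickers bar
    for per_query_kj in (3.6, 36):         # low/high GPT-4 estimates
        queries = bar_kj / per_query_kj
        print(f"{per_query_kj} kJ/query -> ~{queries:.0f} queries per bar")
    # 3.6 kJ/query -> ~291 queries per bar
    # 36 kJ/query  -> ~29 queries per bar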
We only need a way to harness the power of the human body. Maybe we put people in VR for fun while using their body heat to power the AI.
TNT and other explosives have relatively little energy per kg compared to, e.g., petrol or a Snickers bar.
That's because explosives are chemicals selected/designed to release their chemical energy very quickly and without needing an external oxidizer (harvesting atmospheric oxygen would be too slow). That focus obviously leads to compromises in other areas, like energy density.
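For scale, a rough comparison sketch; the MJ/kg figures below are approximate ballpark values I'm supplying, not numbers from the thread:

    # Approximate specific energy in MJ/kg (rough figures)
    specific_energy = {
        "TNT (detonation)": 4.6,   # ~4.2-4.6; "ton of TNT" is defined as 4.184 GJ
        "Snickers":        20.0,   # ~250 kcal per 52.7 g bar
        "petrol":          46.0,   # lower heating value
    }
    for fuel, mj in sorted(specific_energy.items(), key=lambda kv: kv[1]):
        print(f"{fuel:18s} ~{mj:5.1f} MJ/kg")

On these numbers the candy bar stores roughly four times the energy per kg of TNT; the explosive just releases it in microseconds.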
Temporarily, on the margin. A human would need multiple Snickers bars per day to survive, and can't survive on Snickers bars alone for more than a couple of days or weeks.
Also, no human is anywhere close to being as knowledgeable and skilled as an LLM at all of these things at the same time, so it hardly even compares.
Only if you also ate some random other stuff you found lying around. Doesn't even have to provide much in the way of energy, just enough 'dirt' to round out your diet with whatever other essentials you need.
Human bodies have evolved to survive for a long time on relatively little, yes. But not to survive for a very long time on a single source of very 'clean' food like Snickers bars. 'Clean' in the sense that, chemically, a Snickers has relatively well-defined inputs, whereas hungry humans would eat just about anything, including insects, grass, bark, or leather.
This isn’t true. There are countless cases of people surviving for months on nearly no food at all.
I’m not talking about what it takes to stay alive over the long term. I was refuting the silly idea that you would die after a couple of days/weeks of Snickers.
How much vitamin C is in a Snickers bar? I think you'd get scurvy within a month or two if that's all you had.
How much vitamin A? Night blindness. Vitamin B? Neurological issues, confusion.
That's the thing with mono-diets, your body needs a diverse range of things that it can't synthesise itself.
But to the core point: in cases where the output of an LLM is good enough, many already have much lower energy requirements than humans. o4-mini is currently priced at $1.1 per million input tokens and $4.4 per million output tokens; if all of that were spent on electricity at $0.1/kWh, that's a maximum of 11 kWh per million tokens in and 44 kWh per million tokens out. How many calories would a human have to burn to read, write, hear, speak, and internally monologue the equivalent of a million tokens?
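A sketch of that arithmetic, plus a rough human-side answer to the closing question; the 50 tokens/min read-write rate and the 20 W brain / 100 W whole-body figures are my own ballpark assumptions, not from the comment:

    PRICE_IN, PRICE_OUT = 1.1, 4.4        # $ per million tokens (o4-mini)
    ELECTRICITY = 0.10                    # $ per kWh
    print(PRICE_IN / ELECTRICITY)         # 11.0 kWh max per million tokens in
    print(PRICE_OUT / ELECTRICITY)        # 44.0 kWh max per million tokens out

    TOKENS_PER_MIN = 50                   # assumed human read/write throughput
    hours = 1_000_000 / TOKENS_PER_MIN / 60   # ~333 h per million tokens
    print(hours * 20 / 1000)              # ~6.7 kWh for a 20 W brain
    print(hours * 100 / 1000)             # ~33 kWh for a ~100 W body

On those assumptions the human lands in the same order of magnitude as the LLM's price-implied upper bound, and that bound includes far more than electricity.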
They're fully aware of the obvious fact that LLMs are getting better than humans at reasoning at scale in general, and this includes power efficiency too. Meanwhile, what is not getting comparably better is robotics. This leads to an obvious conclusion about the natural order of things and the division of labor: computers are for thinking, humans are for doing manual labor.
> the obvious fact that LLMs are getting better at reasoning than humans
I wanted to say that you were wrong, that LLMs can't reason, so it certainly isn't an obvious truth that they do it better than humans. But when I asked an AI whether LLMs can reason, it told me that they can't, which (while still not being reasoning by the LLM) seems to support the spirit of your claim: it gave a correct answer while you (a presumed human who can reason) got it wrong.
We might be elevating the importance of reasoning too much, because we humans need it to solve many difficult problems. But if intuition were stronger, conscious/explicit/logical reasoning might not be needed. Didn't the famous mathematician Ramanujan say that God gave him his answers in his dreams? That sounds like really powerful intuition, like an LLM. We humans can already solve a lot of incredibly complex problems intuitively, but those abilities are quite domain-specific, like spatial navigation and social interaction.
Anthropologist Gregory Bateson predicted we'll know machines are conscious when we ask a question and the computer responds, "That reminds me of a story."
That seems to be the hangup. I have to use a definition that would put it on equal footing to what we do as humans since that's the comparison being made.
Computers and software can be said to "understand", "think", and "reason" in their own way, and informally people have always used those words in that context. Recently, software trained on human-reasoned output has been producing text that mimics reasoning well enough to be confused for the real thing, but nobody has been able to show that any reasoning (as a human reasons) is actually occurring.
If the output it produces is as useful to me as the output produced by a human with the magical and expensive capability to 'reason', why should I care?
There are several that would apply. Let's use this one as an example: Reason is the capacity of consciously applying logic by drawing valid conclusions from new or existing information, with the aim of seeking the truth.
I don't think you need consciousness to reason. I don't see why repeated application of rewrite rules to extrapolate logical conclusions from antecedents shouldn't be considered reasoning. LLMs are perfectly able to match and apply rewrite rules, while using fuzzy concepts rather than being bound to crisp ontologies that make symbolic reasoning impractical to scale up. And for better or worse, LLMs can also apply simplified heuristics and rules of thumb, and end up making the same mistakes that humans make.
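As a minimal illustration of "repeated application of rewrite rules", here's forward chaining to a fixpoint over toy antecedent-conclusion pairs (the facts and rules are invented for the example):

    # Apply "antecedent -> conclusion" rules until nothing new derives.
    facts = {"it_rained"}
    rules = [("it_rained", "ground_is_wet"),
             ("ground_is_wet", "shoes_get_muddy")]

    changed = True
    while changed:
        changed = False
        for antecedent, conclusion in rules:
            if antecedent in facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    print(facts)  # all three facts, derived from one premise

The LLM version of this is fuzzy: the "rules" are soft patterns over learned representations rather than exact matches, which is exactly the trade-off described above.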
If you think "consciously" is a loaded term, wait until you get to "truth"!
Maybe it'd be easier to try another definition:
2 a(1): the power of comprehending, inferring, or thinking especially in orderly rational ways : intelligence
The same source defined intelligence as:
a(1): the ability to learn or understand or to deal with new or trying situations : reason; also : the skilled use of reason
And here we get the core of the issue. AI doesn't "think". It doesn't comprehend or understand what it does. There is no actual "I" in AI that didn't come from the people whose works were used to train it. At least not yet. I question if LLMs will ever be capable of anything more than producing a convincing affectation of the process used to produce the material it was trained on. I suspect that AGI will have to come from elsewhere. That doesn't mean that what passes for AI these days can't be useful, but I don't think it's capable of reason and as far as I know, nobody has proved otherwise.
Comprehend, from com- ("together" or "with") and prehendere ("to seize" or "grasp"). To take a hold of.
Can a calculator comprehend arithmetic? Can it take a hold of a number (in a register, for example), and a second number, and add them together to get a hold of the result?
What is computation, really? When we design machines to do arithmetic, do the machines actually do arithmetic, or do they just coincidentally come up with states that we humans can interpret as a correspondence with arithmetic?
More importantly, would a rose by any other name smell as sweet?
If you put a problem into text and give it to an LLM, and the LLM applies a series of higher-order pattern matches to produce more text, and you read the resulting text and interpret it as reasoning about the solution to the problem, has the LLM reasoned? Does the calculator calculate? Or does it really matter?
To all of you complaining about LLMs hallucinating, do try to give the same prompt to a kid on a sugar rush and let me know if you're getting more reliable responses.