
Man it is weird that the word for that is "harvesting".


I had that weird moment a few days ago learning that word is also used when you slaughter a (non human) animal:

"The chickens are harvested when they’re 32 days old"

Let’s sprout some seed in the cow (or not).


If that sounds weird, the term around here for butchering chickens is "dressing" them, as in, "We're going to dress chickens today."


The term for slaughtering pigs around here is 'turning them off' - all attempts to disconnect from the reality of what is happening.


Cognitive dissonance really is required to keep our “warm fuzzy empathic friendly” self image while simultaneously being ruthlessly pragmatic cold blooded killers when it suits us.


Pretty accurate though!


I don't think so? To me 'harvest' implies that the crop is destroyed afterwards.


We "harvest" all sorts of tree-grown products without killing the trees.


Similar for Asparagus.


Just wait till these robot maximalists figure out that a pile of oxygen, carbon, hydrogen, and nitrogen is much cheaper than robots made out of steel and carbon fiber.


I mean, they haven't glommed onto the fact that giving a kid a Snickers bar and asking them a question is cheaper than building a nuclear reactor to power GPT-4o levels of LLM...


If we could directly convert the food energy of a Snickers bar to electricity, we could easily power AI. A Snickers bar has 250 kcal, which is about 1,050 kJ, or roughly the energy content of 250 grams of TNT.[https://www.wolframalpha.com/input?i=250+kcal+in+joule] ChatGPT-4 uses 3.6 kJ to 36 kJ per query, so you could potentially get hundreds of queries out of a single Snickers bar.
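The arithmetic above can be sanity-checked in a few lines (assumptions: 250 kcal per bar, and the 3.6-36 kJ per query range quoted above, which is an estimate, not an official figure):

```python
# Back-of-envelope: queries per Snickers bar
KCAL_TO_KJ = 4.184

bar_kj = 250 * KCAL_TO_KJ       # ~1046 kJ of food energy per bar
queries_low = bar_kj / 36       # pessimistic: 36 kJ per query
queries_high = bar_kj / 3.6     # optimistic: 3.6 kJ per query

print(f"One bar: {bar_kj:.0f} kJ")
print(f"Queries per bar: {queries_low:.0f} to {queries_high:.0f}")
```

That works out to roughly 29 to 290 queries per bar, consistent with "hundreds" at the optimistic end.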

We only need a way to harness the power of the human body. Maybe we put people in VR for fun while using their body heat to power the AI.


But then eventually you need Keanu Reeves to put boundary conditions on the AI


I was watching The Matrix Revolutions last night with my 14-year-old. At one point, he told me "hey, that looks like ... Keanu Reeves"


TNT and other explosives have relatively little energy per kg compared to eg petrol or snickers.

That's because explosives are chemicals selected / designed to be able to release their chemical energy really quickly and without needing any external oxidizer (because harvesting atmospheric oxygen would be too slow). That focus obviously leads to compromises in other areas, like energy density.
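The ranking is easy to check against commonly cited specific-energy figures; the numbers below are order-of-magnitude values, and the Snickers entry assumes a ~50 g, 250 kcal bar:

```python
# Rough specific energy (MJ/kg) of the three "fuels" mentioned above
specific_energy = {
    "TNT (detonation)": 4.2,                              # commonly cited value
    "petrol": 46.0,                                       # commonly cited value
    "Snickers (250 kcal / ~50 g)": 250 * 4.184e-3 / 0.050,  # ~21 MJ/kg
}

for fuel, mj_per_kg in specific_energy.items():
    print(f"{fuel}: {mj_per_kg:.0f} MJ/kg")
```

So a Snickers bar carries several times the energy per kilogram of TNT; petrol carries about ten times TNT's.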


The snickers bar allows more than a single query for the human though


Temporarily, on the margin. A human would need multiple Snickers bars per day to survive, and can't survive on Snickers bars alone for more than a couple of days or weeks.

Also, no human is anywhere close to being as knowledgeable and skilled as LLMs at everything at once, so it hardly even compares.


> and can't survive on Snickers bars alone for more than a couple of days or weeks.

lol, the spoiled times we live in that you think this. The human body is capable of surviving on very little.

A thing with protein, fat, and sugar would sustain you for an incredibly long time. Many months, if not years.


Only if you also ate some random other stuff you found lying around. Doesn't even have to provide much in the way of energy, just enough 'dirt' to round out your diet with whatever other essentials you need.

Human bodies have evolved to survive for a long time on relatively little, yes. But not to survive for a very long time on a single source of very 'clean' food like Snickers bars. 'Clean' in the sense that chemically a Snickers has relatively well-defined inputs, whereas hungry humans would eat just about anything, including insects and grass and bark or leather.


This isn’t true. There are countless cases of people surviving for months on nearly no food at all.

I’m not talking about what it takes to stay alive over the long term. I was refuting the silly idea that you would die after a couple of days or weeks of Snickers.


How much vitamin C is in a snickers bar? I think you'd get scurvy within a month or two if that's all you had.

How much vitamin A? Night blindness. Vitamin B? Neurological issues, confusion.

That's the thing with mono-diets, your body needs a diverse range of things that it can't synthesise itself.

But to the core point, in cases where the output of an LLM is good enough, many already have much lower energy requirements than humans: o4-mini is currently priced at $1.1 per million tokens of input and $4.4/million tokens of output; if that's all being spent on electricity at $0.1/kWh, that's a max of 11 kWh/million tokens in and 44 kWh/million tokens out — how many calories would a human have to burn to read, write, hear, speak, and internally monologue the equivalent of a million tokens?


Days? Pretty sure I could survive at least a couple of years off Snickers bars


Probably not years. My guess is that scurvy, beriberi, or some other deficiency would get you in at most a year.


Though most of these diseases could be avoided with some minimal fortification of the Snickers bar, which wouldn't require noticeably more energy.


I don't think I could write lengthy responses to hundreds of questions on a singular Snickers bar. I would need multiple.


They're fully aware of the obvious fact that LLMs are getting better at reasoning than humans at scale in general, and this includes power efficiency too. Meanwhile, what is not getting comparably better is robotics. This leads to an obvious conclusion about the natural order of things and the division of labor: computers are for thinking, humans are for doing manual labor.


> the obvious fact that LLMs are getting better at reasoning than humans

I wanted to say that you were wrong, that LLMs can't reason and so it certainly isn't an obvious truth that they do it better than humans. But when I asked an AI whether LLMs can reason, it told me that they can't, which (while still not being reasoned out by the LLM) seems to support the spirit of your claim, since it gave a correct answer while you (a presumed human who can reason) got it wrong.


We might be elevating the importance of reasoning too much because we humans need to use it to solve many difficult problems. But if intuition were stronger, conscious/explicit/logical reasoning might not be needed. Didn't the famous mathematician Ramanujan say that God gave him his answers in his dreams? That sounds like really powerful intuition, like an LLM. We humans can already solve a lot of incredibly complex problems intuitively, but they're quite domain-specific, like spatial navigation and social interaction.


Anthropologist Gregory Bateson predicted we'll know machines are conscious when we ask a question and the computer responds, "That reminds me of a story."


How are you defining “reasoning”?


That seems to be the hangup. I have to use a definition that would put it on equal footing to what we do as humans since that's the comparison being made.

Computers and software can be said to "understand", "think", and "reason" in their own way and informally people have always used those words in that context. Recently, software which has been trained on human-reasoned output is producing text that mimics reasoning well enough that it can be confused for the real thing, but nobody has been able to show that any reasoning (as a human reasons) is what's occurring.


Why do you care if the software 'reasons'?

If the output it produces is as useful to me as the output produced by a human with the magical and expensive capability to 'reason', why should I care?


You didn't answer my question.


There are several that would apply. Let's use this one as an example: Reason is the capacity of consciously applying logic by drawing valid conclusions from new or existing information, with the aim of seeking the truth.


I don't think you need consciousness to reason. I don't see why repeated application of rewrite rules to extrapolate logical conclusions from antecedents shouldn't be considered reasoning. LLMs are perfectly able to match and apply rewrite rules, while using fuzzy concepts rather than being bound to crisp ontologies that make symbolic reasoning impractical to scale up. And for better or worse, LLMs can also apply simplified heuristics and rules of thumb, and end up making the same mistakes that humans make.


> consciously

What does this mean?


If you think "consciously" is a loaded term, wait until you get to "truth"!

Maybe it'd be easier to try another definition:

2 a(1) : the power of comprehending, inferring, or thinking especially in orderly rational ways : intelligence

The same source defined intelligence as:

a(1) : the ability to learn or understand or to deal with new or trying situations : reason also : the skilled use of reason

And here we get the core of the issue. AI doesn't "think". It doesn't comprehend or understand what it does. There is no actual "I" in AI that didn't come from the people whose works were used to train it. At least not yet. I question if LLMs will ever be capable of anything more than producing a convincing affectation of the process used to produce the material it was trained on. I suspect that AGI will have to come from elsewhere. That doesn't mean that what passes for AI these days can't be useful, but I don't think it's capable of reason and as far as I know, nobody has proved otherwise.


Comprehend, from com- ("together" or "with") and prehendere ("to seize" or "grasp"). To take a hold of.

Can a calculator comprehend arithmetic? Can it take a hold of a number (in a register, for example), and a second number, and add them together to get a hold of the result?

What is computation, really? When we design machines to do arithmetic, do the machines actually do arithmetic, or do they just coincidentally come up with states that we humans can interpret as a correspondence with arithmetic?

More importantly, would a rose by any other name smell as sweet?

If you put a problem into text, and give it to an LLM, and an LLM applied a series of higher order pattern matching to it to produce more text, and you read the resulting text and interpret it as reasoning about the solution to a problem, has the LLM reasoned? Does the calculator calculate? Or does it really matter?


Have you ever watched an LLM with CoT solve a logical puzzle?


Well, don't blame me, I voted for Kodos...


Kodos the Executioner, or the Rigelian from The Simpsons?


I’m excited at our future where we’re mind-stapled together to be used as meat for our AI overlords to enact their obtuse plans.


If a person costs $100K/year to employ, at $0.10/kWh that would buy 1 GWh/year, or a steady power of over 100 kW.
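The conversion checks out (assuming, for the sake of the comparison, that the full $100K salary is spent on electricity at $0.10/kWh):

```python
# Salary-to-power equivalence
salary = 100_000    # $ per year
elec = 0.10         # $ per kWh

kwh_per_year = salary / elec          # 1,000,000 kWh = 1 GWh per year
hours_per_year = 365 * 24             # 8760 hours
steady_kw = kwh_per_year / hours_per_year

print(f"{kwh_per_year / 1e6:.1f} GWh/year, steady {steady_kw:.0f} kW")
```

That's a continuous draw of about 114 kW, i.e. "over 100 kW" as stated.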


To all of you complaining about LLMs hallucinating, do try to give the same prompt to a kid on a sugar rush and let me know if you're getting more reliable responses.



