ordersofmag's comments | Hacker News

I'm pretty sure the Luddites judged the threat the machines posed to their livelihood to be a greater damage than their employer's loss of their machines. So for them, it was an easy justification. The idea that dollar value encapsulates the only correct way to value things in the world is a pretty scary viewpoint (as your reference to the value of saving a life illustrates).

On one side there were the Luddites and their livelihoods: tens of thousands of people.

On the other side, there were cheap textiles for EVERYONE - plus some profits for the manufacturers.

They might have been fighting to save their livelihoods, but their self-interest put them up against the entire world, not just their employers.


The Luddites were trying to stop themselves & their families from starving to death. The factory owners were only interested in profit. It isn't like the Luddites were given a generous re-training package and they turned it down. They had 0 rights, I mean that literally: 0.

You missed MR2Z's argument: there are more people in the world than luddites and factory owners.

During the industrial revolution, clothes (and other fabrics) were getting dramatically cheaper. A family that could only afford the cheapest clothes could now get higher quality stuff. A family that could not afford any clothes at all could now get cheap stuff.

This is what the Luddites wanted to stop. It wasn't "Luddites starving to death" vs "factory owners get no profit", it was "Luddites starving to death" vs "many, many more people cannot afford clothes".


Except for the fact that the Luddites' labour grievances could easily have been addressed by the factory owners (rise in pay, better conditions) while still offering cheaper fabrics through industrialization. There was simply no desire to do so. No one was saved from freezing to death by cheaper textiles.

People did starve to death and turn to things such as alcohol due to labour displacement during Industrialization. At the time, the prevailing wisdom was that lower-class people were naturally inferior. Robert Owen challenged this theory.

And yes, that was the choice given to the Luddites. Have no work (and therefore no food), because the factory owner can replace you with machines, and you have no labour rights, so he will simply cast you out and make more profit. I did not miss Mr2Z's argument, yours is just incorrect.


> No one was saved from freezing to death by cheaper textiles.

Citation needed for that one.

> Except for the fact that the Luddites' labour grievances could easily have been addressed by the factory owners (rise in pay, better conditions) while still offering cheaper fabrics through industrialization.

So how long would the employers be required to pay them, in your mind? A year? Ten? A lifetime?

It would be the end consumer of the textile that would have to pay for those former textile workers to do nothing.

People can find new jobs when the world changes. It's not pleasant, but it's frankly a lot better than trying to force their old employer to keep them on payroll in a job where they can't do work.


"People can find new jobs when the world changes. It's not pleasant, but it's frankly a lot better than trying to force their old employer to keep them on payroll in a job where they can't do work."

This is what you don't understand. There was no re-tooling or re-training for the Luddites. This wasn't a 20th century downsizing situation. This was one step above slavery. They didn't just go get new jobs. They got extremely precarious work with no labour rights (at all) at lower pay than before and in competition with hordes of desperate unemployed labourers. This has nothing to do with free market economics like you're posting.

"citation needed for that one."

Actually no, you're the one who keeps saying that industrialization / replacing human workers with machines saved people's lives with cheap textiles, but you show no proof of this, so you're the one who needs a citation!


It's an interesting question because the benefits of automation aren't necessarily shared early on. If you can profitably sell a shirt for $10 while everyone else needs to sell for $20, there's no reason to actually charge $10; you might as well charge $19.95 and sell just as many shirts for way more money.

So society is actually saving 5c/shirt while "losing" roughly $9 in labor per shirt. On net, society could be worse off, excluding the one person who owns the factory and is way better off. Obviously eventually enough automation happens that the price actually falls meaningfully, but that transition isn't instantaneous where decisions are made in the moment.
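To make that arithmetic concrete, here's a rough sketch using the hypothetical numbers above ($20 hand-made price, $19.95 automated price, roughly $9 of displaced wages per shirt); the figures are purely illustrative, not real data:

    # Hypothetical figures from the example above, not real data.
    old_price = 20.00        # price per shirt before automation
    new_price = 19.95        # price the automated factory actually charges
    displaced_wages = 9.00   # wages no longer paid per shirt (assumed)

    consumer_saving = old_price - new_price                    # 0.05 per shirt
    owner_gain = displaced_wages - consumer_saving             # 8.95 extra margin kept by the owner
    net_to_everyone_else = consumer_saving - displaced_wages   # -8.95, in the comment's framing

    print(consumer_saving, owner_gain, net_to_everyone_else)
    # Early on, nearly the whole gain accrues to the factory owner; only once
    # competition forces prices down does the consumer side of the ledger grow.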

Further, we currently subsidize farmers to a rather insane degree, independent of any overall optimization for social benefit. Thus we can't even really say optimization is the deciding factor here. Instead something else is going on: the story could easily have been framed as the factory owners doing something wrong by automating, but progress is seen as a greater good than stability. And IMO that's what actually decides the issue for most people.


In regards to both the Luddites and the farmers, you seem to forget the most important factor. Food.

In the case of the Luddites, it was a literal case of their children being threatened with starvation. "Livelihood" at the time was not fungible. The people affected could not just go apply at another industry. And there were no social services to help them eat during the transition period.

As for the farmers, any governing body realises that food security is national security. If too many people eschew farming for more lucrative fields, then the nation is at risk. Farming needs to appear as lucrative as medicine, law, and IT to encourage people to enter the field.


The Luddites' food requirements didn't provide them with popular support.

Similarly, US agricultural output could be cut in half without serious negative consequences. Far more corn ends up as ethanol than in our food, and we export vast quantities of highly subsidized food to zero benefit. Hell, ethanol production costs as much in fossil fuels as we get ethanol out of it; it's literally pure wasted effort.

Rational policy would create a large scale food shortage and then let market forces take over. We could have 10 years of food on hand for every American for far less than current policy costs, with the added benefit of vastly reducing the negative externalities of farming, such as depleting aquifers.


Be careful with the assumptions you're making. A risk management strategy, for example, will often appear to be of zero benefit except in the case where shit hits the fan. We can stop feeding cattle, producing ethanol, and whatever else overnight in the event that something happens.

> Rational policy would create a large scale food shortage and then let market forces take over.

Well I'm just going to state that I'm _really_ happy that you're not the one in charge and leave it at that.


You may be happy with the status quo, but it's actually both risky and expensive.

Risk management means managing risks; there are plenty of things that having more farmland doesn't actually protect you from. On the other hand, having a decade of food protects you from basically everything, because you get time to adjust as things change.

Just as an example: a meteor strike blocks sunlight and farmland is useless for a few years. Under the current system most of us starve to death. The odds are around 1 in 1 million that it happens in a given lifetime, but countries outlive people; start thinking on longer timescales and it becomes more likely.


I fully support having huge stockpiles in addition to subsidies. There's a lot of things midway on the scale between "business as usual" and "meteor strike" where minimizing supply chain disruptions would likely prove to be of great benefit.

I completely agree that the current way things are being handled appears to have its share of problems and could stand to be better optimized. But that doesn't mean it's useless either.


Subsidies as a concept could mean spending 1% as much on subsidies as we do now. Subsidies as they exist now, however, are a specific system that's incredibly wasteful.

Producing dramatically less food and ending obesity are linked. If the average American ate 20% less, obesity would still be an issue, but that's a vast amount of farmland we just don't need.

The current system isn't designed to accommodate increased agricultural production, lower food demand, or, due to decreasing fertility, the slow decline in global population. Instead the goal is almost completely to get votes from farmers.


You want to solve obesity by ... making food cost more? Assuming I've understood you correctly then I think it would be difficult for us to be more opposed to one another. I want basic necessities to be as cheap as possible. Preferably free.

I'm happy to debate what sort of free food the government should or shouldn't be handing out, what measures could be put in place to minimize waste, etc. But from my perspective the ideal is a free all you can eat buffet that's backed by the government.


No, I’m saying solving obesity reduces the need for food. Did you not see the post directly below this one posted 6+ hours before your comment where I said:

“For clarity, Ozempic etc have actually measurably decreased food consumption.”

Technology isn't going backwards; we can expect increasingly effective medications with fewer side effects, at lower cost, to drive down food demand over time. Policies designed to prop up production in the face of falling demand are deeply flawed.

If you want to give people money, give them money, don’t give them lots of money so they can keep a little bit while they waste resources producing something without value.


Apologies, I saw it at the time but failed to follow. IIUC you're saying that subsidies will tend to ratchet in only the one direction.

To be clear I don't object at all to the idea of optimizing how subsidies are determined. I just don't think that subsidies and the resultant overproduction are a bad thing in general. I'm all for efficiency in the general case but I think a fair amount of paranoia is called for regarding long tail scenarios that lead to famine.


For clarity, Ozempic etc have actually measurably decreased food consumption. https://journals.sagepub.com/doi/10.1177/00222437251412834

Obviously that impacts food demand.


I am not at all sure how we would stockpile 10 years of food for each American - most kinds of food cannot be kept that long. And what can be kept is unlikely to make a balanced diet.

Moreover, I am not sure how long it would take to rebuild the farm industry if most farms close. I think "10 years" is too optimistic, given how many farms would need to be spun up.


A stockpile of 10 years of food isn’t the same thing as 10 years of the modern American diet. Don’t expect veal from a government warehouse.

That said, we can preserve viable sperm for 50+ years. https://www.techexplorist.com/worlds-oldest-semen-still-viab...

Maintaining nutritional content isn’t a major hurdle.


You could revert to a granary system, but the whole point of farm subsidization was to move away from the granary system, which repeatedly throughout history ended in massive famine and starvation.

Stored food is not bulletproof, and it takes up a lot more bulk space than you may think. It can also take years to ramp farming production back up in response to a drop in yields or a disaster.


We’re vastly better at food preservation, farming, and birth control today.

Suggesting we’ll run into the same issues as people before the green revolution is ignoring the progress of technology.


Yeah, it's better, but it is still far from perfect. You aren't going to increase farm tractor or farm implement production by 50% with a year or two's notice. Some crops, like fruit, take years to establish, and unused farmland quickly succumbs to nature and starts growing trees. And if that field wasn't clear 5 years ago, you now have to stump-grind or bulldoze it, because tree stumps and tree roots will mess up your farm equipment; doubly so if it's some super massive tractor and implement setup that would normally be the most productive.

And there are also all the political and financial barriers to taking unfarmed land and very quickly turning it into farmland. Who owns it, and who owned it before? Who with the right knowledge to manage it properly will run it? And what about the other surrounding problems that are part of the famine?

And farming in itself is not a very predictable business. Yields regularly vary by 30% just due to local weather without that being considered unusual. Return on investment may be a decade down the road even if everything is done perfectly. Getting people to invest long term for a potentially very short term problem is not super easy.

We have surviving rations from back in the US Civil War that are still edible, but people still regularly starved and suffered famines despite massive leaps in food preservation technology. Hermetically sealing just a single person's food for a year is not an easy task, let alone for a hundred million+ people.


Economies of scale are huge here. You can store well below -40 when you're talking about food for 100 million people; that just doesn't work well when you're talking about one person.

Of note, I didn't say a year or two's notice. 10 years of food on hand would be fairly cheap, and we currently have the surplus to hit that number quickly. And that's for 100% replacement; most situations aren't going to drop food production to zero, which gives us more time.


>The people affected could not just go apply at another industry.

Can you explain why? I don't understand.


FWIW this is a very common idiom in several languages: https://en.wikipedia.org/wiki/Don%27t_throw_the_baby_out_wit...

basically it's a cliché

Heard of google drive?


I will find this often-repeated argument compelling only when someone can prove to me that the human mind works in a way that isn't 'combining stuff it learned in the past'.

5 years ago a typical argument against AGI was that computers would never be able to think because "real thinking" involved mastery of language which was something clearly beyond what computers would ever be able to do. The implication was that there was some magic sauce that human brains had that couldn't be replicated in silicon (by us). That 'facility with language' argument has clearly fallen apart over the last 3 years and been replaced with what appears to be a different magic sauce comprised of the phrases 'not really thinking' and the whole 'just repeating what it's heard/parrot' argument.

I don't think LLMs think or will reach AGI through scaling, and I'm skeptical we're particularly close to AGI in any form. But I feel like it's a matter of incremental steps. There isn't some magic chasm that needs to be crossed. When we get there I think we will look back and see that 'legitimately thinking' wasn't anything magic. We'll look at AGI and instead of saying "isn't it amazing computers can do this" we'll say "wow, was that all there is to thinking like a human".


> 5 years ago a typical argument against AGI was that computers would never be able to think because "real thinking" involved mastery of language which was something clearly beyond what computers would ever be able to do.

Mastery of words is thinking? By that line of argument, computers have been able to think for decades.

Humans don't think only in words. Our context, memory, and thoughts are processed and occur in ways we still don't understand.

There's a lot of great information out there describing this [0][1]. Continuing to believe these tools are thinking, however, is dangerous. I'd gather it has something to do with logic: you can't see the process and it's non-deterministic so it feels like thinking. ELIZA tricked people. LLMs are no different.

[0] https://archive.is/FM4y8 [0] https://www.theverge.com/ai-artificial-intelligence/827820/l... [1] https://www.raspberrypi.org/blog/secondary-school-maths-show...


> Mastery of words is thinking?

That's the crazy thing. Yes, in fact, it turns out that language encodes and embodies reasoning. All you have to do is pile up enough of it in a high-dimensional space, use gradient descent to model its original structure, and add some feedback in the form of RL. At that point, reasoning is just a database problem, which we currently attack with attention.
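For what it's worth, here is a minimal sketch of that "database lookup" view of attention in NumPy; the dimensions and weight matrices are toy values, not any particular model:

    # Toy scaled dot-product attention: a soft lookup over token vectors.
    # All shapes and weights are made up for illustration.
    import numpy as np

    d = 8                                  # embedding dimension
    tokens = np.random.randn(5, d)         # 5 token vectors in a high-dimensional space

    Wq, Wk, Wv = (np.random.randn(d, d) for _ in range(3))   # learned via gradient descent
    Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv

    scores = Q @ K.T / np.sqrt(d)          # how relevant each stored token is to each query
    weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # softmax
    output = weights @ V                   # weighted retrieval: a fuzzy database query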

No one had the faintest clue. Even now, many people not only don't understand what just happened, but they don't think anything happened at all.

ELIZA, ROFL. How'd ELIZA do at the IMO last year?


> Yes, in fact, it turns out that language encodes and embodies reasoning ... No one had the faintest clue

Funnily enough, they did, if you go back far enough. It's only the deconstructionists and the solipsists who had the audacity to think otherwise.


So people without language cannot reason? I don't think so.


There's no such thing as people without language, except for infants and those who are so mentally incapacitated that the answer is self-evidently "No, they cannot."

Language is the substrate of reason. It doesn't need to be spoken or written, but it's a necessary and (as it turns out) sufficient component of thought.


There are quite a few studies to refute this highly ignorant comment. I'd suggest some reading [0].

From the abstract: "Is thought possible without language? Individuals with global aphasia, who have almost no ability to understand or produce language, provide a powerful opportunity to find out. Astonishingly, despite their near-total loss of language, these individuals are nonetheless able to add and subtract, solve logic problems, think about another person’s thoughts, appreciate music, and successfully navigate their environments. Further, neuroimaging studies show that healthy adults strongly engage the brain’s language areas when they understand a sentence, but not when they perform other nonlinguistic tasks like arithmetic, storing information in working memory, inhibiting prepotent responses, or listening to music. Taken together, these two complementary lines of evidence provide a clear answer to the classic question: many aspects of thought engage distinct brain regions from, and do not depend on, language."

[0] https://pmc.ncbi.nlm.nih.gov/articles/PMC4874898/


Yeah, you can prove pretty much anything with a pubmed link. Do dead salmon "think?" fMRI says maybe!

https://pmc.ncbi.nlm.nih.gov/articles/PMC2799957/

The resources that the brain is using to think -- whatever resources those are -- are language-based. Otherwise there would be no way to communicate with the test subjects. "Language" doesn't just imply written and spoken text, as these researchers seem to assume.


There’s linguistic evidence that, while language influences thought, it does not determine thought - see the failure of the strong Sapir-Whorf hypothesis. This is one of the most widely studied and robust linguistic results - we actually know for a fact that language does not determine or define thought.


How's the replication rate in that field? Last I heard it was below 50%.

How can you think without tokens of some sort? That's half of the question that has to be answered by the linguists. The other half is that if language isn't necessary for reasoning, what is?

We now know that a conceptually-simple machine absolutely can reason with nothing but language as inputs for pretraining and subsequent reinforcement. We didn't know that before. The linguists (and the fMRI soothsayers) predicted none of this.


Read about linguistic history and make up your own mind, I guess. Or don’t, I don’t care. You’re dismissing a series of highly robust scientific results because they fail to validate your beliefs, which is highly irrational. I'm no longer interested in engaging with you.


I've read plenty of linguistics work on a lay basis. It explains little and predicts even less, so it hasn't exactly encouraged me to delve further into the field. That said, linguistics really has nothing to do with arguments with the Moon-landing deniers in this thread, who are the people you should really be targeting with your advocacy of rationality.

In other words, when I (seem to) dismiss an entire field of study, it's because it doesn't work, not because it does work and I just don't like the results.


> ELIZA, ROFL. How'd ELIZA do at the IMO last year?

What's funny is the failure to grasp any contextual framing of ELIZA. When it came out, people were impressed by its reasoning, its responses. And by your line of defense, it could think because it had mastery of words!

But fast-forward the current timeline 30 years. You will have been in the same camp as those who argued on behalf of ELIZA, with the rest of the world asking, confused: how did people think ChatGPT could think?


No one was impressed with ELIZA's "reasoning" except for a few non-specialist test subjects recruited from the general population. Admittedly it was disturbing to see how strongly some of those people latched onto it.

Meanwhile, you didn't answer my question. How'd ELIZA do on the IMO? If you know a way to achieve gold-medal performance at top-level math and programming competitions without thinking, I for one am all ears.


Does a prolog program think?


I don't know, you tell me. How'd your Prolog program do on the IMO problem set?


> I will find this often-repeated argument compelling only when someone can prove to me that the human mind works in a way that isn't 'combining stuff it learned in the past'.

This is the definition of the word ‘novel’.


Science is distributed. Lots of researchers at lots of different institutions research overlapping topics. That's part of its strength. In the U.S. most basic research is funded by federal grants, and as a result you'll find that research in pretty much any science area you can imagine is funded by federal grants going to multiple different institutions.

In this case you're confusing things by bringing in NOAA, which is a government agency (part of the Dept of Commerce). NCAR is a non-profit organization and competes for federal grant dollars with researchers at many other institutions (mostly universities). So in that sense there is a strong parallel here to Trump wanting to shut down Harvard (another non-profit organization at which many different researchers work) and someone saying "doesn't Stanford do research on similar topics?" Yes, there is some conceptual overlap, but in detail there is not.

The bigger difference is that Harvard has a big endowment and so can survive (at some level) if the federal grants it has been getting stop flowing. NCAR can't. Also, NCAR happens to have the experts and equipment (supercomputers) to do research that few other organizations can (none really in the U.S.). Harvard probably can't lay claim to that except in very narrow niches....

For perspective, the annual budget for NCAR is about half the amount being spent on the new White House ballroom.


Or you're free to use the output for commercial use if you can get someone else to use the tool to make the (uncopyrighted) output you want.


Isn't that what Groq did, basically?

Though I'm sure they will shut their shop asap now that Nvidia basically bought them.


Nvidia didn’t buy Groq.


They did (unless you're one of the drafters of the Hart-Scott-Rodino Act, in which case, weirdly, they didn't)


Given that it's under scrutiny for regulatory bypass, it's not a purchase and is being reviewed as circumventing those very rules. Might not even happen.

I know, I'm joking: Trump likes Nvidia, but maybe he'll bump the Chinese tax to 30% to approve this deal? In a way I hope he pulls something like that, to punish Huang for his boot shining manipulations.

#iwantRAM


"basically"


The multiple meanings of many of the words in this sentence make it really poor at communicating what the site is about. "Endeavour" (with a capital 'E') is a proper name I associate with a space shuttle, and 'stellar' can mean 'having to do with stars'. So a first read for me leads to the conclusion that this site has something to do with space flight. And 'system' could mean almost anything. Maybe this site will let me personalize my own star system? All I can take away is that I'm not sure what this is, but clearly I'm not the target audience. Which I'm fine with.....


Or it doesn't. Because "software as an organic thing", like all analogies, is an analogy, not truth. Systems can sit there and run happily for a decade performing the needed function in exactly the way that is needed with no 'rot'. And then maybe the environment changes and you decide to replace it with something new because you decide the time is right. Doesn't always happen. Maybe not even the majority of the time. But in my experience running high-uptime systems over multiple decades it happens. Not having somebody outside forcing you to change because it suits their philosophy or profit strategy is preferable.


My guess is that most stuff is part of a bigger whole, and so it rots (unless it is adapted to that ever-changing whole)

Of course, you can have stuff running in a constrained environment.


Or more likely the 'whole' accesses the stable bit through some interface. The stable bit can happily keep doing its job via the interface, and the whole can change however it likes, knowing that for that particular task (which hasn't changed) it can just call the interface.
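A minimal sketch of that idea; the names here are hypothetical, just to illustrate a stable component sitting behind a small interface:

    # The 'whole' depends only on a small interface; the stable bit behind it never changes.
    from typing import Protocol

    class ReportSource(Protocol):              # the interface the rest of the system calls
        def monthly_totals(self) -> dict[str, float]: ...

    class LegacyTotals:                        # the decade-old stable component
        def monthly_totals(self) -> dict[str, float]:
            return {"2024-01": 1234.5}         # same job, same behaviour, no 'rot'

    def build_dashboard(src: ReportSource) -> str:
        # Everything around this can be rewritten freely; only the interface call is fixed.
        return ", ".join(f"{k}: {v}" for k, v in src.monthly_totals().items())

    print(build_dashboard(LegacyTotals()))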


LLMs aren't retrained and released on a weekly timescale. The data mining may only be reflected in the training of the next generation of the model.


If a hard drive sometimes fails, why would a raid with multiple hard drives be any more reliable?

"Do task x" and "Is this answer to task x correct?" are two very different prompts and aren't guaranteed to have the same failure modes. They might, but they might not.


RAID only works when failures are independent. E.g., if you bought two drives from the same faulty batch which both die after 1000 power-on hours, RAID would not help. With LLMs it's not obvious that errors are not correlated.
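A toy simulation of why that independence assumption matters, with made-up error rates (not a measurement of any real model): a 2-of-3 majority vote helps a lot when checkers fail independently, and barely helps once most failures share a cause.

    # Toy Monte Carlo: 2-of-3 voting with independent vs correlated error sources.
    # All probabilities are invented for illustration.
    import random

    p_err = 0.10      # each checker is wrong 10% of the time
    rho = 0.8         # correlated case: 80% of that error rate comes from a shared cause
    trials = 100_000

    def majority_fails(correlated: bool) -> bool:
        shared = random.random() < p_err * rho            # failure mode common to all checkers
        errs = []
        for _ in range(3):
            if correlated:
                errs.append(shared or random.random() < p_err * (1 - rho))
            else:
                errs.append(random.random() < p_err)
        return sum(errs) >= 2                             # the majority is wrong

    for mode in (False, True):
        rate = sum(majority_fails(mode) for _ in range(trials)) / trials
        print("correlated" if mode else "independent", round(rate, 3))
    # independent ~0.028, correlated ~0.081: redundancy buys little when errors share a cause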


> If a hard drive sometimes fails, why would a raid with multiple hard drives be any more reliable?

This is not quite the same situation. It's also the core conceit of self-healing file systems like ZFS. In the case of ZFS it not only stores redundant data but redundant error correction. It allows failures to not only be detected but corrected based on the ground truth (the original data).

In the case of an LLM backstopping an LLM, they both have similar probabilities for errors and no inherent ground truth. They don't necessarily memorize facts in their training data. Even with a RAG the embeddings still aren't memorized.

It gives you a constant probability for uncorrectable bullshit. One of the biggest problems with LLMs is the opportunity for subtle bullshit. People can also introduce subtle errors recalling things but they can be held accountable when that happens. An LLM might be correct nine out of ten times with the same context or only incorrect given a particular context. Even two releases of the same model might not introduce the error the same way. People can even prompt a model to error in a particular way.
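To illustrate the ground-truth point, here is a toy parity example (just the XOR idea, not ZFS's actual checksum or layout): stored redundancy plus known-good data lets you rebuild a lost block exactly, which has no analogue when one LLM merely agrees with another.

    # Toy RAID-style parity: redundancy plus ground truth lets you rebuild a lost block.
    data = [0b10110010, 0b01101100, 0b11100001]   # three data "blocks"
    parity = data[0] ^ data[1] ^ data[2]          # stored alongside the data

    lost = 1                                      # pretend block 1 is corrupted
    survivors = [b for i, b in enumerate(data) if i != lost]
    rebuilt = survivors[0] ^ survivors[1] ^ parity

    assert rebuilt == data[lost]                  # exact recovery from redundancy
    # Two LLMs checking each other have no such stored ground truth:
    # agreement only shows they agree, not that either answer is correct.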

