Hacker News

First it was tools to help humans code (Copilot) -- and no one complained because we still needed human coders.

Now it's a near fully-functional teammate that needs a bit of supervision -- still no one complained because it needs human team members to instruct it.

Next it will be a fully functional teammate, probably as good as a junior coder -- but no one will complain because senior coders will get paid more and companies will have to hire fewer people.

Then, there will be the expert AI coder, but still no one will complain because we will need system architects to design the system that the AI codes.

All along, no one will complain because those with jobs will still have them and those without will be too busy thinking about how to provide for their families. Get another job? Oh yeah, AI took that one too.

All along, we are incrementally improving AI because it is an intellectual amusement -- we simply never take into account the social consequences. This path of destruction is so blindingly obvious, it's incredible that everyone is still working on AI instead of trying to destroy it. But wait, I get it, those who jump on earliest still have a chance at a bit of profit....

...so it's all worth it in the end, right?



There were plenty of people 'complaining' about Copilot and there are plenty 'complaining' about Devin.

However, what can you really complain about? Technological progress? We can't just decide to ignore AI. Just to play devil's advocate, if there really were clear signs of this impending destruction, there could be some sort of international agreement to halt progress. Realistically, this will never happen.

> All along, we are incrementally improving AI because it is an intellectual amusement

You're contradicting yourself here. Is it just 'intellectual amusement' if this technology is as disruptive as you claim?


> You're contradicting yourself here. Is it just 'intellectual amusement' if this technology is as disruptive as you claim?

Let me be more clear: individual programmers are improving AI for its intellectual amusement, but organizations use it for its disruptive powers. Two different groups of people, with a bit of an intersection.

Moreover:

> Just to play devil's advocate, if there really were clear signs of this impending destruction, there could be some sort of international agreement to halt progress. Realistically, this will never happen.

Of course not, are you joking? There are clear signs of climate destruction as well with CO2 levels rising. Did international agreements work there? Nope, no flattening in the CO2 curve yet.

We are a fundamentally destructive species, one that cannot see long-term problems when there is short-term gain. The only mechanism we have on a global scale for deciding what to do is capitalistic motivation.


> Let me be more clear: individual programmers are improving AI for its intellectual amusement, but organizations use it for its disruptive powers. Two different groups of people, with a bit of an intersection.

That's clearer, but I still take issue with it. You could say the same about any software project, or maybe even most work in general. As software engineers (I assume that's your profession too), we automate things that could have kept hundreds of people employed. AI isn't that different: as long as there is money spent on the problem, there will be people willing to work on it, especially at the forefront of technology.

---

I agree with your second point. That said, climate change is a much clearer case of destructive behavior, while also being more of a nuisance, a side effect of growth. AI has enormous potential and could, in the most optimistic outcomes, lead us to a utopia. Obviously, we all know that will not happen.

Also, the reason AI progress will not be halted: we cannot allow our adversaries to take the lead on this. It's really that simple.


> we automate things that could have kept hundreds of people employed

Well I do agree with that. But I think that should also be re-examined and we should automate less...


We can't unless the whole world agrees to. It's simply impractical to limit your own progress when you have foreign adversaries not doing the same.


Note that the adversaries don't have to be foreign. The same dynamics happen within domestic markets unless enforcement is ramped up to constrain the worst actors.


> We are a fundamentally destructive species, one that cannot see long-term problems when there is short-term gain. The only mechanism we have on a global scale for deciding what to do is capitalistic motivation.

So if AI destroys us as a species and...

> There are clear signs of climate destruction as well with CO2 levels rising. Did international agreements work there? Nope, no flattening in the CO2 curve yet.

Seems like we're in good shape?

If the end game is destroying ourselves, looks like our long term "problems" are solved.


I hate this framing of humanity as a “fundamentally destructive species” it’s a meaningless sound that people make with zero serious thought given to it. Exactly which species are not destructive? All animal life must consume other life in order to survive and exactly zero of them have any inhibition that would prevent them from maximizing their consumption and reproduction at the expense of all other life if they could. Humans are the only species that cares at all what happens to other forms of life and makes efforts at our own expense to limit or temper our impact. The only reason the world around you seems even remotely safe and comfortable is due to thousands of years of sustained human effort to make it so.


Humans are the one species that has created a highly rigid capitalistic system in which the only reward for the differential survival of ideas is short-term profit. So other species do have the consumptive tendency, but we are the only ones that found a ruthlessly efficient system for actualizing our consumption without bound.

That is why I used the word destructive instead of "having the tendency to destroy". All animals have that, but only we have actualized it. Hence we are destructive to a level that is unseen in other species.


The evolutionary record is piled high with the bones of extinct species, extinctions caused by changes in the environment or by other species. We are not unique, we are just one of many millions of species to find a way to rapidly outcompete others but the difference is that we often choose not to. So far we can’t even hold a candle to the humble cyanobacteria in terms of wanton destruction of their environment and all life on the planet when they evolved the ability to photosynthesize. Similar though less dramatic events have likely occurred with each major evolutionary adaptation that allows a species to exploit something not available to others. For us it’s intelligence but for others it was eyes, fins, teeth, legs, claws, etc. all leaving a path of destruction and allowing the possessors of such traits to multiply and differentiate until their unique attributes are now the common necessities for survival.


That is true, and it is a good point. But the rate at which we are causing extinctions is much faster, and we do it consciously, causing harm to millions of species including ourselves. But you do have a point, I'll acknowledge that. We are not much different from a plague or a massive infection. Nevertheless, the fact that we do have intelligence means we have a moral obligation not to destroy other life and NOT to destroy at the rate we are.


I mean, for CO2, the U.S. and the EU (who were once the largest emitters) have not only flattened the curve, but have in fact reduced emissions over the past 20 years:

https://ourworldindata.org/grapher/annual-co-emissions-by-re...

China has blown up emissions astronomically, though. To a lesser extent other Asian countries have as well.

I generally agree that international regulations controlling AI are unlikely to work, though, since it seems like it might be such a powerful and disruptive technology: if it doesn't stall, it could effectively be single-shot Prisoner's Dilemma, and when you have 193 players, someone's going to defect.

Personally though I think there are two possible outcomes:

1. Progress stalls, and it turns out getting from GPT-4-Turbo to better-than-human intelligence just doesn't pan out. LLMs are stuck as junior engineers for decades. If so, this is largely good for software engineers (and somewhat good for everyone, since it means we're more productive), but society doesn't change too much.

2. Progress doesn't stall, and we hit at least slightly-superhuman intelligence within the next decade. While this would obviously be a tough shift for most knowledge workers, especially depending on how quickly the shift happens, I also think this would likely bring about incredible medical advances, as well as incredible advances in robotics that reduce the cost of physical labor as well: meaning the price of goods drops enormously, and thanks to the medical advances we significantly increase either our lifespans, or at least the quality of our lives in old age, which seems quite positive. We'd need to figure out some sort of UBI system once the labor costs drop enough, but I think most people will be in favor of that, and also most stuff will just be really cheap at that point: ultimately just the cost of electricity (even "raw materials" are priced based on the cost of labor to get the materials, and the labor would be... the cost of electricity to run the robotics).

There are probably some in between scenarios, but TBH it's hard to see anything other than "stall" vs "takeoff" as being likely: either you never get past human intelligence (stall), or you do break through the wall, and then intelligence self-improves faster than before, up to some sort of information theory limit that I think is a lot higher than the average human is operating close to (consider just the variation in intelligence between individual human beings!).

Takeoff could also result in some sort of doomsday scenario, but the current LLMs haven't had the problems that the early doomers predicted, so I think the humanity-enslaving or humanity-destroying outcomes are probably just not gonna happen.


> This path of destruction is so blindingly obvious, it's incredible that everyone is still working on AI instead of trying to destroy it.

Obvious, but also inevitable. Most of the discussion in this topic is short-sighted. We should look from the broader perspective of human history. We're about to close the first quarter of the 21st century with some initial advancements in AI. The second quarter will be all about the transformation of industries, with the very possible introduction of AGI/ASI. This means the second half of the century must be a totally different world, with a totally different set of social constructs and different definitions of job, work, economy, etc. I don't think we (humans, governments) are capable of changing the course of history at the moment. We can only be cautious about the consequences.


This kind of take is just, let’s put it mildly, shallow.

We never complained about CAD systems replacing people who were drafting blueprints manually, not to mention all kinds of automation tools that replaced factory workers.

Why should AI coders be any different? If you can automate it, humans shouldn't really be wasting their time on it.

There is nothing intellectual about pulling someone's code from GitHub to piece together some software by copy-pasting snippets.


First it was the cotton mills, and no one complained

Then it was the steam trains, and no one complained

Next it was the systems of electricity production, and no one complained


We'll all be on the beach at that point collecting our universal basic income right?


It feels like this might be sarcastic? But if not

Based on how the US currently rolls out universal programs, UBI probably looks a lot more like the current Section 8 housing program. They’re gonna guarantee something minimally livable, not beachfront property.


UBI won’t happen but the price of most goods and services will fall to a tiny fraction of what they are now. There will still be people who are considered impoverished but they’ll likely have a higher standard of living than you do.


And only if you are a US citizen/resident. Most likely there will be only a few winner countries in this game for now: the US, China, France.



I don't know, that could happen. But that's not really a great life... hamsters and pet birds also "sit on the beach" and collect "universal basic income" with some limited form of entertainment (hamster wheels and bright toys), but I for one do NOT want to be in a cage. I want genuine autonomy over my life, not being a cog in a machine, maintaining some AI whose need for humanity dwindles over the years.


> I want genuine autonomy over my life, not being a cog in a machine

I have some news for you about life for the vast majority of people right now... Including yourself, most likely


What a terrible misreading of parent's comments, that's completely not in the spirit of HN and its guidelines[1].

You may feel like a cog in society, but at present that is often not actually the case, compared to the situation I talked about in a sibling comment[2].

If you disagree, try this: find a new job, put in your resignation, and say that you're leaving effective immediately (or with very short notice). Most likely you'll have your manager and HR scrambling to arrange a proper handover, sometimes followed by private comments berating you for your "lack of professionalism" and other comments like "why are you doing this to us" and "can we keep you a little longer, please" (in violation of HR policy on the maximum notice period).

In an age where AI has replaced most humans, people would lose all leverage, and would be forced into zero-sum games (killing and maiming other people to steal their wealth) as they would not have any other mechanism to prove their worth and improve their standing in society.

[1] https://news.ycombinator.com/newsguidelines.html

[2] https://news.ycombinator.com/item?id=39743566


> In an age where AI has replaced most humans, people would lose all leverage, and would be forced into zero-sum games (killing and maiming other people to steal their wealth) as they would not have any other mechanism to prove their worth and improve their standing in society.

I don't know about you but maybe the fact so many people determine their entire worth from their job is more problematic than inventing technology that will enable people to pursue more worthwhile objectives of their own choosing.


I strongly suspect that without a forcing function that gives people challenges through which to prove their self-worth, society is likely to collapse in the style of the "behavioral sink" experiment[1] (though I agree the way it'd play out in humans would be a bit different).

Traditionally, these challenges came from a hunter-gatherer or agricultural lifestyle; in contemporary times, they come from a job.

There are other things that can make people feel rewarded, such as excess consumption of food or social media activity, but since they carry no mental or physical challenge, they have only negative worth and cannot be the basis for someone's self-worth and value.

[1] https://en.wikipedia.org/wiki/Behavioral_sink


> Traditionally, these challenges came from a hunter-gatherer or agricultural lifestyle; in contemporary times, they come from a job.

Maybe it's time we found other "forcing functions"?


I'm not aware of mechanisms that could replace these; could you propose some that would work in a world where most or all jobs have been replaced with AI or other automated systems?


Practically everything that people do that isn't a "job" - sports, cooking, crafts, philosophy, etc. In short, "creating stuff"

Furthermore, it seems silly to me to think that all human intellectual endeavors will simply disappear. We will still need to drive science, build bigger and better AI systems, improve and repair the world, etc.


> What a terrible misreading of parent's comments, that's completely not in the spirit of HN and its guidelines[1].

And not that I agree my comment goes against the guidelines, but let me be generous and ask this of GP: What autonomy over your life now would you lose if AI were to become as prevalent as you're implying?


> Most likely, you'll have your manager and HR scrambling to get a proper handover, and sometimes followed with private comments berating you for your "lack of professionalism" and other comments like "why are you doing this to us" and "can we keep you for a little longer, please" (in violation of HR policy for the maximum notice period).

And your takeaway from this would be that that worker is an invaluable member of the company? They don't care about workers, they care about their bottom line. If they could easily replace workers, they wouldn't say anything and would just make sure you clear out your desk before you leave.


I'd argue being able to affect the bottom line is not nothing (especially if you compare the wealth difference between you and the company), and the source of what allows you to negotiate salary hikes, for example.


I don't disagree, but also don't believe this is mutually exclusive to being a cog in a company


While I disagree with much of what is expressed here, I don't think the downvotes are necessarily deserved


I like this analogy, especially because lots of homeless people sit on the beach without things to do, and we don’t really envy that they have the time to write the next great American novel.


Being homeless in a place with a nice climate isn't bad, you can camp every night under starry skies, read books, make art, hang out with your friends and hook up with hippie chicks. It only gets shitty when you get sick or the weather gets bad.


> But that's not really a great life

Most people on this planet would figuratively kill for that life. It's a crushing existence trying to survive on below $10000/year.


>I want genuine autonomy over my life, not being a cog in a machine

So that's not the case for you now?


It's good to see other people talk about what I've been privately warning about amongst my circles for almost 5 years now, though no one has really taken it seriously and I've often been dismissed as a conspiracy nut for saying this.

The reason citizens like you and me are kept around is that we have typically provided some form of economic output, in the form of labor and taxes, to society, and indirectly to the wealthy and powerful class, which is billionaires and politicians.

If most citizens don't have anything to contribute in an economy, they can be eliminated entirely from society so that this class can win the zero-sum games in society, and have natural resources to themselves. Thus, in the best case, commoners will be kept around in government provided housing (with all the things that it implies), with strict limits on reproduction and a very small baseline of resources so as to give the appearance that the government wasn't killing its people, though death would still be incentivized in the form of lower standards of medical care, so that society's dead weight can be eliminated.

In the worst case, politicians and billionaires would simply join forces to eliminate aforementioned dead weight through mass genocide. There is no downside to this, except that the first country to do this may have to face criticism, but I suspect eventually all countries would move in that direction once a few have taken their first step and established precedent.


You're not the only person who has had this idea. I have too. This is the first time I've seen someone else say it quite like this.

I have to say, it's more confronting to hear it come from someone else.

The interesting part will be the violent conflict inherent in this turn of events. Will it be so easy to mow down billions of people? Will a way be found to do the job without the natural resistance of empathetic humans (A virus? A very large drone swarm? Pin-sharp training of a traditional police force?). How fast is too risky to execute? How _slow_ is too risky to execute?


Don't fret - there will still be plenty of jobs for non-execs and seniors: meat packer, gravedigger, wastewater treatment plant worker.

Remember, the social consequences of technological innovation are irrelevant in the Church of Modernity (no complainin' - what are you, some kinda heckin' Luddite!?). You will lose your way of life and you will be happy.


If we develop the technology to replace certain jobs, then we should do it -- even if they're our jobs. Who even wants a career that only exists due to special interest lobbying to protect it from disruption?


Even better, the AI was trained on code and answers that you contributed to Stack Exchange, Quora, etc., along with textbooks for which the authors did not grant permission and received no compensation.


If you really want to make more work, blow up budgets, and create positions for masses of developers and assistants, then you should demand we go back to punch cards and doing all the planning and algorithm design on paper before translating it into instructions and handing it off to computer operators.


> we are incrementally improving AI because it is an intellectual amusement

People aren't developing AI because it's a hobby; people are doing it because it's an advancement in technology, in the same way we developed electronics, computers, etc.


But why do programmers themselves develop AI? I'm talking about individual motivations here. For example, I can say I just got into technical work because it's intellectually interesting, not because I wanted to further technological development.

And those individual motivations add up to a collective force, shaped by actions that persist with differential survivability under capitalistic forces, which still have nothing to do with improving society. And the only reason we have those is that at one time, free markets and trading were a good proxy for societal improvement on a small scale. On a large scale, they have become completely pathological.


Next was the human who inherited the generated source and cursed every entity in existence.


That is already the case when inheriting from humans. I think the big advantage is that you won't have to curse, as you won't even read it; you'll throw it into the black hole and ask for the improvements you require. Until that, too, is no longer needed.


Well, first there was a human programmer who made a lot of non-programmer jobs obsolete -- plenty of people complained, but they just had to re-school themselves.


> Now it's a near fully-functional teammate that needs a bit of supervision

It's not. It's a tech demo. Try replacing your teammates with it...


I say it in every AI thread, but the only solution here is humanity declaring war on AI and the sooner people wake up to this the better.

AI at its limit is practically guaranteed to destroy advanced human society by concentrating mass power: either in a single person or small group of individuals, in AI itself, or in everyone. Whichever happens, the ultimate outcome is probably the same. Whether you open-source or regulate, it really doesn't matter in the end. These are not solutions.

If a small group of individuals has access to AI, they in effect become gods, and that power will corrupt. If AI itself develops agency, then it becomes our god. And if we all have the power of gods, then it's just a matter of time before someone decides they want to use this power for destruction.

What we're building here is fundamentally destabilising, and that's not even factoring in all the near-term risks of AI, like mass job losses. And I think it's frankly disgusting how many people dismiss this. I was checking out the VFX subreddit after the Sora announcement, and many people there were literally suicidal at the prospect of their passion and economic value becoming obsolete in perhaps just a matter of years. It's heartbreaking.

And who even wants this future? We act like we have no choice in the matter, and maybe long-term that's true, but right now, at the very least, we have the chance to slow this rate of progress down and reduce near-term risk.

I know I sound like a lunatic in all these AI doomer threads, but I continue to feel the need to normalise the obviously radical position that we need a UN agreement prohibiting the construction of AI data centers above certain thresholds. Data centers built in excess of those thresholds can and should be destroyed. As a global community, we must also encourage anyone working on secret AI teams to become whistleblowers, so that we can reduce the risk of governments or corporations building advanced AI systems in secret.

And who knows, this might only buy us 3 or 4 decades, but at the very least we will give ourselves more time to improve our biological intelligence and equip ourselves with ways to deal with the risks AI presents.


Sorry, but this is a cold take. Open models will massively empower humanity once we can bring the power/compute for a given performance level in line, and regulation can absolutely curtail the concentration of power if we can just summon the political will to create real regulations.

The problem is that nearly half the population in America wants to give their freedom away and create a dictator to stick it to the "elites" so moving away from a concentration of power is probably not in the cards.


If I'm understanding correctly, I think what you're saying is that some hypothetical AI regulation could ensure AI is used for the benefit of humanity, but in practice that's unlikely to happen because humans are going to be humans.

I don't disagree. That's why I'm concerned. I think what's certain is that if we want any chance at getting this right we must slow the exponential rate of progress.


Holding on to jobs is not the meaningful goal. What matters is who owns the new means of production, and how we make them socialized instead of concentrated in the hands of the bourgeoisie.


It’s why I am focused on how this kind of technology gets “offline” and into locally run models. If I can buy a top-of-the-line Mac in a few years’ time with something in the vicinity of half a terabyte of unified RAM, and run this kind of model on my own hardware, it would be a powerful force multiplier and a genuinely empowering tool… as opposed to having to rent it from someone in order to remain competitive with everyone else forced to rent it.
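For what it's worth, the "half a terabyte" figure is roughly the right order of magnitude: local model memory needs scale with parameter count times bytes per weight, plus overhead for the KV cache and activations. A back-of-envelope sketch (the function name, the 400B parameter count, and the ~20% overhead factor are all illustrative assumptions, not figures from any specific model):

```python
def model_memory_gb(params_billions: float, bytes_per_weight: float,
                    overhead: float = 1.2) -> float:
    """Rough unified-RAM estimate: weight storage plus ~20% (assumed)
    overhead for KV cache and activations."""
    return params_billions * 1e9 * bytes_per_weight * overhead / 1e9

# Hypothetical 400B-parameter model:
print(model_memory_gb(400, 1.0))  # 8-bit weights -> 480.0 GB
print(model_memory_gb(400, 0.5))  # 4-bit weights -> 240.0 GB
```

So under these assumptions, an 8-bit quantized model of that size would just about fit in half a terabyte of unified memory, while 4-bit quantization would leave comfortable headroom.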


Even then, you still lack the computing power and data to create these models in the first place. When this power is centralized, it can essentially control what goes into the model's output.


Yes because the greedy capitalists will obviously keep this valuable technology all to themselves, hoarding it away so they alone can revel in its delights. Give me a break, this isn’t 1924, we know how things actually play out. The capitalists want to sell you a product and that is fundamentally at odds with hoarding beneficial technology. They will deliver those benefits straight to your door faster, cheaper and better than any socialized pipedream ever could. If you somehow seize these means I have no doubt they’ll only be available to top party members due to their great expense and the considered opinion that the average worker has no need for such tools in their assigned role.


Yes that’s definitely happening here /s


The biggest triumph of neoliberalism for the capitalist elite class was to achieve a level of societal atomisation high enough that the hyperindividualism pushed by this ideology fuels the erosion of solidarity in the working class, to the point we are at today. It doesn't help that people are constantly kept on their toes about their jobs; companies have no responsibility to their workers anymore (all of the perks are simply a market force to incentivise workers to choose company X over Y), requiring workers to see each other as competitors and fight amongst themselves, while peddling the bullshit those elites want, such as "I don't need unionisation because I want to be free to negotiate my own contracts".

It created this myopic view of labour relations where some workers think they will do better on their own against the powers-that-be, completely blind that they only have a tiny modicum of power while the elites look for any opportunity to strip that power away, be it through automation, collusion in cartel-like behaviour in the job market to push wages down, etc.

It's pretty obvious on HN how far this has come: in any topic discussing unions, there will be quite a few of these deluded workers thinking they can do better on their own rather than show some solidarity to fellow workers, not too dissimilar to the American feeling of being just temporarily embarrassed millionaires.




