
On the contrary, it’s the companies doing the lawyering. A disengagement is when the vehicle reverts fully back to manual control. Tele-operation does not count as a disengagement, and the frequency of tele-operation intervention is a closely guarded industry secret.

Oh interesting, I had figured tele-operation would count as a disengagement.

Looking into the reports you mentioned in a child comment, CNBC reports Cruise needed human assistance every ~5 miles [1]. And I certainly wouldn't call a system that needs assistance every ~5-10 minutes Level 4 self-driving.

Subjectively, Waymo appeared significantly better than Cruise in 2023, but without data it's hard to know what that means in terms of human intervention.

If Waymo needed human assistance every 10-20 minutes, I would agree that it also doesn't qualify as Level 4 autonomous.

[1] https://www.cnbc.com/2023/11/06/cruise-confirms-robotaxis-re...


You seem to have some deeper insight into this - in your estimation, how often does tele-operation (even a small correction) take place?

There’s been reporting on this in several mainstream publications that was accurate as far as the systems I worked with. Unfortunately I don’t want to dox myself on here, so unsatisfyingly the best I can offer is “trust me bro”.

The tele-operation is also kinda vague because, as I understand it, with Waymo at least they are not turning a steering wheel and pushing pedals at HQ; they are saying "Pull over here", etc.

In the age of ubiquitous video conferencing, does it really make sense for business travel?

My work just did a big, moderately disruptive shuffling of all the teams to try to localize as many members of each squad as possible in one location, and the trend since COVID stopped being as deadly has been a massive wave of RTO. So management seems to believe there's a benefit to in-person meetings, or at least professes and acts like it does. I can't attribute all of the huge RTO pushes to management merely justifying and propping up their office real estate portfolios.

Sure, business travel is definitely still a thing. But every company I’ve ever worked at has sort of accepted that travel days are lost anyways because people come in from all over the country, get in at different times, have delayed flights, etc. My point being that I’m skeptical that companies are going to start paying 2x, 3x, 4x the cost so that their employees can get there a few hours faster, especially when, at least in my experience, it’s hard to get them to even pay for seats with extra legroom.

But it's only an inflection point if it's sustainable. When this comes crashing down, how many people are going to be buying $70k GPUs to run an open source model?

I said open-source models, not locally-hosted models. Essentially, more power to inference-only providers such as Groq and Together AI, which host the large-scale OSS LLMs and will be less affected by a crash as long as the demand for coding agents is there.

> When this comes crashing down, how many people are going to be buying $70k GPUs to run an open source model?

If the AI thing does indeed come crashing down I expect there will be a whole lot of second-hand GPUs going for pennies on the dollar.


Ok, and then? Taking a one-time discount on a rapidly depreciating asset doesn't magically make this whole industry profitable, and it's not like you're going to start running a GB200 in your basement.

Then I'll wait for a bunch of companies to spring up running those cheap GPUs in their data centers and selling me access to GLM-4.7 and friends.

Or I'll start one myself, if the market fails to provide!


Checked your history. From a fellow skeptic, I know how hard it is to reason with people around here. You and I need to learn to let it go. In the end, the people at the top have set this up so that either way, they win. And we're down here telling the people at our level to stop feeding the monster, only to be told to fuck off anyway.

So cool bro, you managed to ship a useless (except for your specific use-case) app to your iphone in an hour :O

What I think this is doing is pitting people against the fact that most jobs in the modern economy (mine included, btw) are devoid of purpose. This is something that, as a person on the far left, I've understood for a long time. However, a lot (and I mean a loooooot) of people have never even considered this. So when they find that an AI agent is able to do THEIR job in a fraction of the time, they MUST understand it as AI being some finality to human ingenuity and progress, given the self-importance they've attributed to themselves and their occupation. All this instead of realizing that, you know, all of our jobs are useless; we all do the exact same useless shit, which is extremely easy to replicate quickly (except for a select few occupations), and that's it.

I'm sorry to tell anyone who's reading this with a differing opinion, but if AI agents have proven revolutionary to your job, you produced nothing of actual value for the world before their advent, and still don't. I say this, again, as someone who beyond their PhD thesis (and even then) does not produce anything of value to the world, while being paid handsomely for it.


> if AI agents have proven revolutionary to your job, you produced nothing of actual value for the world before their advent, and still don't.

This doesn’t logically follow. AI agents produce loads of value. Cotton picking was and still is useful. The cotton gin didn’t replace useless work. It replaced useful work. Same with agents.


> You and I need to learn to let it go.

Definitely, it’s an unhealthy fixation.

> I'm sorry to tell anyone who's reading this with a differing opinion, but if AI agents have proven revolutionary to your job, you produced nothing of actual value for the world before their advent, and still don't.

I agree with this, but I think my take on it is a lot less nihilistic than yours. I think people vastly undersell how much effort they put into doing something, even if that something is vibecoding a slop app that probably exists. But if people are literally prompting claude with a few sentences and getting revolutionary results, then yes, their job was meaningless and they should find something to do that they’re better at.

But what frustrates me the most about this whole hype wave isn't just that the powers that be have bet the entire economy on a fake technology, it's that it's sucking all of the air out of the room. I think most people's jobs can actually provide value, and there's so much work to be done to make _real_ progress. But instead of actually improving the world, all the time, money, and energy is being thrown into such a wasteful technology that is actively making the world a worse place. I'm sure it's always been like this and I was just too naive to see it, but I much preferred it when the tech companies at least pretended they cared about the impact their products had on society rather than simply trying to extract the most value out of the same 5 ideas.


Yeah, I do tend to have a rather nihilistic view on things, so apologies.

I really think we're just cooked at this point. The number of people (some great friends whom I respect) who have told me in casual conversation that if their LLM were taken from them tomorrow, they wouldn't know how to do their work (or some flavour of that statement) has made me realize how deep the problem is.

We could go on and on about this, but let's both agree to try and look inward more and attempt to keep our own things in order, while most other people get hooked on the absolute slop machine that is AI. Eventually, the LLM providers will need to start ramping up the costs of their subscriptions, and maybe then it will click for people that the shitty code generated for their pointless/useless app is not worth the actual cost of inference (which some conservative estimates put at thousands of dollars per month on a subscription basis). For now, people are just putting their heads in the sand and assuming that physicists will somehow find a way to use quantum computers to speed up inference by a factor of 10^20 in the next years, while simultaneously slashing its costs (lol).

But hey, Opus 4.5 can cook up a functional app that goes into your emails and retrieves all outstanding orders - revolutionary. Definitely worth the many kWh and thousands of liters of water required, eh?

Cheers.


A couple of important points you should consider:

1. The AI water issue is fake: https://andymasley.substack.com/p/the-ai-water-issue-is-fake (This one goes into OCD-levels of detail with receipts to debunk that entire issue in all aspects.)

2. LLMs are far, far more efficient than humans in terms of resource consumption for a given task: https://www.nature.com/articles/s41598-024-76682-6 and https://cacm.acm.org/blogcacm/the-energy-footprint-of-humans...

The studies focus on a single representative task, but in a thread about coding entire apps in hours as opposed to weeks, you can imagine the multiples involved in terms of resource conservation.

The upshot is, generating and deploying a working app that automates a bespoke, boring email workflow will be way, way, wayyyyy more efficient than a human manually doing that workflow every time.

Hope this makes you feel better!


> 2. LLMs are far, far more efficient than humans in terms of resource consumption for a given task: https://www.nature.com/articles/s41598-024-76682-6 and https://cacm.acm.org/blogcacm/the-energy-footprint-of-humans...

I want to push back on this argument, as it seems suspect given that none of these tools are creating profit, and so they require funds and resources that essentially come from the combined efforts of much of the economy. I.e. the energy externalities here are monstrous and never factored into these comparisons, even though these models could never have gotten off the ground without the massive energy expenditures that were (and continue to be) needed to sustain their funding.

To simplify, LLMs haven't clearly created the value they have promised, but have eaten up massive amounts of capital / value produced by everyone else. But producing that capital had energy costs too. Whether or not all this AI stuff ends up being more energy efficient than people needs to be measured on whether AI actually delivers on its promises and recoups the investments.

EDIT: I.e. it is wildly unclear at this point that, if we all pivot to AI, we will produce value economy-wide at a lower energy cost, and, even if we grant that this will eventually happen, it is not clear how long that will take. And sure, humans have these costs too, but humans have a sort of guaranteed potential future value, whereas the value of AI is speculative. So comparing energy costs of the two at this frozen moment in time just doesn't quite feel right to me.


These tools may not be turning a profit yet, but as many point out, this is simply due to deeply subsidized free usage to capture market share and discover new use cases.

However, their economic potential is undeniable. Just taking the examples in TFA and this sub-thread, the author was able to create economic value by automating rote aspects of his wife's business and stop paying for existing subscriptions to other apps. TFA doesn't mention what he paid for these tokens, but over the lifetime of his apps I'd bet he captures way more value than the tokens would have cost him.

As for the energy externalities, the ACM article puts some numbers on them. While acknowledging that this is an apples/oranges comparison, it points out that the training cost for GPT-3 (article is from mid-2024) is about 5x the cost of raising a human to adulthood.

Even if you 10x that for GPT-5, that is still only the cost of raising 50 humans to adulthood in exchange for a model that encapsulates a huge chunk of the world's knowledge, which can then be scaled out to an infinite number of tasks, each consuming a tiny fraction of the resources of a human equivalent.

As such, even accounting for training costs, these models are far more efficient than humans for the tasks they do.
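
To make the amortization argument concrete, here's a back-of-the-envelope sketch in Python. The 5x and 10x multipliers are the ones asserted above; the per-task costs and the ~1% inference ratio are purely illustrative assumptions, not figures from the articles:

    # Units: "cost of raising one human to adulthood" (per the ACM comparison).
    gpt3_training = 5.0                  # ~5x a human, per the article
    gpt5_training = 10 * gpt3_training   # the 10x guess above -> 50 humans

    # Illustrative assumptions (NOT from the articles):
    human_per_task = 1e-4                # a task costs a human 1/10,000 of that unit
    llm_per_task = human_per_task / 100  # inference at ~1% of the human cost

    # Task count at which training + inference beats the all-human baseline:
    breakeven = gpt5_training / (human_per_task - llm_per_task)
    print(f"break-even after ~{breakeven:,.0f} tasks")   # ~505,051 tasks

Past that point, every additional task widens the gap, which is the "scaled out to an infinite number of tasks" claim in numbers.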


I appreciate your responses to my comments, including the addition of reading material. However, I'm going to have to push back on both points.

Firstly, saying that because AI water use is on par with other industries we shouldn't scrutinize it is a bit short-sighted. If the future Altman et al want comes to be, the sheer scale of deployment of AI-focused data centers will lead to nominal water use orders of magnitude larger than other industries. Of course, on a relative scale they can be seen as 'efficient', but even something efficient, when built out to massive scale, can suck up all of our resources. It's not AI's fault that water is a limited resource on Earth, and AI is not the first industry to use a ton of water. However, with all other industries + AI combined (again, imagining the future the AI Kings want), we are definitely going 300km/h on the road to worldwide water scarcity. We are currently at a time where we need to seriously rethink our relationship with water as a society, not at a time where we can spawn whole new, extremely consumptive industries (even if, in relative terms, they're on par with what we've been doing, which isn't saying much given the state of the climate) whose upsides are still debatable and not at all proven beyond a doubt.

As for the second link, there's a pretty easy rebuttal, which aligns with the other reply to your link. Sure, LLMs are more energy-efficient at generating text than human beings, but do LLMs actually create new ideas? Write new things? Any text written by an LLM is based on someone else's work. There is a cost to creativity, to giving birth to actual ideas, that LLMs will never be able to incur, which makes them seem more efficient. But in the end they're more efficient at (once again) tasks for which we humans have provided plenty of examples (like writing corporate emails! Or fairly cookie-cutter code!), and at some point the value creation is limited.

I know you disagree with me, it's ok - you are in the majority and you can feel good about that.

I honestly hope the future you foresee where LLMs solve our problems and become important building blocks to our society comes to fruition (rather than the financialized speculation tools they currently are, let's be real). If that happens, I'll be glad I was wrong.

I just don't see it happening.


These are important conversations to have because there is so much hyperbole in both directions that a lot of people end up having strong but misguided opinions. I think it's very helpful to consider the impact of LLMs in context (heheh) of the bigger picture rather than in isolation, because suddenly a lot of things fall into perspective.

For instance, all water use by data centers is a fraction of the water used by golf courses! If it really does come down to the wire for conserving water, I think humanity has the option of foregoing a leisure activity for the relatively wealthy in exchange for accelerated productivity for the rest of the world.

And totally, LLMs might not be able to come up with new ideas, but they can super-charge the humans who do have ideas and want to develop them! An idea that would have taken months to be explored and developed can now be done in days. And given that the majority of ideas fail, we would be failing that much faster too!

In either case, just eyeballing the numbers we have currently, on average the resources a human without AI assistance would have consumed to conclude an endeavor far outweigh the resources consumed by both that human and an assisting LLM.

I would agree that there will likely be significant problems caused by widespread adoption of AI, but at this point I think they would be social (e.g. significant job displacement, even more wealth inequality) rather than environmental.


> For now, people are just putting their heads in the sand and assuming that physicists will somehow find a way to use quantum computers to speed up inference by a factor of 10^20 in the next years, while simultaneously slashing its costs (lol).

GPT-3 Da Vinci cost $20/million tokens for both input and output.

GPT-5.2 is $1.75/million for input and $14/million for output

I'd call that pretty strong evidence that they've been able to dramatically increase quality while slashing costs, over just the past ~4 years.
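
Back-of-the-envelope on those list prices (using the numbers quoted above; the 80/20 input/output token mix is my illustrative assumption, not something from this thread):

    # Blended $-per-million-token cost, old vs. new, at the quoted prices.
    gpt3_in = gpt3_out = 20.00          # GPT-3 Da Vinci, $/M tokens
    gpt52_in, gpt52_out = 1.75, 14.00   # GPT-5.2, as quoted above

    mix_in = 0.8  # assume an input-heavy 80/20 token mix (illustrative)
    old = mix_in * gpt3_in + (1 - mix_in) * gpt3_out
    new = mix_in * gpt52_in + (1 - mix_in) * gpt52_out
    print(f"${old:.2f}/M -> ${new:.2f}/M, ~{old / new:.1f}x cheaper")
    # $20.00/M -> $4.20/M, ~4.8x cheaper

And that's on raw list price alone, before accounting for the quality difference between the two models.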


Isn't that kind of related to the amount of money thrown at the field? If the economy gets worse for any reason, do you think we can still expect this level of cost-cutting in the future?

> But hey, Opus 4.5 can cook up a functional app that goes into your emails and retrieves all outstanding orders - revolutionary. Definitely worth the many kWh and thousands of liters of water required, eh?

The thing is, in a vacuum this stuff is actually kinda cool. But hundreds of billions in debt-financed capex that will never see a return, and this is the best we've got? Absolutely cooked indeed.


Big tech laid off 150,000 people last year despite constantly beating Wall St expectations and blowing more money than the Apollo program on a money-losing technology with the stated goal of firing even more people. Totally insane that most people I talk to still don't think they need a union.

Every time: Can we see the prompt? No. Ok, can we see the artifact? Also no.

In the Xitter thread she literally says it’s a toy version and needs to be iterated on.

How much longer are we going to collectively pretend this is so revolutionary and worth trillions of dollars of investment?


Sometimes I feel like I'm living in a parallel universe, because all I've seen for the past 3+ years until very recently has been breathless think-pieces about how this specific version of AI is the future and everyone has to get on board or be left behind. Every time one of these CEOs with massive vested interests opens their mouth, the press goes to bat for them and publishes dozens of new scary headlines showing number go up.

I think that's true for the tech & financial press, for obvious reasons. Outside of that bubble journalists and writers have been very anxious, given AI seems to directly threaten their profession even further.

The tech press does not universally view AI as useful. Indeed, there is an entire subindustry of anti-tech press that stands ready to reflexively denounce AI at every opportunity. But even seemingly neutral or pro-tech outlets such as Ars Technica are ambivalent. Ars has one editor apparently dedicated to saying negative things about Gemini, for example.

The next sentence is “The Bay Area has converged on Asian-American modes of socializing”

As if there is a single Asian-American culture and no Asian-Americans like going out to bars…

The whole piece is littered with weird over generalizations over huge and diverse groups of people.


It happens with other authors too. Blogging and self-promotion on social media easily lead to fast and sensational "insights". It is fast food: feels good, easily digestible, way better than a thorough study involving the required ifs-and-buts and the hard work of trying to find blind spots in one's own perspective.

The latter profile isn't suitable for short content bites and doesn't sell to a large audience. The former does, but comes at a cost that only the uninformed fail to notice.

I can sympathize with that though. Non-fiction in general is a hard sell.


> I believe that Silicon Valley possesses plenty of virtues. To start, it is the most meritocratic part of America.

Oh come on, this is so untrue. Silicon Valley loves credentialism and networking, probably more than anywhere else. Except the credentials are the companies you’ve worked for or whether you know some founder or VC, instead of what school you went to or which degrees you have.

I went to a smaller college that the big tech firms didn’t really recruit from. I spent the first ~5 years of my career working for a couple smaller companies without much SV presence. Somehow I lucked into landing a role at a big company that almost everyone has definitely heard of. I didn’t find my coworkers to necessarily be any smarter or harder working than the people I worked with previously. But when I decided it was time to move on, companies that never gave me the time of day before were responding to my cold applies or even reaching out to _me_ to beg me to interview.

And don't get me started on the senior leadership and execs I've seen absolutely run entire business units into the ground, lose millions of dollars, and cost people their jobs, only to "part ways" with the company, then immediately turn around and raise millions of dollars from the same guys whose money they just lost.


I guess I'll ask, since you strongly disagree (and setting aside the fact that this is very reductionist): in your opinion, what is the most meritocratic part of America?

Isn’t the obvious answer that many would refute the premise of meaningful regional variation? In which case the claim isn’t that somewhere else is that place but rather than all places are substantially equivalent on this difficult to measure concept (or difference is unknown).

Meritocracy is not a single thing. Regions do not have a uniform relationship with merit, as they are made of many different communities all living amongst each other. "Which is the most meritocratic part of America" is not even an especially meaningful question.

Judging you based on the work you've done seems... very meritocratic to me?

I think the OP was making the point that it isn't meritocratic, at least that is how it read to me: they thought people were not meaningfully different in skill level (the people at the exclusive company being comparable to those everywhere else) and that where you worked was the new way to find the 'in' people, rather than what university you graduated from (they said they got job offers based purely on having landed the job at the exclusive company).

You could argue that getting a job at X or Y company by itself conveys some level of skill, but if we are honest, that is just a version of saying you went to Harvard.

There are lots of cliques everywhere in life, and various ways to show status; SV is definitely not immune to that.


Yes, that is what OP is saying, I'm just not very convinced. Primarily because his sample size is quite small – he says the people in his smaller, non-SV jobs were just as competent as those in the SV company, but that could mean a number of things that are not "SV isn't meritocratic". For example, it could just be that his previous colleagues were more competent than the actual national/global average, which seems probable.

meritocratic means "judgement on merit (aka skill)"

and the story told is "no judgement on skill, only on being in-group. It's just the in-group is caused by previous employment and not birth-right/nationality/etc"


Previous employment isn't an "in-group", it's an endorsement of your skill (assuming your references pass muster).

it's the same endorsement of skill as a university diploma nowadays - not correlated in the slightest

surviving a startup says a lot more about skill than going through the employee churn of some big-name corp


Personally I don't think it is the same as employment + reference. Imo it is fairly easy to game university and get a good degree from a "top" school without actually learning that much at all. Harder to get a good work reference if you don't deliver at your job.

"Where you have worked" and "what you have done" are different things.

note that the first chunk of the piece spends time analogizing SV to the CCP, in terms of its willingness to take attacks (of humor).

So, for your quote, a skeptical interpretation of the text may assert the author was merely praising SV in the same fashion one might praise the party.


Even if that’s true, that seems like a putrid number, no?

Assuming a single 1 GW data center runs 24/7, 365 days a year, it consumes 8.76 TWh per year. Only being able to generate $10-12B in revenue (not profit) per year while consuming as much electricity as the entire state of Hawaii (1.5M people) seems awful.
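
The arithmetic, for anyone who wants to check it (the revenue range is the one quoted above; the implied revenue-per-kWh is just an illustration):

    # 1 GW running around the clock for a year:
    energy_twh = 1.0 * 24 * 365 / 1000   # GW x hours -> GWh -> TWh
    print(f"{energy_twh:.2f} TWh/year")  # 8.76 TWh/year

    # Implied revenue per kWh at the quoted $10B low end:
    kwh = energy_twh * 1e9               # 1 TWh = 1e9 kWh
    print(f"${10e9 / kwh:.2f}/kWh")      # ~$1.14 of revenue per kWh consumed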


I honestly cannot tell if this is satire. Literally a Lord Farquaad level take. Some of you may get asthma and lung cancer, but that’s a sacrifice we’re willing to make to ensure we can deliver MechaHitler to the masses.
