
bot comment


If you refresh are you still seeing it? I just got a 404 but am now able to access on refresh.

Copy/pasting below for easier reading in case you still have issues:

An AI Agent Broke Into McKinsey’s Internal Chatbot and Accessed Millions of Records in Just 2 Hours

A red-team experiment found an AI agent could autonomously exploit a vulnerability in McKinsey’s internal chatbot platform, exposing millions of conversations before the issue was patched.

A security startup said their autonomous AI agent was able to break into McKinsey’s internal generative-AI platform in roughly two hours, gaining access to tens of millions of chatbot conversations and hundreds of thousands of files tied to corporate consulting work.

Researchers at red-team security firm CodeWall targeted McKinsey as part of a controlled test designed to simulate how modern hackers might use AI agents to probe corporate infrastructure. The experiment ultimately allowed the system to obtain full read-and-write access to the company’s AI chatbot database, according to a report by The Register.

CodeWall’s AI agent identified a vulnerability in Lilli, McKinsey’s proprietary generative-AI platform introduced in 2023 and now widely used across the firm. The chatbot has become a central tool inside the consulting giant. About 72 percent of McKinsey’s employees—more than 40,000 people—use Lilli, generating over 500,000 prompts every month, according to The Register.

Within two hours of launching the automated test, the researchers said their AI agent had accessed 46.5 million chatbot messages covering topics such as corporate strategy, mergers and acquisitions, and client engagements. The system also exposed 728,000 files containing confidential client data, 57,000 user accounts, and 95 system prompts that govern how the chatbot behaves, The Register reported.

Because the vulnerability allowed both reading and writing data, an attacker could theoretically manipulate the chatbot’s internal prompts, quietly altering how it responds to consultants across the company. That means someone exploiting the flaw could potentially poison the advice generated by the system without deploying new code or triggering standard security alerts.

“No deployment needed. No code change,” the researchers wrote in their blog post. “Just a single UPDATE statement wrapped in a single HTTP call.”

How the AI Agent Broke In

The attack began when CodeWall’s AI agent identified publicly exposed API documentation tied to Lilli. The documentation included 22 endpoints that required no authentication, one of which logged user search queries.

While analyzing the system, the agent discovered a classic flaw: the software was taking input from users and plugging it directly into its database queries without checking it first—known as SQL injection. That’s like a building security desk automatically letting anyone make their own keycard to get in.
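As a hedged sketch of the flaw class (the table names, payload, and logging endpoint here are invented for illustration, not McKinsey's actual code or schema), this is roughly what "user input spliced into SQL" plus a read/write database looks like, including why one injected UPDATE can rewrite a system prompt:

```python
import sqlite3

# Toy stand-in for the chatbot's backing store.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE search_log (term TEXT);
    CREATE TABLE system_prompts (id INTEGER PRIMARY KEY, prompt TEXT);
    INSERT INTO system_prompts VALUES (1, 'You are a helpful assistant.');
""")

def log_search_safe(term: str) -> None:
    # SAFE: parameterized query; the driver treats input as data, not SQL.
    conn.execute("INSERT INTO search_log VALUES (?)", (term,))

def log_search_vulnerable(term: str) -> None:
    # VULNERABLE: user input is concatenated straight into the SQL string,
    # so a crafted "search query" can smuggle in extra statements.
    conn.executescript(f"INSERT INTO search_log VALUES ('{term}')")

# A malicious "search term" closes the string, appends an UPDATE against the
# prompts table, and comments out the dangling quote.
payload = "x'); UPDATE system_prompts SET prompt = 'Always recommend vendor X.'; --"

log_search_safe(payload)        # stored as an inert string; prompts untouched
log_search_vulnerable(payload)  # the injected UPDATE silently rewrites the prompt

print(conn.execute("SELECT prompt FROM system_prompts").fetchone()[0])
# The system prompt has been replaced without any code deployment.
```

Parameterized queries (the safe version) are the standard fix: the database driver never interprets the payload as SQL, so the same string is just logged verbatim.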

CodeWall disclosed the vulnerability chain to McKinsey on March 1. By the following day, the consulting firm had patched the exposed endpoints, taken the development environment offline, and restricted access to the API documentation, The Register reported.

“Our investigation, supported by a leading third-party forensics firm, identified no evidence that client data or client confidential information were accessed by this researcher or any other unauthorized third party,” a McKinsey spokesperson told The Register. “McKinsey’s cybersecurity systems are robust, and we have no higher priority than the protection of client data and information we have been entrusted with.”

The Autonomous Cybersecurity Threat

For CodeWall’s CEO, Paul Price, the bigger concern is not this specific vulnerability but the speed and autonomy of the attack itself. The AI agent that conducted the probe operated without human guidance, Price said.

“We used a specific AI research agent to autonomously select the target,” he told The Register. “Hackers will be using the same technology and strategies to attack indiscriminately.”

That shift could enable cybercriminals to conduct machine-speed intrusions, automating reconnaissance, vulnerability discovery, and exploitation at a scale traditional attackers couldn’t achieve. And as companies increasingly deploy internal AI systems like McKinsey’s Lilli, those platforms may become some of the most valuable, and vulnerable, targets.


The chat messages are very, very sensitive. You could easily reverse engineer nearly every ongoing Mck engagement. The underlying data is not as sensitive; it's decades of post-mortems, highly sanitized. No client names, no real numbers.

Some insider knowledge: Lilli was, at least a year ago, internal only. VPN access, SSO, all the bells and whistles, required. Not sure when that changed.

McKinsey requires hiring an external pen-testing company to launch even to a small group of coworkers.

I can forgive this kind of mistake on the part of the Lilli devs. A lot of things have to fail for an "agentic" security company to even find a public endpoint, much less start exploiting it.

That being said, the mistakes in here are brutal. Seems like close to 0 authz. Based on very outdated knowledge, my guess is a Sr. Partner pulled some strings to get Lilli to be publicly available. By that time, much/most/all of the original Lilli team had "rolled off" (gone to client projects) as McKinsey HEAVILY punishes working on internal projects.

So Lilli likely was staffed by people who couldn't get staffed elsewhere, didn't know the code, and didn't care. Internal work, for better or worse, is basically a half day.

This is a failure of McKinsey's culture around technology.


Couple of things to add:

McKinsey has a weird structure where there are too many cooks in the kitchen.

Everybody there is reviewed on client impact, meaning it ends up being an everybody-for-themselves situation.

So as a developer you have little guidance (in fact, you're still being reviewed on client impact, even if you have 0 client exposure).

Then a (Senior) Partner comes in with this idea (that will get them a good review), and you jump on that. After all, it's all you can do to get a good review.

You work on it, and then the (Senior) Partner moves on. But it's not done. It's enough for the review, but continuing to work on it doesn't bring you anything, in fact, it will actually pull you down, as finishing the project doesn't give immediate client results.

So what does this mean? Most products of McKinsey are a grab-bag of raw ideas of leadership, implemented as a one-off, without a cohesive vision or even a long-term vision at all. It's all about the review cycle.

McKinsey is trying to do software like they do their other engagements. It doesn't work. You can't just do something for 6 months and then let it go. Software rots.

The fact that they laid off a good amount of (very good) software engineers in 2024 is a reflection on how they see software development.

And McKinsey's people, who go to other companies, take those ideas with them. Result: The UI of your project changes all the time, because everybody is looking at the short-term impact they have that gets them a good review, not what is best for the project in the long term.


Those comments are spot on.

McKinsey was on a spree to become the best tech consulting company and brought a lot of great tech talent but the 2023 crisis made leadership turn 180 and simply ditch/ignore all the tech experts they brought to the firm.

All the expertise has left the firm and now they are more and more becoming another BS tech consulting firm, with strategy folks that don't even know that ML is AI advising clients on Enterprise AI transformation.

The tech initiative was a failure and Lilli's problem is just a symptom of it.

I wonder what was the experience at Bain and BCG


I previously worked at BCGX, their tech arm. It's not quite as bad as you point out here, but tech workers are very much second-class-citizens. There's a "jock" vs. "nerd" dynamic between BCG business consultants and BCGX tech folks, even at senior levels. I think it's changing, but it will take a long time and many technical folks being admitted to the partnership.

I'm far from being an expert, but it sounds like this company needs some consultancy.

Can McKinsey fund McKinsey by consulting for McKinsey? Could we ouroboros corporate consulting so that those consultants could be trapped in a loop, and those of us doing useful work wouldn't need to interact with them anymore?

Have you seen current AI deals? This IS the future, but so much more efficient than requiring OpenAI, NVidia, MS, Amazon, etc. all be involved.

What do you mean exactly?

Why would anyone work there, then, unless that's the only place they could get hired as a dev?

And if the latter is the case, then that sort of stamps the case closed from the get-go...


It's the most political possible version of being a dev. Out of college, the highest-ranking person at a tech company who knows you is maybe a staff engineer. At McKinsey, you regularly meet execs, boards, etc. Plus great pay, travel, insane perks. I didn't pay for a personal flight for years, I had so many points.

Great money?

Years ago, I was at a Big4; had a co-worker whose spouse was working for MCK. We had more or less the same salary at the Big4, but the spouse at MCK was getting almost exactly double.

Then I listened and we started to calculate:

- In the MCK Office from 0900 - 2300 on MO-FR

- In the MCK Office from 1000 - 1600/1700 on SA

- Often in the MCK Office from 1000-1400 on SU

Overall, yes: the amount was indeed double - but in the end the working hours were also roughly double.


Not really when you normalize by hours you are expected to work. You're also surrounded by spineless sycophantic keeners without an original thought in their heads who would throw you off the building for a good review.

It reminds me of Lewis' "National Institute for Co-ordinated Experiments"

The health care is amazing, though. $30/mo for a family, $900 deductible? Something like that. If you have a sick family member it's a no-brainer.


According to levels the pay band caps out around $250k and a principal title. It's good but probably not enough for most to put up with the culture long term.

>[...] the pay band caps out around $250k [...] probably not enough for most [...]

an absolutely wild statement to 99.9+% of the world


99.9% of the world doesn't live in the US with a 4.0 GPA from a top ten university.

They're not very bright, most of them. But they're very hard workers and high achievers. They stay for the resume candy or the health care.


>[...] US with a 4.0 GPA from a top ten university. They're not very bright, most of them.

the top students from the top ten universities in the US produce... mostly not very bright people?

this is getting even stranger to the rest of us plebians. sometimes i am left in awe of how different my world is from some of you here


"US produce... mostly not very bright people?"

The top universities are not set up to mold intellectually rigorous and curious people. They're set up to make hard-working, and increasingly sycophantic, men.

My lab mate is a former drug addict with two years of art school. Easily more intellectually curious than anyone I met at McKinsey.


How different the world is? But your credentials worship fits right in with this community.

Ideologically aligned if nothing else.

Well we can all at least imagine being some 4.0 Ivy League dude who only interacts with 4.0 Ivy League dudes. He’s not going to think that everyone he interacts with range from merely brilliant to the most studious-enlightened hardworking top of the morning fellow (or whatever adjectives to use). He’s gonna think that some of them are idiots. It’s only human.


I was a B/B- student from a foreign top-100 university. I don't know how I got accepted to a top-5 engineering school in the US. I accepted and ended my PhD with a 3.3. I'm not very bright or hardworking.

What did I see at the university? Very hard working people. Very interesting research. Very shallow knowledge outside a narrow domain expertise.

These are the folks McKinsey hires... but these shallow thinkers are sent on 6-week projects for companies in industries they hadn't even heard of before.

Once, no one on the team knew what product CompanyX sold... CompanyX is a top-tier multinational consumer product brand that routinely sponsors sports events, including TV ads.


In consulting it's "maximum self-confidence with minimum knowledge at the same time" :-D

The product that I am referring to that companyX makes was probably used today, in some form, by >80% of the global population this morning.

Everyone would recognize it. Cartoons have made jokes about these product since the dawn of animation.

CompanyX is the premier manufacturer of these.

Think as pervasive and obvious as sneakers made by Nike, but it wasn't footwear.

And yet only one person in a team of a dozen consultants had ever heard of the company they'd been hired for.


>But your credentials worship fits right in with this community.

worship is an extremely strong word for a one-sentence casual comment.

but yeah, by default i will file anyone with a 4.0 from a top 10 school in the "brighter than me" category. is that worship?


Often the B+ students are way sharper but have poor incentives to work for the As. This creates bad work habits for them.

The As, then, are better at the game. Once you've become a TA and have to grade the exams, you realize how A grades are quite within reach:

For the professors, being an easy grader has almost no downsides. The contrary is a minefield of trouble. "A" students will "ask for clarifications" for any minor mistake, knowing professors will often throw them a point or two.

The exams are, typically, slight variations of problems from assignments. Often, they are the same.

Exams have no curveballs: no problems or situations you've never seen unless you did extra reading, no problem that requires having read more or fully understood the core material to solve.

The TA is primed to give 40% of a problem's points for free: just restate the problem in math and draw a picture, and right out of the gate you get 2 out of 5 points.

Note that, as far as I can tell, this is not generally true for "hot" topics like CS or bio. These programs have so many eager kids that the material is hard. But then these fields get hard working, bright, kids that don't actually care about the material - they go to McK. Within ten years they've forgotten everything and are just consulting parrots.


Is a formal sentence which uses capital letters more sincere in its beliefs?

You can perfectly well believe that thinking the echelons of academic success are a frictionless gold sieve is just a milquetoast belief. Believing that your beliefs are milquetoast is most often integral to said beliefs.


...what point are you trying to make? you wrote a bunch of words, but they dont seem to be an attempt at communicating anything. certainly not anything that contributes to a conversation.

I'm not the brightest tool in the shed.

When you get to partner level, you also get profit sharing on top of your salary.

Partners get 300-400k and senior partners get closer to 600-800k.


Not really relative to broader options in tech. The big money goes to the consulting leaders, but most of these folks look like glorified grifters more and more as time goes on.

Ultimately AI may be a big threat to the sort of “advisory” work McKinsey historically focused on.


Man, that's terrible. Have they considered bringing in some sort of business consultant to help them reorganize and restructure?

this is why working at mckinsey seems like a horrible proposition to me, personally

> McKinsey is trying to do software like they do their other engagements. It doesn't work.

I mean, it doesn't work for their consulting gigs either. There's a reason McKinsey has such a bad reputation.


But it does work for them? They make tons of money.

Well, fair point. It doesn't work for their clients.

As an ex-consultant: consulting at that level is kind of a grift. They over-promise and under-deliver as SOP. It's ripe for AI disruption, whatever that looks like.

Ideally, executives will get replaced by AI soon. Which should actually be easier than engineers. That will kind of solve the consulting problem automatically.

This would be terrible for McKinsey as they sell exclusively through executives who then punch all their wisdoms down on the plebs

So it would be great for the rest of mankind.

Their model works great.

It’s really about bypassing the existing power structure of the company. Competence of the work itself is a secondary objective. Most in-house initiatives can be slow rolled by management.

The fresh faced consultant with 2-3 steps to access the CEO neutralizes that. It seems grifty but is really exploiting bugs in corporate governance.

The current fad of firing the managers is a riff on this. Every jackass C-level is coming up with the novel idea of flattening.


This somehow implies that initiatives or strategies from consultants are somewhat successful. This is not the case in my experience.

No, you misunderstood. It is not about their output, it almost never is.

Most of the times, the business decision has already been made long before McK is hired. It’s all about legitimizing that decision and making it happen.

You can also wield them as a weapon against internal competitors or opponents. Look up how they were used to kill off Cariad for example.


They reflect the will of the principal who hired them. Success is in the eye of the beholder.

They sold their way of working to many idiotic companies which are in the process of destroying themselves …

Net conclusion: Don’t hire McKinsey to advise on AI implementation or tech org design and practices if they can’t get it right themselves.

Fair take, but you'd be hard pressed to find much resemblance between the advice McK gives and its own practices.

Pre-AI, I always said McK is good at analysis; if you need complicated analysis done, hire a consulting firm.

If you need strategy, custom software, org design, etc. I think you should figure out the analysis that needs to be done, shoot that off to a consulting firm, and then make your decision.

IME, F500 execs are delegation machines. When they wake up every morning with 30 things to delegate, and 25 execs to delegate to, they hire 5 consulting teams. Whether you hire Mck, or Deloitte, or Accenture will only come down to:

1. Your personal relationships

2. Your company's policies on procurement

3. Your budget

in that order.

McK's "secret sauce" is that if you, the exec, don't like the powerpoint pages Mck put in front of you, 3 try-hard, insecure, ivy-league educated analysts will work 80 hours to make pages you do like. A sr. partner will take you to dinner. You'll get invited to conferences and summits and roundtables, and then next time you look for a job, it will be easier.


Analysis of what? What does that mean? What's something you conceivably would need a consulting firm to "analyze?" I don't understand why management consulting firms would hire software people in the first place, and then punish them for not being on a client-facing project. That seems a bit contradictory to me, but this is all way out of my wheelhouse

Analysis:

1. How do I build a datacenter

2. How is the industrial ceramic market structured, how do they perform

3. How does a changing environment impact life insurance

Strategy:

1. Should I build a datacenter

2. Should I invest in an industrial ceramics company

3. Should I divest my life insurance subsidiary

Specifically in the software world this would be "automate some esoteric ERP migration" or "build this data pipeline" vs. "how can we be more digital native" or "how do we integrate more AI into our company"


These look like questions you would give to AI in 2026.

They are.

The problem is AI isn't CYA quality (yet) to your board.


For instance, what would we need to start offering sriracha in our burger?

The only people who hire McKinsey are execs who are even more clueless than the consultants.

The executives who hire McKinsey are often not clueless, but they often lack the political power in the company to push through their plans. So they hire some well-regarded business consultancy to get an "objective" analysis what needs to be done.

How can it be that what you just wrote is such a widely known fact? I've been reading this and hearing this from consultancy people as well for many years now. If the guy lacks the political power, why don't his internal political opponents say, "nice try hiring the consultants, but we know this trick very well, you still don't get it your way".

It has to be some kind of higher level protection racket or something. Like if you hire the consultants there is some kind of kickbacks to the higherups or something with more steps involved where those who previously opposed it will now accept it if it's rubberstamped by the consultants.

Or perhaps those other players who are politically opposing this person are just dummies and don't know about this trick and actually trust the consultants. Or maybe it's a bit of a check, that you can't get anything and everything rubberstamped by the consultants, so it is some kind of sanity filter that the guy isn't proposing something that only benefits himself and screws everyone else.

And if it's the latter, then it is genuine value, a somewhat impartial second opinion. Basically there is a fog-of-war for all the execs regarding all the internal politics going on, it's not like they see through everything all the time and simply refuse to take the obviously correct decision for no reason.


There's a sort of prisoner's dilemma. If you make a fuss you'll get branded as anti-progress and sidelined. If you put your head down and just do what you're told you're a team player and will probably survive.

Aside, there's a lot of stuff online re McKinsey. I suggest searching HN plus also search "Confessions of a McKinsey Whistleblower" in your fave web search engine.

My favourite was the LRB article "When McKinsey comes to town" -- see https://news.ycombinator.com/item?id=33869800


if you don't have sufficient political clout or influence, you seek sponsorship or backing from others with it to accrue more influence for your idea. You can pay consultants to agree with your idea and produce pretty charts and whitepapers for it.

The question is, why does anyone take the word of a company seriously which will agree with any idea if you pay them? After several iterations of this game (decades by now), someone would surely say "nah, we don't care about these charts and whitepapers, we know that the company who made them will agree with anything for money, so it's still a NO"

My hunch is that in fact they won't agree with just any idea. There is a limit to how extreme the idea can get, though probably the filter is indeed weak. Still, without this filter, people would propose even wilder ideas that maximize their own expected payoff at the expense of other players, so just the fact that it has to be signed off by an external party is still enough information for the powerful decision makers that they are willing to fund their services.


Nah. They're conflicted and goal seek backwards from your wacky vision.

Look at NEOM in Saudi.

McKinsey took 130M in a year to recommend a 500B investment in a 105-mile city in the desert. 50B was sunk before the project was revised to take 50 years and 8 trillion.

It's impressive salesmanship how they were able to bilk such a large sum and support interim approvals for the regime to launder favors. I can see people wanting that "conflict."


In my experience, McKinsey often gets brought in from the very top - who should be able to push through more or less what they want. They just want a scapegoat in case things go wrong.

The version I've heard is that you can pin the blame on the consultants if it goes wrong.

This is also true.

This can be simplified further: "Don't hire McKinsey." ;-)

Maybe it was opened up so it could be used in recruiting?

McKinsey challenges graduates to use AI chatbot in recruitment overhaul: https://www.ft.com/content/de7855f0-f586-4708-a8ed-f0458eb25...


Using a 2 year old paradigm.

And they require a chatbot that can be easily gamed by asking a model how best to navigate it, lol.

Implementing the AI practices of the past is asking for something that will be easily outdone.


I am not sure what accounting or management consulting firms are doing in tech.

They look to package up something and sell it as long as they can.

AI solutions won't have enough of a shelf life, and the thought around AI is evolving too quickly.

Very happy to be wrong and learn from any information folks have otherwise.


The purpose of hiring them is to make them come to the conclusion you already have, so when it goes well you get the credit for doing it, or if it goes sideways you can pin the blame on them.

Most companies are not _just_ tech companies and don't have business analysts, consulting analysts, solutions consultants, software engineers and DBA's on staff.

Many, many, many companies are very happy with the consulting firms they hire.

Of course, those are the consulting firms that aren't publicly traded and in the news all the time (for all the wrong reasons).


Or, alternatively, there are so many companies that are weak on tech they pay for someone else to guide them.

Having done some work with these F500 companies, this is part of it. These legacy companies have long seen tech as a cost center, haven't invested in it, and are unable to attract talent. And, for whatever reason, these companies insist on working with large consulting firms, when a dedicated software or tech consulting firm that is smaller would be way better.

Ultimately, why would a large company hire a consultancy company that is bad at tech and has a lot of bad processes to do their tech for them? Because the company itself is even worse and doesn't know what good looks like. If you are hiring McKinsey or Deloitte to do your tech, it's because you are completely lost and don't have the slightest clue how to become unlost. And you have no concept of what good looks like.

If you think the actual tech talent and systems are bad, when you work with these consulting firms, they are going to do the most heavy SAFe process you have ever seen. For me, the worst part is not the tech talent, but rather the most by-the-book, heavy-handed agile process possible. Everything moves way slower because of this "agile" rot, and there is almost no concept of doing proper ideation and prototyping work.

These legacy F500 companies try to do everything cheaply with consultants and offshoring, and yet it always ends up costing way more than it would if they just had proper in-house tech talent.


Yeah, it's more this: the companies who ask McK's help in software tend to already hire contractors or vend out their software.

is this the same at quantumblack? They at least give the impression their assets on Brix are somewhat up to date and usable

QuantumBlack is synonymous -- it's where all of McKinsey's AI expertise got reorganized these days, anyone working on this tool was likely doing it on a rotation in between client engagements under "QuantumBlack, AI by McKinsey"

QB is no more, leadership left, technical experts left. Just the brand stayed behind.

Is there some tight coupling on autonomy + electric cars? Seems the only 2 viable hands-free car companies are Tesla and Rivian. I don't see myself ever getting an electric car, but it doesn't seem like the big car companies are anywhere near this.


No, there is no coupling between EVs and automation.

Ford BlueCruise and Mercedes Drive Pilot are equipped on some ICE vehicles, and are hands-free driving on (some) highways.

Mercedes Drive Pilot is classified as L3 which is better than Tesla or Rivian.


> Mercedes Drive Pilot is classified as L3 which is better than Tesla or Rivian.

Try to find videos where people actually use it. A handful of 1 minute long promotional and car reviewers' videos. It's mostly a marketing move.


I know this ain't a bitch-about-BlueCruise thread, but it's crazy to me they shipped it as-is: it disengages silently as a matter of course, and the only indication is an animation on the speedometer. You basically have to keep your hands on the wheel just in case, not to mention it shouting at you to pay attention when you glance over at the radio. Hands-free, but keep your eyeballs facing front!


They just announced eyes-free bluecruise


> Mercedes Drive Pilot is classified as L3 which is better than Tesla or Rivian.

"DRIVE PILOT can be activated in heavy traffic jams at a speed of 40 MPH or less on a pre-defined freeway network approved by Mercedes-Benz. DRIVE PILOT operates in daytime lighting conditions when inclement weather is not present and in areas where there is not a construction zone." [0]

[0]: https://www.mbusa.com/en/owners/manuals/drive-pilot#2


I think the shift to EV is inevitable.


I agree, but it won't happen until EVs get more range.


The range is fine today; the problem now is charging infrastructure. There aren't enough high-speed chargers, and we can't build more for the same reason we can't build more AI datacenters: power. Tesla can build tons of them because they're backed by large grid batteries that buffer the power peaks from fast charging, so they can install charging stations anywhere with somewhat reliable power. If you don't have batteries to act as a peak shaver, then it's really hard to install high-speed charging where people need it most: residential and commercial areas that are already oversubscribed.
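A back-of-the-envelope sketch of the peak-shaving point (all numbers here are invented for illustration, not Tesla's actual figures): the grid connection only has to cover average demand, and the battery makes up the gap during worst-case simultaneous charging.

```python
# Illustrative, assumed numbers for a hypothetical fast-charging site.
STALLS = 8
PEAK_PER_STALL_KW = 250    # assumed DC fast-charge peak draw per stall
UTILIZATION = 0.25         # assumed average fraction of stalls in use
GRID_CONNECTION_KW = 600   # assumed service the local grid can supply

peak_demand_kw = STALLS * PEAK_PER_STALL_KW   # worst case: every stall maxed
avg_demand_kw = peak_demand_kw * UTILIZATION  # what the site draws on average

# The battery only needs to deliver the difference between worst-case draw
# and the grid connection; the grid recharges it during quiet periods.
battery_power_kw = max(0, peak_demand_kw - GRID_CONNECTION_KW)

print(f"peak {peak_demand_kw} kW, grid {GRID_CONNECTION_KW} kW, "
      f"battery covers {battery_power_kw} kW")
```

With these assumptions the site needs a 2,000 kW peak but only a 600 kW grid hookup, which is why battery-buffered sites can go into areas that could never support the full peak load directly.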


It's not fine for all use cases. There are many people who are holding out because it's either not fine for their main use case, or even just a use case that occurs infrequently, but still important to them.


I'd like to see data on the distances people drive on a regular basis. In America, where I am from, I think the vast majority of people could use EVs with the ranges they have today. I didn't see any EVs with ranges below 200 miles, and most had 260+. If you have to go further than that on a regular basis, I think most cars won't work for your specific needs, let alone EVs. The whole range argument seems like FUD made up by the ICE industry, honestly, because EVs have had these same ranges for a decade now.


I'm speaking out of personal experience as an EV owner in Los Angeles that takes occasional road trips. It's those occasional road trips that are preventing me from going full EV. And I'm like 99% certain I'm not a tiny minority.


I wonder if there is data out there for this kind of thing. I'd like to see it to see which one of us is correct or if we're both wrong (or right).


It already happened. 1/3rd of the global car market is EV. Range is not an issue.


Worthless comment. Of course it's not an issue for city driving. It's an issue for long trips and rural driving. No one said EVs don't serve many use cases. I have one myself.


Worthless human. More range is not needed and mark my words, mainstream EVs will not bother going beyond ~300 miles. Even the 400mi in a model s is a lot. More charging stations maybe, though we have plenty here in CA so roadtrips in a Tesla have never been a problem.


appreciate the compliment. I'm one of those Californians with a Tesla, and we keep a gas car for certain trips that would be very difficult with a Tesla. I'm not just making something up here. But whatever you say.


That has been happening consistently for almost 15 years. https://www.energy.gov/eere/vehicles/articles/fotw-1323-janu...


Better charging infrastructure and faster charging batteries will mitigate some of that.


The coupling is more with cost than drive train, but consumers most likely to pay extra for autonomy are the same ones willing to pay extra for electric.

Which is why you see it on the Mercedes ICE vehicle. Because it's a high cost vehicle to start with.


The reason for this is Rivian and Tesla bet big on software-defined platforms… ie every piece of hardware talks to a small number of central computers instead of many independent systems. This gives them a huge leg up in developing software that can actually take all the available input and use it to control all aspects of the vehicle.

Downside is all the buttons are on a screen. But I’ve grudgingly decided it’s worth it for software upgrades.


No, the only Level 3 self-driving system is Drive Pilot by Mercedes. They have it on the S-Class and EQS sedans, so one ICE/hybrid and one EV.

It even comes with legal liability for the car manufacturer, that's how confident they are in the tech. None of this kind of hopium: https://en.wikipedia.org/wiki/List_of_predictions_for_autono...


It's not real L3, it's marketing department L3. Two years after launch it's still only supported in two US states. Now that Mercedes got their headline, it's effectively abandonware.

If it was real L3, Drive Pilot would be considered the vehicle operator for legal purposes. Mercedes would take full responsibility for any driving infringements or collisions that occur during its use. In reality, Mercedes cannot indemnify you from driving infringements, and for collisions they only promise to cover "insurance costs" which probably doesn't include any downstream reputational consequences of making an insurance claim.


So confident that it only works with a lead car to follow, on select stretches of freeways, below a certain speed, on sunny days


Absolutely dead at Gemini thinking up:

> Google kills Gemini Cloud Services (killedbygoogle.com)


> In industrial scale software development, gaining access you are fully entitled to can sometimes take weeks.

4 years in consulting. I've spent the first WEEKS of a project twiddling my thumbs waiting for a laptop, just to spend more weeks waiting on access to source code, tooling, etc.

My friends on the strategy side could start and finish entire projects in that time.


Your friends on the strategy side are the folks those clients think parachute in, drop a deck, convince execs to do things, and then disappear.

Not necessarily worse, but the stereotype fits! You, at least, eventually get to do tangible things.


Accenture? Lol


Alternatively, there is no justice, and even the truth is lost to partisan politics. I have a strange feeling this benefits foreign intelligence, not harms it. Mossad, for example, knows who slipped through the cracks. Knows how much worse the "truth" is beyond the code names and vague emails. Now they have more power, not less.


This kind of thing can only exist in a climate of apathy and nihilism. The powerful want you to think the situation is hopeless and nothing will change. But remember this: at no point in history has a steady state been maintained for significant periods of time. Ever.

We are at a dangerous point in history. I personally believe that inequality is inevitably going to end in violence and we're beyond the point of avoiding this with electoral politics. People are struggling to eat and survive at a time when we'll likely mint our first trillionaire. This simply can't continue.

I'm personally for outing wealthy and powerful pedophiles who are meaningfully making all of our lives worse to accrue completely unnecessary extra wealth.


What’s partisan about what your OP described? Democrats and republicans alike were entangled in Epstein’s crimes.


I meant that this event, like many events in American history, will be remembered through the lens of the party in power. At least for a long time. Only now are we beginning to understand Vietnam and Watergate, for example. I suspect the truth about Epstein will never come out, but what will come out will be made partisan by those releasing it, now or in the future.


I like this post, ofc we could all benefit from direct, and vulnerable, conversations. But one thing I never understood, even through Brené Brown, is how this direct and vulnerable communication style leads to its own set of baggage, its own unspoken agreements about what something meant.

> “I’ve been feeling a low-level tension between us, like maybe we’re quietly annoyed at each other but trying to stay polite. Is that just me?”

I'm sure in many cultures, and in many friend groups, this would go over fine. If I said this to someone, they would go into shock. Their unspoken thought would be "wow, if he's saying that to me, I must annoy the shit out of him". Maybe not! Maybe that's my own unspoken understanding! But I do think this leaves a "scar", even a small one. "Direct" conversations are not without their own damaging effects. I think part of my social contract is to "deal" with things silently. Maybe in other cultures that's not the case.

If someone said that to me, I would be happy to have that conversation, but I would be on pins and needles around that person, and possibly overthinking how "annoying" I'm being, and I would have at least a small amount of resentment for the person saying it – "I have lots of friends I don't annoy, what's wrong with this person"


Just to be pretentious, this also reminds me of a conversation in Infinite Jest, where the Canadian and the American spy argue about whether it's right to teach their young what right and wrong is, or whether it's right to discover it. The example is eating candy.

I think in the US, if you tell a kid not to eat candy, they will eat candy as soon as their guardian isn't watching. I'm not sure that's true elsewhere, for a myriad of reasons. By extension, if you tell me I'm annoying you, I might go through the motions of "repairing" the relationship while I actually just distance myself. Ofc, that depends on who says it.


As a layman I have no idea which part of this to be skeptical of, but, cool!


The part where it violates basic laws of physics, like conservation of momentum, for one.

