Can you explain how Tesla is liable? It is made clear to the driver (both in the manual and at activation of Autopilot) that they remain the final authority in control of the vehicle, and at the time of the accident, the Tesla software build did not support detection and actioning of stop lights or stop signs.
Are folks going to sue GM and Ford when people don't pay attention when their versions of driver assist don't stop the car for an obstacle or traffic signal? Probably, but that's because America is highly litigious and has lost the concept of personal responsibility. The odds of winning are low.
(disclosure: tesla owner with >50k miles of Autopilot [EAP] use)
Tesla markets their software as "full self-driving".
Their page on their own website[1] for Autopilot has a video which starts with the words "The person in the driver's seat is only there for legal reasons. He is not doing anything. The car is driving itself," followed by footage of a Tesla driving itself on public roads with other cars around while the person's hands are not on the wheel.
While they have those disclaimers, they are only there to try to protect themselves from liability. But their marketing, and especially the words from Elon's mouth, are 100% geared towards making the consumer believe that the system is capable of driving itself.
They can claim they told the driver to always be in charge, but there's no way (in my eyes) that they can do that with a straight face. "You gotta keep your hands on the wheel and always be in charge wink."
There is no confusion in this case, though. Full Self Driving costs extra, currently $12K. If you don't pay the extra money, you don't get full self driving, you only get a fancy cruise control.
I think for many general laypeople (not the HN tech-oriented), there IS confusion. It is also not helped by Tesla's marketing, such as naming the system "autopilot", which has a different colloquial meaning than may be understood in engineering circles. Note how most other manufacturers generally prefer naming their systems some variant of "driver assist" or "cruise control".
Sure, people without Teslas may be confused. No one who actually owns one is confused about Tesla's Autopilot because not only do you have to agree on how to use it before you can enable it, but it constantly nags and reminds you while driving.
And Wikipedia: "Autopilots do not replace human operators. Instead, the autopilot assists the operator's control of the vehicle"
>"Autopilots do not replace human operators. Instead, the autopilot assists the operator's control of the vehicle"
That's why I was careful to say the colloquial definition. IMO this is also why other companies deliberately name their systems something like "driver assist". It errs on the side of removing ambiguity.
The colloquial definition is generally much more aligned with "operating without having to focus on the task at hand." I don't think Tesla wants their drivers to operate the car without focusing on the act of driving because it has an "autopilot" feature.
They market the vehicles as being capable of full self driving at some point in the future. They don't market them as being capable of such today. It's in the very copy you cite at the end of your comment.
> Current Autopilot features require active driver supervision and do not make the vehicle autonomous. (Control-F of this quote gets you there)
> It's in the very copy you cite at the end of your comment.
Most of the way down a long page, in faint small text in the middle of a blurb. Nothing could possibly scream "we don't want you to read this, but we're required to put it in so that we and our fans can continue to make bad-faith defenses of our marketing program" more.
On pg 39 of the purchase contract it's clearly stated it's a bicycle for now, as we are still developing our Car (tm) to be a car.
Tell me how that is different from calling a driver assist an Autopilot because it aspires to be an autopilot, but for now it's just a driver assist called Autopilot*.
Except the purchase page clearly states that autonomous features are not available today, and the car itself constantly reminds you to be in control.
Also, Autopilot on Wikipedia: "An autopilot is a system used to control the path of an aircraft, car, marine craft or spacecraft without requiring constant manual control by a human operator. Autopilots do not replace human operators. Instead, the autopilot assists the operator's control of the vehicle, allowing the operator to focus on broader aspects of operations (for example, monitoring the trajectory, weather and on-board systems)."
Again: "Autopilots do not replace human operators. Instead, the autopilot assists the operator's control of the vehicle"
Kinda pointless, since you sidestep the whole point.
I am not interested in PR-style discussions where technicalities are used as scapegoats.
It's entirely unproductive and should be strictly reserved for cable news programs.
If I buy strawberry ice cream in strawberry ice cream packaging, I don't give a fuck that there is a sign on the back of the package saying this is frozen broccoli.
That is misleading, then. It's as simple as that. Tesla should be fined for this, and I'm really surprised they haven't been already.
Hiding behind legalese and specific wording is the textbook definition of misleading.
How would you feel if i sold you an "apple peeler" and made you sign a legal agreement where i stated that the "apple peeler" was not capable of peeling apples today but at some indeterminate point in the future? Is that not misleading?
Why are you arguing semantics for a scummy corporation?
Then maybe change the name Autopilot to something else. Cars are not publicized with Autobrake systems where you have to press the pedal to stop. That would be confusing.
Wikipedia: "An autopilot is a system used to control the path of an aircraft, car, marine craft or spacecraft without requiring constant manual control by a human operator. Autopilots do not replace human operators. Instead, the autopilot assists the operator's control of the vehicle, allowing the operator to focus on broader aspects of operations (for example, monitoring the trajectory, weather and on-board systems)."
Well, the etymology is clear too, which I would say is the intuition some people could be using. Not sure people visit Wikipedia to learn about levels of autopiloting.
Automatic: From Greek automatos, ‘acting of itself’
Pilot: early 16th century (denoting a person who steers a ship): from French pilote, from medieval Latin pilotus, an alteration of pedota, based on Greek pēdon ‘oar’, (plural) ‘rudder’
All new Tesla cars have the hardware needed in the future for full self-driving in almost all circumstances. The system is designed to be able to conduct short and long distance trips with no action required by the person in the driver’s seat.
The future use of these features without supervision is dependent on achieving reliability far in excess of human drivers as demonstrated by billions of miles of experience, as well as regulatory approval, which may take longer in some jurisdictions. As these self-driving capabilities are introduced, your car will be continuously upgraded through over-the-air software updates.
For years, and even today, there are Tesla-produced videos that say:
> The driver is only in the seat for legal reasons. The car is driving itself.
So while there is a disclaimer, there's also Tesla's not so subtle nudge nudge wink wink implications.
They did the same with Summon. While the fine print said "Do not use while distracted. Pay attention to the vehicle at all times", the rest of their copy said:
> Have your car come to you while you deal with a fussy child.
Tesla and Musk (and remember, Tesla told the SEC Musk's Twitter is an "official company communication channel") have even linked and re-tweeted videos of people driving entirely hands-free.
Liability lies with the driver, but it would not be hard to argue that the company's undertone is "Bah. Pesky regulation. All this shit works, but we're just waiting on the law to catch up".
If they'd only called it "advanced cruise control with lane assist" instead, they sure as hell would have improved things.
The Autopilot in a Tesla is more like the one you'd find in an A-10C (three options: hold the current vertical path; hold this heading and altitude; or hold this altitude) than how I guess most people perceive Autopilot to work (like what Airbus is actually developing now, i.e. https://www.businessinsider.com.au/airbus-completes-autonomo... ; that article talks about how people incorrectly perceive autopilot to do everything in a plane, for now at least).
I.e. it's a really useful tool, but it's in no way to be confused with FSD.
Wikipedia: "Autopilots do not replace human operators. Instead, the autopilot assists the operator's control of the vehicle"
No one is confused about Tesla's Autopilot because not only do you have to agree on how to use it before you can enable it, but it constantly nags and reminds you while driving.
Tesla should be liable for misleading consumers. They released videos demo'ing autopilot and ignoring their own guidelines of always having hands on the wheel. They call their feature "autopilot"/"FSD" and Elon has been personally promising real FSD next year every year for the past 5 years.
When people's lives are in danger, there should be greater accountability over the marketing that companies are trying to pass off as hard truth.
There seems to be a progression there from ‘I think it’ll be able to drive the easiest roads in a year or two’ to ‘It’ll be 2–4× safer than humans soon’. So in my opinion calling all of it ‘promising real FSD’ is at least half as misleading as the marketing.
He has been promising FSD capabilities. He said the car will be able to autonomously drive from west coast to the east coast, he said robo taxis will make $30k a year and that it’s financially insane to buy anything but a Tesla because Teslas will be an appreciating asset next year when FSD comes out.
He’s made too many bs claims for it to be defensible in any way.
I mostly believe you because of another video I've seen but lost the link to (and the nonfree software the cars run makes them exceptionally depreciating assets, as Tesla remotely decreases max acceleration when the cars are resold). I guess my point was that the video dividuum shared didn't really show claims that hyperbolic, and if I didn't already dislike Tesla, neither of your comments would convince me.
It may come down to deeper analysis that looks at both root and proximate causes. If Tesla's design was considered faulty from a safety standpoint and substantially contributed to the accident, I would imagine they will have some portion of responsibility.
I also think the generalized caveat that it's always the driver's fault is a dangerous one that can incentivize manufacturers to take unnecessary risks (e.g., less testing/quality in favor of delivering on schedule because they can always push the risk - sometimes unknown - to the end user).
I think an industry wide analysis of driver assist systems is likely warranted by NHTSA to form a framework for liability and communications/marketing capabilities to consumers. Think EPA emissions stickers for autos, but for driver assist.
> I also think the generalized caveat that it's always the driver's fault is a dangerous one that can incentivize manufacturers to take unnecessary risks (e.g., less testing/quality in favor of delivering on schedule because they can always push the risk - sometimes unknown - to the end user).
If the driver is not paying attention, that is the driver's fault. If you can't provide your attention, you shouldn't be in the driver's seat. And if you kill people, do not pass go, go directly to jail. Such are the consequences of being responsible for thousands of pounds of mobility moving at speed.
>If the driver is not paying attention, that is the driver's fault.
No doubt. But by extension, if a safety feature did not work appropriately, that would be Tesla's fault, no?
I deliberately put both root and proximate causes in my first post because there can be multiple contributors to the accident, and each may bear some responsibility.
My main issue with the OP was that it lays the groundwork for absolving the manufacturer of any responsibility whatsoever, which I don't feel is appropriate. Society generally requires professionals (engineers, lawyers, doctors) who work in areas of public safety/public good to bear some responsibility. I don't want a system where that professional standard is eroded because it's easier to push the risk down to the end user with a simple clause in a manual/contract that may never be read.
> No doubt. But by extension, if a safety feature did not work appropriately, that would be Tesla's fault, no?
Not necessarily. Automatic Emergency Braking systems across all automakers have serious constraints [1], and are still sold with broad legal disclaimers. They will attempt to stop you in the event an object is in the vehicle path, but importantly, there are no guarantees (and this is made clear in each vehicle's user manual). At the time of this accident, Tesla had no safety system for detecting red lights or stop signs, therefore no safety system to fail (when Autopilot is active, the vehicle has a "dead man switch" that commands you to torque the steering wheel and provide other input to ensure you're still alive and attentive every 10-30 seconds, depending on a variety of factors [speed, road curvature, visibility, path planning confidence, etc]).
This idea that these safety systems are foolproof and that manufacturers are liable rather than the driver is bizarre to say the least, but as I mention in another comment, it speaks to the broad lack of understanding and personal responsibility that has permeated society. People get in and drive, and it's someone else's problem if an adverse event occurs.
> I don't want a system where that professional standard is eroded because it's easier to push the risk down to the end user with a simple clause in a manual/contract that may never be read.
I agree with this position, and that engineering in general should be held to a high standard. If manufacturers have built what the industry has determined is industry standard, and regulators sign off (NHTSA has made no attempt to instruct Tesla to disable Autopilot fleet wide with an OTA software update), I'm unsure there's much more to do when the human pushes beyond system limits.
> To understand the strengths and weaknesses of these systems and how they differ, we piloted a Cadillac CT6, a Subaru Impreza, a Tesla Model S, and a Toyota Camry through four tests at FT Techno of America's Fowlerville, Michigan, proving ground. The balloon car is built like a bounce house but with the radar reflectivity of a real car, along with a five-figure price and a Volkswagen wrapper. For the tests with a moving target, a heavy-duty pickup tows the balloon car on 42-foot rails, which allow it to slide forward after impact.
> The car companies don't hide the fact that today's AEB systems have blind spots. It's all there in the owner's manuals, typically covered by both an all-encompassing legal disclaimer and explicit examples of why the systems might fail to intervene. For instance, the Camry's AEB system may not work when you're driving on a hill. It might not spot vehicles with high ground clearance or those with low rear ends. It may not work if a wiper blade blocks the camera. Toyota says the system could also fail if the vehicle is wobbling, whatever that means. It may not function when the sun shines directly on the vehicle ahead or into the camera mounted near the rearview mirror.
> There's truth in these legal warnings. AEB isn't intended to address low-visibility conditions or a car that suddenly swerves into your path. These systems do their best work preventing the kind of crashes that are easily avoided by an attentive driver.
The Mazda 3 Astina 2014's Radar Cruise Control had clear warnings that it would not detect motorcycle riders. Yet not once did it fail to actually do so.
I guess they just weren't confident enough, and that I hadn't been behind the required number.
I was very impressed by that system, very simple yet solid. By always using radar cruise control you'd kind of take away the need to go into AEB territory, which never activated for me outside of when I tested the system using cardboard boxes.
(I want to know how my stuff works: two out of three times it came to a full stop from 30 km/h just in time; the third time it put 10 cm into them. I concluded this is extremely good technology that should be made mandatory everywhere.)
I don't disagree with you and you make some good points. But I don't think it's fair to assume I'm claiming the safety systems are foolproof. I've worked on safety-critical systems in automotive, healthcare, and aerospace so I know better.
I think it may ultimately come down to the way Tesla's marketing is perceived. If it's found that a reasonable person would infer that Tesla implied their system had capabilities it did not, that gets into ethical/legal trouble and speaks to what I meant about working "appropriately". But I think we agree on this based on your previous comment about creation of a regulatory framework for communication.
As far as the personal responsibility goes, I also agree on that point. But in an immensely complex and interconnected society, this has limits because humans don't have the bandwidth to make risk-informed decisions on everything. As I mentioned in a separate comment, there are certain professions (namely: engineer, lawyer, doctor) who have obligations/responsibilities to public safety. (Hence the term "profession" which comes from professing an oath of ethics). I think it's a bad precedent to push the responsibility away from these professions. The talking point about personal responsibility seems to only go one way, and it (unsurprisingly) is the direction that allows corporations to maximize profits while also absolving themselves of risk.
If you drive over a bridge and it collapses because of a bad design, I don't think this gets chalked up to "welp, you needed to take personal responsibility for deciding on that route". If you buy flooring for your home that makes your kids sick, I wouldn't blame you for not doing due diligence on the manufacturing or chemical process. In both cases, the end-user has a reasonable expectation of safety and the professional who designed it would usually be held responsible. Maybe, as you said, the AV world needs some more oversight and regulation to communicate those risks.
Companies can’t hide stuff in small print while, at the same time, saying something else in marketing material, with a footnote saying something along the lines of “real-life performance may be different”.
Now, whether that happened with Tesla will be for a judge or a jury to decide. In the past, Tesla certainly has made bold claims in their marketing material.
It would probably come down to the question what would have been ‘normal’ for the driver to assume about Tesla’s software.
Have you ever seen those signs on the back of dump trucks on the highway that state something like, “Stay Back: Truck Driver Not Responsible for Cracked Windshields”?
I think there's a case to be made for banning level 2 and 3 autonomy entirely as too likely to result in distracted people not paying attention in general. But given that we have decided to allow them, having lawsuits against car companies when the driver assistance systems work as intended isn't the right solution. If we want to crack down, it should be via a law or regulatory change so that it applies evenly to everybody's systems.
I'm not necessarily saying that we should or shouldn't restrict that sort of limited autonomy, just that I think the tradeoffs and empirical questions aren't obvious and you could make an argument for it.
Another example is "launch control": pointless and dangerous on public roads. For any injuries or deaths from that, I'd argue liability is 50/50 driver and manufacturer. Maybe even 33/33/33 driver, manufacturer, and driver's insurance company for not voiding or denying insurance over having such a "feature".
It's not only Tesla that has launch control; some models from other vehicle manufacturers also offer it.
I think Ford and GM will share a lower percentage of responsibility, if any, because they actively configure their service to prevent this. Tesla tends to wink and nod.
> Are folks going to sue GM and Ford when people don't pay attention when their versions of driver assist don't stop the car for an obstacle or traffic signal? Probably, but that's because America is highly litigious and has lost the concept of personal responsibility.
A decent case can be made for making GM and Ford responsible, if you view tort law from an economic perspective rather than a moral perspective.
Under the moral perspective approach tort law's goal is to identify those whose negligence or wrongdoing are responsible for some harm and making them bear the cost of that harm.
That works fine when it is hard to have big torts that cause so much damage that the tortfeasor cannot cover it. Once torts can get that big, you start having to have insurance. Potential tortfeasors might have insurance that pays if their torts cause too much damage, and potential victims might have insurance that pays if they get harmed by an uninsured tortfeasor.
When an uninsured tortfeasor harms an uninsured victim and the victim needs expensive emergency medical treatment that they cannot afford, they get it anyway and the hospital or the taxpayers pay.
Under an approach to tort law based on an economic perspective it is not about blame. It is about putting responsibility where it can do the most to reduce the number and severity of similar future torts.
If Tesla and GM and Ford are responsible when their driver assist features are involved in an accident, regardless of whether it was user error or user stupidity or bad design or a manufacturing defect or whatever else caused it, they will have good data on how often this happens, how much damage it causes, and how those compare between their cars and other cars, and they can try to make changes to reduce frequency and severity if it turns out that their cars are having these issues at a higher rate than expected.
Under the insurance approach in the moral-based tort system, the insurance companies will have data on how often each company's cars have these problems so they can set premiums based on that. Drivers of higher risk cars will pay more to insure against accidents they are at fault in. All of us will pay more for coverage that covers us against accidents caused by underinsured drivers.
That might provide some incentive to the car companies to address the issue--if people know they are going to have to pay more for insurance if they buy one of your cars, they might be more likely to choose a different car--but it isn't likely to be very much incentive.
With the approach in the economic-based tort system there are more incentives to actually improve safety instead of just raising prices for insurance.