Short answer is “yes”. It’s the same way a homeowner can be responsible if they negligently leave a hazard on their property and a person falls to their death.
The ultimate responsibility lies with the person behind the wheel.
That said, I’d argue Tesla is also liable. To what extent I don’t know, but it was clearly a contributing factor.
Can you explain how Tesla is liable? It is made clear to the driver (both in the manual and at activation of Autopilot) that they remain the final authority in control of the vehicle, and at the time of the accident, the Tesla software build did not support detection and actioning of stop lights or stop signs.
Are folks going to sue GM and Ford when people don't pay attention when their versions of driver assist don't stop the car for an obstacle or traffic signal? Probably, but that's because America is highly litigious and has lost the concept of personal responsibility. The odds of winning are low.
(disclosure: tesla owner with >50k miles of Autopilot [EAP] use)
Tesla markets their software as "full self-driving".
The Autopilot page on Tesla's own website[1] has a video which starts with the words "The person in the driver's seat is only there for legal reasons. He is not doing anything. The car is driving itself." This is followed by footage of a Tesla driving itself on public roads, with other cars around, while the person's hands are not on the wheel.
While they have those disclaimers, they are only there to try to protect themselves from liability. But their marketing, and especially the words from Elon's mouth, are 100% geared towards making the consumer believe that the system is capable of driving itself.
They can claim they told the driver to always be in charge, but there's no way (in my eyes) that they can do that with a straight face. "You gotta keep your hands on the wheel and always be in charge wink."
There is no confusion in this case, though. Full Self Driving costs extra, currently $12K. If you don't pay the extra money, you don't get full self driving, you only get a fancy cruise control.
I think for many general laypeople (not the HN tech-oriented), there IS confusion. It is also not helped by Tesla's marketing, such as naming the system "autopilot", which has a different colloquial meaning than may be understood in engineering circles. Note how most other manufacturers generally prefer naming their systems some variant of "driver assist" or "cruise control".
Sure, people without Teslas may be confused. No one who actually owns one is confused about Tesla's Autopilot because not only do you have to agree on how to use it before you can enable it, but it constantly nags and reminds you while driving.
And Wikipedia: "Autopilots do not replace human operators. Instead, the autopilot assists the operator's control of the vehicle"
>"Autopilots do not replace human operators. Instead, the autopilot assists the operator's control of the vehicle"
That's why I was careful to say the colloquial definition. IMO this is also why other companies deliberately name their systems something like "driver assist". It errs on the side of removing ambiguity.
The colloquial definition is generally much more aligned with "operating without having to focus on the task at hand." I don't think Tesla wants their drivers to operate the car without focusing on the act of driving because it has an "autopilot" feature.
They market the vehicles as being capable of full self driving at some point in the future. They don't market them as being capable of such today. It's in the very copy you cite at the end of your comment.
> Current Autopilot features require active driver supervision and do not make the vehicle autonomous. (Control-F of this quote gets you there)
> It's in the very copy you cite at the end of your comment.
Most of the way down a long page, in faint small text in the middle of a blurb. Nothing could possibly scream "we don't want you to read this, but we're required to put it in so that we and our fans can continue to make bad-faith defenses of our marketing program" more.
On pg 39 of the purchase contract it's clearly stated it's a bicycle for now, as we are still developing our Car (tm) to be a car.
Tell me how that is different from calling a driver assist "Autopilot" because it aspires to be an autopilot, when for now it's just a driver assist called Autopilot*.
Except the purchase page clearly states that autonomous features are not available today, and the car itself constantly reminds you to be in control.
Also, Autopilot on Wikipedia: "An autopilot is a system used to control the path of an aircraft, car, marine craft or spacecraft without requiring constant manual control by a human operator. Autopilots do not replace human operators. Instead, the autopilot assists the operator's control of the vehicle, allowing the operator to focus on broader aspects of operations (for example, monitoring the trajectory, weather and on-board systems)."
Again: "Autopilots do not replace human operators. Instead, the autopilot assists the operator's control of the vehicle"
Kinda pointless, since you sidestep the whole point.
I am not interested in PR-style discussions with technicalities being used as scapegoats.
It's entirely unproductive and should be strictly reserved for cable news programs.
If I buy strawberry ice cream in strawberry ice cream packaging, I don't give a fuck that there is a sign on the back of the package saying this is frozen broccoli.
That is then misleading. It's as simple as that. Tesla should be fined for this, and I'm really surprised they haven't been already.
Hiding behind legalese and specific wording is the textbook definition of misleading.
How would you feel if I sold you an "apple peeler" and made you sign a legal agreement where I stated that the "apple peeler" was not capable of peeling apples today, only at some indeterminate point in the future? Is that not misleading?
Why are you arguing semantics for a scummy corporation?
Then maybe change the name Autopilot to something else. Cars are not publicized with "Autobrake" systems where you still have to press the pedal to stop. That would be confusing.
Wikipedia: "An autopilot is a system used to control the path of an aircraft, car, marine craft or spacecraft without requiring constant manual control by a human operator. Autopilots do not replace human operators. Instead, the autopilot assists the operator's control of the vehicle, allowing the operator to focus on broader aspects of operations (for example, monitoring the trajectory, weather and on-board systems)."
Well, the etymology is clear too, which I would say is the intuition some people could be using. I'm not sure people visit Wikipedia to learn about levels of autopiloting.
Automatic: From Greek automatos, ‘acting of itself’
Pilot: early 16th century (denoting a person who steers a ship): from French pilote, from medieval Latin pilotus, an alteration of pedota, based on Greek pēdon ‘oar’, (plural) ‘rudder’
> All new Tesla cars have the hardware needed in the future for full self-driving in almost all circumstances. The system is designed to be able to conduct short and long distance trips with no action required by the person in the driver’s seat.
> The future use of these features without supervision is dependent on achieving reliability far in excess of human drivers as demonstrated by billions of miles of experience, as well as regulatory approval, which may take longer in some jurisdictions. As these self-driving capabilities are introduced, your car will be continuously upgraded through over-the-air software updates.
For years, and even today, there are Tesla-produced videos that say:
> The driver is only in the seat for legal reasons. The car is driving itself.
So while there is a disclaimer, there are also Tesla's not-so-subtle nudge-nudge-wink-wink implications.
They did the same with Summon. While the fine print said "Do not use while distracted. Pay attention to the vehicle at all times", the rest of their copy said:
> Have your car come to you while you deal with a fussy child.
Tesla and Musk (and remember, Tesla told the SEC Musk's Twitter is an "official company communication channel") have even linked and re-tweeted videos of people driving entirely hands-free.
Liability lies with the driver, but it would not be hard to argue that the company's undertone is "Bah. Pesky regulation. All this shit works, but we're just waiting on the law to catch up".
If they'd only called it "advanced cruise control with lane assist" instead, they sure as hell would have improved things.
The Autopilot in a Tesla is more like the one you'd find in an A-10C (three options: hold the current vertical path; hold this heading and altitude; or hold this altitude) than how I guess most people perceive Autopilot to work (like what Airbus is actually developing now, i.e. https://www.businessinsider.com.au/airbus-completes-autonomo... - that article talks about how people incorrectly perceive autopilot to do everything in a plane, for now at least).
I.e. it's a really useful tool, but it's in no way to be confused with FSD.
Wikipedia: "Autopilots do not replace human operators. Instead, the autopilot assists the operator's control of the vehicle"
No one is confused about Tesla's Autopilot because not only do you have to agree on how to use it before you can enable it, but it constantly nags and reminds you while driving.
Tesla should be liable for misleading consumers. They released videos demo'ing autopilot and ignoring their own guidelines of always having hands on the wheel. They call their feature "autopilot"/"FSD" and Elon has been personally promising real FSD next year every year for the past 5 years.
When people's lives are in danger, there should be greater accountability over the marketing that companies are trying to pass off as hard truth.
There seems to be a progression there from ‘I think it’ll be able to drive the easiest roads in a year or two’ to ‘It’ll be 2–4× safer than humans soon’. So in my opinion calling all of it ‘promising real FSD’ is at least half as misleading as the marketing.
He has been promising FSD capabilities. He said the car will be able to autonomously drive from the west coast to the east coast, he said robo taxis will make $30k a year, and he said it's financially insane to buy anything but a Tesla because Teslas will be an appreciating asset next year when FSD comes out.
He’s made too many bs claims for it to be defensible in any way.
I mostly believe you because of another video I've seen but lost the link to (and the nonfree software the cars run makes them exceptionally depreciating assets, as Tesla remotely decreases max acceleration when the cars are resold). I guess my point was that the video dividuum shared didn't really show claims that hyperbolic, and if I didn't already dislike Tesla, neither of your comments would convince me.
It may come down to deeper analysis that looks at both root and proximate causes. If Tesla's design was considered faulty from a safety standpoint and substantially contributed to the accident, I would imagine they will have some portion of responsibility.
I also think the generalized caveat that it's always the driver's fault is a dangerous one that can incentivize manufacturers to take unnecessary risks (e.g., less testing/quality in favor of delivering on schedule because they can always push the risk - sometimes unknown - to the end user).
I think an industry wide analysis of driver assist systems is likely warranted by NHTSA to form a framework for liability and communications/marketing capabilities to consumers. Think EPA emissions stickers for autos, but for driver assist.
> I also think the generalized caveat that it's always the driver's fault is a dangerous one that can incentivize manufacturers to take unnecessary risks (e.g., less testing/quality in favor of delivering on schedule because they can always push the risk - sometimes unknown - to the end user).
If the driver is not paying attention, that is the driver's fault. If you can't provide your attention, you shouldn't be in the driver's seat. And if you kill people, do not pass go, go directly to jail. Such are the consequences of being responsible for thousands of pounds of machinery moving at speed.
>If the driver is not paying attention, that is the driver's fault.
No doubt. But by extension, if a safety feature did not work appropriately, that would be Tesla's fault, no?
I deliberately put both root and proximate causes in my first post because there can be multiple contributors to the accident, and each may bear some responsibility.
My main issue with the OP was that it lays the groundwork for absolving the manufacturer of any responsibility whatsoever, which I don't feel is appropriate. Society generally requires professionals (engineers, lawyers, doctors) that work in areas of public safety/public good to bear some responsibility. I don't want a system where that professional standard is eroded because it's easier to push the risk down to the end user with a simple clause in a manual/contract that may never be read.
> No doubt. But by extension, if a safety feature did not work appropriately, that would be Tesla's fault, no?
Not necessarily. Automatic Emergency Braking systems across all automakers have serious constraints [1], and are still sold with broad legal disclaimers. They will attempt to stop you in the event an object is in the vehicle path, but importantly, there are no guarantees (and this is made clear in each vehicle's user manual). At the time of this accident, Tesla had no safety system for detecting red lights or stop signs, therefore no safety system to fail (when Autopilot is active, the vehicle has a "dead man switch" that commands you to torque the steering wheel and provide other input to ensure you're still alive and attentive every 10-30 seconds, depending on a variety of factors [speed, road curvature, visibility, path planning confidence, etc]).
This idea that these safety systems are foolproof and that manufacturers are liable rather than the driver is bizarre, to say the least, but as I mention in another comment, it speaks to the broad lack of understanding and personal responsibility that has permeated society. People get in and drive, and it's someone else's problem if an adverse event occurs.
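For anyone curious how that kind of nag scheme can work, here's a rough sketch (Python, with made-up thresholds and an invented interval formula; this is not Tesla's actual logic, just an illustration of a torque-based attention check whose interval tightens with speed, road curvature, and planner confidence):

    import time

    def nag_interval_s(speed_mph, curvature, confidence):
        """Pick a check-in interval between 10 and 30 seconds.
        Higher speed, sharper curves, or lower planner confidence shorten it.
        All coefficients here are invented for illustration."""
        interval = 30.0
        interval -= min(speed_mph / 8.0, 10.0)   # faster -> check sooner
        interval -= min(curvature * 100.0, 5.0)  # curvier road -> check sooner
        interval -= (1.0 - confidence) * 5.0     # lower confidence -> check sooner
        return max(10.0, interval)

    def supervise(read_torque_nm, read_speed_mph, read_curvature, read_confidence,
                  warn, disengage):
        """Warn, then disengage, if no steering-wheel torque is detected in time."""
        last_input = time.monotonic()
        while True:
            if abs(read_torque_nm()) > 0.1:      # any detectable wheel torque counts
                last_input = time.monotonic()
            limit = nag_interval_s(read_speed_mph(), read_curvature(), read_confidence())
            if time.monotonic() - last_input > limit + 5.0:
                disengage()                      # escalate: hand control back to the driver
                return
            if time.monotonic() - last_input > limit:
                warn()                           # visual/audible nag
            time.sleep(0.1)

The point of the sketch is only that the "dead man switch" is a timer on driver input, not a guarantee the car will stop for anything.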
> I don't want a system where that professional standard is eroded because it's easier to push the risk down to the end user with a simple clause in a manual/contract that may never be read.
I agree with this position, and that engineering in general should be held to a high standard. If manufacturers have built what the industry has determined is industry standard, and regulators sign off (NHTSA has made no attempt to instruct Tesla to disable Autopilot fleet wide with an OTA software update), I'm unsure there's much more to do when the human pushes beyond system limits.
> To understand the strengths and weaknesses of these systems and how they differ, we piloted a Cadillac CT6, a Subaru Impreza, a Tesla Model S, and a Toyota Camry through four tests at FT Techno of America's Fowlerville, Michigan, proving ground. The balloon car is built like a bounce house but with the radar reflectivity of a real car, along with a five-figure price and a Volkswagen wrapper. For the tests with a moving target, a heavy-duty pickup tows the balloon car on 42-foot rails, which allow it to slide forward after impact.
> The car companies don't hide the fact that today's AEB systems have blind spots. It's all there in the owner's manuals, typically covered by both an all-encompassing legal disclaimer and explicit examples of why the systems might fail to intervene. For instance, the Camry's AEB system may not work when you're driving on a hill. It might not spot vehicles with high ground clearance or those with low rear ends. It may not work if a wiper blade blocks the camera. Toyota says the system could also fail if the vehicle is wobbling, whatever that means. It may not function when the sun shines directly on the vehicle ahead or into the camera mounted near the rearview mirror.
> There's truth in these legal warnings. AEB isn't intended to address low-visibility conditions or a car that suddenly swerves into your path. These systems do their best work preventing the kind of crashes that are easily avoided by an attentive driver.
The Mazda 3 Astina 2014's Radar Cruise Control had clear warnings that it would not detect motorcycle riders. Yet not once did it fail to actually do so...
I guess they just weren't confident enough, and that I hadn't been behind the required number.
I was very impressed by that system, very simple yet solid. By always using radar cruise control you'd kind of take away the need to go into AEB territory, which never activated for me outside of when I tested the system using cardboard boxes.
(I want to know how my stuff works - 2 out of 3 times it came to a full stop from 30 km/h just in time; the third time it pushed 10 cm into them. I concluded this is extremely good technology that should be made mandatory everywhere.)
I don't disagree with you and you make some good points. But I don't think it's fair to assume I'm claiming the safety systems are foolproof. I've worked on safety-critical systems in automotive, healthcare, and aerospace so I know better.
I think it may ultimately come down to the way Tesla's marketing is perceived. If it's found that a reasonable person would infer that Tesla implied their system had capabilities it did not, that gets into ethical/legal trouble and speaks to what I meant about working "appropriately". But I think we agree on this based on your previous comment about creation of a regulatory framework for communication.
As far as the personal responsibility goes, I also agree on that point. But in an immensely complex and interconnected society, this has limits because humans don't have the bandwidth to make risk-informed decisions on everything. As I mentioned in a separate comment, there are certain professions (namely: engineer, lawyer, doctor) who have obligations/responsibilities to public safety. (Hence the term "profession" which comes from professing an oath of ethics). I think it's a bad precedent to push the responsibility away from these professions. The talking point about personal responsibility seems to only go one way, and it (unsurprisingly) is the direction that allows corporations to maximize profits while also absolving themselves of risk.
If you drive over a bridge and it collapses because of a bad design, I don't think this gets chalked up to "welp, you needed to take personal responsibility for deciding on that route". If you buy flooring for your home that makes your kids sick, I wouldn't blame you for not doing due diligence on the manufacturing or chemical process. In both cases, the end-user has a reasonable expectation of safety and the professional who designed it would usually be held responsible. Maybe, as you said, the AV world needs some more oversight and regulation to communicate those risks.
Companies can’t hide stuff in small print while, at the same time, saying something else in marketing material, with a footnote saying something along the lines of “real-life performance may be different”.
Now, whether that happened with Tesla will be for a judge or a jury to decide. In the past, Tesla certainly has made bold claims in their marketing material.
It would probably come down to the question what would have been ‘normal’ for the driver to assume about Tesla’s software.
Have you ever seen those signs on the back of dump trucks on the highway that state something like, “Stay Back: Truck Driver Not Responsible for Cracked Windshields”?
I think there's a case to be made for banning level 2 and 3 autonomy entirely as too likely to result in distracted people not paying attention in general. But given that we have decided to allow them, having lawsuits against car companies where the driver assistance systems work as intended isn't the right solution. If we want to crack down, it should be a law or regulatory change so it applies evenly to everybody's systems.
I'm not necessarily saying that we should or shouldn't restrict that sort of limited autonomy, just that I think the tradeoffs and empirical questions aren't obvious and you could make an argument for it.
Another example is "launch control": pointless and dangerous on public roads. For any injuries or deaths from that, I'd argue liability is 50/50 driver and manufacturer. Maybe even 33/33/33 driver, manufacturer, and driver's insurance company for not voiding or denying insurance over having such a "feature".
It's not only Tesla that has launch control; some models from other vehicle manufacturers also offer it.
I think Ford and GM will share a lower percentage of responsibility, if any, because they actively configure their service to prevent this. Tesla tends to wink and nod.
> Are folks going to sue GM and Ford when people don't pay attention when their versions of driver assist don't stop the car for an obstacle or traffic signal? Probably, but that's because America is highly litigious and has lost the concept of personal responsibility.
A decent case can be made for making GM and Ford responsible, if you view tort law from an economic perspective rather than a moral perspective.
Under the moral perspective approach tort law's goal is to identify those whose negligence or wrongdoing are responsible for some harm and making them bear the cost of that harm.
That works fine as long as it is hard to have torts so big that the tortfeasor cannot cover the damage. Once that is no longer true, you start having to have insurance. Potential tortfeasors might have insurance that pays if their torts cause too much damage, and potential victims might have insurance that pays if they get harmed by an uninsured tortfeasor.
When an uninsured tortfeasor harms an uninsured victim and the victim needs expensive emergency medical treatment that they cannot afford, they get it anyway, and the hospital or the taxpayers pay.
Under an approach to tort law based on an economic perspective it is not about blame. It is about putting responsibility where it can do the most to reduce the number and severity of similar future torts.
If Tesla and GM and Ford are responsible when their driver assist features are involved in an accident - regardless of whether it was caused by user error, user stupidity, bad design, a manufacturing defect, or whatever else - they will have good data on how often this happens, how much damage it causes, and how those compare between their cars and other cars, and they can try to make changes to reduce frequency and severity if it turns out that their cars are having these issues at a higher rate than expected.
Under the insurance approach in the moral-based tort system, the insurance companies will have data on how often each company's cars have these problems so they can set premiums based on that. Drivers of higher risk cars will pay more to insure against accidents they are at fault in. All of us will pay more for coverage that covers us against accidents caused by underinsured drivers.
That might provide some incentive to the car companies to address the issue--if people know they are going to have to pay more for insurance if they buy one of your cars they might be more likely to choose a different car--but it isn't likely to be very much incentive.
With the approach in the economic-based tort system there are more incentives to actually improve safety instead of just raising prices for insurance.
The article's kinda vague on why exactly the driver is being charged, though. Reading between the lines, I suspect it's not just because he had the autopilot engaged, but that he was engaged in some form of gross negligence at the same time (sleeping, playing videogames on his phone, etc).
Failing to actively supervise the driving of the vehicle while autopilot is engaged is probably enough to sustain the mens rea for criminally negligent homicide.
Isn't blowing through a red light directly resulting in the deaths of two people typically going to result in vehicular manslaughter charges? In other words, blowing through a red light is a sufficient amount of negligence in my mind for vehicular manslaughter.
I don't think so, though I'm not positive. The cases I see in the news usually involve some extra piece of wrongdoing (being under the influence, almost exclusively).
The example the parent gives is potentially a criminal liability as well. If you know the railing on your balcony is wobbly and a guest leaning on it falls to their death, you're definitely open to criminal negligence charges coming your way.
That's an opinion, not a statement of current policy or reality.
In the real world we have pretty extensive precedents about consumer product safety, and manufacturers are very often liable for design decisions that lead to harm, even if nominally speaking the user was supposed to be ultimately responsible.
> Short answer is “yes”, same way a homeowner can be responsible if they negligently leave a hazard on their property and a person falls to their death.
There's a lot of gray area in those kinds of cases. The idea that one is negligent because one could have done something differently tends to permeate internet circle jerks about legal responsibility, but these sorts of things tend to be very specific to the facts in question, and the burdens of proof differ greatly between civil lawsuits and criminal prosecutions.
In this case the driver will probably be financially responsible for the outcome, but being found guilty of manslaughter is a much higher bar. However, many states have various vehicular <type of crime> statutes with lower requirements or "but if driving a vehicle then X" type qualifiers on the definitions of normal crimes, so that may complicate things.
Yes, pilots are responsible for the plane. Even if the autopilot messes up, there is a reason there are multiple pilots in the cockpit: they are supposed to take control.
Having said that, aircraft pilots are very highly trained relative to automobile drivers and must pass periodic medical and proficiency exams to maintain their flight status. Whereas anyone (loosely speaking) can buy a Tesla and just start driving it.
U.S. CFR Title 14, Part 1, Section 1.1 defines the duties of the Pilot in Command (a legal term): "[...] Has the final authority and responsibility for the operation and safety of the flight. [...]"
That means PIC can delegate flying to their mom if they like but they are responsible. If they delegate to hardware and it misbehaves, they are still responsible.
In the U.S., the FAA provides quite a few regulations that specifically forbid the use of autopilot in certain circumstances (e.g. "Autopilot - Minimum Altitudes for Use" [0]). Moreover, the FAA recognizes that extensive use of autopilot functions can degrade a pilot's skills and has issued a SAFO [1] with guidance for aircraft operations and pilot training.
Well the pilots are dead in a lot of incidents so it's kinda moot? Airlines are sued all the time over crashes and right now Boeing is being sued over crashes from MCAS, even though there is a magic combination the pilots could have pressed to override it.
Generally, in any kind of crash investigation the pilots are one of the least important things to look at - telling people to do X or Y to mitigate some risk has a terrible track record.
> Well the pilots are dead in a lot of incidents so it's kinda moot?
It's not moot because most occurrences don't result in death. Occurrences are investigated both to establish fault (which can and does sometimes result in pilots losing their flight status and/or jobs), and from a safety perspective to try to prevent or minimize future occurrences.
> Generally, in any kind of crash investigation the pilots are one of the least important things to look at
This is not true. Most accidents and incidents are caused at least in part by pilot error.
> telling people to do X or Y to mitigate some risk has a terrible track record.
This is not true at all, at least when it comes to aircraft pilots. As anyone who's been to flight school knows, about 90% of the curriculum could reasonably be described as "telling the pilot to do X or Y to mitigate risk". And it works! There are millions of flights every year, and the overwhelmingly vast majority of them are safe (because pilots actively & successfully mitigate risk).
Yes, but autopilot in a plane is much, much different. It is not about detecting and avoiding other planes; it is about following a route and recovering from various flight situations. You don't have an autopilot for when planes are on the ground.
It's probably evaluated on a case-by-case basis depending on how much the investigation determines it was caused by technical issues and pilot negligence.
The very short story is, one member of the crew maintains control of the aircraft (hands on yoke and throttles, maintains general situational awareness) while the other enters the data into the autopilot (set altitude to X, bearing Y, at speed Z). The aircraft then executes the maneuver(s) entered, while under supervision of a human.
I think we need a new distinction: not "self-driving" but "self-liable".
A lot of the FSD hype implies applications in which there isn't an active driver; cars going off and parking themselves, Uber replacing human drivers, working while commuting, etc. Whether those applications can be realized is not so much a question of technology as liability.
Conversely, if the human remains liable, this needs to be kept clear in the marketing material.
> Should Riad be found guilty, “it’s going to send shivers up and down everybody’s spine who has one of these vehicles and realizes, ‘Hey, I’m the one that’s responsible,’”
Duh. There are plenty of warnings; you are still responsible, and your attention is required at all times to intervene when needed.
Of course, there are going to be plenty of situations where it will be too late for a human to intervene. Those are going to be harder calls, but purely from a legal standpoint you are not (yet) allowed to hand off responsibility to some algorithm just because it's baked into the car you buy.
If you're reversing with the help of a sensor that's always worked before, but this time the sensor fails and you hit someone, it is still your responsibility. On the other hand, if your brakes fail despite you having had a recent service, that wouldn't be your fault. I'm not sure where the dividing line is between those two issues.
One thought is that "always worked before" isn't good enough for engineering. Most engineers are reluctant to approve a design that is based solely on testing.
The brakes are highly engineered, known to be of exceptional reliability, have redundancy, and a dashboard indicator for some known failure modes. In the two cases of brake failure I've experienced, the dashboard lit up like a Christmas tree, and the car was still tolerably manageable. We do adapt our driving habits to the possibility of brake failure, e.g., maintaining a safe following distance, and taking extra care when there's ice.
The backup sensor is designed with the expectation that you are the backup. You were probably told that.
Regulators and insurance companies constantly analyze crash data, so a conclusion of "always worked before" based on widespread stats analyzed by engineers is possibly OK, but not "always worked before" according to the consumer.
Granted, there may be a gray area in between full driver responsibility, and full engineering responsibility. There was a HN thread just yesterday that debated what engineering actually consists of, so there's no pat answer. And no formula for deciding, which is part of the reason why we ultimately have humans deciding, through the court system in the US.
The sensor isn't supposed to be a replacement for looking where you're reversing. It merely tells you the distance so that you can park more accurately. Vehicles that don't have a rear view sound an alarm and flash warning lights while reversing*.
Yes. Tesla cars are not self-driving cars, despite their marketing. They probably won't be for years to come. I don't think anyone who has read up on the current state of self driving would come to any other conclusion.
At best, the convicted drivers could sue Tesla for damages over its advertising after conviction. Tesla puts a lot of effort into marketing materials meant to convince people that the car drives for you, but on the paperwork you sign you're reminded that it doesn't, and the software features that force you to keep your hands on the wheel at all times are a constant reminder of that.
Tesla's marketing is admissible in court as evidence, and probably will carry more weight than any warnings. Marketing is designed to be clear and easy to understand, so courts tend to assume that people understand the marketing intent more than the letter of the legalese warnings.
I have no idea what courts will do. It tends to take a lot of cases going opposite ways in slightly different situations before they settle down to something where anyone will attempt a prediction. We are still in the early days of this.
You have to put in a conscious effort to stop the car from telling you to hold on to the damn steering wheel. People install mods to their cars to disable these features, but that too requires conscious effort to do.
Tesla owners might sue Tesla for not fulfilling their promises, but I don't think they'll get out of the responsibility for autopilot accidents so easily.
Unless the court rules that Tesla's cars don't pass the minimum road safety tests and are immediately banned from accessing the public road, I don't think any court will accept "the ads told me it was fine" as an excuse for killing two people. You're expected to know the law, and the law says you're responsible for your car while you're on the road.
The subject of who's responsible for self driving cars is an interesting one, and it'll take years for real, clear decisions to be made, but that's completely irrelevant here.
That is admissible in court too. I did not claim to have any insight into how courts will rule. They could decide that, given the marketing, the effort to prove you are in control is obviously a bug. They could decide that the effort to prove you are in control is enough to mean you are in control. Until dozens of these cases have gone to court I won't guess.
>“Whether a [Level 2] automated driving system is engaged or not, every available vehicle requires the human driver to be in control at all times, and all state laws hold the human driver responsible for the operation of their vehicles,” a NHTSA spokesperson said.
This is going to be the interesting part. To my mind this is where partial automated driving systems are basically incompatible with people and cars. There is just no way a person who is behind the wheel but who isn't driving can be in control and highly attentive in the same way as a person who is actually driving.
This might not be a hot take, but I think some of the blame should be shifted onto Tesla. This guy killed people, and it should not have happened on his watch; however, the car should not have blown through a red light. For instance, my car has lane keep assist and you can drive while pretty much not touching the wheel but still paying attention. It has almost pulled me into the car next to me due to a random construction line fading on the highway.
I'm not saying this guy isn't guilty, all I'm saying is maybe Tesla should shoulder some of the blame here. I'm probably a very small minority in this thinking however.
Some quick googling brings up some US results that this only makes you liable if you were speeding or doing something else considered slightly dangerous, or you didn’t properly maintain your brakes.
It works like this in just about every US jurisdiction, it's just that most of the time the DA declines to prosecute if your brakes fail. But there's no legal protection; you're still possibly going to be charged.
If you kill someone with a car in a way that's neither negligent nor reckless I don't see how it could fulfill the criteria for involuntary manslaughter which are exactly those things.
If drivers must always be in control of the vehicle (according to the manual and the activation of Autopilot), what good is this feature? Couldn't the driver sue Tesla for bad marketing? They are selling a feature which is a lie, aren't they?
PS: I don't own any car and I have nothing against Tesla.
It would be a bad lawyer who didn't try to drag Tesla into court for this. Tesla has more money than whoever was driving (well I didn't look into the background, maybe the driver is filthy rich, but odds are against it), and lawyers always look for the deepest pockets. The victim's lawyer and the driver's lawyers (more likely his insurance company's lawyers) both have incentive to pin this on Tesla as much as they can, and marketing is admissible in court.
I think we should immediately prohibit the use of the term Autopilot for level 2 autonomous vehicles like the current Tesla.
The name is terribly misleading, far past gray-area marketing into outright lying. "Pilot assist" would be a better term.
Yes. At the end of the day the car is the tool, and just like with guns or knives, whoever uses the tool as an instrument for harm is at fault. I used to think one could argue "but X made guarantees, the software might be guilty, etc.", yet that argument is not logical, and ironically the legal framework explains it best. We almost never give a pass to criminals even when they're not lucid and are under the influence of factors that impair their judgement, so how come we debate whether or not the human who used the car is at fault?
I also don't think Tesla is necessarily liable here, an unpopular opinion which I also came to over time, considering that one needs to prove the car misbehaved. But even then, assuming the car truly misbehaved, making the case that the person is not liable is a big stretch. The title is still clickbait.
Here is an interesting question I recently came upon, does the driver of a Tesla need consent from their passengers before turning on FSD? I would argue yes, because I don’t want to unknowingly beta test software with my life, but I am curious what others think.
As a kid, do you even have consent when your parents tell you to hop in the car 'cause you're going somewhere? As a passenger, isn't consent, in the form of trusting the driver, inherent when you willfully get in the car to ride somewhere? Could a passenger not interject and say, hey, I don't feel comfortable riding with you if you enable autopilot?
The first example, I think, is the same in a Tesla or a Toyota. Still interesting to ponder; I think there are still some ethical questions left.
The second example requires a bit of nuance, I trust the driver to operate a vehicle, not necessarily the vehicle itself. Does my trust extend to their decision whether or not to turn on autopilot? You might argue yes, but you also might not be informed enough to make that decision as a passenger.
The last example assumes the passenger has knowledge of the car and FSD which they may not.
In the last example, if you're riding with a driver and not paying attention and they are texting or reading a newspaper and get into a crash, then the liability would be on the driver, not the cell phone company or newspaper company. I think the same for autopilot. If a warning displays to the driver that they are still liable for controlling the vehicle, and they ignore it, then the driver would be liable to the passenger, not the car.
In my experience driving Teslas on autopilot (a few dozen hours), you need to keep watching the road. The whole time. Especially on on/off ramps and around traffic lights. I almost watched a Tesla I was operating drive into a median on a highway off-ramp because the concrete was approximately the same color as the road. I slowed down to see how long it would take to detect the coming collision ... and if I hadn't been watching carefully, it would have been too late.
Yes, drivers must still pay attention in self-driving cars. Probably a future exists where that is no longer necessary, but that's not today.
Basically you're a voluntary data labeller for their deep learning algorithms. If you take the wheel, it means they need to train more on similar situations. Are they hoping to eventually experience so many different situations that out-of-sample data becomes unlikely? Sounds like a pipe dream to me.
Of course he's liable. He should have been driving. Fully autonomous cars aren't yet legal for anyone to operate autonomously; the driver is still responsible.
I think the real problem is the driver monitoring system. Openpilot nailed it with their eye-camera monitoring. There is no need to touch the wheel, which makes it more relaxing, but you cannot take your eyes off the road, which is far more important than keeping your hand on the wheel. Both together would be even better.
I'm still trying to figure out why this is not mandatory for all lane keeping assistants.
In Germany it is highly (over)regulated, to a point where it isn't usable. The torque on the wheel, for example, is too low even for some wide curves.
Still, eye tracking isn't mandatory.
Out of interest where do you put your hands if not on the wheel? There's always going to be latency if you need to take evasive action and your body isn't immediately ready to react.
I think the answer's yes, and it's something that would make me leery of relying on a car's automated systems for anything safety related. Ultimately, if I'm going to be criminally liable in a crash caused by my car, I'd like to retain control of what's going on.
The idea that a software bug could land me in jail for manslaughter is not one I'm a fan of.
I feel like, with all the tech advantage Tesla claims to have, it should have some way to turn off autopilot in situations where it's not applicable instead of always saying "you're not supposed to use it like that". Just start beeping or something when you're not on the highway and then shut off the autopilot.
> realizes, ‘Hey, I’m the one that’s responsible,’” Kornhauser said. “Just like when I was driving a ’55 Chevy”
Um, no shit the driver’s responsible. This seems like a weird question to be confused about prior to level 5, which Tesla’s 2022 “full self-driving” isn’t and 2019 wasn’t either.
That would only lead to a possible cause of action for the driver (or regulators) against Tesla, but of the two cars in that crash, I doubt there's any way the outcome will be other than "the operator of the Tesla is the one legally responsible for the crash" (which I think is the correct outcome as well as the most likely outcome).
> Should Riad be found guilty, “it’s going to send shivers up and down everybody’s spine who has one of these vehicles and realizes, ‘Hey, I’m the one that’s responsible,’”
Doesn't send shivers up my spine. I know I'm responsible, that's why I keep a constant eye on the road and am prepared to take control at any time.
Though the more recent software is very good at stopping at red lights, I never trusted autopilot anywhere but the highway where Tesla themselves says it should be used. Now that I'm in the "Full Self Driving" beta I trust it on regular roads a bit more but I still 100% understand I need to be watching the road. It augments my driving not replaces it (in its current state).
The premise of this article is that a trial involving a crash with a level 2 system[1] is fundamentally different than one involving a level 1, without making the case for that.
In both cases the driver is responsible, and expected to monitor the road. Why would someone crashing while using e.g. a lane-keeping system be in a different position legally than someone using cruise control? Level 1 driving assistance has been in use for decades, and there's surely legal precedent to draw from.
> “It certainly makes us, all of a sudden, not become so complacent in the use of these things that we forget about the fact that we’re the ones that are responsible — not only for our own safety but for the safety of others.”
Well, that is the thing here. There is no proper driver monitoring on these cars, which leaves both the Autopilot and FSD systems open to abuse and to crashes like this, given that drivers have become too complacent and convinced that these cars drive themselves, which Tesla falsely advertises.
So yes. Guilty as charged. Both Autopilot and FSD allow drivers like this one to become complacent with the system.
How about the coder? The person who reviewed the code?
The person who assembled the sensor that has some kind of defect? Or the steering/braking mechanism?
Remember Toyota's accelerator-by-wire spaghetti code? There was no oversight of that code by any third party, and what has changed today? Nothing, except even more complex code doing more complex/dangerous things.
At some point people are going to start victim blaming pedestrians/cyclists for just "being there" when their car mows them over. I guess it already happens but it will become far more common.
Autopilot is a free feature on Teslas, and is a simple lane assist system, akin to cruise control. It is not Full Self Driving.
FSD is an expensive upgrade, and is supposed to do things like stop at lights.
Driver was on autopilot, which explicitly warns you upon activation that it doesn't stop at lights.
It's a badly named and irresponsibly advertised lane assist system.
I recently rented an Audi Q5 and a Toyota Corolla that both had vision-based lane assist and radar distance-keeping systems baked into the cruise control. It's clearly just cruise control, slightly more sophisticated than the hand-operated throttle lock in my dad's restored 1940s antique truck, but just cruise control nonetheless. That's obvious from the engagement mechanism on the steering wheel, from the way it tugs at the power steering when the painted lane lines bend near intersections, from the "Please take control" alert and disengagement when it hasn't felt a sufficiently strong override from your hands on the wheel in 10 seconds... Is that experience anything like the implementation inside a Tesla? If so, I can't imagine why any driver would be confused, if not, that seems pretty negligent on Tesla's behalf.
It's EXACTLY like that in a Tesla. Even when your hands are on the wheel, you still get nagged to tug on it constantly. Autopilot is not a groundbreaking feature at all. It's just like the Audi Q5 (my wife owns one, and I end up driving that all the time as well since my wife has terrible night vision). My wife specifically pointed out (in a critical way) "What's the big deal with Tesla? My car does the same thing." I had to explain to her that most people conflate FSD with Autopilot.
Oh come on. I own two Teslas (a new P100D with the FSD paid for and a new "plaid") and even I'm not this big a fanboy-apologist.
Elon has promised the moon. If it's a "lane assist", call it a "lane assist."
I think the driver is primarily responsible, but Tesla is also culpable.
I've never used the "self-driving" features -- even the smart lane-assist/cruise control --- other than a few minutes on a clear road to try them out. Too scary for me.
I'm not a fan boy. I have endless criticism of Tesla, especially on the service front. And let's not even get into the build quality issues, particularly with the body panels.
I'm a fan boy of truth, reason, logic, and individual responsibility. All of which are in extremely short supply at the LA Times.
On this specific topic, truth is being shaded deliberately to drive clicks. There is an obvious motivation to make people think that a self-driving car killed people, when it was really a dipshit abusing cruise control. It can happen with my wife's Audi, and probably has happened, but without Tesla in the headline, it's not ideal clickbait.
Regarding Elon "promising the moon", I simply don't understand how FSD relates to Autopilot in this case. If you can read, any misunderstanding someone has about Autopilot being FSD is cleared up immediately upon activation. Alert after alert after message flashes on the screen.
I grew up spending a lot of time on boats, and even worked on a crab boat as a teenager. Autopilot on boats is never mistaken for a self-driving feature. It maintains a compass heading, and nobody is dumb enough to think it does anything else. So if the marketing team has purposefully called it autopilot to hype it up, I'm in agreement the name should be changed. BUT, and this is an important BUT, I don't think it would have changed the outcome of this situation. Dipshits have caused accidents abusing cruise control since cruise control came out. The lane assist version has increased the confidence of dipshits to look at their phones while driving, and that's a conversation worth having as to what should be done about it. But that's not specific to one car company.
> I'm a fan boy of truth, reason, logic, and individual responsibility.
So you are a fan boy?
> And let's not even get into the build quality issues, particularly with the body panels.
I think Tesla made a big mistake calling the "Model 3" a Tesla. They should have had a different brand for that to set consumer expectations appropriately (like the "Yearn" or the "Strive")
> when it was really a dipshit abusing cruise control.
I'm not going to call anyone who died in a car accident a "dipshit."
I'm guessing you're trying to be funny with the fanboy comments that add nothing to the conversation. I will choose not to respond to that.
Regarding your moral assertion that a person who dies in a car accident should never be called a bad name:
I absolutely will call them something bad if they died in a car accident that they caused by being negligent and could have potentially killed others.
My father's first wife died in a car accident while he was driving in 1971. They were t-boned by a drunk driver operating a stolen vehicle who ran a red light. He suffered a broken neck and woke up at the age of 21 in a hospital a few days after the accident being told he was now a widower and had to raise his 1-year-old son (my oldest brother) by himself. But that's okay. You wouldn't ever call that drunk driver anything bad right?
Just ask me if I read the article, or state that you don't think I did. I did read it by the way, and my main complaint is with the title being designed to use innuendo to draw clicks. I consider it beneath the dignity of any good publication that claims to be news. The LA Times lost the plot on this a long time ago. As extreme as Larry Elder was (and yes, I think he's extreme and wouldn't vote for him), when they called a man born and raised in Compton "the Black face of White Supremacy" while providing uncritical coverage of his opponent, a wealthy white man born into the Getty fortune, I stopped taking them seriously. I was probably late to the game on that compared to most HN readers. It was embarrassing to see a once-great publication devolve into being completely unaware of the Onion-like absurdity of their headlines.
FYA (just trying to help Dang out here), asking if someone read the article is a violation of the HN rules. I've been guilty of it myself in the past, and Dang corrected me. I try hard to help him out, because he's a good moderator and has a hard job.
It's easy to blame the driver, until you consider whether or not the premise of a "partially automated self driving car" is fit for the road. It is not possible to both "stay alert" to anywhere near the same degree required for manual driving, and let an automated navigation system take control.
If you sell something inherently dangerous to people under the premise that it is not dangerous provided you do something that is impossible (but not obviously impossible to most people) - bad things are significantly more likely to happen, in other words, it is not only dangerous but misleading.
> It's easy to blame the driver, until you consider whether or not the premise of a "partially automated self driving car" is fit for the road. It is not possible to both "stay alert" to anywhere near the same degree required for manual driving, and let an automated navigation system take control.
This is a poor argument. Aeroplane autopilots were "partial automation" for decades - possibly still are for all I know - and yet have been considered fit for purpose. But the responsibility lies with the driver to ensure the vehicle is travelling safely, same as it does with the pilot of a plane.
That's a poor comparison; autopilot is a borrowed name. Planes are not cars: there are no intersections in the air, and collision avoidance is a slower game, unlike in a car.
Additionally, in aviation, automation has been around for a long time even though the challenges are more primitive... and they recognised early on that it is instinctual for the pilot to mentally disengage when systems become more automated. As a result, this is an integral part of pilot training. It's not part of driving tests or learning to drive, despite being an instinct you have to unintuitively fight against, and yet manoeuvring decisions and hazard awareness and response happen at a much faster rate in a car than in an airliner.
In other words, automation does affect the pilot in aviation, but arguably less... and even then, they explicitly train for it to fight the instinct to disengage.