It felt like more than a few inches. (I'm not splitting hairs here, I really do feel it was qualitatively significantly different from your description, which made it sound like the car was cautiously starting to move just a bit.)
> and Elon immediately put on the brakes. The situation was no more dangerous than when a human driver mistakes which light is theirs and does the same thing, which happens pretty regularly in my experience.
When humans make this mistake, they stop the car themselves. When this car made this mistake, someone else (Elon) had to stop it. These are not equivalent. In the former case you can argue no accident was going to happen, but in the latter you can't.
> that this is a test version of the software isn't irrelevant, it makes a huge difference—I am much less opposed to internal company testers who know what they're doing
This is on a public road. Try pulling this off (well, please don't) in medicine and see how it goes.
> When humans make this mistake, they stop the car themselves. When this car made this mistake, someone else (Elon) had to stop it.
For me this is the important part. Too many people already drive far too distracted. Imagine giving many of those same distracted drivers a great reason to ignore what they're doing even more. Very frequently when I'm stopped at a traffic light I'll see the person next to me on their phone waiting for the light. Would we trust that someone doing that would look up when their car starts driving into the intersection?
I really do want self driving cars to be a thing but I wish we were going about it a bit differently and I wish other people who wanted it didn't play things down as much as they do.
> For me this is the important part. Too many people already drive far too distracted. Imagine giving many of those same distracted drivers a great reason to ignore what they're doing even more. Very frequently when I'm stopped at a traffic light I'll see the person next to me on their phone waiting for the light. Would we trust that someone doing that would look up when their car starts driving into the intersection?
Isn't this precisely why self-driving (even if it isn't perfect) could actually make the roads safer? People, as you say, are already super distracted (partly because driving is boring).
I am one of the believers that self driving cars will one day make roads safer. I just think that the technology we have now that keeps getting called "self driving" is not there yet. If it requires the driver's full attention to keep things safe but also makes individual drivers feel like they can be a little less attentive then we don't really have a very safe situation on our hands.
When I say that the technology makes people feel like they can be less attentive I really do mean it. There was the SF tech worker who was playing candy crush or something when his Tesla smashed into a barrier on the highway. I have friends who own Teslas and talk frequently about how they like taking them on road trips because they can relax a bit more and pay less attention to the flow of traffic (it'll brake for you!!! they say). In a world where these cars have to share the road with human drivers and drive on roads that are under construction or in poor weather conditions I just don't see how we can say this is safe.
Top comment: "The cabin camera really does feel super solid at detecting when I’m distracted. Even if I’m just like searching a song on the infotainment it will get onto me which is annoying but I completely understand and am glad that it works so well ..."
First, let's assume that perfect, law-abiding self-driving cars exist. On the one hand they would eliminate incidents caused by inattentive driving; on the other hand they would create incidents that today are avoided only because an attentive driver with the right of way yields to an inattentive one. The change in total incidents would depend on the proportion of these events. Anecdotal evidence is anecdotal, but in my own experience the number of incidents I have avoided simply by yielding when I had the right of way is much higher than the number of incidents I have gotten into due to my own mistake.
Second, actual "self driving" cars are far from that, especially in their interaction with other drivers.
Third, there are second-order effects. E.g. a car maneuvering quickly and unexpectedly could cause another car to brake sharply, which could end up in an accident the original car isn't even involved in. With more cars behaving differently from the local custom, such accidents are bound to happen.
Most probably we are going to see an increase in the number of accidents with the proliferation of semi-autonomous vehicles before that number starts to dwindle.
> giving many of those same distracted drivers a great reason to ignore
the attentiveness monitoring and strike system? If they ignore it, they will quickly exhaust their strikes and get locked out for bad behavior until they learn to take it seriously.
It's hard to tell because the camera is at such a weird angle, but from what I can see the vehicle remains pretty firmly behind the stop line throughout the entire encounter. We can quibble about how fast it was accelerating, but I regularly see worse false starts at lights.
> When this car made this mistake, someone else (Elon) had to stop it. These are not equivalent. In the former case you can argue no accident was going to happen, but in the latter you can't.
I'm actually more worried about the human case than the human+autonomous case. In the human case it is up to the entity that made the mistake to correct their own mistake. In the autonomous vehicle case you effectively have a second set of eyes as long as the driver is paying attention (which they should be, and Musk was). This is why I say that it makes a difference that this was internal testing—the driver wasn't a dumb consumer trusting the vehicle; it was the CEO of the company, who knew he was using in-progress software.
> This is on a public road. Try pulling this off (well, please don't) in medicine and see how it goes.
Requiring that autonomous vehicles never be tested on a public road in real world conditions is another way of saying that you do not believe autonomous vehicles should ever exist. At some point they have to be tested in the real world, and they will make mistakes when they leave the controlled conditions of a closed course.
> remains pretty firmly behind the stop line throughout the entire encounter
That's not the same thing as "inched forward" is my point.
> In the human case it is up to the entity that made the mistake to correct their own mistake.
You're completely ignoring how likely these events are or how severe the errors are in the first place. You can't just count the number of correction points and measure safety solely based on that.
> Requiring that autonomous vehicles never be tested on a public road
I never said that. (How did you get from "look at how it's done in medicine" to "this should never be done"?) What I do expect is responsible testing, which implies you don't test in production until at least you yourself are damn sure that you've done everything you possibly can otherwise. Given everything in the video I see no reason to believe that was the case here.
> Requiring that autonomous vehicles never be tested on a public road in real world conditions is another way of saying that you do not believe autonomous vehicles should ever exist.
Sure, but that's a wildly different case from "it's ready for public roads, we super promise."
As I noted at the very beginning of my first comment, I am a huge critic of many things that Tesla does, and releasing their software in beta to casual drivers is something I'm strongly opposed to. All I'm saying here is that this specific critique of this specific video is misplaced.
One day I hope the New Drug Application process can have a monitoring and supervision system as sophisticated as FSD Beta.
Just imagine: Constant 100% always-on supervision, supervision of the supervisors with 3-strikes you're out attentiveness monitoring, automatic and manual reporting of possible anomalies with full system+surroundings snapshots to inform diagnostics and development, immediate feedback of these into the simulations that validate new versions, and staged rollout that starts at smaller N (driving simulators are actually pretty good) and continues intensive monitoring up to larger N. Even Phase 3 trials only involve thousands of people, while FSD beta is driving a million miles per day with monitoring that feels more like Phase 1 or mayyybe Phase 2.
One day drug development will be this sophisticated, and it will be glorious.
> When humans make this mistake, they stop the car themselves. When this car made this mistake, someone else (Elon) had to stop it. These are not equivalent. In the former case you can argue no accident was going to happen, but in the latter you can't.
This is not nuanced enough. I've been in a cab to JFK in the snow where the driver was speeding around a turn so fast that the car started sliding and eventually crashed into the side of the road.
Er, your comment is the one lacking nuance. There was no snow here, nor did I claim accidents never happen. I was trying to get across a point about the parent's argument.
Your point boils down to a "what if" though. If it's as dangerous as you make it, then you should be able to show plenty of examples where actual harm is happening. Showcase those.
Over 700 allegedly fatal crashes attributable to FSD [1] that Tesla has officially reported to the government, over an estimated 400M miles on FSD. That makes the driver roughly 150x more likely to be involved in a fatal crash than if they were driving on their own.
Note that these figures are based on auditable published statistics and are likely an overestimate of the risk, as we must assume the worst when doing safety-critical analysis. Tesla could improve these numbers by not deliberately suppressing incident reports and by not deliberately choosing not to investigate in order to avoid confirming fatality reports. But until they do so, we need to err on the side of caution and the consumer instead of the for-profit corporation.
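If anyone wants to sanity-check that multiplier, here is a rough back-of-envelope sketch. The crash count and mileage are the figures above; the baseline of ~1.2 fatal crashes per 100 million vehicle miles is my own assumption for the US average, not something taken from [1].

    # Back-of-envelope check of the ~150x claim (illustrative only).
    fsd_fatal_crashes = 700        # reported figure from the comment above
    fsd_miles = 400e6              # estimated total FSD miles
    baseline_per_100m = 1.2        # assumed US average fatal crashes per 100M miles

    # Fatal crashes per 100 million miles on FSD
    fsd_rate = fsd_fatal_crashes / fsd_miles * 100e6
    print(fsd_rate)                    # 175.0
    print(fsd_rate / baseline_per_100m)  # ~146x, consistent with "150x"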