So the guy is in the car, he's worried about catching his flight, but when the operator asks him to do something in the Waymo app, he doesn't want to do it. Could it be that he'd rather keep filming for internet notoriety than stop filming and actually solve his problem?
Could it be that Waymo should just be able to stop the car? It doesn't seem at all ridiculous to you that he is already on the phone with them, and then they just read the script telling him to pull out his phone and fire up the app? Just what I want: to ride in a car with Comcast-level customer service.
>Could it be that Waymo should just be able to stop the car?
That kind of remote control opens up the possibility (maliciously or accidentally, likely or unlikely) that every Waymo in the fleet could abruptly stop, regardless of whether it's safe to do so. That scenario is orders of magnitude worse than "sometimes a Waymo gets lost in a parking lot and it takes a thirty-second call to fix it".
You don't think Google is able to stop a Waymo vehicle? They can literally control it remotely. The issue is that the person on the phone didn't have the authorization or access.
Of course not, but I'm open to the idea that they should be able to escalate to someone who can. This is a transitional period for self-driving cars, and escalation would help mitigate serious potential safety issues.
Ideally, in the future you'd have to expressly press a button to send a one-time passcode (OTP) to anyone you wanted to allow to control your car, with fine-grained permissions. Something like the sketch below.
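To make the idea concrete, here's a rough sketch of OTP-gated, scoped remote access. Every name in it is made up for illustration; this is not any real Waymo (or other vendor) API, just one way the mechanism could work:

    import secrets
    import time

    # Hypothetical owner-approved remote access. Nothing here is a real
    # vehicle API; it's an illustration of OTP-gated, scoped control.

    PERMISSIONS = {"reroute", "safe_stop", "unlock_doors"}  # fine-grained scopes

    class RemoteAccessGrant:
        def __init__(self, scopes, ttl_seconds=300):
            assert scopes <= PERMISSIONS, "unknown scope requested"
            self.otp = f"{secrets.randbelow(10**6):06d}"  # 6-digit one-time code
            self.scopes = scopes
            self.expires_at = time.time() + ttl_seconds
            self.used = False

        def redeem(self, otp, action):
            # Allow exactly one action if the code matches, hasn't expired,
            # and the action falls within the scopes the owner granted.
            ok = (not self.used
                  and time.time() < self.expires_at
                  and secrets.compare_digest(otp, self.otp)
                  and action in self.scopes)
            if ok:
                self.used = True
            return ok

    # Owner presses the button: support gets a single safe-stop, valid 5 minutes.
    grant = RemoteAccessGrant({"safe_stop"})
    print(grant.redeem(grant.otp, "reroute"))    # False: outside granted scope
    print(grant.redeem(grant.otp, "safe_stop"))  # True: then the code is consumed

The point of the single-use, expiring, scope-limited grant is that it addresses the fleet-wide-stop objection above: no standing remote-control channel exists until the rider explicitly opens one, and then only for one narrowly defined action.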
Will that future happen? Probably not; more likely we'll see extreme corpgov oversight and a mass reduction in individual freedom in return for convenience.
Anyway, I was only responding to OP's claim that support for remote access capabilities could lead to an exploit: there are already remote access capabilities in these cars.
This makes sense. If one thing is bad, another somewhat similar thing is good. Being trapped in a moving vehicle without a driver is not bad, because being in a vehicle with a person is bad. Similarly, getting hit with a brick is not bad, because getting hit with a shovel is bad.
> Could it be that he'd rather keep filming for internet notoriety than stop filming and actually solve his problem?
The guy kept filming for under sixteen seconds after asking for, and receiving, clarification that she was unable to intervene with the car and needed him to use the app. I like the idea that this somehow indicates bad faith or ineptitude in your estimation.
The point is worth making, but there are some serious drama-queen vibes that make it feel pretty overblown. If an Uber is late to the airport, a reasonable person doesn't threaten the driver with covering the cost of their flight.
If an Uber driver caused you to miss a flight by driving in circles around a parking lot at a speed at which you couldn't exit the vehicle, you don't think it would be reasonable for the customer to ask Uber to make it right?
Fair enough, there is a difference. But now we're not looking at a missed flight so much as attempted kidnapping, false imprisonment, or some other much more serious crime. That's interesting to think about with the Waymo example, but hard to take seriously in the context of the video, since the rider declines to do what the customer service rep asks (or at least appears to decline, for the sake of producing additional outrage for the video).
It's interesting how quickly we've gone from discussing an interesting failure mode of autonomous robots that travel in public spaces to calling it PEBKAC.
This is unironically a great signal that society is willing to accept self-driving cars. Even computer security isn't this good at playing blame-the-user.
"Society" didn't blame the user, I think "society" would have no problem blaming the car. Hacker News isn't representative of society at large.
The signal you're seeing is the tendency of tech people to consider any risk to human safety to be less important than the benefits of the technology itself, and to always blame the human and never the tech.
You also see this manifest in any conversation about the failure modes of AI, in the inevitable knee-jerk response of "humans also do x."
Yes, that could be. And you could focus on being annoyed at this guy's social media behavior, if you like. However, it doesn't mitigate, and is less important than, the problem of the car getting into this state and Waymo not having control over it.
"…the operator asks him to do something in the waymo app he doesn't want to do it."
Why are you trying to offer an excuse for the inexcusable?
The app should have nothing to do with it. Where are the emergency stop and exit controls for the passenger? He should be able to exit the vehicle at any time.
I'm fed up with this bad tech and with governments letting Big Tech act irresponsibly and get away with this shit. Why aren't regulations in place before this tech is let loose on an unsuspecting public?
If CEOs of tech companies were held directly responsible with the threat of jail time it'd stop almost instantly.
This poor man was forced to slowly circle around a parking lot eight times! It took over five minutes before they fixed the problem! He almost missed his flight! Outrage, outrage.
Reckon that's a bridge too far even for those miserable reprobates. Besides, it'd only take a couple of them being locked away to scare the shit out of the rest. There's nothing like making an example of a few to bring about better behavior.
The real problem these days is that the culprits who perpetrate this shit are able to hide behind corporate walls: at worst the company gets fined (and rarely at that), while those who are actually responsible escape freely and anonymously. Laws need to change to make employees directly accountable for their actions and to face the consequences.
Unfortunately, introducing such laws is somewhat complicated and would meet huge resistance for many reasons too involved to list here. Nevertheless, there's one I should mention: sometimes unavoidable mistakes occur (or important facts remain unknown) even after due diligence and rigorous testing. Laws should not hold individuals responsible for force majeure† (so-called acts of God) that they have no control over; any change in the law would have to allow for this.
That said, we know damned well that such a provision/exception would be irrelevant in most of these customer-unfriendly fuck-ups; clearly, they're guilty as hell.
I'd advocate another change in the law: shareholders need to be identifiable. We need a way of publicly embarrassing and shaming shareholders when they invest in 'carpetbagger' companies. To avoid shame, investors who bought in good faith would be left with little option but to divest themselves of their shares. In effect, shareholders would have to share the responsibility for a company's bad behavior and be seen doing so.
Mind you, I can't see this happening anytime soon, as many shareholders are just too greedy to allow laws like this to come into effect. Unfortunately, the political will just isn't there.
_
† A classic example of hubris, cocksureness, cost-cutting, and insufficient engineering rigor combining with force majeure to produce disaster is the collapse of the Tacoma Narrows Bridge. Had engineering rigor been followed to the letter and penny-pinching avoided, the bridge likely wouldn't have collapsed even when force majeure entered the scene (the physics of Kármán vortex street turbulence wasn't well understood at the time).
Yeah, I got a negative impression of the guy. The car looped around the parking lot for five minutes. I'm not a fan of self-driving cars, but this doesn't seem like a huge deal.