
Based on this image:

https://images.ctfassets.net/95kuvdv8zn1v/6h1C7lPC79OLOlddEE...

They and their VC backers are clearly betting on the concept that radars + lidar + imaging will be the ultimate successful solution in full self driving cars, as a completely opposite design and engineering philosophy from Tesla attempting to do "full self driving" with camera sensors and categorical rejection of lidar.

It is interesting to me that right now this is sitting on the HN homepage directly adjacent to: "Tesla to recall vehicles that may disobey stop signs (reuters.com)"



Cruise CEO here.

Our strategy has been to solve the challenges needed to operate driverless robotaxis on a well-equipped vehicle, then aggressively drive the cost down. Many OEMs are doing this in reverse order. They're trying to squeeze orders of magnitude of performance gains out of really low-cost hardware. Today it's unclear what strategy will win.

In a few years our next generation of low-cost compute and sensing lands in these vehicles and our service area will be large enough that you forget there is even a geofence. If OEMs have still not managed to get the necessary performance gains to go fully driverless, we'll know what move was the right one.

We shared several details on how our system works and our future plans here: https://www.youtube.com/playlist?list=PLkK2JX1iHuzz7W8z3roCZ...


It's good to hear a CEO say "we don't know the answer, but we're making a bet" rather than the typical Elizabeth Holmes style "We are absolutely correct and first they ignore you, then they laugh at you, then they fight you, then you get convicted of three counts of wire fraud and go to prison".


"This is a solved problem.", "We're light-years ahead of the competition", or "We expect everyone with existing hardware to be able to monetize their cars as robo-taxis by the end of next year" are other great examples.


To be fair, this is because Elizabeth Holmes is probably a sociopath, so you'd expect to hear something overly confident like that from such a person. Not that we knew it at the time.


I've met quite a few people like her and I really don't believe they are sociopaths. They say things that sound real and even convince themselves of it, but as soon as you start peeling back the details, those details don't exist. It's like you are talking to a Turing machine and it's you failing the test. People on these forums never really speak to these people, because we are already largely scientists, love empirical evidence, and work with and hang around similar people. There's a whole class of people who have learned to pattern match talking like us without anything to back it up. See the industry "Comms" for many many examples.


>There's a whole class of people who have learned to pattern match talking like us without anything to back it up. See the industry "Comms" for many many examples.

Have noticed this through my career as well. There is a whole class of people who go through life never actually doing anything. They just talk about people doing things. And get paid to talk about people doing things. And by the time people realize that they are just full of shit, they move on to the next place where they get paid to talk about people doing things. And they never actually did anything in the first place, so it's not like you can even say they were a bad worker.


I see you've discovered the people who work in "enterprise sales" for any niche software or technical product.


Roughly 1 in 20 folks on average out there are sociopaths (some quote from Jordan Peterson). It doesn't have to be full 110% scheming world domination, but it is detectable (and obvious in daily life). I am not an expert on her or the whole saga, but from my perspective many famous people show this trait. In business it's almost mandatory to get/stay on top; it certainly gives one advantages compared to nice, fair, honest people.


Sociopaths are not as uncommon as people think. People assume sociopaths are "mad killers," but they're just people who don't feel remorse when, e.g., lying, and have little to no empathy. I've met mild sociopaths who were just bankers who would never even think about the second- and third-order effects of what they were doing. It's not some kind of rare condition.

I heard the figure was 1 in 30 but 1 in 20 is close enough.


Yes, but all of those quotes are by Elon Musk…


True. I actually replied to the entirely wrong comment, should have been one up... I have a great deal more confidence in truly revolutionary things that have been factually delivered by SpaceX (such as 100+ re-uses of a rocket first stage now), and I question how much of that is really due to Musk at all. Maybe Musk as a figurehead. I wonder if all of the Elon fanboys know that much of what's been accomplished at SpaceX is thanks to Gwynne Shotwell, or even know who she is.

Things like promising "full self driving" for 6+ years now and charging people $12,500 for it leave a really bad taste in my mouth, and I find it difficult to square with my overall very positive impression of SpaceX.


I'll bite. Say I'm a Musk fanboy... I can assure you that all Elon fanboys I know are fully aware of who Gwynne Shotwell is. Now, your turn: please explain how the following SpaceX accomplishments are thanks to Gwynne Shotwell (other than the hand-wavy "well, if she didn't get the contracts, none of this would be possible"):

- Merlin engine

- Vertical landing

- Raptor engine

... or, what exactly is the "much of what's been accomplished" that you talk about? Look, I'm not trying to minimize her role, she was clearly a great COO for SpaceX, but it seems weird to me that you try to minimize Musk's importance while at the same time picking one other singular person to highlight. I could understand the argument that "it's a team effort, no one person did this alone"; but if we're picking only one person to assign credits, then surely, _surely_ Musk is that one person, right? I understand the skepticism that he really does engineering & design, so here is supporting evidence: https://www.reddit.com/r/SpaceXLounge/comments/k1e0ta/eviden...


The anonymous “Interviewer” in the last part of that Reddit post was Sam Altman, excerpted from a 20-minute conversation with Elon Musk in 2016 [1] that I found to be interesting even as a non-fanboy.

[1]: https://www.youtube.com/watch?v=tnBQmEqBCY0


Bracing myself for the downvotes, but I suspect any wins coming out of Tesla/SpaceX these days is despite Musk, not thanks to.


I think we'll see major players leaving this industry soon. Self-driving will be a war of attrition and thus cannot be won by US companies with their insane burn rate. Europe has just as competent engineers making a tenth of what their US counterparts make. If I were a VC I would be head over heels investing in EU self-driving tech. They are the 'cockroaches' of this tech who will survive. I can't imagine e.g. Waymo bankrolling tens of millions of dollars in payroll for years to come.


Does it really matter? If there's no Elon, there's no SpaceX. And I say this as someone that doesn't buy into the founder cult.


I'll be more impressed when he solves climate change.


You might not give him credit even if he did. We're in a thread where people are wondering whether he's just a figurehead for SpaceX, so... what exactly are we talking about?

AFAIK Musk's involvement in Tesla was specifically to address climate change/ help move the industry towards electric cars. To the extent that this, plus improved battery tech, ends up reducing our oil dependency and eventually contributes to "solving climate change" - would you credit any of that back to him? Or just say that he didn't single-handedly solve climate change, so it doesn't count?


Not profitable, won't happen.


> If there's no Elon, there's no SpaceX.

Why not?

What mystical gift does this one person have that 7 billion other people don't, that permits him and only him to run the company? This is a really unpopular opinion on a web site that exalts founders, but I don't think it really takes much special skill to run a company. Most (but admittedly not all) CEOs are in their position not because of their know-how, but because 1. They founded the company, and happened to be the one that flipped a coin heads 20 times in a row; or 2. They were born into that Ivy League class that closely gatekeeps CxO and SVP positions for themselves; or 3. Were descendants of one of the above.

Assuming a successful CEO is uniquely skilled is like assuming a lottery winner is uniquely skilled at winning the lottery.

I think many people, if given Elon’s financial war chest and basic knowledge of and an interest in rocketry, could have made SpaceX.


Musk didn't have that much money in the early 2000s. Compared to Bezos he was a small fish back then, and SpaceX almost went bankrupt developing Falcon 1. If it really had gone under, I don't doubt someone would persuasively explain why it could not have avoided that grim fate with a jackass founder like Musk; but by now it would already have been forgotten.

Attrition rate among space startups is insane. A lot of exciting projects like Armadillo Aerospace (by John Carmack of DOOM fame) crashed and burned. The graveyard of defunct space companies is huge.


I'm sure there are lots of other people out there who could run today's SpaceX as a space cargo trucking company. But Elon deserves the credit for creating two wildly successful companies that revolutionized their respective industries, both in the face of hugely entrenched competitors in highly regulated markets that hadn't seen successful new players in decades, and both as a side effect of his actual goal of getting humans to Mars.


The only people who were even competing were eccentric billionaires, so let's be clear that he only beat a handful of other people who even had access to attempt the business. It's not that huge of an accomplishment: private space had been theorized for a long time, but NASA sucked up all the air in the room, and Elon's timing was just right. Of the handful of billionaires working on this, he happened to be the one who got a chance to succeed at scale.


Honestly, it's hard to read your comment without hearing an envious tone. You even admitted his timing was right; that alone takes skill. The point others are making is that there are lots of examples of failed companies, yet his have been successful. If anyone could have done what he's done, why haven't they?


Well, he was clearly competing with the faceless environment that allowed only eccentric billionaires to appear to be his only competition. If it was an open niche it would have been filled with others. Reading other threads here I learn that there have indeed been multiple failed attempts at space companies.

Maybe it's just a selection effect, but maybe they played their cards wisely and maybe some of the key choices can be attributed to the founder of the company that set the vision and picked the team carefully.

I'm personally not a fan of personality cults, but I don't think it's fair to swing too much on the other side. It doesn't strike me as plausible to think that Musk is just sitting on his ass and reaping the benefits of hard work of other people, and did that successfully with at least two companies.


He wasn't a billionaire until years after SpaceX had its first major successes, and the "millionaires bad" narrative got retired once Bernie Sanders became one, so that's not going to work either.


>the credit for creating two wildly successful companies

From reading comments in other similar threads, I've seen the argument that Elon did not create Tesla, so maybe it would be more honest to rephrase your "created" wording. I wonder how many people know that Elon did not create Tesla and that he is assigned the role just because of his big social media presence.


Apparently Tesla was: founded (as Tesla Motors) on July 1, 2003 by Martin Eberhard and Marc Tarpenning in San Carlos, California.

It gets a bit more complicated however: Ian Wright was the third employee, joining a few months later. The three went looking for venture capital funding in January 2004 and connected with Elon Musk, who contributed US$6.5 million of the initial (Series A) US$7.5 million round of investment in February 2004 and became chairman of the board of directors. Musk then appointed Eberhard as the CEO. J.B. Straubel joined in May 2004 as the fifth employee. A lawsuit settlement agreed to by Eberhard and Tesla in September 2009 allows all five (Eberhard, Tarpenning, Wright, Musk and Straubel) to call themselves co-founders.

So I guess it depends on your definition of "created".

https://en.wikipedia.org/wiki/History_of_Tesla,_Inc.#The_beg...


Then Elon is a god; it depends on who defines what "god" means.

When Bob creates his company X and later gets some money from his dead uncle, your "created" definition will assign the dead uncle as the creator of X. I really want to see this definition, but don't segfault if you can't manage it.


He didn't create it, but he can definitely be credited for its success... the Tesla he bought into was a failing company.

Do you really, really think that Musk is just a "big social media presence"? Why doesn't Trump or any of the Kardashians achieve similar feats?


Let me be super short:

1. If you know Elon did not create Tesla, then why would you use the word "created" and not be precise? Even if you don't like the truth about Tesla's creation, you can avoid spreading falsehood and having people correct you and the others you misinform.

2. If you were wrong and thought Tesla was created by Elon, then who is at fault: Elon, Elon fanboys, the Illuminati?

> Why doesn't Trump or any of the Kardashians achieve similar feats?

They probably don't care about cars and space. One dude in your list managed to accomplish a big thing: he got elected by a large number of people.

There are people who accomplished big things whose names or faces we don't know because they are not media stars: think of the people who saved lots of lives by inventing medical procedures, or the ones who pushed for the introduction of safety belts in cars, or the ones who proved some chemicals are dangerous so we stopped using them. In comparison, Elon got his hands on an existing car company and used public money and a lot of PR to increase its value. The timing is not a coincidence: only at this moment did batteries and climate change align to make it possible, and remember there were electric cars before Elon appeared on the scene.


Musk set an improbable goal and he's heading toward it. The other people from the Ivy League could have done the same but didn't. He deserves some credit for that.

Another example: Cook (who came from Compaq) is perfectly able to run Apple. A lot of people could have thought of iPhones and Macs, but Jobs deserves some credit for actually starting the company with Wozniak and actually pushing it to deliver those products.

Repeat with any successful FAANG or company in general.


> Jobs deserves some credit to actually start the company with Wozniak and actually pushing it to deliver those products.

...not to forget the period between 1985 and 1997 when he was ousted, founded NeXT and Pixar, and then re-hired to save Apple, which was on the brink of bankruptcy.


If all it took was money, spacex would have lots of competitors.


> Why not?

Check out how Bezos's space company is doing.


Heck, check Armadillo Aerospace if you think that Bezos is "all money no skill" and doesn't count.


To me, Musk's real gift is finding the right people and convincing them to work for him at the right time.


Elon is probably a sociopath too. Not all sociopaths are as dysfunctional as Elizabeth.


Indeed. It is very rare (this may be one of only a few instances I remember) to see a Silicon Valley or VC-funded tech leader who is unsure of their tech or themselves, given their absolutely optimistic nature (if you could call it that). People who are unsure of things, especially with leading-edge, unproven tech and extreme difficulties, tend to get my vote. I will now be keeping an eye on Cruise. Although I still think driverless cars in mainstream use are at least another 5-10 years away. There are just too many edge cases, but I hope people continue to work on it, as it will be part of the solution to housing and property market issues.


Game theory and personality-dynamics strategy apply to C-levels too! For some industries, companies, etc. you want to be a Holmes type of total confidence. It depends on who your market is, and by that I mean the VCs you want to attract. If they go for the "arrogant boy/girl genius" schtick, then that's what you do. If they want a humble intellectual, then that's what you do instead. Conversely, you may alternate between the two depending on your audience. Maybe you're humble in HN comments, but a monster in VC meetings. Look at Elon's larger-than-life "boy genius" PR persona. It works really well. He may not be a total fraud like Holmes, but his shoddy car AI has killed at least a couple of people in cases where, if the car had had a lidar-like system, that truck or whatever would have been identified instead of being seen as part of the sky.

Also Cruise wants to license to automakers, not make their own car, so they have to act like trustworthy partners in their PR. Elon has his own car company and instead is antagonistic and belittling to automakers because he thinks there's a competitive advantage to it. Any positive sentiment towards his competitors is potentially lost sales for Tesla. Capitalism encourages zero-sum thinking and rewards zero-sum strategies.

CEOs are marketers and salespeople primarily and as such know how to play different roles for different situations. They code switch just like everyone else. The role isn't for everyone in tech because a lot of tech people don't have the people, political, and acting skills for it.

tldr; capitalism doesn't work well with honesty, in fact it works best with dishonesty. You don't have a personal relationship with a CEO or company, you're just absorbing marketing delivered via executive personalities. Personalities are perfectly valid marketing tools in capitalism. Take that as you will.


Theranos didn't use the normal set of VCs because they all thought it was a scam; they raised from some random rich people who weren't professional tech VCs instead. It's unfortunate the only thing she was convicted for was defrauding them, since being accredited investors they should be able to live with that.

As for Elon, he's currently doing a bad boy anti-government bit in an attempt to make Tesla "electric cars you can buy even if you're a Republican". Since we want those people buying EVs instead of coal rolling, that's a good thing.


>> It's unfortunate the only thing she was convicted for was defrauding them, since being accredited investors they should be able to live with that.

NO.

Investors are supposed to be able to live with all the usual risks of technology, execution, marketplace dynamics, etc.

They are NOT supposed to be OK with deliberate fraud.

If you invest at Early_Round when the tech looks promising, but then it fails to develop, CEO truthfully tells everyone what failed, the plan to overcome the failures, and you invest in Later_Round, or don't, and it ultimately fails and you lose your investment, fine.

BUT, if you invested in Early_Round and then the tech fails to develop, but the CEO straight-up lies to you and says they are "light years ahead of everyone else", shows phony endorsements from major industry players, and more so that you invest again in Later_Round, and then lose your shirt - that's fraud, and all involved in the fraud should be prosecuted, convicted, and jailed.

Anything less will create an environment where blatant lying for 100s-of-$millions is okay, and that is doomed to systemically fail.


But I think the point is that she wasn't convicted of endangering people's lives, which many would consider a far greater crime than just a con defrauding some gullible marks. People depended on those tests. They made choices (such as whether or not to have surgeries) based on the results.


I agree with your point, and I definitely wonder what was the failure in prosecution that produced those not-guilty verdicts. Not only was people's health involved with the fraudulent testing service, but the healthcare consumers did not in any way sign up for that.

The specific comment that I was responding to seemed to say it should be okay to defraud accredited investors because they should be "able to live with that".


If you're selling a pump and dump like crypto or Uber, you want the CEO to lie to you because it shows he's good at lying! Then you all go out and lie with him, then Softbank gives you a billion dollars for no reason.


Tim Draper wasn’t random.


How many other SV VCs joined Draper? FWIW, I heard that Ellison also put in some early money. If that was it, then will you agree that "mostly outside of the usual SV crowd" is accurate?

How many rounds did Draper participate in? If Draper stopped after the first round or two, then it "got some early money from SV but everything else came from outsiders."


Good one!


An extremely wise set of decisions. I also see no reason, a priori, to 'blind' a vehicle to certain spectra of EM emissions; nor to accept that only passive sensing (cameras) can be used, when Active Sensing (probing, if you will) that Radar and LiDAR use is clearly giving the control computers more information about Reality (tm).

Using all (or almost all) available Active and Passive sensing technologies, fused with geofencing and operating at 'low-ish' speeds, surely must be the fastest way to achieve 100% accident-proof self-driving vehicles on ordinary city streets. Congratulations, Cruise. Keep up the Good Work.


One argument would be that once you have many vehicles operating with LIDAR, it's unclear which systems are sufficiently robust against being disrupted by interference from other systems. Same with RADAR - while this is not new technology, we've never really had a regime with potentially dozens of systems operating in close proximity.

For all Tesla's problems, the automation-via-cameras solution is the one I find myself having the least problems with: using a single, obvious input (to humans), you don't wind up in a situation where you can have multiple differently-capable systems disagreeing on what they're seeing.


Generally, you solve this problem by using different (randomized) wavelengths, modulating the pulses in a pattern, or, if interference is inevitable, doing something like what WiFi or BLE does. It's not a big problem in practice.
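
The pulse-pattern approach can be sketched as a matched filter: each unit transmits its own pseudo-random code and correlates the received signal against that code, so another unit's pulses don't line up into a peak. A toy simulation with made-up numbers (no real lidar parameters):

```python
import numpy as np

rng = np.random.default_rng(42)

# Each lidar unit fires its own pseudo-random bipolar pulse code.
CODE_LEN = 64
own_code = rng.choice([-1.0, 1.0], size=CODE_LEN)
other_code = rng.choice([-1.0, 1.0], size=CODE_LEN)  # an interfering unit

# Received signal: our echo delayed 200 samples, an interfering unit's
# pulses at sample 500, plus receiver noise.
signal = np.zeros(1024)
signal[200:200 + CODE_LEN] += own_code
signal[500:500 + CODE_LEN] += other_code
signal += rng.normal(0.0, 0.2, size=signal.shape)

# Matched filter: correlate against our own code. The autocorrelation
# peak marks our echo; the interferer's code stays near the noise floor.
corr = np.correlate(signal, own_code, mode="valid")
echo_delay = int(np.argmax(corr))
print(echo_delay)
```

With bipolar codes the autocorrelation peak (64 here) towers over the cross-correlation sidelobes, which is the same reason CDMA-style schemes let many transmitters share a channel.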

I can only suggest you think harder about the 'all cameras' approach. Imagine say a snowstorm. Hail. Rain. Ice sheets. Sun in your 'eyes'. Cameras, basically, suck. Elon's game is to use suck-tech and make it un-suck with computons. Bad Choice.


> I also see no reason, a priori, to 'blind' a vehicle to certain spectra of EM emissions

The reason is simple: cost. The goal isn't to build a proof-of-concept safe AV, it's to build one that meets the safety bar _and_ is as cost-effective as possible, in a reasonable timeframe.

I happen to agree with the target-then-scale approach, but I also agree with Kyle that it's not a given that this approach is definitely superior to the one that launches everywhere and tries to improve functionality.


Sure, but we already know that humans are not very good drivers (car accidents are the #1 or #2 cause of death for age groups between 5 and 50). If you can do better than humans with more input, then that is a compelling reason to use more input, even if you can do just as well with a cheaper system.


I agree, but I was narrowly addressing the claim "I also see no reason, a priori, to 'blind' a vehicle to certain spectra of EM emissions". Cost is the a priori reason, albeit one that is potentially balanced by others.


What’s your view on whether self-driving cars should automatically be 100% liable for any accidents?

I ask this in the context of machines being governed by classical deterministic physics, so there is an argument that there is no such thing as an accident involving a self-driving car: only a design flaw.

This is a genuine question, as I can see that companies with self-driving systems that work, and who do serious fault analysis and rectification, might be in favour of 100% liability. 100% liability would stop cowboys from entering/surviving in the industry and sullying the reputation of self-driving. A company’s system would have to perform well enough that any residual risk of injury could be covered by an affordable insurance policy.


If you listen closely, you can hear Cruise's legal team shouting, "No you can't make a public comment on what level of liability you think we should accept" no matter where you are in the world!


Of course, knowing that self-driving cars are 100% liable would incentivize some people to try to get hit by one of these vehicles for a payout. A more realistic rule would be 100% liability for accidents resulting from an "unforced error".


I still think the cars should anticipate risks and behave accordingly. Out and out fraud aside, they should basically never injure anyone.


There's always some kind of risk. The bridge you're on could fall down. Somewhere, someone will have to judge whether a bad outcome was a failure.


Well, as a pedestrian or cyclist, I'd like to make that risk judgment, not the vendor of the car that the person passing me bought from.


I agree

For one thing electric cars are too quiet.

There should be a law that someone with a bell has to walk in front of the car to let people know it is coming


Well, a noise-making device has already been mandated because the startup car company refused to voluntarily install one. It hadn't been formalized earlier because every other brand basically had it forever.


100% liable as drivers.


The most common cause of motorcycle injuries that make it to the hospital (and statistics) is someone turning left in front of them in an intersection where they have right of way.

Not quite this, but you get the idea:

https://nypost.com/2021/05/20/motorcyclist-rider-survive-hor...

I couldn't find the clip of a motorcyclist patiently stopped at an intersection getting taken out by an out-of-control left turner. Lots of fully legally stopped vehicles get hit.


How does one take into account lack of maintenance by the end user in a strict 100% liability situation?


This problem should be solvable in software. The car can simply refuse to operate in a situation where maintenance is required.


In the general case it's impractical for electronic sensors to accurately measure the mechanical state of the vehicle. How do they tell us the suspension is rusted out and about to break? (In theory you can play some clever tricks with eddy currents or something but that's not going to be feasible for real world sensing.)


That is not much of a problem in practice. A 'rusted out' suspension doesn't happen overnight. There could be regulatory requirements for self-driving cars to be considered 'streetworthy'. Out of compliance, robotaxi disabled.

The tricks you mentioned are already used for some aircraft inspections.

What the software needs to worry about would be other types of failures. Software is much more likely to detect issues before the driver. Say, brake performance is outside the expected range, or appears to be degrading too quickly.


How do you know maintenance is required in a completely automated fashion?


My fear is that car manufacturers will turn cars into a totally dealer serviceable only thing (even more than they are now), like the car version of the glued shut Microsoft surface that gets a 1/10 on the ifixit repairability score.


In the case of Cruise this wouldn't be a problem, because you wouldn't own the vehicle; it's a robotaxi service. Your point is still valid, though I'd ask: how do you even solve certain classes of issues? Let's say you had to replace a camera. You can't just plop one in and have it work. There is a ton of complex calibration work that needs to happen, both intrinsic and extrinsic.


Service intervals based on time and usage, combined with certified repair. From a passenger's perspective airlines are strictly liable, but presumably airlines could then sue the relevant third parties in such a case. I suspect a similar model could work fine for self-driving cars.


Put an RFID tag in the tire and store how many rotations that tire takes over time. Once it reaches a threshold, refuse to spin that tire further.
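
As a minimal sketch of that idea (the circumference and threshold values here are made up for illustration, not real specs):

```python
# Toy rotation-count bookkeeping for a tire's service life.
TIRE_CIRCUMFERENCE_M = 2.0   # roughly 2 m of road per rotation
ROTATION_LIMIT = 30_000_000  # hypothetical end-of-life threshold


def rotations_for_distance(distance_km: float) -> int:
    """Rotations accumulated over a given driving distance."""
    return int(distance_km * 1000 / TIRE_CIRCUMFERENCE_M)


def tire_ok(total_rotations: int) -> bool:
    """True while the tire is still under its rotation budget."""
    return total_rotations < ROTATION_LIMIT


used = rotations_for_distance(50_000)  # 50,000 km on this tire
print(used, tire_ok(used))
```

At these made-up numbers, 50,000 km is 25 million rotations and the tire is still under its budget; the sibling reply's point stands that real wear depends on far more than rotation count.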


Tire wear is a complex interaction of various factors such as compound, slippage, road surfaces, torque, weather, etc., and not just wheel revolutions.


I wonder about that. The top maintenance issue that comes to my mind is sufficient tread on tires. Bald tires will still work great on dry streets but as soon as it starts raining, you start skidding. I honestly don’t know if software could intervene quickly and reliably enough there.


The software could require a trip to the dealer for a visual inspection of the tires at set intervals. Hopefully free of charge for something so simple. A quick hookup to the computer and the interval is reset.


Tesla vehicles can detect and notify the driver of tires with low tread remaining. It's detected via a delta in rotation speed between the other tires and the tire needing replacement. Seems like a software implementation is straightforward.

https://driveteslacanada.ca/software-updates/your-tesla-can-...


That doesn't help with end of service life, it helps with uneven wear.

If all the tires wear evenly this detection won't help.


I was editing my comment while you were commenting (removed "service life," as software won't detect dry rot or other defects undetectable from wheel-speed measurements). Assuming tires wear evenly, you could still detect the change in rotation speed over time due to tread wear.


You can detect a 3 mm radius decrease? I would say no.

Remember that wheel slip depends on surface properties.


3mm is about a third of total tread depth, and 1% of the tire's radius. Why wouldn't this be detectable? ABS sensors tend to have 48-tooth tone rings and there's no reason why you couldn't vastly increase this number if you wanted.

Longitudinal tire slip is caused by thrust in excess of the tire's grip, which is a function of slip speed among many other things. Grip peaks with a mild amount of slip, but slip isn't the norm outside of racing. Mild acceleration produces zero slip.

Lateral slip angle is a different story.
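
A quick back-of-envelope check of detectability, using the 318 mm new and 310 mm worn radii discussed in this thread: over a single kilometer of straight driving, the rotation counts diverge by roughly a dozen revolutions, far more than one tone-ring tooth's worth of resolution.

```python
import math

# Rotation counts for a new vs. worn tire over 1 km of straight driving
# (radii are the illustrative figures from this thread).
NEW_R = 0.318    # meters, new tire
WORN_R = 0.310   # meters, tread worn away
DISTANCE = 1000.0  # meters

new_rots = DISTANCE / (2 * math.pi * NEW_R)
worn_rots = DISTANCE / (2 * math.pi * WORN_R)
print(round(new_rots), round(worn_rots), round(worn_rots - new_rots, 1))
```

The remaining question is exactly the one raised above: whether load, pressure, and slip contribute more noise per kilometer than that signal.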


The slip is only 0 when free-rolling, by definition of the rolling radius.

The rolling radius also depends on load (car mass) and tire pressure.

I don't say I know it's impossible, but it feels like there is way too much noise.


I think you can. Remember you have a lot of time to work with. The car already knows when wheels are slipping, so it can discard that data. You just pick a time when you are going straight on a nice, somewhat flat, dry surface at a consistent speed and measure then. Even on curvy mountain roads you will find plenty of such stretches, and you only need to measure every few hours.


It does if you have a GPS speed input. In the race-car DAQ I use, I can see speed differentials due to tire wear.

A typical 17" tire has a radius of 318 mm and 8 mm of tread. So a bald tire is 2.5% smaller than a new one.
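
Plugging those figures in as a quick back-of-envelope check:

```python
# New vs. bald 17" tire, using the figures quoted above.
NEW_RADIUS_MM = 318.0
TREAD_MM = 8.0
bald_radius_mm = NEW_RADIUS_MM - TREAD_MM

# A bald tire is ~2.5% smaller, so at the same road speed it
# spins proportionally faster than a fresh one.
shrink_pct = 100.0 * TREAD_MM / NEW_RADIUS_MM
spin_up_pct = 100.0 * (NEW_RADIUS_MM / bald_radius_mm - 1.0)
print(round(shrink_pct, 1), round(spin_up_pct, 1))
```

A roughly 2.5% speed differential is well within reach of GPS-versus-wheel-speed comparison over a long averaging window.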


In a world where most dealers and manufacturers want you to pay a subscription for everything, it isn't likely to be free of charge.


That seems like bad value when you can look at your own tires. (And should in case they get a sidewall bubble.)


This is what service intervals are for: your car likely requires a service every 12 months or XXXX kilometers, whichever comes first. The service doesn't just include actual work on the car; it includes an inspection of the lights, tyres, etc., and a report to the owner saying "tyres need replacing in the next couple of thousand Ks, they're almost at the wear indicator".


I've given it some thought, and I think the SDC manufacturer must be liable for any accidents the SDC causes. Who else is there? The passengers certainly can't be responsible for any programming or manufacturing errors.

There are corner cases and exceptions, but that has to be the rule.

Which should mean that as an SDC owner, you don't have to pay car insurance.


>the SDC manufacturer must be liable for any accidents the SDC causes.

There's the rub... How do you separate those from the rest?


We already have a system that relies on assigning fault among the multiple parties involved in an accident. This same approach applies just as well to SDC accidents. It would be even easier than the status quo, given the much richer data SDCs could be regulatorily mandated to provide.


If there is disagreement between the relevant parties, through the court system (or through insurance agreements).


The whole event will be recorded by at least the SDC vehicle.


When you say accidents, do you mean ones where the robo car erred? Asking because of course it's possible to get into an accident where you are not at fault, and I think this would also be true of a robocar.


My conclusion from years of self-driving, LIDAR, etc. research is that managing medium to heavy precipitation reliably might be impossible.

Visual algorithms run into the same problem as human brains, and the size of e.g. rain drops interferes with the frequencies employed by radio techniques.

Is anyone aware of any strategies that give us hope in solving this problem?


Though ... how good a job are humans actually doing in heavy precipitation? I know that under normal circumstances our brains constantly do a bunch of work to create the illusion of a comprehensive high res visual field even though we really only have detail at the fovea. When it's raining heavily, and we think we can see "enough" to drive ... are we right? Or are we just lucky and pedestrians and cyclists are more likely to be off the road at those moments and so accidents increase but not to the point of disaster?


Agreed - I think the "but can it drive in a whiteout blizzard" question is best redirected to: "can a human?"

I suspect there are operating conditions the AVs won't solve acceptably - some of those conditions IMO are also conditions where we should not accept a human to solve acceptably. In general I feel we have a lackadaisical culture around driving that encourages/excuses unsafe behavior, and is overoptimistic about people's ability to drive well.


I grew up in a part of Ontario where total whiteouts can happen fairly frequently on both major highways leading in and out of my small town. It is in fact possible to drive dozens of km in near or total whiteout conditions simply by following the hazard lights of the car ahead of you. You very frequently will see lines of cars km long, all going <20km/h, white knuckled and crawling home. Maybe one in every couple thousand goes in the ditch.

I don't think vision-based FSV will ever reliably handle winter conditions like this. The engineering and QA effort just isn't worth it, cost-benefit-wise, when you factor in the very small number of drivers who are consistently exposed to conditions like that. My father, who spent his career commuting to the city on that highway, was disappointed when I explained this to him.


I was once in the passenger seat in a downpour. My father was driving to the nav, and it seemed like we were traversing the Mekong underwater. It was a complete instrument-driving condition, except at most a few feet of road markings were visible. The car was on local roads. He made cautious turns and drove slowly, because it was obviously scary. Suddenly the nav said "Ding! You have reached your destination" in what seemed to be the middle of a road, and we immediately started making noises at the nav.

Then a person knocked on a window through the brown wall. It was someone we were to meet at the destination. He greeted us and told us to come out. We tried to explain that we couldn't just walk what was perhaps a quarter mile to the place in this heavy rain, leaving the car at the roadside. He insisted it would be a short walk, and gave us no choice. Only when we stepped out did we realize that the car was right in the middle of the premises we were looking for, just a couple feet from the main door.

This memory surfaces in the context of human drivers and inclement weather; I'm still in one piece, but maybe that has more to do with my luck than with me playing it extra safe every time.


Humans certainly don't reliably handle these conditions. ;)

It does seem like "something else" is needed for these kinds of low-visibility scenarios -- frankly, when nobody should be on the road.


The reason all that works is people drive to what they expect. In such conditions you might worry about hitting a human standing on the road, but no human would be there in the first place; only other cars with flashing lights. As such, so long as you stay in the correct lane for your direction of travel and go slow, you don't need to see, because there is no real danger most of the time. Most of the time...


The worst is when one goes into a ditch and the car following them follows into the ditch because their main indicator of where to go was the running lights and tire tracks of the car in front.


Where I grew up people would go together off the side of the mountain this way in very heavy fog.


rumble strips are amazing in whiteouts. it's a relief when you hear one because you now know where the side of the road is...


people are actually pretty good at driving in blizzards in locales where they happen often. snow tires (and possibly chains), good clearance, and great caution can get you a long way.

obviously you try to avoid driving in these conditions when possible, but sometimes a moderate storm is much more intense than forecast and you get caught out. pulling off to the side of a snowy mountain pass doesn't guarantee your survival either.


I know next to nothing about lidar engineering but 60GHz band radars can still function out to several hundred meters in rain. It is significantly attenuated as the rain rate (in mm/hour) increases, but it takes a lot of rain to make it completely useless.


This depends on how powerful your antennas are; $200 WiGig transmitters will struggle to get much range over those distances.


And how directional and narrow the gain pattern is.


Summers in South Florida will put that to the challenge.


People can't see in that weather either


This has been my argument all along. People are driving in unsafe conditions when they should be stopped.


In Florida, I will say, when it truly pours like that people do tend to drive extremely slowly and turn on their hazards. Also, sometimes these storms just appear out of nowhere. Over the summer I was driving from the Vero Beach area to Fort Lauderdale; on the way to Vero Beach, clear skies, and on the way back there was an ENORMOUS storm that flooded streets and you couldn't see crap. It just happens.


Moreover, contrary to some other commenters:

Unless it truly is "once-in-a-lifetime", or very brief, you cannot just stop and give up. To be a viable replacement, a vehicle must (as a human driver would) continue to make progress even in extremely adverse conditions. The progress might be much slower than usual, might be a re-routing (back up away from floodwater and go elsewhere), etc.


just because you can't write code to handle it doesn't mean a human can't. thousands and thousands of people drive in rain, sleet, snow and hail daily and do it just fine.


Humans do not do just fine - look at the statistics of car accidents. Humans refuse to admit how bad they really are at driving.


Airplanes couldn't fly in inclement weather for decades. Took a while but we solved it.

"Oops it's pouring, can't get a Cruise, gotta fall back to an Uber" doesn't sound that terrible to me, for now.

Cruise could even offer a product that compensates/insures you for that eventuality, if for example Cruise was your primary vehicle.


But if self-driving succeeds then there won't be enough gig drivers ready to go to cover a self-driving outage.


Taxis, available with e.g. a simple phone call, have existed since before telephones and cars really, and they still exist now even as ride-sharing has taken over. They will exist as Cruise rises.


They exist because they're a viable business because there is enough approximately consistent demand. That's not guaranteed to continue.


There might be taxis, but not enough for everyone who wants to catch one. It’s already difficult enough on rainy days.


In addition to going into highly dense cities and inserting autonomous cars into existing driver regulation, an interesting auxiliary strategy could be to partner with a master-planned community designed from the ground up (physically and regulation-wise) to be an autonomous-first town, where the majority of vehicles are autonomous and the majority of homeowners are pro-autonomous-car.

The roads and pedestrian crossings could be much more clearly marked, with RF transceivers, etc., and inclement weather could be pre-considered. The HOA agreement could have an "I agree to co-exist with autonomous cars" TOS clause and perhaps a built-in monthly subscription.

I think a ton of home builders (Lennar, etc.) and senior community developers (Ventas) would be interested if only as a PR concept. I also think a lot of remote Techies/senior citizens would be interested [1]. Sort of like this but replace golf carts with autonomous cars.[2]

[1] https://news.voyage.auto/why-retirement-communities-are-perf...

[2] Tom Scott - City of Golf Carts https://www.youtube.com/watch?v=pcVGqtmd2wM


Cruise acqui-hired voyage, which was basically working towards this. I don't think they've done anything with it since though.


Congrats on your incredible accomplishment! Thanks for doing this the responsible way. Tesla's approach does not inspire confidence. Starting at the high end, with expensive, reliable tech and slowly bringing the costs (and bulkiness of the equipment) down is the right approach!


In my experience, expensive doesn't necessarily mean more reliable; it could just mean higher fidelity, higher resolution (and possibly less reliability, due to the use of parts produced in smaller volumes on the global supply chain), etc.

This improved resolution doesn't necessarily help an AI grok the situation in real time with a >20 Hz response time, though.


I think the overall sentiment is more the "let's avoid premature optimisation" rather than "let's spend the most money".

If you have pre-sold a 'self driving' capability which you have guaranteed to be backwards compatible on cars you have already sold, then you are effectively cutting out Lidar as an option unless you are going to go back to all those cars and screw it on.

And considering that self-driving isn't solved yet, it seems like a bold move to define both your processing power and your sensing hardware in a way which makes it very difficult to (commercially) change.


"Today it's unclear what strategy will win"

Thank you for saying the honest, obvious answer. I am tired of people claiming to know the implementation details of a technology that does not yet exist. As a nobody retail investor, I have long positions on autonomy (Tesla, Nvidia, GM/Cruise, Google), not specific takes on it.

In fact, I think the radar/vision debate is not going to matter long term, as there can be multiple winners and the tech will likely converge.

https://www.greennewdealio.com/transportation/teslavswaymo/


The challenge I like to bring up is construction zones. How will cars cope when a road is unexpectedly under repair? Traffic is taking turns sharing the left shoulder with a flag man directing you?

Some people I've talked to insist that an up to date map is "all that's needed" and that all such projects will need to be put in the system. Haha, a water main broke and they think people are going to update a database for them?

A traffic light is out and the police are directing traffic at an intersection. This will happen inside any given geofence eventually.

The list goes on... forever. Tell me how self driving cars don't need full AGI.


A decent chunk of this list can be handled by the car coming to a safe stop and signalling that it is unable to proceed and you need to navigate the situation.

I suspect a lot of these could also be handled by that being a remote connection where a human is given the camera input and can indicate how the car should proceed (i.e. broken water main is a road obstruction that won't clear, and the obvious answer is a manual override to mark the road as unusable so the nav system reroutes).


Let's hope the only passenger has a driver's license and isn't drunk or having a medical emergency.


In both those cases they wouldn't be able to drive anyway, and the result is not more dangerous or worse than the alternative.


>> In both those cases they wouldn't be able to drive anyway...

Well that's not even a robo-taxi then.


Both Cruise and Waymo have remote operators who can direct the cars when they phone home.

Here's an example Vogt discusses a bit: https://youtu.be/sliYTyRpRB8?t=202

...of course this brings up many other problems, like network connectivity and inter-city transport, which the companies have as far as I know not commented on. IMO the sensible solution is obviously to just require passengers be able to take over if given plenty of warning, but for whatever reason Cruise isn't doing this.


Been so impressed with the Cruise approach. No hype, no promises, just keeping quiet, working hard on a very hard problem until it’s ready to launch. Congrats to everybody who’s been a part of this.


Well to be fair they did get acquired and get access to a bunch of resources allowing them to fully execute. A lot of the hype machine is a result of the necessity of getting access to those resources. It’s just a difficult situation.


Congrats on this huge milestone!

So refreshing to see a leader in this field say “we are not sure which one will work out” rather than just hyping their stuff.

Can I get a test ride soon!


Tesla limited themselves to cameras because Musk said “humans can do it with two eyes”. He also didn’t like the look of LiDAR on cars. Such an idiotic decision. Good to see Cruise is not led by a megalomaniacal CEO.


As of a few years ago, lidar added at least $7500 to the cost of a car. That's a huge price difference for a consumer.


Currently Tesla is charging $12,000 for access to their self-driving package. Even if we assume that the price would increase to $19,500 if they included a lidar (I'm skeptical), it would be the difference between paying $12,000 for a feature that doesn't work versus $19,500 for a feature that might work. This is a luxury option no matter which way you swing it.


Definitely a luxury option for showing off at this point, as a status symbol, lots of people out there daily drive cars that don't have a bluebook value anywhere near $12k for the entire vehicle.


> lidar added at least $7500 to the cost of a car

That was ages ago in the self-driving world. Now it costs only a few hundred dollars, starting at $99.

https://velodynelidar.com/products/puck-lite/


Problem isn't how much LIDAR used to cost or costs now. Problem is that customers paid for a product, and they still don't have it, many years later. And what is being showed nowadays is nowhere close to what was advertised.


Cruise is a ride service - they don't sell cars. So the actual question is: How much does it cost to pay an Uber driver over the lifetime of a car?


A more accurate summary of Tesla's position is that they believe the incoming data from different systems (lidar, radar, visual, etc.) must be merged, and very often there is contradictory data.

Resolving that correctly takes time (in ms), adds complexity and will sometimes be incorrectly judged.

Since the visual data is the most accurate the vast majority of the time, it will take precedence over the other inputs anyway. As humans have proven that vision is technically enough, they decided it makes more sense to squeeze the most out of the visual data, rather than collecting other data, crunching it, then (in most cases) discarding it.

I am not sure they are right, and am pretty sure that even if so - they need better cameras.

But misquoting them doesn't really help your argument.


I believe those arguments are simply justifications for the fact that most people won't be able to afford Teslas with Lidars


Amazing! I’ve been following Cruise a long time; those videos were so funny. Keep on going and conquer the world!


Will Cruise eventually be available on existing TNCs and other MaaS platforms? Or is the play here to create a new vertically integrated taxi service?

If you've read Dan Sperling's Three Revolutions, any thoughts on what kind of transportation future (https://www.planningreport.com/2018/03/21/dan-sperling-three...) you foresee Cruise contributing to building?


You didn’t need to, but you decided to show up on HN to clearly articulate your strategy, so I applaud you for this.

> Our strategy has been to solve the challenges needed to operate driverless robotaxis on a well-equipped vehicle, then aggressively drive the cost down.

There are broadly two ways to achieve your desired outcome of aggressively lower costs:

1. use money raised from VCs to subsidize the final cost of the product, or;

2. use money earned from customers as a natural consequence of growing demand for your product, in spite of strong competition from established OEMs, to fund your expansion.

Historically, the former has a lower likelihood of success relative to the latter and that’s because the former is really just a cash transfer from VCs to consumers. The latter is how Apple and Tesla have been able to grow into what they are today.

The reason the 2nd kind is so effective is that, when executed correctly, it often leads to a virtuous cycle: your growth will lead to steadily increasing order volumes with your suppliers. This will in turn lead to sourcing more suppliers to keep up with your growth. At a certain point, a supplier will feel confident that you are here for the long haul, causing them to take on more risk by pouring additional capital into their business to expand capacity. This will improve their ability to accommodate your current and future needs quickly, cheaply or both.

In other words, reality is multidimensional. It is rare for an individual company to aggressively drive down the costs of its product single-handedly, unless that company is ready to assume an enormous amount of risk currently being borne by its ecosystem of partners and suppliers.


I expect the media led zeitgeist to slime you on a few fronts:

1. AI/automation/tech bros undercutting the working class.

2. The mortal danger of self driving cars to pedestrians and the public- perhaps with an AI bias/racism zest.

3. The price, location-availability, or otherwise explicit exclusion of people that damage the cars or are otherwise unprofitable being harmful.

4. The proliferation of self driving cars reducing public transit use, thus reducing public transit investment, reducing transit access for poor, increasing pollution, and clogging roads.

5. Something something self driving taxis are subsidized by the government via public investment in roads.

All of these arguments are bullshit and I am not excited to hear people recite them to me in 5 years.


All-electric fleets of safe, non-honking AVs that are fine with whatever routes are required of them and go to designated areas to park and charge are going to make our downtown areas so much better.


The urbanist PoV is that anything car-shaped is bad for a city and it doesn't matter how smart it is. The proper answer is micromobility, aka ebikes and smaller vehicles that don't need to ever go highway speeds.

Full size EVs are still bad for air quality too because of tire dust.


Full sized EVs have their place in cities. However it isn't for mass transit; the train or bus is for getting people around. EVs are for getting goods and maintenance tools around. This is a small minority of traffic in most cities.


Just because you don't want to think about externalities doesn't mean they don't exist. There are a lot of strong ideological assumptions you have to make to handwave all these away as "bullshit".


From personal experience I do not see how they are ready. They actively avoid the rules of the road and engage in dangerous driving actions because the car "sees" an obstacle or warning. For example, when a car is double parked the self-driving vehicles will swerve into the opposite lane, and in some cases almost hit another car, bike rider, or person.

When at stop signs they will sit back and wait even though it is their turn. At times they will slam on the brakes because the car saw a bird or reacted to steam from the ground, causing rear-end accidents.

Please talk with your legal team about embellishments made in insurance claims against other drivers.


How long ago did you see Cruise cars doing these things?


Unless you have a conceptual AI with causal systems understanding, reacting in real time to a spacetime model of the world based on current and recent events, people are going to get injured and/or killed by unusual real-world events while riding in these autonomous cars. Although cats and dogs have great perception, we don't let them drive our cars for a reason.


> people are going to get injured and or killed by unusual real-world events riding in these autonomous cars

I don't think anyone inside or outside of the AV industry is expecting that there will be zero injuries or fatalities involving AVs. Why would that be the bar, when AV rides displace human drives that already injure and kill tens of thousands?


The difference would be frequency of injury and/or death as AV cars can't think and react dynamically to non-pattern situations.


Sure, and they can't get drunk or fall asleep either. Even taking for granted that AVs can't find an operating domain in which "non-pattern situations" are covered by failsafes, it's far from obvious that the net advantage goes to human drivers.


Does Cruise plan to try to compete with Uber, Lyft, etc.?

I feel like this tech could have a massive social impact if you sell it to local governments so they could offer a highly efficient, subsidized robotaxi service to their residents. It would democratize access to transportation and enable so many classes of underserved people to gain access to reliable transportation.


I'm nowhere near as experienced as you and your team, but that is what I was thinking as I read this. Tesla rather quickly went from the expensive sensors to the camera-based setup they have now, and it'll be interesting to watch how all this unfolds, safely from my 2006 vehicle with nearly no computers.


There's absolutely nothing stopping Tesla from adding back lidar/other sensors if the technology becomes cheaper or it turns out visual-only isn't accurate enough. Elon has other advantages that no other company is anywhere near competing with, and he clearly understands this; he understands his position well - and it's strong. He's also a very agile entrepreneur/engineer and not afraid to pull the trigger on whatever ideas come to his attention as being the best decision. He's also already succeeded in Tesla's mission - which was to get other vehicle manufacturers to transition to EVs - so anything else after that is really just icing on the cake. Tesla stockholders still believe strongly in him, and I'd argue rightfully so.

For now, by using the cheapest technology he's arguably selling more EVs and/or making more profit per vehicle. If the market's competition requires a course change, I don't see why he wouldn't take it - I don't think he'd fall prey to the sunk-cost fallacy. The reasons for decisions may not be obvious to the public either, as we likely don't know the details of his nuanced master plan.


What's stopping them is that FSD has been promised on all these cars without those sensors. Taking that away would likely mean a lawsuit.


A promise isn't a contract, so whether it's actually guaranteed in the language of whatever agreements were signed will be the determining factor.

And the automotive industry has functioned since its inception on risk-benefit-cost analysis: if the cost of a future fallout is less than the short-term benefit, they tend to decide for the short-term benefit. Most disgustingly, this applies to known defects in vehicles, where recalls only happen if the cost of the potential harm/deaths is higher than the cost of replacing whatever needs to be replaced. I'd hope that practice has greatly improved, but who knows - most of our government agencies seem captured by industrial complexes.


No, courts look at both the spirit of the language and the letter of the law. The letter takes precedence only when it is clear that the two parties are not intending to defraud each other and there is just a misunderstanding. If the court decides both parties had a different understanding of the contract than the letter, then that understanding is what is used. As a lawyer in court, your job is to make the court believe that what you understood the contract to be about is what should apply - if the letter supports you then you yell that, and since the letter is easy to prove while a shared understanding that differs from the letter is nearly impossible to prove, the letter normally wins.

Marketing is admissible in court as evidence of the intended contract. Since marketing is generally easier to understand than the legalese, if the court decides the marketing is misleading they will tend to punish you for that and accept the marketing as the shared understanding over whatever the letter of the contract is.

Note that I used a lot of wishy-washy words like "tend"... Each court case is different, and there is no real rule for what courts will do in any given situation. Consult a lawyer for legal advice about your specific situation.


What does the word “aggressively” accomplish here? You’re talking about a future hypothetical, so why bother?


Does your car stop at stop signs?

Because at least one of y'all have to for this to work.


I’m assuming "well equipped" means more capable data processing and higher bandwidth needs; is this the case?

If so, do you have a sense for how many orders of magnitude more bits of data your sensors are acquiring versus Tesla?


thanks for pushing the frontier of self-driving cars and articulating this strategy.

historically, the pattern in tech is to succeed with strategy 2 -- that is, ride moore's law and achieve exceptional performance by combining commodities into super systems. google server farms are the canonical example.

obviously, this is only a pattern and not a law.

tesla's pathway represents strategy 1: start with super machines then drive costs down.

for non-SDC experts like me, could you share why it felt more compelling to start with super machines then drive costs down?

excited to see cruise help lead society into the future!

thanks again.


Sounds exciting! Are you hiring? I had a recruiter from Cruise drop out on me because I wanted to stay and work from Canada and wasn't in a position to relocate to the US.


Hey Kyle,

Why did your previous CEO Dan Ammann quit just before this launch?


I see a few neighborhoods missing on the signup sheet. Are the crazy Bernal Heights streets a bit too much for this stage? :)

Looking forward to a ride from my home there!


As a bike rider, father, and sometimes inattentive human, I would like to say thank you for the safety you are bringing to our cities.


Thank you for sharing! Sharing details on how your system works brings confidence to customers...


Did you watch the videos?

Based on the reply to the question "What sets Cruise's technology apart from others like Waymo and Tesla? In other words, how was this difficult technical problem solved in a way others have been unable to so far?", which you can hear here (video at the correct time):

https://youtu.be/ABto5nqWgc0?list=PLkK2JX1iHuzz7W8z3roCZEqML...

Thank you, but I won't be volunteering to ride one of these.


Lol, what answer do you expect from a question like that? At a high level, Cruise is taking the same approach as Waymo, and a different one from Tesla: start with lots of hardware, HD maps, and a targeted operating domain, then try to scale. Answering in any more detail would a) give away trade secrets and b) rely on knowing trade secrets about Waymo's cars that they probably don't know.


Will you share your systems safety work? How many fatalities per drive hour do you expect?


You guys ever think about selling your LIDAR data?


No, the different strategies are that Tesla has a vehicle actually being used by hundreds of thousands of people and is slowly incrementally improving their self driving with massive amounts of feedback and data, while these demo companies are doing if statements around the block.


> vehicle actually being used by hundreds of thousands of people and is slowly incrementally improving their self driving with massive amounts of feedback and data

Throwing data at the problem isn't going to solve it. Only people without expertise in AI think that's how it works.


> while these demo companies are doing if statements around the block.

[citation needed]

I only ever see Tesla fanbase making outrageous claims like this without any supporting evidence.


Well, you are ignoring the point. The whole differentiating strategy between Tesla and everyone else is the incremental improvement of large numbers of vehicles versus the magic "hey, we came out of nowhere and now just drive ourselves". This has been repeated throughout tech history, and the incrementally improving real-life approach always wins.


The large number of vehicles makes it harder because you can't do hardware upgrades and a regression will kill someone.


And Tesla will just ingest large amounts of data from their fleet and magically dump an L5 solution one day? That's believable?

Elon Musk has been promising imminent L5 self driving every year for the past 7 years; that requires more than incremental improvement. The ones actually doing incremental improvements are companies like Cruise and Waymo, making it work one geography at a time.


The coca cola company sells even more units of non-self driving products than Tesla, and for a fraction of the cost!


They are betting that this hardware combination is the fastest to market, given the constraints of today's software.

When the ML stack is capable of leveraging purely camera sensors, Cruise and others like them own the fleet and can swap out the hardware. Tesla does not "own" the fleet per se. So perhaps these are different bets on which cars will still be on the road when the ML threshold is crossed.


> when the ML threshold is crossed

If, not when.

Even the most cutting edge research today still pales in comparison to LiDAR.


Is it really an "if"? I think it's a pretty safe bet that in 100 years human-quality CV object detection will be solved (note, we both know that it is possible AND this doesn't require AGI). So then it's really a question of when (presumably you don't need the full 100 years).


As an amateur (non-AI-expert) it seems to me that behind every corner is lurking a sub-problem that is AGI-equivalent. I don't see any reason to believe that humans do human-quality object detection without also deploying tremendous contextual understanding of the world. So perhaps it will turn out that a computer needs something similar?


I think decision making in driving is highly contextual, but LiDAR doesn't help there either. Purely visual field extraction is something even very simple animals can do (presumably with much weaker abstract context-processing capabilities).


> note, we both know that it is possible AND this doesn't require AGI

Not who you're replying to and not saying you're wrong, but how do we know this?


We know in the sense that very simple animals do it, and it doesn’t require decision making (in the sense that LiDAR only helps with perception).


So I can teach my dog to drive the car?


Your dog can reliably detect objects, judge distance and avoid them.

That's all the person is saying. Simple animals can use only vision to do what we're using lidar and radar to do. But neither camera, lidar nor radar or any combination of them guarantees that you'll be able to make a computer drive a car in all situations.

For me, intuitively, the problem of reconstructing a distance field from cameras can't be harder than, say, predicting what a person on a bike will do next, or detecting lines on the road in heavy rain or snow. So it seems very likely that an "AI" capable of driving a car in all situations would be powerful enough not to need lidar or radar (though I don't see the point of dropping radar, since it gives you some ability to "see" around objects, which can make cars better than humans).


Would Tesla be ahead if they had incorporated LiDAR?

Because they already have the data advantage for ML.


Also you can't sell sexy Teslas if they have ugly lidars on top.


I think it's not just that. When Tesla started their FSD journey, they had to determine what sensors they could add to the car. Lidars back then were way more expensive than they are now, and it wouldn't have been feasible to add them at that time.

They can't add them now to new vehicles because they promised that the vehicles back then were only a software update away from full autonomy [1]. Building on Lidar now would mean developing two heavily diverging stacks, and going back on the promise of old Teslas being "FSD capable" would introduce a huge liability.

Long story short, Tesla's stance on Lidar was determined 8 years ago, without the option to revise the decision in light of later developments.

[1] Note this has turned into "we'll only have to replace the computer in the car", which is still doable, contrary to adding sensors to the existing vehicle.


Tesla has really fallen behind. I think Karpathy will be fired this year if his team can't achieve at least L4.


> I think Karpathy will be fired this year if his team can't achieve at least L4.

Ah so Karpathy will be fired this year. Because they're not reaching L4. FSD isn't even L3 yet.


I do expect at least some companies will hit L4 within the decade(?) but it's going to be under limited conditions that won't include urban driving. Which could actually be a very useful capability but isn't the "don't own a car" future that some really are focused on.

------

Level 4: High Automation

System capability: The car can operate without human input or oversight, but only under select conditions defined by factors such as road type or geographic area.

Driver involvement: In a shared car restricted to a defined area, there may not be any. But in a privately owned Level 4 car, the driver might manage all driving duties on surface streets, then become a passenger as the car enters a highway.

Example: Google’s now-defunct Firefly pod-car prototype, which had neither pedals nor a steering wheel and was restricted to a top speed of 25 mph.


> "driver might manage all driving duties on surface streets then become a passenger as the car enters a highway."

This is the kind of setup I can't wrap my head around. The car might "require" you to take over when you exit the highway, but it can't exactly "make" you. If you fall asleep on the freeway and the car isn't willing/able to drive at the end of your journey, or in edge cases if you were to pass out, etc. what does it do when it gets to your designated exit, or to the final exit of a designated highway? Are there going to be a bunch of cars all stacked up with flashers on the shoulder by every off-ramp waiting for their people to wake up / quit playing on their phones and engage manual mode?


I'm thinking more like a semi truck. Get onto the freeway on-ramp, pull over and get out, then the truck continues on without you for miles before taking an off-ramp where a driver is waiting to handle the nearby streets. I expect truck stops in rural areas will (with DOT help) get special on/off ramps that are approved (maybe a special stop light?) so that trucks can go to a full service pump for fuel and get back on the freeway.

As you say, city driving is hard, but there are a lot of trucks that cross the US on freeways that are easy to automate.


I'm as skeptical about self-driving as just about anyone. But this seems to be getting into real edge case territory. Person falls asleep/is watching a movie and doesn't respond to increasingly urgent alerts? Is this really a problem? And is it a problem that's greater than fatigued driving today?


Pull into the breakdown lane.


What happens when there isn't one? Roadworks and accidents cause frequent closures of the breakdown lane. L3 has a lot of edge cases where the vehicle is supposedly too dumb to drive, but smart enough to know it shouldn't drive. It may be death by a thousand cuts.


Achieving the “don't own a car” future doesn't require automation, as much as urbanization. No car technology would help reduce the climate crisis we're facing, unless that technology eliminates private transit (as in rides not shared by multiple people, not as in private ownership of said transit)


As someone without a drivers license or a car, automation would still be wildly useful in driving down the cost of occasional journeys to locations inconvenient to serve with public transport to a point where it makes living without a car an option for more people.

You're right, though, that the challenge is that it also makes it easier for everyone to opt for cars over mass transit.


Oh sure, L4 within the decade for companies other than Tesla, totally doable. You could argue Waymo and Cruise are already there with geo limitations.

But Tesla within a year with no lidar? Yeah, no. Not happening.


“I would be shocked if we do not achieve Full Self-Driving safer than human this year. I would be shocked.”

-- Elon Musk

He set the milestone.



He told investors the same thing last year. Elon milestones mean nothing.


What comes first: Tesla FSD or the year of the Linux desktop?


Tesla has not fallen behind; it's rapidly catching up. It's just not that easy to catch up if you are 10 years behind the competition and handicap yourself with inferior hardware. Maybe you are right and Karpathy will get fired, but at that point it's time to sell your Tesla stock.


I think you have to be ahead first before you can fall behind. Tesla never had that problem.


What is success?

Does 50 miles of geofenced and daily mapped streets mean Cruise won self driving?

What if Waymo gets to 20k miles of geofenced roads and monthly mapped?

What if Tesla gets to the point of one intervention/crash every 100k miles? 10M Miles?


The human accident rate is about one per 500K miles, so if they were able to get in that range, then yes, they would have succeeded; drivers would be able to stop paying attention to the road without putting themselves and others in danger.

But the current FSD beta's intervention rate is more like one per 10 miles, judging from some quick googling. I see no particular reason to assume that incremental improvement can take us from 10 to 500K.
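To make the gap concrete, a quick back-of-the-envelope calculation (both figures are the rough estimates from this thread, not measured data):

```python
import math

# Rough figures from the discussion above (loose estimates, not measured data)
fsd_miles_per_intervention = 10
human_miles_per_accident = 500_000

# How many times better the system would need to get to reach human parity
improvement_factor = human_miles_per_accident / fsd_miles_per_intervention
orders_of_magnitude = math.log10(improvement_factor)

print(f"needed improvement: {improvement_factor:,.0f}x "
      f"(~{orders_of_magnitude:.1f} orders of magnitude)")
```

That's roughly a 50,000x gap, i.e. close to five orders of magnitude, which is why "incremental improvement" is a hard sell here.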


Especially since it can't solve this in 2022 (video at the correct time): https://youtu.be/wTybjJj0ptw?t=238

Or even worse, just managing an empty intersection (video at the correct time): https://youtu.be/wTybjJj0ptw?t=280

At what point does releasing software this bad become a criminal liability? Another one at the correct time: https://youtu.be/wTybjJj0ptw?t=652

There are simply no words...Correct time: https://youtu.be/wTybjJj0ptw?t=722

Should not be allowed out of the labs...


This is from November 2021, but I'm still highlighting it because it is just terrifying (correct time, though the video later on also exhibits inabilities of the system): https://youtu.be/9wRRClg_aM8?t=113


After watching your linked videos I'm actually really impressed with it.


It looks great for an early alpha. It needs a fair amount of improvements before it will be ready to be released to end users, though.


Now they just need to draw the Rest of the Owl...

https://www.reddit.com/r/restofthefuckingowl/


That looks like an incredibly stressful way to drive.


> current FSD beta's intervention rate is more like one per 10 miles

Maybe in rural areas? The videos on YouTube show far more than one intervention per 10 miles.

https://www.youtube.com/watch?v=wTybjJj0ptw


On quick watch the driver intervenes at 4min 45sec and 5min 47sec.


A helpful link for your perusal: https://en.wikipedia.org/wiki/Selection_bias


But are you using confirmation bias to find a cognitive bias that fits here?

But in all seriousness, we don't have access to the data across all 60k FSD users to know what the intervention rate is and how it has been changing over time.


We do have previous statements that as they get better they are moving to harder situations. Start with empty roads, and once you can do them well, start finding harder and harder situations. When you start, you avoid construction zones; once you are doing well, you start looking for them.


Dirty Tesla used to track these stats in his testing and gave up because “it’s not changing”


Which could be a sign some drivers are simply overly cautious. Suppose only 1 in 10 disconnects actually prevented a crash; then reducing the risk of crashing to zero would only reduce the number of disconnects by 10%.

To actually reduce that number you would need to make drivers feel more confident in the vehicle which is a useful metric, but only indirectly relating to safety.
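The arithmetic behind that point, with toy numbers (all hypothetical, chosen only to illustrate the mechanism):

```python
# Hypothetical numbers to illustrate why disconnects are a noisy safety proxy
total_disconnects = 1000
crash_preventing_fraction = 0.1  # suppose only 1 in 10 disconnects prevented a crash

# Cautious-driver disconnects happen regardless of actual crash risk
caution_disconnects = total_disconnects * (1 - crash_preventing_fraction)

# Even if the system improved so that crash risk were literally zero,
# the cautious disconnects would remain
disconnects_after_perfect_safety = caution_disconnects
reduction = 1 - disconnects_after_perfect_safety / total_disconnects

print(f"disconnect reduction from perfect safety: {reduction:.0%}")
```

So the measured disconnect rate could stay nearly flat even while the underlying crash risk drops to zero, which is why it only indirectly tracks safety.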


What is the appropriate point of comparison though? All human drivers? Sober human drivers? Sober cautious human drivers? Sober cautious human drivers with driver assistance technology (e.g. auto-braking and blind spot warning, or potentially even more sophisticated LiDAR tech)?


I don't think this question is even meaningfully-defined. There is no "the" point of comparison. The relevant point of comparison is whatever ride it's displacing.

The rideshare explosion has already had a measurable effect on drunk-driving deaths; to the extent that a theoretical lower-cost AV will make rideshare even more accessible, then its effect on drunk-driving reduction absolutely makes non-sober drivers a relevant comparison.

For an average young person who'd get in the car with one of their friends, or drives a bit recklessly themselves[1], an AV at sober-human-driver level would be valuable.

For a guy who needs his kids driven around, a "sober cautious human driver" level of safety may feel right.

For questions like "what should the regulatory bar for launch be", all human drivers seems like an easy answer.

[1] I'm probably guilty to a degree here, on the rare occasions I drive


Is it reasonable to assume AV will be lower-cost than rideshare? The key thing that makes Uber more affordable than a taxi is that vehicle purchase/maintenance/depreciation/liability are all externalised.

In a full-self-driving situation you no longer have to pay your driver, but you do have to pay for all of the above. With the inevitably higher standards of maintenance required for AV fleet vehicles I can't really imagine it being cheaper than it currently is.

Sure the sensor/cv/vision tech will get cheaper, but machines still wear down.


> Is it reasonable to assume AV will be lower-cost than rideshare?

That's what the industry is betting on. I think it's reasonable in the steady-state: labor costs are expensive as hell.

> vehicle purchase/maintenance/depreciation/liability are all externalised.

These aren't 100% externalized with Uber, as they show up in the labor cost. They're only externalized with Uber to the extent that drivers do the math wrong on the costs they're paying[1]. Most of the analyses I've seen of this choose every possible pessimistic assumption, and still end up with net wages that are very high. They're of course low relative to "a living wage", which is what the analyses are focusing on, but that's precisely the point of what we're talking about: even the floor of labor costs is very high, when you're looking at expenses.

[1] Completely tangentially, but also note that this ignores the extent to which people derive value from being able to convert assets around. It's hard to imagine for us SWEs making 1% salaries and sitting on mountains of wealth, but liquidity is a constant and pressing concern for a large portion of the country. See also: payday lenders, where there's a stark difference between the opinions of those who've actually studied the economics of the industry and the midwit affluent John-Oliver-watcher.


> The human accident rate is about one per 500K miles, so if they were able to get in that range, then yes, they would have succeeded; drivers would be able to stop paying attention to the road without putting themselves and others in danger.

Unfortunately, I expect that automation will be held to a higher standard than human drivers, rather than the same standard. When an accident happens, people want to know who to blame, and an unimpaired human driver gets somewhat more latitude for a genuine accident, while a piece of software is always going to be perceived to be at fault (which it may well be, even in a situation where a human wouldn't be considered to be). And conversely, people (somewhat validly) want to have more control: every driver thinks they're above average, and the software won't be as good as their accident rate, and if something happened at least they were in control when it happened.

I don't necessarily even think those are incorrect perspectives; we should hold software to a high standard, and not accept "just" being as good as human drivers when it could be much better. But at the same time, when software does become more reliable than human drivers, we should start switching over to make people safer, even while we continue making it better.

(Personally, I wish we had enough widespread coordination to just build underground automated-vehicles-only roads.)


> Unfortunately, I expect that automation will be held to a higher standard than human drivers, rather than the same standard.

The average driver in a crash is worse than the average driver. Why would we compare FSD with reckless drunks, etc.?


I'm expecting that we should compare self-driving vehicles to the average driver, not "the average driver in a crash".


Yes.

Also, I should have written "the average driver in a crash is worse than the median driver".

"* In 2016, 10,497 people died in alcohol-impaired driving crashes, accounting for 28% of all traffic-related deaths in the United States.

* Drugs other than alcohol (legal and illegal) are involved in about 16% of motor vehicle crashes." https://www.cdc.gov/transportationsafety/impaired_driving/im...

If we include recklessness, FSD may need to beat half the fatality rate of the average human driver to be on par with the median driver.
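A back-of-the-envelope version of that claim, using the CDC percentages quoted above (and naively treating the alcohol and drug categories as non-overlapping, which they aren't in reality):

```python
# Shares of US traffic deaths involving impairment (CDC figures quoted above)
alcohol_fraction = 0.28  # alcohol-impaired driving crashes, 2016
drug_fraction = 0.16     # drugs other than alcohol (overlaps with alcohol in reality)

# Naive upper bound on the share of fatalities attributable to impairment
impaired_share = alcohol_fraction + drug_fraction

# Remaining share attributable to unimpaired drivers
unimpaired_share = 1 - impaired_share
print(f"unimpaired share of fatalities: {unimpaired_share:.0%}")
```

That leaves roughly half of fatalities from unimpaired drivers, which is where the "FSD needs about half the average fatality rate" intuition comes from.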


The real average FSD intervention rate is unknown, since some 2,000 Tesla employees also have NDA'd beta access, and it would surely differ between rural, suburban, and urban roads.


In many areas it's more about how many interventions per mile are necessary. Anything outside of sunny highway driving is on the edge of that.


It also depends on what kind of miles. Are they running at the same speed? Only easy highways or complex urban intersections?


Not accident rate; crash rate.


Yeah, they’re not trying to solve the same thing?

I think Tesla is right that to solve it for real you need to solve the general case which can’t rely on high resolution maps.

The city cab case is smaller and can, so the cruise approach makes sense for that use case. It’s just narrower.


The truth of it is that it’s just not possible (with currently existing technology/ML architectures) to create a truly autonomous taxi without HD maps. Everyone in the robotaxi industry knows this - even Tesla builds HD maps, they just don’t call them that.


My knowledge only comes from Karpathy's talks about this (which are great, worth watching if you haven't seen them).

I found his and Tesla's arguments convincing for the general case. That doesn't mean that the narrow cases aren't super cool or valuable (I signed up for this Cruise thing in SF).

I just think that if the software is unable to make decisions based on visual data alone without up to date high resolution maps it'll never achieve true FSD in the general case (not geo locked). You'll end up trapped in a local max otherwise because there are just too many conditions in the real world that vary (and the world is too large to economically map fast enough for that approach). You have to solve the vision problem.

I don't know enough to comment on the approach differences beyond that, but my understanding was that Tesla did not rely on the same stuff that Waymo and Cruise require (largely Lidar and these high resolution maps).


> I just think that if the software is unable to make decisions based on visual data alone without up to date high resolution maps it'll never achieve true FSD in the general case (not geo locked). You'll end up trapped in a local max otherwise because there are just too many conditions in the real world that vary.

My contention is that there’s no way to actually solve for the general case with currently existing technology. The amount of novelty in the real world is too great for any system to account for it without disambiguating via HD maps or remote support.

>You have to solve the vision problem.

This isn’t a vision problem specifically - even if you had LIDAR and high resolution imaging radar and 8 A100s on every Tesla, “true generalized self driving” wouldn’t be achievable without HD maps with our current understanding of Machine Learning.

>My understanding was that Tesla did not rely on the same stuff that Waymo and Cruise require.

Tesla maps individual traffic light elements, stop signs, and lane markings, but will attempt to drive even if the area isn’t mapped.

Disparities in FSD performance in different areas are largely attributable to some areas being better mapped than others - the mapping data has a huge effect on its performance. There are key elements of the driving task (including recognizing and reacting to every type of sign other than a stop sign) that FSD can’t do and relies entirely on maps for.


Novelty isn’t nearly as big of a problem as you might think. One of Waymo’s famous videos was someone on an electric scooter chasing a duck in the middle of the street. That’s very odd behavior, but the car followed the rather simple option of just not hitting them and going forward when possible.

Cars really don’t need to identify what something is, just its location and movement, which is a vastly easier problem. A trash can rolling down the street can be treated just like an oil drum doing the same thing, etc.


> Cars really don’t need to identify what something is, just its location and movement, which is a vastly easier problem. A trash can rolling down the street can be treated just like an oil drum doing the same thing, etc.

You’d think that, until you encounter something like a turn restriction sign with a bizarre conditional restriction that it’s never seen before. At which point the car needs to OCR the text, parse the semantic meaning, and apply to the scene.


Right by my house I have a four lane (on one side) intersection with a traffic signal. Each of the lanes goes straight ahead. However, each lane has its own traffic light, and when the traffic light rotation is in that direction, it alternates the two left most straight lanes red while the right most are green, and then switches (because very shortly after the intersection there is a quick lane reduction to two lanes).

I can't imagine how AI would _correctly_ see four straight arrowed lights in front of it in the intersection, some of which are red, some are green. Humans of course recognize that they correlate to the lanes, but this is a more esoteric case for AI to assimilate.


Or treat that turn restriction as applying 100% of the time.


And now we’re already making concessions about the car’s abilities.

There are 10 MPH speed limit signs on Market Street in SF that specify in incredibly small text “when behind trolleys”. Assuming we take your approach, the car will just always go down Market at 10 MPH.

Imagine if it’s a negative turn restriction - IE, it’s permitting turns except for during certain hours and conditions. Now the car is treating it as always permitted and turning into traffic. An edge case, but something it’s going to encounter in the real world.


And now you’re moving the goalposts. We are talking about extreme edge cases in some random small town, not common signs in a major city. They can always get updates on what some random sign in some random location means; as long as they’re safe and don’t block traffic, that’s all that’s needed.

Also, negative restrictions can again default to full restrictions. Permitting a car to, say, park in a snow lane doesn’t require the car to park in the snow lane.


I don’t think I’m moving the goalposts - we were discussing whether autonomous driving (which I take to mean L4-L5 driving without the need for a human in the loop) is possible without geofences or HD maps. “Edge cases in some random small town” are exactly the sort of thing you need to worry about without a geofence.

Not to mention these sorts of edge cases are way more common in large cities than small towns - one of the examples I gave was down a central avenue in San Francisco.

>They can always get updates on what some random sign in some random location means as long as their safe and don’t block traffic that’s all that’s needed.

What if it truly fails to parse the sign accurately and does something illegal or dangerous? What does sending an update out look like? Does a human take a look at a crop of the sign and review it? Why not just map it in that case?


> edge cases are way more common

It’s not a question of parsing a known sign; even extremely complex rules can be encoded. Further, that process can take place from a photo of the sign uploaded by the car to then be encoded by the rules. The general case is stopping and having a remote driver slowly tell the car what to do.

An unknown sign in a place without cellphone reception is about the only case where it really needs to figure it out on its own rather than simply avoid causing an accident.

> What if it truly fails to parse a sign accurately and does something illegal or dangerous?

Not much; people regularly disobey traffic signs, especially ones with complex instructions. “Don’t hit stuff or jump in front of another car” is generally enough.


> Further that process can take place from a photo of the sign uploaded by the car to then be encoded by the rules. The general case is stopping and having a remote driver slowly tell the car what to do.

So you’re now agreeing that you need some level of remote support to handle edge cases like this?

>An unknown sign in a place without cellphone reception is about the only case where it really need to just figure it out on it’s own rather than simply avoid causing an accident.

Yes, and again, this is the sort of thing you actually need to worry about when trying to come up with generalized self driving solution.

> Not much, people regularly disobey traffic signs especially ones with complex instructions. Don’t hit stuff or jump in front of another car is generally enough.

What if it misinterprets a one way sign at night when there’s no other signal that it’s turning on to a one way lane and it suddenly finds itself traveling opposite the direction of traffic for a long period before encountering another car? You have to consider all of these edge cases when talking about a generalized solution.

Maybe you still disagree with me in spirit, but do you see how, when we really look at edge cases, you have to fall back to some level of remote operation or mapping?


> So you’re now agreeing that you need some level of remote support to handle edge cases like this?

As a bootstrap step, yes; after that, no, just regular updates for new traffic rules and such. You can’t make a purely offline self driving system that doesn’t get updated for 30 years, because laws change. But presumably a non-geofenced self driving car is going to be tested by driving on every road, either directly or via someone’s mapping project.

> What if it misinterprets a one way sign at night when there’s no other signal that it’s turning on to a one way lane and it suddenly finds itself traveling opposite the direction of traffic for a long period before encountering another car? You have to consider all of these edge cases when talking about a generalized solution.

You mean in some location without maps? There are a finite number of roads in the world and they don’t change that quickly. If you’re worried that the AI is going to, say, end up on an ice road that melts, sure, that’s the kind of thing that happens once. But the threshold isn’t perfection; it’s ~30,000 dead people per year in the US. Beat that and you win.


> I think Tesla is right that to solve it for real you need to solve the general case which can’t rely on high resolution maps.

But they do rely on maps. You cannot use FSD without the latest high resolution maps.


Or you solve for a subset of highways in a subset of weather conditions. That would be more useful to a lot of people than city cabs which exist today (with human drivers).


Cruise is interesting insofar as they are not simply looking to sell their technology, but they also want to monetize it as a service. Not only will they not need a driver, they will also be able to buy the hardware (the car) at cost. If it's successful, their margins will be much higher than Uber and Lyft by a long shot.


On the other hand, Uber and Lyft externalize many costs including liability.


Externalizing liability and automated driving seem quite at odds unless Uber somehow manages to bypass laws again.


Is this not what effectively everyone who is doing this (outside of Tesla) is looking at?


As a taxpayer who pays for roads, and suffers from traffic congestion caused by one-occupant and zero-occupant vehicles, I'm eagerly looking forward to reducing the taxes I pay, by taxing those margins, instead.

Ideally, the taxes could be high enough that driverless taxis will operate at barely above break-even. The financial comfort of me and my neighbours are more important to me than the profit margins of a firm that barely employs anyone in my town.

Unlike a factory or a corporate office (that can threaten to move offshore, eliminating jobs and impoverishing a town), the firm in question is a hostage of local politics - not the other way around.


My cynical take: the government is not going to forgo collecting a tax from you that you are already paying. Instead it will tax you and start collecting per-ride fees from Cruise, etc.


Do you think your experience of congestion would be improved by everyone driving private vehicles instead? Not sure I follow the logic here.


Yes, because in the common case, a taxi (driverless or otherwise) drives empty at least some of the time, to pick someone up, thus creating congestion, compared to a private vehicle, which doesn't drive empty.

The cheaper and more convenient you make zero-person and single-person automobile transportation, the more people will use it, and the more congestion they will create.

The more expensive and less convenient you make it, the more trips will use non-automotive, or public transportation, both of which produce far less congestion.
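The deadheading point can be illustrated with toy numbers (the figures here are made up purely to show the mechanism, not real utilization data):

```python
# Hypothetical: the same 10-mile passenger trip, served two ways
trip_miles = 10
deadhead_miles = 3  # empty driving to reach the passenger (made-up figure)

private_car_vmt = trip_miles            # owner drives themselves, no empty leg
taxi_vmt = deadhead_miles + trip_miles  # taxi drives empty, then with passenger

extra_congestion = taxi_vmt / private_car_vmt - 1
print(f"extra vehicle-miles from deadheading: {extra_congestion:.0%}")
```

Under these assumptions the taxi puts 30% more vehicle-miles on the road for the same passenger trip, before accounting for any induced demand from cheaper rides.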


I actually agree with everything here, but on the other hand the decision of whether and how to actually build the massive amounts of non-car infrastructure we need to have transport be efficient and accessible without private cars of any kind, is in a whole different place. At least in the US, it's pretty clear that in most areas there is very limited political will, even in the grass roots, for things like "build good high-speed trains" and "dig new billion dollar subways" etc. So I think pragmatically speaking things like robotaxis are going to be the "solutions" that we'll actually get.

(And yes, I agree that that's dumb since the same politicians and voters have no problem indefinitely subsidizing and expanding the massively money-losing infrastructure called Roads at taxpayer expense!)


On the other hand, once a sufficient percentage of cars on the road are autonomous, couldn’t they use cooperative navigation algorithms to improve throughput a whole lot?

There are so many inefficiencies with human drivers—chaotic merging, unnecessary lane changes, blocking of passing lanes, and so on. I could imagine that optimizing all those away would make a huge difference overall.

You could also probably increase speed limits. And fewer accidents should cause a significant reduction in traffic jams.


Of course. If one assumes relatively inexpensive robo-taxis people living outside cities will definitely come in more often. I certainly would.


Success? Go from point A to point B with minimal incidents. It's not as complicated as most people make it out to be.


More importantly, "driverless" means no one in the driver's seat. What Tesla has is barely even Level 3. Waymo right now is doing rides without anyone in the driver's seat, aka Level 4.

What Tesla is doing is not driverless.


I just drove through the Alps, at night, during a snow storm. This is hardly everyday driving, but it's the sort of experience Canadians are no strangers to.

Success is when I trust the autopilot to handle the weather conditions where I live, not just sunny days in California.


I always wondered why the rejection of lidar by Tesla. My guess is that it is more about profitability/availability than anything else. Just because humans use eyeballs doesn't mean that it is the best bet for a computer. This sort of naturalistic fallacy led people to believe (~130 years ago and beyond) that the ideal flying machines would have flapping wings because, well, birds have wings and that is how they fly.

Maybe I'm just salty my 2021 Model Y had radar stripped out of it, and to me the more tech toys the better. Not that it matters, because Tesla FSD is a scam and I wouldn't use it even if it came free with the car.


Lidar was expensive back then and would've added huge costs to the vehicles. Not to mention, it looked ugly on consumer cars. Elon conveniently used "humans use only vision" as an excuse and promised every Tesla has "sufficient" hardware for full autonomy. It's that premature promise that doesn't let them add sensors even now (and perhaps Elon's ego) without breaking trust and/or eating big costs for retrofitting.

In short, Elon made a high risk bet that vision-only would be enough and so far has been proven horribly wrong. But I've got to say, it was brilliantly executed because it gave Tesla mindshare as a tech company, drove sales and contributed massively to their insane valuation.


Tesla pivoted to video only. They originally tried with radar as well, so any such statements by Musk are already a pivot.


Their radar removal was baffling. There were rumors (perhaps Musk's tweets?) of Tesla investigating 4D imaging radars to replace their really old Continental radars, but they suddenly decided to remove radar altogether. Many have attributed it to the chip shortage, with Musk yet again using vision-only as an excuse for the removal.


My money is on the chip shortage. There were some real problems with the radar (like reflections from overpasses), though. I think not being able to ship cars because there weren't enough radars is what forced this decision (vs. trying to fix the radar and/or the way it's fused with the vision).


> It is interesting to me that right now this is sitting on the HN homepage directly adjacent to: "Tesla to recall vehicles that may disobey stop signs (reuters.com)"

Based on https://www.forbes.com/sites/bradtempleton/2022/01/13/a-robo... Tesla's FSD has other issues as well.


In Lex Fridman's interview with George Hotz, Hotz talks about why he thinks radar AI is a non-starter, and predicted that even though Tesla was still adamant about using their radar, they would eventually realize they only needed cameras.

Hotz is the founder of comma.ai, which makes an open-source (I think?) driver-assistance system.


What were his reasons? Searching the web reveals lots of machismo and assorted hero worship, but no actual solid technical arguments. Nobody in the field seems to think a vision-only solution is practical, other than Tesla, who are also not providing solid technical arguments that I can find.

This IEEE item seems to summarize the situation well.

https://spectrum.ieee.org/tesla-places-big-bet-vision-only-s...


It was a discussion about how such a system must function with all the different inputs vs the vision only model.

If you have radar and lidar and vision, then you have at least three different specialist machine learning models running, and then another model running that takes their outputs and decides what the car is going to do. You may have even more than that, some doing specific tasks like localization.

Neural nets with vision only is a more difficult but, in the long run, more straightforward solution. The example he brought up was AlphaZero versus a hypothetical chess engine built from a rook engine, a knight engine, and so on.

Basically he's backing the end-to-end neural network approach over some kind of multi-sensor fusion.

https://youtu.be/_L3gNaAVjQ4?t=3801
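Roughly, the contrast Hotz is drawing looks like this (a toy sketch, not anyone's actual architecture; the min-of-readings "models" are stand-ins for real perception networks):

```python
def specialist_fusion(camera, radar, lidar):
    """Late fusion: one specialist model per sensor, then a fusion step.

    Each list holds that sensor's range readings to obstacles (meters);
    min() stands in for a trained per-sensor model's nearest-obstacle output.
    """
    cam_obstacle = min(camera)
    radar_obstacle = min(radar)
    lidar_obstacle = min(lidar)
    # The fusion model must arbitrate when the specialists disagree.
    return min(cam_obstacle, radar_obstacle, lidar_obstacle)


def end_to_end(camera):
    """End-to-end: one model maps raw input straight to the estimate."""
    return min(camera)


print(specialist_fusion([12.0, 30.0], [11.5], [11.8]))  # 11.5
print(end_to_end([12.0, 30.0]))                         # 12.0
```

His argument, as I understand it, is against maintaining the per-sensor specialists and the arbitration logic, not against data per se.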


That argument doesn't make any sense though. Clearly radar + lidar + vision is a superset of vision only. It can perform as well as vision only if you disable the other two.

So the claim is there is absolutely no scenario where the other systems can contribute which seems to be false. E.g. Tesla's cameras are blinded pointing straight on into the sun, the car even tells you the cameras are blinded. If the cameras are splashed with mud they'll also see nothing. Tesla's radar was able to see "through" some obstacles which the vision system can not (and you can see that in the traffic visualizations).

Now, how far do you want to fuse? Do you just want to overlap the unique sensing abilities of each system? How do you handle conflicts in the regions where all systems sense "equally" well? Sure, there are questions. But clearly having more data can't make things worse.
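For what it's worth, the "more data can't make things worse" claim has a textbook form: with independent, unbiased sensors, inverse-variance fusion always tightens the estimate. A toy sketch (illustrative only, not any company's pipeline):

```python
def fuse(estimates):
    """Inverse-variance weighted fusion of (range, variance) pairs.

    The fused variance is always <= the smallest input variance, which is
    the formal sense in which an extra unbiased sensor can't hurt.
    """
    total_precision = sum(1.0 / var for _, var in estimates)
    fused_range = sum(r / var for r, var in estimates) / total_precision
    return fused_range, 1.0 / total_precision


# camera says 50 m (noisy), lidar says 48 m (precise)
r, v = fuse([(50.0, 4.0), (48.0, 0.25)])
print(round(r, 2), round(v, 3))  # 48.12 0.235
```

The hard (and unmodeled) part is exactly the conflict case above: when one sensor is not just noisy but systematically wrong, the independence assumption breaks down.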

Elon's argument that that's the way to go because that's how humans drive is just ridiculous. And I say this as a proud Model 3 owner (great car, will never ever be autonomous, I don't care). It doesn't pass the sniff test.


> So the claim is there is absolutely no scenario where the other systems can contribute which seems to be false.

No. The claim was that pouring resources into those other systems is better spent improving the vision system, within the context that Tesla and Comma.ai are operating in.

> But clearly having more data can't make things worse.

There are several examples of cars hitting stuff because one part of the sensor suite thought there was a problem, or that there wasn't a problem.

In general, the more complex you make a system, the more complex the failure modes get.


The real equation is closer to: Radar + LIDAR + Vision + Cost + Latency - Battery

Doing it with vision only saves on cost, latency & energy consumption.


I agree it saves on cost and energy consumption; I'm not sure about latency. But it's an inferior sensing system, and it remains to be seen whether it can do the job at all; so far it seems like not. Even if it can do the job (by some measure), it's doubtful it can outperform the better sensing systems. I'd pay a little more for better safety; we know the cost of "human safety" as it's reflected in our insurance premiums...


If fusing two data sources is necessary prior to making actuation decisions, how could adding another data source not introduce additional processing & latency?

It's not just about paying more dollars, but also range.

You're assuming that a dual system will be safer, but what if such systems are more prone to perception confusion, or other anomalies?

Even if a dual system were safer, it doesn't make sense to say that you will pay for safety absolutely. For example, you can always add an additional smoke alarm to your house for some marginal improvement in safety, but at some point most people decide they are safe enough.


Hotz makes his own company making self driving using vision only https://comma.ai/


Didn’t Hotz publicly give up on self driving and say driver assistance is all that will ever be possible?


If it's what I think you're saying, he mentioned that it's a question of liability.

If your car is 'driver assistance' then you don't have liability for accidents. If your car is 'self driving' then you're going to get sued over every accident.

So instead you're always 'driver assistance' just from the risk analysis perspective.


The question is whether the economics of self-driving cars work when you have to add and integrate all this additional equipment. Correct me if I am wrong, but aren't LIDAR cars supposed to cost $200K+?

Tesla is imitating humans in a sense: removing LIDAR and relying on compute to make up for it and build a more accurate picture.

Also, isn't Cruise owned by GM - who are the VCs here?


Lidar costs have dropped massively and continue to drop. Waymo, for example, claimed four years ago they were able to reduce the cost of their Lidar by 90%, from $75,000 to around $7,500 [1]. In the meantime, the range and resolution of these sensors have increased. Anyone not making use of Lidar these days is just hamstringing themselves.

[1] https://www.businessinsider.com/googles-waymo-reduces-lidar-...


200k isn't that much for certain use cases, like shared cars. NYC taxi medallions were significantly more expensive yet the revenue from taxi rides was high enough that people kept buying medallions.


Not exactly apples to apples. A $200k hardware device depreciates in value over time, whereas the taxi medallion was a fairly liquid, appreciating asset.


Yup that's a really good point.


If you search for "bosch lidar" there's a bunch of 2 year old news about them selling one designed for autonomous vehicles for $10,000, with statements that they could likely drive the price towards $200 with mass production.

So $200,000 is probably not correct.


It is correct for current pricing. Advances may drive those prices down. Right now the sensor packages exceed the price of all the other components.


> Tesla is imitating humans in a way that removing LIDAR

They are trying to imitate a fraction of what humans can do. And the state of the art ML research still does not account for issues like whether a photo of a person on a truck is real or not.

You really need LiDAR for accurate bounding box detection.


I recall that GM/Cruise acquired a LIDAR manufacturer, Strobe, in 2017. Not sure if it has worked out, but their vertical-integration rationale for the acquisition makes sense.

Strobe’s solution will reduce the cost of making these sensors by 99 percent, Vogt says. “The idea that lidar is too costly or exotic to use in a commercial product is now a thing of the past.”

https://www.wired.com/story/gm-cruise-strobe-lidar/


What makes you say Tesla is imitating humans? Their motion planning is all traditional robotics logic, not anything learned.


Vision-based driving. Humans don't have lidar sensors.


Humans don’t regress per-pixel depth or use convolutions and region proposals to draw bounding boxes around objects. They don’t function on models with fixed weights trained by backprop either. The idea that “vision only” somehow more closely resembles how humans drive quickly falls apart if you inspect the internals of these systems.


The similarity is in the problem to be solved, not the details of the compute pipeline. Depth must be inferred somehow, rather than measured by actively interacting with the surface, as it is in LiDAR.


If you really wanted to make this argument, you wouldn't even bother inferring depth, since that's not what humans do, at least not directly. If you're actually trying to obtain a depth map as part of your pipeline, LiDAR (or LiDAR + vision feeding into a denser depth-prediction model) would always be a better strategy, cost aside, since determining depth from images is an ill-posed problem.


My claim is that humans use their eyes as a primary input for driving. I don't think it's controversial. We don't let eyeless people drive. Eyes do not shoot out lasers.


I think the comparison comes from the fact that humans infer a 3D map from stereo vision whereas LiDAR to some extent gives you the ground truth 3D map. You’re right that it falls apart pretty quickly though.


Except Tesla isn’t inferring a 3D map from stereo vision either, at least not outside of the front facing cameras - they’re using monocular depth prediction.


Neither are humans. Our eyes are so close together that there's almost no disparity between the two images beyond a handful of meters. We do 3D by inference beyond the near field.
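This is easy to put numbers on with the stereo relation d = f * B / Z. Assuming a human-like ~6.5 cm baseline and a made-up 1000 px focal length:

```python
def disparity_px(baseline_m, focal_px, depth_m):
    """Stereo disparity in pixels: d = f * B / Z (pinhole model)."""
    return focal_px * baseline_m / depth_m


# Human-eye-like baseline (~6.5 cm) with an assumed 1000 px focal length:
for z in (1, 5, 20, 100):
    print(z, "m ->", round(disparity_px(0.065, 1000, z), 2), "px")
```

At 1 m you get tens of pixels of disparity, but by 100 m it's under a pixel, i.e. below measurement noise, which is why the far field has to be done by inference.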


Humans aren't just a stereoscopic camera - we can sense depth by knowing what things should look like, or by moving our head, or refocusing, or…

Did you know we can see light polarization?

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4528539/


You’re describing vision, and not LiDAR. We are in agreement.


They are wholly owned by GM, are they not? Starting this way doesn't preclude them taking Tesla's vision-only approach in the future. Even Tesla initially had a radar + vision combination before moving to pure vision. The real question, with whatever technique is used, is whether they can drive the hardware cost low enough that it can be widely deployed.


They moved to pure vision because they were constrained by radar suppliers because of supply chain issues and the chip shortage, not because of any ML progress - they were actually investigating using a higher resolution imaging radar before the pandemic.


They also have Lidar cars in Fremont that drive around every so often; that doesn't mean they plan to put Lidar into every car anytime soon. It'd be short-sighted not to continuously re-evaluate solutions that previously had constraints (despite what Elon says, they'd add Lidar if it made economic sense and showed an improvement over camera-only, as their camera detection is pretty accurate these days in FSD).


I’ve seen their LIDAR cars in SF too - if I had to guess they’re gathering ground truth data to train monocular depth models on.

And even really naive integrations of LIDAR will show big improvements over camera only. You can do something as simple as overlay the returns from the most recent LIDAR spin over a camera image as a fourth channel and feed it into your models and most of your depth prediction/spatial predictions will improve dramatically.


No, they're not. The linked post even mentions Softbank being a shareholder. It would be good to know what % of Cruise is owned by GM, but as far as I know it hasn't been made public knowledge. Hope someone can correct me on that.


This is true but there’s a bit more to it. Going vision-only is a big move that requires innovation in a lot of areas and changes huge parts of the stack.

Tesla was already heavily relying on vision before going vision-only.


I don't know who owns them (see other discussion). I do know that automakers have partnered aggressively on self-driving. Once it's ready for the mainstream, most automakers will introduce it, because they all have rights to it.

I don't know the contract; it's likely there is some ordering where the luxury brands can sell it first. It will be rolled down to the cheap brands as needed (depending on either market demand or legal mandates).


The reason camera only won’t happen is because it won’t work. Elon has been fighting reality for years.


How does a human identify obstacles, VRUs [0], and other cars?

0: vulnerable road users, eg pedestrians and bicyclists


> How does a human identify

With the most complex, context-aware, intuitive computer in existence. In addition to eyeballs that are dramatically more capable than any camera Tesla is using.


Not going to comment on the overall lidar vs. pure-camera debate, but I don't think human eyeballs are more capable than a full 360-degree array of cameras looking at everything (including blind spots) at once. For one, human eyeballs cannot see in full 360 degrees or view the environment from outside the car cabin (i.e., unrestricted by pillars and other things that obstruct the field of view from inside the cabin).


Have you ever used night mode on your phone? The colours look nothing like what you actually see, but it shows a lot more detail than the human eye can pick up. Same with zoom cameras. Also, there are more cameras than eyes, and in better spots.


Have you ever used night mode on your phone while driving at 60mph at night? It's basically a sophisticated long exposure; it won't work for driving.


How does a bird fly?

For human flight, we borrowed the wings but made them fixed and added propellers, to cheat around our mechanical incompetence.

I wouldn't be shocked if radar/lidar/sonar/whatever sensors are what it takes to cover our incompetence in matching human brain + vision.

Heck, use multiple "brains" and give each veto power on moving the vehicle. Supposing that stopping doesn't kill you, that would at most frequently annoy the driver, and sometimes save his life or someone else's.
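The veto arrangement described here is just a unanimous-consent gate; a trivial sketch:

```python
def vehicle_may_move(verdicts):
    """Unanimous-consent gate: any one 'brain' can veto motion.

    verdicts is a list of booleans, one per independent system, each
    answering "is it safe to move?". A false positive (unneeded veto)
    merely annoys the driver; a missed hazard requires every system
    to fail at once.
    """
    return all(verdicts)


print(vehicle_may_move([True, True, True]))   # True: all agree it's safe
print(vehicle_may_move([True, False, True]))  # False: one brain vetoes
```

The trade-off is exactly the one stated: independence of the "brains" multiplies safety but also multiplies nuisance stops.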


Well, also, the largest animal ever to fly weighed only a few hundred lbs, so we're also limited by the kind of flight we're trying to do.

At least today we probably could build a craft that flies by flapping its wings but what would be the point?


Forget birds, insects do it, with vision and puny brains.


The human visual system works more like a high resolution event camera than a frame-based camera. Event cameras can deal well with glare and other problems that would otherwise require high dynamic range per frame.


Humans would drive better if they could shoot fricken laser beams out of their eyes.


Why would Elon want to model humans when humans are mostly responsible for the problems we want to replace humans for.


Can someone explain whether there is a principled reason to not use all the sensors available and instead choose just cameras?


From public information, it's a matter of price and "appearance." No one who's half serious about understanding how the system works would use cameras alone. But remember that of all the companies serious about putting self-driving cars on the road, Tesla is the only one that designs, manufactures, and sells cars. Incidentally, Tesla is also the only car company pushing camera-only as a "solution," which again, it is not.

All the other outfits are pure tech plays, so their incentive is geared toward a working system, because that's the only thing they can sell aside from the dream. If Tesla fails with a camera-only self-driving system, they can still sell cars.

Source: used to work in robotics; all my classmates from grad school work or have worked at a self-driving car company.


It sounds very stupid of them to be stubborn; at the least, it means the whole company carries an existential risk if they don't get their favored solution to work.


Think along the lines of:

1. What is necessary vs sufficient.

2. Processing power.

3. Power consumption.

4. Hardware cost (even if it may have decreased now, what mattered was the cost when the programme was initiated years ago).

5. Training cost / volume of data.

The approach that Tesla is using balances all these things. I can't fully explain why it's so controversial, but I suspect this topic attracts folks from autonomous-driving startups who are using very different approaches, people who repeat what they've heard elsewhere, and the usual anti-Tesla folk.

If you want to know whether Tesla's approach can work, first realise that the sensor suites only help with the 'perception' part of the autonomy problem. Then watch some of the Tesla FSD videos on YouTube and check whether the visualisation seems accurate or not. It's certainly not 100% perfect yet, but it's clear to me that the perception part of the problem is mostly solved. The biggest remaining problems seem behavioural.


As long as Teslas regularly crash into stationary objects because they have no depth sensing system and rely only on camera images, I wouldn't call the perception part of the problem solved.



In the end I think self-driving regulations will require depth checking beyond computer vision, since vision can be tricked by any new situation. Depth checks using LiDAR are extremely accurate up to a football field away, down to the direction someone is facing. RADAR is not as good, but it's better than video/flat-2D depth detection; it is limited in range, but it works in weather where LiDAR doesn't and computer vision struggles.

Autopilot and now FSD on Teslas have no depth checking beyond visual/cameras. They removed the RADAR/sonar and currently have zero physical-world depth checking. Instead of adding LiDAR, Tesla recently [just removed RADAR to rely on computer vision alone even more](https://www.cnbc.com/2021/05/25/tesla-ditching-radar-for-aut...).

Self-driving cars need cameras plus physical depth-checking sensors like LiDAR, or at least RADAR. Tesla has only cameras and some sensors, but none for depth anymore; that is insane.

Humans have essentially LiDAR-like quick depth testing, and hearing for RADAR-like input. For autonomous units, depth may actually be MORE important than vision in many scenarios. Humans have inherent depth checking from 3D space, movement, sound, lighting, feel, atmosphere, air, pressure, situational awareness, etc., that computer vision working on flattened 2D images will never be able to replicate.

A human can glance at a scene and know how far things are, not just from vision but from how that vision changes with these distance inputs. Humans can easily tell 3D from 2D imagery, whereas with a camera it is all 2D. LiDAR is faster than humans at depth checking in the actual physical world, not just from a flattened image.

With just cameras, no LiDAR OR RADAR, depth can be fooled.

Like this: [TESLA KEEPS “SLAMMING ON THE BRAKES” WHEN IT SEES STOP SIGN ON BILLBOARD](https://futurism.com/the-byte/tesla-slamming-brakes-sees-sto...)

Or like this: the [yellow light problem, where a Tesla thinks the Moon is a yellow light, because Teslas have zero depth-checking equipment now that they removed RADAR and refuse to integrate LiDAR](https://interestingengineering.com/moon-tricks-teslas-full-s...).

LiDAR or humans have near-instant depth processing and can easily tell the sign is far away; cameras alone cannot.

LiDAR and humans can sense changes in motion; cameras cannot, and even RADAR struggles with dimension (frame-to-frame changes).

LiDAR is better than humans at tracking changes in motion, depth, and seeing all around at all times, and it is much faster at all those things.

[LiDAR vs. RADAR](https://www.fierceelectronics.com/components/lidar-vs-radar)

> Most autonomous vehicle manufacturers including Google, Uber, and Toyota rely heavily on the LiDAR systems to navigate the vehicle. The LiDAR sensors are often used to generate detailed maps of the immediate surroundings such as pedestrians, speed breakers, dividers, and other vehicles. Its ability to create a three-dimensional image is one of the reasons why most automakers are keenly interested in developing this technology with the sole exception of the famous automaker Tesla. Tesla's self-driving cars rely on RADAR technology as the primary sensor.

> High-end LiDAR sensors can identify the details of a few centimeters at more than 100 meters. For example, Waymo's LiDAR system not only detects pedestrians but it can also tell which direction they’re facing. Thus, the autonomous vehicle can accurately predict where the pedestrian will walk. The high-level of accuracy also allows it to see details such as a cyclist waving to let you pass, two football fields away while driving at full speed with incredible accuracy. Waymo has also managed to cut the price of LiDAR sensors by almost 90% in the recent years. A single unit with a price tag of $75,000 a few years ago will now cost just $7,500, making this technology affordable.

> However, this technology also comes with a few distinct disadvantages. The LiDAR system can readily detect objects located in the range of 30 meters to 200 meters. But, when it comes to identifying objects in the vicinity, the system is a big letdown. It works well in all light conditions, but the performance starts to dwindle in the snow, fog, rain, and dusty weather conditions. It also provides a poor optical recognition. That’s why, self-driving car manufacturers such as Google often use LIDAR along with secondary sensors such as cameras and ultrasonic sensors.

> The RADAR system, on the other hand, is relatively less expensive. Cost is one of the reasons why Tesla has chosen this technology over LiDAR. It also works equally well in all weather conditions such as fog, rain, and snow, and dust. However, it is less angularly accurate than LiDAR as it loses the sight of the target vehicle on curves. It may get confused if multiple objects are placed very close to each other. For example, it may consider two small cars in the vicinity as one large vehicle and send wrong proximity signal. Unlike the LiDAR system, RADAR can determine relative traffic speed or the velocity of a moving object accurately using the Doppler frequency shift.

> Though Tesla has been heavily criticized for using RADAR as the primary sensor, it has managed to improve the processing capabilities of its primary sensor allowing it to see through heavy rain, fog, dust, and even a car in front of it. However, besides the primary RADAR sensor, the new Tesla vehicles will also have 8 cameras, 12 ultrasonic sensors, and the new onboard computing system. In other words, both technologies work best when used in combination with cameras and ultrasonic sensors.

LiDAR and depth detection will be needed, no matter how good the pure computer vision solutions get.

The accidents with Teslas were the Autopilot running into large trucks with white trailers that blended with the sky, so it just rammed into them thinking it was all sky. LiDAR would have been able to tell distance and dimension, which would have prevented those crashes.

[Even the most recent crash, where the Tesla hit an overturned truck, would not have been a problem with LiDAR](https://www.latimes.com/california/story/2021-05-16/tesla-dr...). If you ask me, even sonar, radar, and cameras are not enough; just cameras is dangerous.

Eventually I think either Tesla will have to add all of these, or regulations will require LiDAR in addition to other tools like sonar/radar (if desired) and cameras/sensors of all current types and more. As LiDAR gets cheaper it will capture more points, almost like the Kinect, and each iteration will be safer and more like how humans see. The point-cloud tools on the iPhone Pro/Max are a good example of how nice it is.

Human distance detection is closer to LiDAR than RADAR. We can easily tell when something is far in the distance and whether to worry about it. We can easily distinguish the sky from a diesel trailer even when they are the same color. That is the problem with RADAR only: it can be confused by such things due to detail and dimension, especially on turns, as in the stop-sign case. We don't shoot out RADAR or lasers to check distance, yet we innately understand distance at a glance, and not from vision alone.

Humans can be fooled by distance, but as we move, dimension and distance become clearer; that is exactly LiDAR's best feature and a trouble spot for RADAR/CV, which are not as good at depth detection while turning or moving. LiDAR was built for that; that is why point clouds are easy to make with it as you move around. LiDAR and humans learn more as they move or look around, while RADAR can actually be confused by that. LiDAR also has more resolution far away; it can see detail far beyond human vision.

I think in the end self-driving cars will use BOTH LiDAR and RADAR, but at least LiDAR in addition to computer vision; they both have pros and cons, but LiDAR is by far better at quick distance checks for items further out. This stop sign would be no issue for LiDAR. It has only just become economical, so it will keep coming down in price, and I predict eventually Tesla will also have to use LiDAR.

[Here's an example where RADAR/cameras were jumpy and caused an accident around the Tesla](https://youtu.be/BnbJvUwbewc?t=262): it safely avoids the hazard but causes the traffic around it to react, resulting in an accident. The Tesla changed lanes and then hit the brakes; the car behind expected it to keep going, then crash... dangerous. With LiDAR the detection would not have been as blocky; it would have been more precise, without such a dramatic slowdown.

Until Tesla has LiDAR it will continue to be confused with things like this: [TESLA AUTOPILOT MISTAKES MOON FOR YELLOW TRAFFIC LIGHT](https://futurism.com/the-byte/tesla-autopilot-mistakes-moon-...) and this: [WATCH TESLA’S FULL SELF-DRIVING MODE STEER TOWARD ONCOMING HIGHWAY TRAFFIC](https://futurism.com/the-byte/watch-tesla-self-driving-steer...). [They are gonna want to fix FSD wanting to drive toward moving trains](https://twitter.com/TaylorOgan/status/1487080178010542085).

In the end I bet future self-driving, when it is level 6, will have computer vision, LiDAR, RADAR and potentially more (data/maps/etc) to help navigate. [Tesla FSD has been adding more of data/maps in which is basically what they said they didn't need to do](https://twitter.com/WholeMarsBlog/status/1488428565347528707), so LiDAR will have to come along eventually. Proof of them [using maps data](https://twitter.com/IdiocracySpace/status/148843350997893939...) and possibly previous driver data.

LiDAR is extremely fast and accurate up to multiple football fields away, even determining which way a pedestrian is facing, with high fidelity. That is much better than a camera or computer vision. LiDAR can get many reads in the time it takes to do one complete CV pass.

The best process is to do depth checks across the viewable area, then overlay the results of the computer-vision tick, then check the differences again, and when it comes to depth, go with the LiDAR feedback, since video can be wrong or tricked. This is probably how the cars that use LiDAR do it (Waymo/Cruise/etc.); the equipment is also on the roof for better vision/coverage.

Tesla is going to be massively behind when regulations come down requiring depth checks beyond CV. You can still base much of the process on CV, but doing depth solely in CV can be tricked and uses massive amounts of processing power, which drains the battery faster. CV will always be behind LiDAR and physical-world detection/sensors in distance, dimension, and depth.

To think a Tesla can drive you without intervention or close supervision: there will eventually be a distance confusion, and it won't go well. The name Autopilot was better, as it implied that, like in a plane, you still have to watch it, though planes are much further apart. The name Full Self Driving should be changed immediately, even in beta; it is ripe for lawsuits and problems.

Tesla is trying to brute force self-driving and it will have some scary edge cases, always.


Humans don't actually have any direct depth sensing either. I don't think your ears count because most objects you need to detect while driving aren't really emitting anything your ears can pick up...

So in theory, if Tesla had cameras instead of eyes, positioned where the driver's eyes would be, and its computer system were a human brain, and those cameras had the dynamic range, resolution, and autofocus abilities of human eyes (or better), then the car could drive just like a human. The problem isn't really that humans have direct depth sensing and the car doesn't; the problem is that the car's vision system is inferior in many ways to the human vision system, and the car's brain is far inferior to the human brain.

Better sensing can (maybe) make up for that. If a car can sense all objects in the scene that it may collide with, accurately and far enough in advance, then avoiding accidents becomes collision avoidance rather than general intelligence. In ambiguous situations, just don't crash into anything and don't get too close to other moving objects. The car might not obey traffic rules perfectly, but it probably wouldn't crash ;) If the car occasionally runs a red light or fails to stop at a stop sign, but does so safely (for itself and the surrounding traffic), then it's not a big deal. If the car relies on recognizing a stop sign even when it's behind a tree or partially obscured, and failure to recognize the sign leads to a side collision with other traffic or running over pedestrians, that's a bit bigger of a deal.
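The "just don't crash into anything" policy boils down to something like a time-to-collision check (a toy illustration; the 2-second margin is an arbitrary assumption, not any vendor's threshold):

```python
def time_to_collision(distance_m, closing_speed_mps):
    """Seconds until impact if nothing changes; inf if we're not closing."""
    if closing_speed_mps <= 0:
        return float("inf")
    return distance_m / closing_speed_mps


def should_brake(distance_m, closing_speed_mps, reaction_margin_s=2.0):
    """Brake whenever the time to collision falls under the safety margin."""
    return time_to_collision(distance_m, closing_speed_mps) < reaction_margin_s


print(should_brake(60.0, 10.0))  # False: 6 s to spare
print(should_brake(15.0, 10.0))  # True: 1.5 s, brake now
```

The catch, as the comment notes, is that this whole scheme presumes the distance and closing-speed inputs are trustworthy in the first place.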


Even with a two-camera approach, you are still doing depth estimation with a flat computer-vision algorithm. Also keep in mind humans can turn their heads and can handle new situations without training. Humans are also better at reading situations involving dimension and movement.

LiDAR is a physical-world depth-checking system. It will always beat simulated depth checking, and it also does dimension and movement better than computer vision. LiDAR is a 360-degree depth check.

Essentially Tesla is trying to build a LiDAR-like point cloud from camera inputs only. That may work in many cases, but it will be beaten by LiDAR in all cases, due to the difference between virtual and physical data.

> The justification for dropping radar does make sense, says Weinberger, and he adds that the gap between lidar and cameras has narrowed in recent years. Lidar’s big selling point is incredibly accurate depth sensing achieved by bouncing lasers off objects—but vision-based systems can also estimate depth, and their capabilities have improved significantly.

> Weinberger and colleagues made a breakthrough in 2019 by converting camera-based depth estimations into the same kind of 3D point clouds used by lidar, significantly improving accuracy. Karpathy revealed that the company was using such a “pseudo-lidar” technique at the Scaled Machine Learning Conference last year.

> How you estimate depth is important though. One approach compares images from two cameras spaced sufficiently far apart to triangulate the distance to objects. The other is to train AI on huge numbers of images until it learns to pick up depth cues. Weinberger says this is probably the approach Tesla uses because its front facing cameras are too close together for the first technique.

> The benefit of triangulation-based techniques is that measurements are based in physics, much like lidar, says Leaf Jiang, CEO of start-up NODAR, which develops camera-based 3D vision technology based on this approach. Inferring distance is inherently more vulnerable to mistakes in ambiguous situations, he says, for instance, distinguishing an adult at 50 meters from a child at 25 meters. “It tries to figure out distance based on perspective cues or shading cues, or whatnot, and that’s not always reliable,” he says.

> How you sense depth is only part of the problem, though. State-of-the-art machine learning simply recognizes patterns, which means it struggles with novel situations. Unlike a human driver, if it hasn’t encountered a scenario before it has no ability to reason about what to do. “Any AI system has no understanding of what's actually going on,” says Weinberger.

> The logic behind collecting ever more data is that you will capture more of the rare scenarios that could flummox your AI, but there’s a fundamental limit to this approach. “Eventually you have unique cases. And unique cases you can’t train for,” says Weinberger. “The benefits of adding more and more data are diminishing at some point.”

> This is the so-called “long tail problem,” says Marc Pollefeys, a professor at ETH Zurich who has worked on camera-based self-driving, and it presents a major hurdle for going from the kind of driver assistance systems already common in modern cars to truly autonomous vehicles. The underlying technology is similar, he says. But while an automatic braking system designed to augment a driver’s reactions can afford to miss the occasional pedestrian, the margin for error when in complete control of the car is fractions of a percent.

https://spectrum.ieee.org/tesla-places-big-bet-vision-only-s...
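The two depth-estimation points in the quoted article can be sketched numerically. A toy calculation (the focal length, baseline, and heights are assumed illustrative values, not Tesla's or NODAR's real specs):

```python
import math

def stereo_depth(focal_px, baseline_m, disparity_px):
    """Triangulated depth from a stereo pair: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

# 1) Baseline matters: a wide stereo rig still sees usable disparity at
# 50 m, while closely spaced windshield cameras see almost none.
f = 1000.0  # focal length in pixels (assumed)
for baseline_m in (1.0, 0.1):
    disparity_px = f * baseline_m / 50.0  # disparity of an object at 50 m
    print(f"baseline {baseline_m} m -> {disparity_px:.1f} px disparity at 50 m")

# 2) Monocular ambiguity: a 1.8 m adult at 50 m and a 0.9 m child at 25 m
# subtend the same angle, so apparent size alone cannot tell them apart.
adult_angle = math.atan(1.8 / 50.0)
child_angle = math.atan(0.9 / 25.0)
print(math.isclose(adult_angle, child_angle))  # True
```

With a ~10 cm baseline there are only about 2 px of disparity left at 50 m, which is presumably why Tesla's closely spaced front cameras must lean on learned depth cues instead of triangulation.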


> They and their VC backers are clearly betting on the concept that radars + lidar + imaging will be the ultimate successful solution in full self driving cars, as a completely opposite design and engineering philosophy from Tesla attempting to do "full self driving" with camera sensors and categorical rejection of lidar.

Tesla's approach is a textbook case of premature optimization.


Tesla's approach is a textbook case of differing constraints.

Despite Elon's commentary, the reason Tesla does not use lidar is sensor cost (Waymo/Cruise cars probably have $250k+ worth of sensors on them), and the reason Tesla does not use radar is supply chain.


Sensor cost is a lame excuse, especially coming from a company that's one of the leaders of the EV world. Battery prices are decreasing, but sensor prices wouldn't?

Tesla put this constraint on themselves because they wanted to sell a product years before it existed, and they were super cheap on hardware to optimize their costs.

They optimized their solution before they had even the slightest idea of what's needed to make it work.


core to tesla's strategy is to do massive data collection from consumer-owned cars using beta software (and hardware, that the consumer pays for). that model is not compatible with expensive lidars, which contrary to some other comments in this thread, are still very expensive (just because the entry-level pucks are cheap, does not mean full lidar coverage is cheap). there is no way they could push $100k of sensors on consumers to build out their data collection pipeline. when tesla was first starting out, affordable lidar did not even exist so it's hard to call that a lame excuse.

all that said, I'm still pessimistic about tesla's chances at making camera-only L4 work in any short time horizon. we will see if they pull it off, but it's such a severe disadvantage compared to fully-kitted competitors.


I don't think it's a lame excuse at all. Tesla barely managed to get a profitable car on the market given battery costs, and had to bet their company on a strategy of massive investment and reducing costs over a decade+ period.

They are also ahead of every other auto maker in commercially purchasable L2 autopilot.

With that context, it doesn't make sense to add $50k+ to the price of their cars (e.g. doubling the cost of a Model Y) just to get slightly better autopilot performance. Lidar would help them in some cases (e.g. stationary objects), but it's not a panacea. Their strategy makes perfect sense given they want to sell semi-affordable cars to the public.

On the flipside, Cruise & Waymo's strategy of "geofenced L4 at any initial cost" makes perfect sense for their short term robotaxi ambitions.

Maybe in ~10 years these strategies will intersect, but for the time being they are completely different products.


> Tesla barely managed to get a profitable car on the market given battery costs, and had to bet their company on a strategy of massive investment and reducing costs over a decade+ period.

Numbers came out a few days ago showing that Teslas had some of the highest margins in the mainstream auto segment.


Why do we have to discuss Tesla whenever self-driving comes up? Cruise has this technology. Waymo has it. There are a smattering of niche players out there with various levels of self-driving. Tesla emphatically does not have it. They are not in the race.


I agree, but they sure claim to be. Literally marketed as "full self driving".


Musk is a true car salesman.


Actually, I just noticed earlier that today's wording is "full self driving capability".

Sells for 12k USD here, for instance: https://www.tesla.com/modely/design#overview


Where can I buy one?


Yeah, but that's not a tech issue. The few thousand that have the full self driving beta just have the opt-in option to turn on rolling stops. That just has to be removed in their next update.


As you’ll note in the comments on that thread though, FSD has a lot more issues than just that; particularly with stationary objects and at night.


I don't have FSD enabled on my Model 3, but I have the FSD visualization preview. I'd be terrified of FSD at night. During the day I don't see it have any issues registering cars and other obstacles, but during night it barely detects anything.


Seems fairly good at night in 10.9 https://www.youtube.com/watch?v=01QowBvtraE


Tesla will release L4 or L5 self driving this year. Musk said it himself.

Please ignore the fact that this is the 7th (or more?) year in a row he has said this.


I still want a functional Autopilot that doesn't phantom brake on the freeway, which has gotten worse since they stopped relying on radar. Or have the self-park feature not curb the wheels. I won't even get started on the Summon feature.


"Use Summon to bring your vehicle to you while dealing with a fussy child", unless you ask Legal, in which case "pay attention to vehicle at all times. Do not use distracted" is the use case, you mean?


The few times I've ridden in a Tesla using "self driving", the phantom braking was really jarring. It was surprising and concerning how often it slammed on the brakes for no reason.

