LIDAR doesn’t stop them from bumping into things. LIDAR is a sensor; it doesn’t recognise anything or make decisions about steering, acceleration, or braking.
You need more than the sensor, obviously. The point is that you don't need any AI to build a system this way that is substantially safer than one based only on camera feeds and AI.
> > While an error rate of 1-in-1,000 seems low, [...], on a task that requires successful execution of thousands of steps in a row, such a system results in inevitable failure.
> This is also why (edit: non-LIDAR) FSD cars are an illusion.
In this scenario, Waymo’s AI is executing thousands of steps in a row. The fact that it uses LIDAR for sensing doesn’t change that. It’s still AI driving you around no matter what its eyes are made of.
Waymo is a counterexample to the point you were making, and their use of LIDAR doesn’t change that.
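To make the quoted 1-in-1,000 figure concrete, here's a back-of-the-envelope calculation (treating steps as independent, which is a simplification):

```python
# Chance of a flawless run of N steps at a 1-in-1,000 per-step error rate,
# assuming the steps are independent.
per_step_success = 1 - 1 / 1000

for n in (100, 1_000, 5_000):
    print(f"{n:>5} steps: {per_step_success ** n:.2%} chance of zero errors")
```

At a few thousand steps a flawless run is already vanishingly unlikely, and that arithmetic doesn't care whether the sensor feeding the AI is a camera or a LIDAR.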
No, because safety is guaranteed by the LIDAR and navigation is done by GPS+classical algorithms. Mistakes made by the AI can be overcome by those two non-AI layers plus re-running the AI-based steps.
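Schematically, something like this per control tick (a minimal sketch with made-up names and thresholds, not anyone's actual stack): the AI proposes, the deterministic LIDAR and GPS/route layers get the final say, and a rejected proposal is simply retried on the next tick.

```python
from dataclasses import dataclass

@dataclass
class Action:
    steer: float   # steering angle, radians
    accel: float   # acceleration, m/s^2

def lidar_veto(clearance_ahead_m: float, min_clearance_m: float = 5.0) -> bool:
    """Deterministic safety gate: true when LIDAR clearance drops below the threshold."""
    return clearance_ahead_m < min_clearance_m

def classical_fallback() -> Action:
    """Conservative action from the GPS + classical planner (stubbed): ease off, hold lane."""
    return Action(steer=0.0, accel=-2.0)

def arbitrate(ai_proposal: Action, clearance_ahead_m: float, on_route: bool) -> Action:
    """AI proposes, non-AI layers dispose: a mistaken proposal never reaches the wheels."""
    if lidar_veto(clearance_ahead_m):
        return Action(steer=0.0, accel=-6.0)   # hard brake, decided without the AI
    if not on_route:
        return classical_fallback()            # AI gets to re-propose on the next tick
    return ai_proposal

# Example: the AI proposes accelerating, but the LIDAR sees an obstacle 3 m ahead.
print(arbitrate(Action(steer=0.0, accel=1.5), clearance_ahead_m=3.0, on_route=True))
```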