
The vision system is supposed to be able to produce an accurate depth map from a combination of stereo vision and depth-from-defocus. I've seen demos of the real-time depth map, and it looks high-resolution and accurate to about 5-10 cm.

So, if they have the input data, why is it being ignored by Autopilot?
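
For concreteness (this is generic computer vision, not Tesla's pipeline): with a calibrated stereo pair, depth comes straight out of the disparity map as depth = focal_length * baseline / disparity. Below is a minimal sketch using OpenCV's block matcher; the focal length and baseline are placeholder values I made up for illustration.

    # Minimal stereo-depth sketch (not Tesla's code): depth = f * B / disparity.
    import cv2
    import numpy as np

    FOCAL_PX = 700.0    # focal length in pixels (assumed placeholder)
    BASELINE_M = 0.12   # camera separation in metres (assumed placeholder)

    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    # Block matching; numDisparities must be a multiple of 16.
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels

    # Guard against pixels with no valid match (disparity <= 0).
    depth_m = np.where(disparity > 0, FOCAL_PX * BASELINE_M / disparity, np.inf)
    print("median depth (m):", np.median(depth_m[np.isfinite(depth_m)]))

Worth noting that for a fixed baseline, stereo depth error grows roughly with the square of distance, so 5-10 cm accuracy is plausible up close but much harder at highway ranges.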



Tesla’s website[0] states it’s monocular depth estimation. I haven’t heard of them doing any form of stereo.

[0] https://www.tesla.com/autopilotAI
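
For anyone unfamiliar with the term: monocular depth estimation is a learned regression from a single RGB frame to a per-pixel depth map, and the raw output is usually relative (unscaled) depth rather than metric distance. Tesla's network isn't public, so the sketch below uses the openly available MiDaS model purely as a stand-in.

    # Single-image (monocular) depth sketch using the public MiDaS model,
    # as a stand-in only -- not Tesla's network.
    import cv2
    import torch

    model = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
    model.eval()
    transform = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform

    img = cv2.cvtColor(cv2.imread("frame.png"), cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        prediction = model(transform(img))       # (1, H', W') relative inverse depth
        depth = torch.nn.functional.interpolate(
            prediction.unsqueeze(1),             # add channel dim for interpolate
            size=img.shape[:2],
            mode="bicubic",
            align_corners=False,
        ).squeeze()                              # back to (H, W)

    print("relative depth range:", float(depth.min()), float(depth.max()))

Getting metric scale out of a monocular network needs extra calibration (known camera geometry, radar, or other sensors), which is part of why people are skeptical of relying on it alone.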



