Pushmi-Pullyu
In ordinary traffic, a crossing car or truck will trigger a hard brake. But why those are detected while the tractor trailer remains invisible is a mystery.
Because using software to recognize objects in video images is not reliable. It might work... and it might not. Even 99.9% reliability isn't sufficient if that means it fails to "see" 1 out of every 1,000 semi trailers. And at best, it's quite slow in terms of computer processing speed.
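To put rough numbers on that (the reliability figure and encounter count below are illustrative assumptions, not measured data), even a 99.9%-reliable classifier misses one object in a thousand, and a large fleet sees crossing trailers far more than a thousand times a day:

```python
# Illustrative arithmetic only -- the reliability and encounter counts below
# are assumptions for the sake of the example, not real-world figures.

per_encounter_reliability = 0.999          # classifier "sees" the trailer 99.9% of the time
encounters_per_day_fleetwide = 100_000     # assumed crossing-trailer encounters per day

expected_misses_per_day = encounters_per_day_fleetwide * (1 - per_encounter_reliability)
print(f"Expected missed detections per day: {expected_misses_per_day:.0f}")
# -> Expected missed detections per day: 100
```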
Using active sensors -- phased-array radar or lidar -- is much more reliable and much faster. That's why Waymo's self-driving cars use all three systems: Cameras, lidar, and phased-array (so-called "high-res") radar.
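One way to see why carrying three overlapping sensor types matters: if the sensors fail independently (a simplifying assumption; real sensors share failure modes like heavy rain), the chance that all of them miss the same object is the product of their individual miss rates. A rough sketch with made-up numbers:

```python
# Minimal sketch of why redundant, roughly independent sensors help.
# Miss rates are made-up round numbers; treat the result as an
# optimistic upper bound on the benefit, not a real specification.

miss_rates = {
    "camera": 0.001,   # each sensor misses 1 in 1,000 on its own
    "lidar":  0.001,
    "radar":  0.001,
}

# Probability that every sensor misses the same object simultaneously,
# under the independence assumption.
p_all_miss = 1.0
for rate in miss_rates.values():
    p_all_miss *= rate

print(f"P(all three sensors miss): {p_all_miss:.1e}")  # -> 1.0e-09
```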
Aside from ordinary video cameras, Tesla uses only extremely short-range (within a few feet) ultrasonic sensors, plus a front-mounted low-res Doppler radar.
Here is an attempt to show, visually, what Tesla's low-res Doppler radar "sees":

(I'm linking to the source for the sake of completeness, but I'm very skeptical about a lot of what's said in that article. Looks like a lot of guesswork to me, and I think some of those guesses are wrong.)
Note that the size of the radar return (indicated by the size of the circle) reflects only distance, not the size or shape of the object. Also note that it ignores stationary obstacles such as telephone poles and trees. (A couple of objects, indicated in orange, are labeled "stationary", but my guess is those objects were first detected as moving and are now identified according to their previously detected position after they stopped. Or perhaps at least one is a false positive, since the larger orange circle on the right appears empty.)
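That "ignores stationary obstacles" behavior is typical of automotive Doppler radar: a return whose measured Doppler velocity matches what a fixed object would produce, given the car's own speed, looks exactly like a telephone pole, overpass, or road sign, so such returns are routinely discarded to avoid constant false braking. A hedged sketch of that kind of filter (the field names and threshold are my own invention, not Tesla's):

```python
import math
from dataclasses import dataclass

@dataclass
class RadarReturn:
    range_m: float          # distance to the target
    azimuth_rad: float      # bearing relative to the car's heading
    radial_velocity: float  # measured Doppler velocity; negative means closing

def is_stationary(ret: RadarReturn, ego_speed: float, tol: float = 0.5) -> bool:
    """A stationary object's Doppler velocity is just the ego motion
    projected onto the line of sight, i.e. about -ego_speed * cos(azimuth)."""
    expected_if_stationary = -ego_speed * math.cos(ret.azimuth_rad)
    return abs(ret.radial_velocity - expected_if_stationary) < tol

# Example: car doing 25 m/s, a pole dead ahead vs. a truck crossing the road.
pole  = RadarReturn(range_m=80.0, azimuth_rad=0.0, radial_velocity=-25.0)
truck = RadarReturn(range_m=60.0, azimuth_rad=0.2, radial_velocity=-5.0)

print(is_stationary(pole, ego_speed=25.0))   # True  -> typically filtered out
print(is_stationary(truck, ego_speed=25.0))  # False -> kept as a moving target
```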
Quite clearly this is entirely unsuitable for 3D mapping. If, for example, there is a tractor-trailer rig pulling onto or across the highway, into the path of your car, then the self-driving system needs to know how big it is... not just how close it is.
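The difference in output can be made concrete: a lidar (or stereo-camera) point cloud lets you estimate an object's physical extent, while a single low-res radar return boils it down to roughly one range number. A toy sketch using made-up points along the side of a trailer:

```python
import numpy as np

# A handful of made-up lidar points on the side of a trailer (x, y, z in metres).
trailer_points = np.array([
    [30.0, 2.0, 0.5],
    [30.2, 2.1, 3.8],
    [44.5, 2.0, 0.6],
    [44.8, 2.2, 3.9],
    [37.0, 2.1, 2.0],
])

# With a point cloud you can estimate size: an axis-aligned bounding box.
mins, maxs = trailer_points.min(axis=0), trailer_points.max(axis=0)
extent = maxs - mins   # [length, depth, height] of the box
print(f"Estimated extent: {extent[0]:.1f} m long, {extent[2]:.1f} m tall")

# A single low-res radar return reduces all of that to roughly one number.
radar_range = float(np.linalg.norm(trailer_points.mean(axis=0)))
print(f"Radar-style output: something at about {radar_range:.0f} m")
```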