(copied from another thread):
For the reasons pointed out by @gooki, if we decide Cadillac is the winner and they need to lead the charge, we might miss out on vital capabilities of Tesla. (Again, the evaluation methodology used in the article may be wrong; I am just trying to point out the various usage conditions, and how one system may perform better in one condition but not in another.)
I think this is losing sight of the forest for the trees.
Tesla and GM/Cadillac and the other auto makers seem intent on fiddling around with Level 2 or 2+ semi-autonomous driving systems. Only Waymo seems to be actually working toward the goal of Level 4 fully autonomous driving.
The reason that cars controlled by Level 2 or 2+ systems keep slamming into vehicles parked on the highway is that their sensor systems are wholly inadequate to the task of creating a SLAM.
SLAM stands for Simultaneous Localization And Mapping: a process by which a robot or other device creates a 3D map of its surroundings and orients itself properly within that map in real time.
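To make the acronym concrete: real SLAM systems are large estimation problems (particle filters, graph optimization, and so on), but a toy sketch of the two halves -- localization and mapping -- might look like this. Everything here is a made-up illustration, not any real system's code:

```python
import math

# Toy SLAM loop: the vehicle keeps a pose estimate (x, y, heading)
# and a map (here just a set of obstacle points in world coordinates).
# Each step it (1) predicts its new pose from odometry and
# (2) transforms sensor hits from the vehicle frame into the world
# frame to extend the map. (All numbers are hypothetical.)

pose = {"x": 0.0, "y": 0.0, "theta": 0.0}  # theta in radians
world_map = set()                          # obstacle points (metres, rounded)

def predict(pose, distance, turn):
    """Dead-reckoning pose update from odometry (the localization half)."""
    pose["theta"] += turn
    pose["x"] += distance * math.cos(pose["theta"])
    pose["y"] += distance * math.sin(pose["theta"])

def integrate_scan(pose, hits):
    """Place sensor returns (range, bearing) into the world map (the mapping half)."""
    for rng, bearing in hits:
        wx = pose["x"] + rng * math.cos(pose["theta"] + bearing)
        wy = pose["y"] + rng * math.sin(pose["theta"] + bearing)
        world_map.add((round(wx, 1), round(wy, 1)))

# Drive 10 m straight; a stationary object sits 20 m from the start point.
predict(pose, 10.0, 0.0)
integrate_scan(pose, [(10.0, 0.0)])   # the object is now 10 m dead ahead
print(sorted(world_map))              # -> [(20.0, 0.0)]
```

The point of the sketch: the map is built *in world coordinates*, so a stationary obstacle stays on the map even as the vehicle moves -- exactly what a Doppler-only sensor suite cannot provide.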
Current production cars use cheap Doppler radar systems as their primary sensors, which detect only differences in speed. Those sensors cannot detect stationary objects; any reflections off them are ignored as part of the "background noise". A detailed explanation can be found here: "Why emergency braking systems sometimes hit parked cars and lane dividers"
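A toy sketch of how that filtering goes wrong (the speeds and the tolerance window here are made-up illustration values, not any real radar's parameters):

```python
# Why a Doppler-based filter discards stationary obstacles: each return
# carries a closing speed (m/s) measured from the Doppler shift. A
# ground-stationary object closes at exactly the ego vehicle's own
# speed, so a clutter filter throws it away along with bridges, signs,
# and... parked cars. (All numbers are hypothetical.)

EGO_SPEED = 30.0      # m/s (~67 mph)
TOLERANCE = 1.0       # closing speeds this close to EGO_SPEED = "clutter"

returns = [
    {"label": "car ahead doing 25 m/s", "closing_speed": 5.0},
    {"label": "parked fire truck",      "closing_speed": 30.0},
    {"label": "overhead sign",          "closing_speed": 30.2},
]

tracked = [r["label"] for r in returns
           if abs(r["closing_speed"] - EGO_SPEED) > TOLERANCE]
print(tracked)   # -> ['car ahead doing 25 m/s']  -- the fire truck is gone
```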
Cars will never achieve truly safe autonomous driving so long as they are dependent on Doppler radar and cameras. Cameras cannot, of course, directly "see" anything; camera images must be subjected to software interpretation. That has various problems which have so far proven intractable: trouble with edge-finding, low contrast against the background, and other issues too complex for a detailed explanation here. Suffice it to say that the human brain has a highly developed, complex visual cortex which is the product of hundreds of millions of years of evolution, and even then it's not perfect -- witness optical illusions.
Computers have no equivalent of the human visual cortex. Computers must rely on much simpler hardware, and software limited by the size of computer memory. Furthermore, relying on cameras would make self-driving cars subject to the same limitations as the human eye: They can't see in the dark. The goal of those developing autonomous driving systems should not be to slavishly reproduce the way humans see and drive; the goal should be to produce a driving system which is much, much safer!
Active sensors (which send out a signal, which bounces off objects and gives a return signal) are a much better technology to use than trying to use software to interpret camera images. High resolution active sensor systems include lidar and phased-array radar. Some (or many) autonomous driving test vehicles have such active scanners. One reason that lidar hasn't been seen in production cars is that lidar scanners have been very expensive. But with new, much lower-cost solid-state lidar tech, hopefully that limitation will soon be overcome.
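The principle behind such active time-of-flight sensors is simple enough to show in a few lines -- a sketch of the physics, not any vendor's actual signal chain:

```python
# An active sensor (lidar, radar) emits a pulse and times the echo.
# Range = c * t / 2: the speed of light times the round-trip time,
# halved because the pulse travels out AND back.

C = 299_792_458.0           # speed of light, m/s

def range_from_echo(round_trip_seconds):
    return C * round_trip_seconds / 2.0

# An echo arriving ~1.33 microseconds after the pulse left:
d = range_from_echo(1.334e-6)
print(f"{d:.0f} m")         # -> 200 m
```

Note how short those round trips are: even a 200 m return takes under 1.5 microseconds, which is why a scanner can sweep huge numbers of points per second and build a genuinely real-time picture.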
But altho active scanning with lidar and/or high-res radar is arguably necessary for fully self-driving cars, it's not sufficient. Waymo has a specially built fleet of self-driving cars, rather noticeable due to their odd appearance:
...altho my Google-fu indicates that particular type has been retired to museums. But that's wandering off the point, which is that Waymo has developed testing cars which actually have no steering wheel, and according to reports do have SLAM systems onboard. This evidence suggests those cars may be fully autonomous, but they are limited to 25 MPH or less.
Obviously that is still inadequate. So what's the limitation there? Are the sensors not scanning out far enough? Is the software not running fast enough? Do they need more powerful computers to run the software faster?
I don't know. But the point of my argument here is that no single company has been able to develop self-driving cars on its own. It may be that Waymo is close; they have announced plans to deploy a fleet of what are apparently fully self-driving cars in a suburb of Phoenix. (Reportedly the streets in the area are laid out in a perfect grid, with no odd intersections to confuse the poor car's tiny brain.) But the start of that project has been delayed.
Of course I could be proven wrong, but it seems to me that either auto makers need to band together to create a group project to develop Level 4 autonomous driving, or else they might as well cease spending money on what they're doing and wait for Waymo's tech to advance far enough to be deployed in mass produced cars. Fiddling around with cars dependent on low-res Doppler radar and/or camera images is like working to perfect the sailing ship when what we need is the steamboat. To extend the analogy, that's not to say that there's no point in fiddling around with sailboats; there are certainly lessons to be learned there, in the best shape for the hull and the best mechanism for steering. But no amount of tweaking the sails is going to develop steam power! And in my opinion -- not fact, but opinion -- no amount of fiddling around with autonomous driving systems which don't have a SLAM are ever going to result in reliably safe self-driving cars which will function in most or all driving conditions.
I don't care how often your car's navigational data is updated; that's not going to help it detect the emergency vehicle that parked on the highway two minutes ago, nor is it going to notice when the car 200 feet in front of yours is swerving towards an unavoidable accident. Fully autonomous vehicles need real-time, active, high-resolution scanning of the environment 360° around the car; scanning far enough out from and ahead of the car for the autonomous systems to be able to react in time if and when an obstacle in the vehicle's path, or an impending accident, is detected.
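A back-of-the-envelope calculation of what "far enough out" means, assuming a hypothetical half-second system reaction time and 7 m/s² of braking (both figures are assumptions for illustration, not specs for any real system):

```python
# Minimum sensing range = reaction distance + braking distance.
# Reaction distance = v * t_react; braking distance = v^2 / (2 * a).

def required_range(speed_ms, reaction_s=0.5, decel_ms2=7.0):
    return speed_ms * reaction_s + speed_ms**2 / (2 * decel_ms2)

mph = 70
speed = mph * 0.44704                 # 70 mph is about 31.3 m/s
print(f"{required_range(speed):.0f} m")   # -> about 86 m
```

So even with a computer's reflexes, a car at highway speed needs reliable detection out to roughly the length of a football field -- and considerably farther if the road is wet or the obstacle itself is moving toward you.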
For fully self-driving cars, a SLAM isn't just a good idea -- it's going to be absolutely mandatory. (Again that's an opinion, not fact... but I see that as inescapable.)