Pushmi-Pullyu
Well-Known Member
It astounds me that some other company is following Tesla down the rabbit hole of trying to use cameras as the primary sensors for self-driving cars.
Can cameras see in the dark? No, no better than the human eye can. (In fact, I've seen arguments that ordinary video cameras can't match the human eye's ability to discern fine shades of gray, which means their night "vision" is even worse.)
Can cameras discern the distance to objects, or directly discern where one object ends (that is, where the edge of an object is) and another begins? Heck no! A self-driving system has to use software to interpret those video images, hopefully figuring out distances and figuring out which objects are where. But one doesn't have to do much research on the subject to learn that this is problematic and rather error-prone, despite decades of effort on the part of roboticists.
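To make that concrete, here's a minimal sketch of how a camera-based system has to infer depth, assuming a calibrated stereo pair. The focal length, baseline, and disparity values below are hypothetical, chosen only to show how a small pixel-matching error turns into a large distance error:

```python
# Minimal sketch of why camera depth is indirect: with a calibrated stereo
# pair, depth must be *inferred* from pixel disparity between two images.
# All numbers here are assumed, for illustration only.

focal_length_px = 1400.0   # assumed focal length, in pixels
baseline_m = 0.12          # assumed spacing between the two cameras, meters

def depth_from_disparity(disparity_px: float) -> float:
    """Triangulated depth Z = f * B / d. Only valid if the matcher found
    the *correct* corresponding pixel in the other image."""
    if disparity_px <= 0:
        raise ValueError("no valid match; depth is unknown")
    return focal_length_px * baseline_m / disparity_px

# A one-pixel matching error at long range produces a large depth error:
print(depth_from_disparity(4.0))  # ~42 m if the match is right
print(depth_from_disparity(3.0))  # ~56 m if the matcher is off by one pixel
```

The distance never comes from the sensor itself; it comes from software guessing which pixels in two images correspond, and that guess is exactly where the errors creep in.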
To clarify: The problem isn't due to any lack of resolution on the part of the cameras. When comparing to the human vision system, the limitation isn't the camera, it's the image processing. The human brain has a highly developed visual cortex, a sophisticated dedicated image processing center which is the result of billions of years of evolution. The computer has maybe 2 or 4 general-purpose microprocessors running some software, with far less processing power. That's hardly a fair comparison!
It truly boggles me that some companies are ignoring the very clear advantages of active scanning using lidar and/or phased-array, high-res radar. With active scanning, you get a direct, highly accurate measurement of the distance to objects, as well as their shape; no need to rely on unreliable optical object recognition software which may or may not be able to figure out where the objects are and how far away they are.
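For comparison with the stereo sketch above, here's the same kind of back-of-the-envelope illustration for an active sensor. The pulse timing is a made-up example, but the point stands: range falls straight out of the round-trip time of the signal, with no image interpretation in the loop:

```python
# Minimal sketch of direct ranging with an active sensor (lidar/radar).
# The return time below is an illustrative number, not a real measurement.

SPEED_OF_LIGHT_M_S = 299_792_458.0

def range_from_round_trip(round_trip_s: float) -> float:
    """Distance = c * t / 2 (the pulse travels out and back)."""
    return SPEED_OF_LIGHT_M_S * round_trip_s / 2.0

# A return received 400 nanoseconds after emission puts the target ~60 m away,
# regardless of whether it's noon or midnight:
print(range_from_round_trip(400e-9))  # ~59.96 m
```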
And of course, active scanners don't "care" if it's day or night; if it's bright sunlight or pitch black. Unlike cameras and the human eye, they aren't limited to "seeing" only in the direction headlights point... and the inadequacy of headlights used in nearly all cars is something the IIHS has been talking rather loudly about recently.
Now, there is one legitimate criticism of lidar: its performance degrades in rain more than a camera's does. Okay, so use phased-array, high-res radar instead. Problem solved.
Even if optical object recognition could be made reliable, how could they possibly overcome the limitation that cameras can't see in the dark any better (or perhaps even worse) than the human eye? Will they mount dozens of headlights all around the car, so the self-driving system can (as will be required) see in all directions, rather than merely in a narrow cone directly ahead of the car? And will they increase the brightness of those headlights, so they illuminate the landscape far enough ahead that the self-driving system has time to react... unlike the Uber car which ran down and killed a woman pushing her bicycle across the road in the dark? And if they do start using much brighter headlights, how much worse will the glare problem be for human drivers facing oncoming cars? And if those self-driving cars use illumination in a 360° circle, the same problem will hit drivers of the cars they're following.
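To put rough numbers on that headlight-range concern, here's a quick back-of-the-envelope calculation. The speed, low-beam range, reaction latency, and braking rate below are all assumed figures for illustration, not measurements of any particular car:

```python
# Rough check of the headlight-range problem described above.
# Every figure here (low-beam range, reaction time, deceleration) is an
# assumption chosen for illustration.

speed_m_s = 70 * 0.44704          # 70 mph expressed in meters per second
reaction_time_s = 0.5             # assumed perception/processing latency
braking_decel_m_s2 = 7.0          # assumed hard braking on dry pavement
low_beam_range_m = 60.0           # assumed useful low-beam illumination

reaction_distance = speed_m_s * reaction_time_s
braking_distance = speed_m_s ** 2 / (2 * braking_decel_m_s2)
total_stopping_distance = reaction_distance + braking_distance

print(f"Needed: {total_stopping_distance:.0f} m, lit: {low_beam_range_m:.0f} m")
# With these assumptions the car needs ~86 m to stop but can only "see"
# ~60 m of road -- a camera-only system is overdriving its own headlights.
```

Under those assumptions, a camera-only car at highway speed literally cannot stop within the distance its headlights reveal at night.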
The more one looks at the problem, the worse it looks (in my opinion) for trying to depend on cameras as the primary sensors, day and night.
As I've said many times, the goal of those developing self-driving cars should not be to slavishly imitate a human driver. If human drivers were capable of driving safely, then we wouldn't need self-driving cars. Self-driving cars should be better and safer than humans at driving. One way they can be safer is by using active scanners, and not cameras, as the primary sensors.