Self-Driving / Autonomous Cars: General discussion

Discussion in 'General' started by gooki, Oct 9, 2018.


  1. gooki

    gooki Well-Known Member

    The reason we need multiple efforts on autonomous driving is that not all of them will be successful. Combining efforts could lead to them all going down the same dead end.

  3. interestedinEV

    interestedinEV Well-Known Member

    Right. There are so many technological choices (infrared cameras, lidar, communication methods, etc.) and operating conditions. Mobileye likes to test in the narrow streets of Jerusalem, Waymo on the wider streets of Phoenix. @TeslaInvestors in another thread had talked about how Cadillac was ahead of Tesla in self-driving technology. On reading the article, I realized that Tesla did better (even though still erratic) on narrow, winding, unmarked roads (which is one use case) while GM did better on marked roads (which is another use case). For the reasons pointed out by @gooki, if we decide Cadillac is the winner and they should lead the charge, we might miss out on vital capabilities of Tesla's system. (Again, the evaluation methodology used in the article may be wrong; I am just trying to point out the various usage conditions and how one system may perform better in one condition but not in another.)

    With more manufacturers, there will be more ideas and innovation. If history is any guide, the many players today will thin out as some fall by the wayside, some see their technology or capabilities absorbed into another vendor's product, and some emerge stronger as they learn from the strengths and weaknesses of others. This is not the time for consolidation; that will come later, and I really don't see a situation where there will be one and only one system in operation.
    Last edited: Oct 10, 2018
  4. Pushmi-Pullyu

    Pushmi-Pullyu Well-Known Member

    (copied from another thread) :

    I think this is losing sight of the forest for the trees.

    Tesla and GM/Cadillac and the other auto makers seem intent on fiddling around with Level 2 or 2+ semi-autonomous driving systems. Only Waymo seems to be actually working toward the goal of Level 4 fully autonomous driving.

    The reason that cars controlled by Level 2 or 2+ systems keep slamming into vehicles parked on the highway is that their sensor systems are wholly inadequate to the task of creating a SLAM.

    SLAM stands for Simultaneous Localization And Mapping technology, a process whereby a robot or a device can create a 3D map of its surroundings, and orient itself properly within this map in real time.
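    To make that definition concrete, here is a toy sketch of the two halves of that loop -- mapping and localization -- on a 2D occupancy grid. The grid size, poses, and scan values are entirely made up for illustration; real SLAM systems are vastly more sophisticated:

```python
import math

GRID = 20  # toy 20 x 20 occupancy grid, one cell per metre (made-up scale)

def mark_hits(grid, pose, ranges, angles):
    """Mapping half of SLAM: register each range return as an
    occupied cell, relative to the current pose estimate."""
    x, y, heading = pose
    for r, a in zip(ranges, angles):
        gx = int(round(x + r * math.cos(heading + a)))
        gy = int(round(y + r * math.sin(heading + a)))
        if 0 <= gx < GRID and 0 <= gy < GRID:
            grid[gy][gx] = 1

def localize(grid, candidate_poses, ranges, angles):
    """Localization half: pick the candidate pose whose scan best
    overlaps the map built so far (a crude scan-matching score)."""
    def score(pose):
        x, y, heading = pose
        hits = 0
        for r, a in zip(ranges, angles):
            gx = int(round(x + r * math.cos(heading + a)))
            gy = int(round(y + r * math.sin(heading + a)))
            if 0 <= gx < GRID and 0 <= gy < GRID and grid[gy][gx]:
                hits += 1
        return hits
    return max(candidate_poses, key=score)
```

    The point of the sketch is only the structure: the map is built relative to the pose estimate, and the pose estimate is refined against the map, in a continuous real-time loop.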

    Current production cars use cheap Doppler radar systems as their primary sensors, and those detect only differences in speed. Such sensors cannot detect stationary objects; reflections off them are filtered out as part of the "background noise". A detailed explanation can be found here: "Why emergency braking systems sometimes hit parked cars and lane dividers"

    Cars will never achieve truly safe autonomous driving so long as they are dependent on Doppler radar and cameras. Cameras cannot, of course, directly "see" anything; camera images must be subjected to software interpretation. That has various problems which have proven very hard to solve: problems with edge-finding, low contrast with the background, and other issues too complex for a detailed explanation here. Suffice it to say that the human brain has a highly developed, complex visual cortex which is the product of hundreds of millions of years of evolution, and even then it's not perfect -- witness optical illusions.

    Computers have no equivalent of the human visual cortex. Computers must rely on much simpler hardware, and software limited by the size of computer memory. Furthermore, relying on cameras would make self-driving cars subject to the same limitations as the human eye: They can't see in the dark. The goal of those developing autonomous driving systems should not be to slavishly reproduce the way humans see and drive; the goal should be to produce a driving system which is much, much safer!

    Active sensors (which send out a signal, which bounces off objects and gives a return signal) are a much better technology to use than trying to use software to interpret camera images. High resolution active sensor systems include lidar and phased-array radar. Some (or many) autonomous driving test vehicles have such active scanners. One reason that lidar hasn't been seen in production cars is that lidar scanners have been very expensive. But with new, much lower-cost solid-state lidar tech, hopefully that limitation will soon be overcome.
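    Part of the appeal of active sensing is how directly range falls out of the physics. A one-function sketch of time-of-flight ranging, the core of both lidar and radar:

```python
C = 299_792_458.0  # speed of light in m/s

def range_from_echo(round_trip_seconds):
    """Time-of-flight ranging: the pulse travels out and back,
    so the one-way distance is half the round trip."""
    return C * round_trip_seconds / 2.0

# A return arriving 1 microsecond after the pulse left puts the
# target roughly 150 m away -- no image interpretation required.
```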

    But altho active scanning with lidar and/or high-res radar is arguably necessary for fully self-driving cars, it's not sufficient. Waymo has a special-built fleet of self-driving cars, rather noticeable due to their odd appearance:


    ...altho my Google-fu indicates that particular type has been retired to museums. But that's wandering off the point, which is that Waymo has developed testing cars which actually have no steering wheel, and according to reports do have SLAM systems onboard. This evidence suggests those cars may be fully autonomous, but they are limited to 25 MPH or less.

    Obviously that is still inadequate. So what's the limitation there? Are the sensors not scanning out far enough? Is the software not running fast enough? Do they need more powerful computers to run the software faster?

    I don't know. But the point of my argument here is that no single company has been able to develop self-driving cars on its own. It may be that Waymo is close; they have announced plans to deploy a fleet of what are apparently fully self-driving cars in a suburb of Phoenix. (Reportedly the streets in the area are laid out in a perfect grid, with no odd intersections to confuse the poor car's tiny brain.) But the start of that project has been delayed.

    Of course I could be proven wrong, but it seems to me that either auto makers need to band together to create a group project to develop Level 4 autonomous driving, or else they might as well cease spending money on what they're doing and wait for Waymo's tech to advance far enough to be deployed in mass-produced cars. Fiddling around with cars dependent on low-res Doppler radar and/or camera images is like working to perfect the sailing ship when what we need is the steamboat. To extend the analogy, that's not to say there's no point in fiddling around with sailboats; there are certainly lessons to be learned there, about the best shape for the hull and the best mechanism for steering. But no amount of tweaking the sails is going to develop steam power! And in my opinion -- not fact, but opinion -- no amount of fiddling around with autonomous driving systems which don't have a SLAM is ever going to result in reliably safe self-driving cars which will function in most or all driving conditions.

    I don't care how often your car's navigational data is updated; that's not going to help it detect the emergency vehicle that parked on the highway two minutes ago, nor is it going to notice when the car 200 feet in front of yours is swerving towards an unavoidable accident. Fully autonomous vehicles need real-time, active, high-resolution scanning of the environment 360° around the car; scanning far enough out from and ahead of the car for the autonomous systems to be able to react in time if and when an obstacle in the vehicle's path, or an impending accident, is detected.
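    How far out "far enough" is can be roughed out from speed, system latency, and braking capability. A back-of-envelope sketch, with all the numbers made up for illustration:

```python
def required_scan_range(speed_mps, latency_s, decel_mps2):
    """Minimum forward sensing range to stop for a stationary
    obstacle: distance covered while the system reacts, plus
    braking distance v^2 / (2a). Flat, dry road assumed."""
    return speed_mps * latency_s + speed_mps ** 2 / (2 * decel_mps2)

# At highway speed (30 m/s, ~67 MPH), with 0.5 s of sensing-plus-
# decision latency and 7 m/s^2 of braking, the car needs to see
# roughly 15 + 64 = ~79 m ahead just to stop for a parked vehicle.
```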

    For fully self-driving cars, a SLAM isn't just a good idea -- it's going to be absolutely mandatory. (Again that's an opinion, not fact... but I see that as inescapable.)

    Last edited: Oct 11, 2018
    Roy_H likes this.
  5. 2020

    2020 Member

    Deep, but very helpful. I've had 3 cars from 3 different manufacturers with semi-autonomous driving capabilities. What those cars are actually capable of doesn't come close to the marketing hype. I keep hearing that these manufacturers have full self-driving capability already but are not ready to release it due to regulations. However, your synopsis points to the fact that they are not close to full autonomy yet. It will get there, but only with more time, money, and resources.
  6. bwilson4web

    bwilson4web Well-Known Member Subscriber

    Our two cars have driving aids:
    • dynamic cruise control - one optical the other combined radar and optical
    • emergency braking - alerts but no crashes
    Both improve the driving experience, especially dynamic cruise control. Both still require the driver to steer, which precludes autonomous driving. As for the technology for autonomous driving, I see a mix of passive optical and radar ranging working, eventually.

    Stereo vision systems have a long history in robotics and are necessary for tracking.
    They take a lot of processing power, which fortunately can now be bought on a chip along with some video-enhancement hardware. This is the type of hardware described in the Tesla system. Add radar for distance and speed ranging, and I can see a viable autonomous driving system. The lane keep assist on the Toyota is OK, but it is not a lane-centering, self-steering system.
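    The geometry behind stereo ranging is compact. A minimal sketch of depth from disparity for a rectified camera pair (the focal length, baseline, and disparity values here are illustrative only):

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth from a rectified stereo pair: Z = f * B / d, where f is
    the focal length in pixels, B the distance between the cameras,
    and d the horizontal disparity of the matched pixel."""
    if disparity_px <= 0:
        raise ValueError("zero disparity: object at infinity or bad match")
    return focal_px * baseline_m / disparity_px

# e.g. f = 700 px, 0.5 m baseline, 7 px disparity -> 50 m
```

    The formula is the easy part; the processing power goes into finding the matching pixel in the other image that produces d in the first place.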

    Although LIDAR sounds great, I am not a fan of moving parts. I also don't care for modules external to the car body. But I'm not in that business.

    For now, I am content to be the 'man in the middle' of our automated speed control cars. But I can tell from lay reports that Tesla's system is a step above and not restricted to specific highway segments.

    Bob Wilson
    Roy_H likes this.

  8. Roy_H

    Roy_H Active Member

    I have been trying to promote the idea that Tesla needs to implement stereo vision for some time. However, it has not happened, and I have never received any acknowledgement from Tesla, or seen an explanation of why not.
  9. Roy_H

    Roy_H Active Member

    Pushmi-Pullyu: I agree with all your points, especially SLAM, except that I believe binocular vision is superior to LIDAR for achieving a solution. We are in disagreement on this issue, but I say again that video has much higher resolution than LIDAR and can have a faster update rate. Elon mentioned that their video will run at 10x the current frame rate with the new hardware. I believe the existing frame rate is 60 Hz; does anybody know what RPM LIDAR scanners run at?
    Last edited: Oct 13, 2018
  10. Pushmi-Pullyu

    Pushmi-Pullyu Well-Known Member

    I completely agree that the rotating lidar scanners seen on many test cars developing self-driving systems...


    ...are not the future of automobiling.

    But recently I've seen proposals for mounting something like 5 solid-state lidar systems on a car. If my understanding is correct (and I'm not sure it is), that would be 4 units aimed at the 4 points of the compass, plus a 5th unit focusing a narrower, longer-range scanning beam to the front.

    As far as scanners being external to the car... I confess I have no idea whether or not those will ultimately require some sort of structure on top of the car, like this:

    Toyota Research Institute (TRI)'s next-generation automated driving research vehicle, Platform 3.0

    Such cars will look strange or "weird" to our eyes. But fashion and style often follow necessity; if many or most cars start having those, then people will get used to them.

    No doubt at one time, it looked "weird" to people for early motorcars to have the motor under a metal "hood" at the front, to have a steering wheel, and to have pneumatic rubber tires. The early horseless carriages had the motor mounted on or over the rear axle, a steering tiller instead of a wheel, and the same wooden wheels with iron tires seen on horse-drawn carriages.

    On the other hand, other manufacturers of solid-state lidar units indicate that the scanners can be integrated into the existing body of the car:


    Personally I think the active scanners should remain on the roof, for maximum scanning ability; the same reason shorter drivers prefer taller vehicles, to help them see better when driving. But the market is going to have to sort that out over the coming years.

  11. Pushmi-Pullyu

    Pushmi-Pullyu Well-Known Member

    Binocular vision is only useful for triangulating position with passive sensors, such as cameras. Active scanners give a positive indication of distance to target, so no binocular vision is necessary.

    And you're ignoring my point that trying to use software to interpret visual images is unreliable. Robotics researchers have been working on that problem for decades. Being a computer programmer doesn't make me an expert on the subject, but everything I've read strongly indicates that this isn't a problem with any easy fix. I'm sure that with a lot of time, effort, and money, optical object recognition software will improve in reliability. But I don't see it attaining the 99.99%+ reliability that we need for reasonably safe self-driving cars within the next few years. Other, and better, solutions exist... and will be used.

    It's hard to give a summary of just how big and complex a problem it is to get software to reliably interpret visual images. This Wikipedia article at least gives an overview of the problem, and the various approaches to trying to solve it: "Outline of object recognition"

    That's self-contradictory. Higher resolution means slower update: higher resolution for each frame means the computer can process fewer frames per second. Remember we're talking about real-time scanning, with the self-driving program reacting in real time to maneuver a safe course and to perceive potential accidents before they happen.
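    That trade-off is just arithmetic against a fixed processing budget. A sketch, with a purely hypothetical budget figure:

```python
def max_fps(pixels_per_frame, pixel_budget_per_s):
    """With a fixed per-second pixel-processing budget, doubling
    the pixels per frame halves the achievable frame rate."""
    return pixel_budget_per_s / pixels_per_frame

BUDGET = 100_000_000  # hypothetical 100 Mpixel/s of processing

vga_fps = max_fps(640 * 480, BUDGET)   # ~325 frames/s at VGA
hd_fps = max_fps(1920 * 1080, BUDGET)  # ~48 frames/s at 1080p
```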

    Perhaps I gave a false impression by emphasizing the use of "high-res radar". Well, that's only "high-res" as compared to the low-res Doppler radar scanners currently used in cars with automatic emergency braking. It's certainly not "high-res" as in the sort of images we see on the latest TV sets, or even on your cell phone. See images below for examples.

    There's no need for autonomous driving sensors to have as good a visual acuity as the human eye. Cameras in self-driving cars are useful for recognizing and reading traffic signs, but that's not part of a SLAM system. A SLAM needs fast-acting medium resolution images, not what the human eye would perceive as high-resolution images. The SLAM doesn't need to see that's a jaywalking pedestrian with long blonde hair, wearing a full skirt and carrying a Gucci handbag; it just needs to recognize that there's a moving obstacle in the road that it needs to avoid hitting.
    But self-driving cars certainly need better than the sparse amount of data available from low-res Doppler radar, which is what they are currently depending on:


    60 Hz or 30 Hz or probably even 10 Hz would be perfectly adequate. That's not the problem. The computer can react far faster than your nervous system can, once it has reached a decision to take action. The problem is the amount of processing needed to come to a correct (or at least useful) conclusion about where to drive the car. Various reports about self-driving test cars literally hesitating before taking action lead me to think the software isn't up to the task. If my interpretation is correct, they need to speed up the processing quite a bit. And forcing the software to take the time to interpret visual images is exactly the opposite of what's needed. As visual image processing becomes more sophisticated, it will run slower.

    Active scanners give the distance to target instantly. The number-crunching required for processing active scanner data would be orders of magnitude less -- and therefore would work that much faster -- than trying to interpret camera images.

    The data return from active scanning automatically yields such data as the presence and shape of objects, and the distance to those objects. When using data from active scanning, there is no need to use complex and undependable software approaches to find the edges of objects in a visual image, no need to find and compare the matching image of the same object in a stereo image match, no need to triangulate the distance from the difference in stereo images. Also, orders of magnitude fewer errors when relying on the much simpler data from active scanners.

    Bottom line: Using active scanning instead of trying to rely on software interpretation of camera images will yield far faster processing, will be far more reliable at spotting other vehicles, pedestrians, and obstacles in the vehicle's path; and thus will yield considerably safer self-driving cars.

    (And again, this is my opinion based on my understanding of the subjects involved. Altho I've tried to educate myself on the relevant subjects, I am by no means an expert on any one of them.)
    Last edited: Oct 14, 2018

  13. gooki

    gooki Well-Known Member

    I suspect the explanation is that stereo vision is not required for depth sensing when in motion. Increasing image processing speed increases the ability to accurately judge depth from motion. That's one of the advantages of Tesla's new dedicated processors.
  14. Pushmi-Pullyu

    Pushmi-Pullyu Well-Known Member

    Re mono vs. stereo "vision" using cameras, I found this on the Tesla Motors Club forum. I don't know enough to say if this is right or not, but at least it seems plausible to me:

    The way the cameras were used in the Mobileye system to measure distance was by using monocular visual cues (height of camera and bottom of vehicle relative to video frame). There is a minimum accuracy required which is largely subjective and varies by application. Basically if you shift up one pixel in height, that represents a given amount of distance. As the distance is further away, it gets more inaccurate. As an example, in the Mobileye paper I linked, a VGA (640x480 pixel camera) provides 10% error at 90 meters, 5% error at 45 meters. If you use a higher resolution camera, that decreases the error.
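    That monocular cue is simple pinhole geometry. A sketch, with illustrative camera parameters (note that the error model is consistent with the quoted numbers: a one-pixel shift gives a relative error that grows linearly with distance, so it doubles from 45 m to 90 m):

```python
def mono_distance(focal_px, cam_height_m, rows_below_horizon):
    """Flat-road monocular ranging: a road point imaged n pixel
    rows below the horizon is at distance Z = f * H / n."""
    return focal_px * cam_height_m / rows_below_horizon

def one_pixel_rel_error(focal_px, cam_height_m, distance_m):
    """Shifting the detected vehicle bottom by one row changes Z
    by about Z^2 / (f * H); relative error is thus Z / (f * H)."""
    return distance_m / (focal_px * cam_height_m)

# With illustrative values f = 750 px, H = 1.2 m (f*H = 900), a
# one-pixel shift means ~10% error at 90 m and ~5% at 45 m.
```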

    As many or most of us know, Tesla lost its contract with Mobileye and had to develop its own system. I have no idea how closely Tesla has, or has not, mimicked the function of the Mobileye system.

  15. bwilson4web

    bwilson4web Well-Known Member Subscriber

  16. Pushmi-Pullyu

    Pushmi-Pullyu Well-Known Member

    Hey, Bob! Congratulations on being (I think) the first forum member to get a sig line!

    * * * * *

    InsideEVs news has a new article about a self-driving test car or prototype, which the article claims is "a self-driving Chevrolet Bolt EV sans obtrusive LIDAR":

    "Cruise Automation Chevy Bolt Spied Without Bulky LIDAR"

    Seems to be a lot of assumptions there, though. The article states "GM Authority notes that the driver is clearly using his phone with both hands and not touching the steering wheel at all." Yeah, okay, but couldn't you say the same about Cadillac Super Cruise, which -- like Tesla AutoSteer -- is merely a lane-keeping function which is only a Level 2 or 2+ autonomy, and not the Level 4-5 autonomy of a true self-driving car?

    Anyway, here is the photo they posted of the car:


    Much discussion in comments about the lack of obvious lidar or phased-array radar on the roof, and much discussion about the function of those shark's-fin projections on the roof. Some think those are for enhanced GPS orientation, which I can certainly believe. They are almost certainly not lidar or phased-array radar scanners.

    It's possible this car has some of the new, small solid-state lidar scanners embedded in the body. But I also think it's possible this car simply doesn't have any lidar or phased-array radar scanners at all. Cadillac Super Cruise doesn't require them.

  17. bwilson4web

    bwilson4web Well-Known Member Subscriber

    I wanted the signature because I'm pleased with how well this forum is managed. Having decided to stay, I have no problem paying for a better product. I'm not walking away from PriusChat, but I have a foot in both camps.
    Elon mentioned that he isn't worried about the EV competition because they are helping to achieve his goal of sustainable, efficient transportation. We don't have to achieve complete autonomy; 80% is pretty darn good.

    I drive two cars with dynamic cruise control and automatic emergency braking. Just reducing the risk of a frontal collision has made driving city and highway much, much nicer. Sure there are edge cases that could be better:
    • approaching stopped cars at a light - early braking is not aggressive enough, so I assist by braking.
    • cars turning off into a side street or parking lot - the car brakes too aggressively, so I assist by adding accelerator.
    Don't read too much into hands doing cell phone stuff. I do it too when I know the automated systems are in the comfort zone. I don't text but I do unlock the screen to change podcasts. Siri could be a lot better.

    Bob Wilson
    Last edited: Oct 18, 2018
  18. gooki

    gooki Well-Known Member

    Car looks parked. I don't keep my hands on the wheel when I'm parked up.
  19. Pushmi-Pullyu

    Pushmi-Pullyu Well-Known Member

    Elon tweeted today (or yesterday?) that Tesla was suspending the FSD (Full Self-Driving) option from ordering for a week, because "it was too confusing".

    From InsideEVs News: "Tesla Model 3 Full Self Driving Option Is Going Away"

    Now, is that the real story? I think it's pretty clear that Tesla was very premature in offering that; pretty obviously another case of Elon being overly optimistic; much too optimistic about how fast or easy it would be for Tesla to develop Level 4+ autonomy.

    Given that Tesla has virtually stalled out in advancements in the version of Autopilot seen in production cars -- we have seen almost nothing but incremental improvements in existing Autopilot/AutoSteer capabilities for over a year now -- I think it's pretty clear that the capability of current hardware is inadequate to support reliable Level 4 autonomy.

    Neither Elon nor Tesla have yet admitted that cameras are inadequate and that they will have to put lidar and/or phased-array radar into their cars, but it has been my working theory for a few years now that they will eventually have to do this. I've seen nothing to change my mind about that.

  20. gooki

    gooki Well-Known Member

    I think you'll find it's a marketing experiment. Same as how the free Supercharging referral perk comes and goes and changes: Tesla is actively experimenting with its sales techniques and offerings to find what gets the most conversions at the best margins.
  21. bwilson4web

    bwilson4web Well-Known Member Subscriber

    Newest autonomous driving test:

    . . .
    The system does not use laser-ranging lidars like those that Levandowski helped to develop at Waymo, Otto and Uber. This is not because he is afraid of more lawsuits, Levandowski insists, but because he now believes that lidars are an expensive and unnecessary red herring in the quest for robotic vehicles.

    The fact that completely driverless cars do not yet exist is not because lidar technology is not good enough, Levandowski said, but because the software is not good enough.

    Pronto.AI’s driving technology uses only six video cameras, pointing to the front, side and rear of the vehicle, and each with a much lower resolution than those found in modern smartphones. Images from the cameras are fed to the trunk, where a computer is running two neural networks: artificial intelligence systems that can speedily process large quantities of data.
    . . . <more details>

    Bob Wilson
  22. gooki

    gooki Well-Known Member

    I wonder if they’re building on top of
  23. bwilson4web

    bwilson4web Well-Known Member Subscriber

    That web site seems a little confused. Their $500 dash cam is a little pricey.

    Bob Wilson
