Why we don't have self driving cars yet...

Discussion in 'General' started by R P, Dec 2, 2019.

  1. Some very good points made in this article. Seems these quoted experts are a little more realistic than Elon Musk.
    https://www.cnbc.com/2019/11/30/self-driving-cars-were-supposed-to-be-here-already-heres-whats-next.html
    This statement really hit home with me.
    “When you’re testing autonomous vehicles out on public roads, you know, not only are the people riding in that car part of the experiment, but so is everybody else around you. And they didn’t consent to being part of an experiment,” he said.
    Reminds me of BobW's accident where he hit a curb when testing his Tesla autopilot. What if it had turned into the path of an oncoming car instead?
     
  3. marshall

    marshall Well-Known Member

    1. Bob didn't have the latest hardware in his vehicle.

    2. You seem to have missed the front page accident or just choose to ignore it. Cars without autopilot have accidents, including deadly ones, every day. Perhaps autopilot would have prevented today's front page accident if the vehicle had been equipped with it.

    Anyhow, each time you get into a car or cross the street, you take your chances.

    Everyday, folks drive under the influence, drive while sleepy, drive while being distracted, drive through stop signs, drive aggressively, and we have to deal with road rage. Did we consent to all that?

    3. The statistics provided by Tesla on the accident rate while driving on autopilot don't seem to warrant the hysteria you're trying to sell.

    Frankly, I believe Tesla's beta testing is advancing an affordable near self-driving car faster than without it. I think the small amount of risk is worth it. Honestly, I find it hard to believe that a Tesla on autopilot could be any worse than the idiot who almost ran me over while crossing in a crosswalk two weeks ago.
     
  4. You seem to have missed the point of my post and the experts in the linked article. It is not about whether Tesla vehicles are safe or not, with or without Tesla's "autopilot". It is about the state of self driving systems and the challenges of bringing them to fruition and making them usable by everyday drivers. Like I said, it will not happen as fast as Elon Musk and some others predicted, and the article explains why.

    There is no question that the new driver assist features like FCA, LKA and Lane Change Assist help with car safety. These are included in Tesla's Autopilot, so of course we would expect them to lower accident rates. I just recently renewed my insurance (non Tesla) and was happy to see a reduction in my premiums because my car had these driver assist features.

    As for Tesla's live beta testing practices using untrained novice customers on public streets, that is another question. The article clearly suggested that could be better handled in a virtual AI environment with the giga multitudes of data (incl video) now available for computer testing. No need to risk people's lives to do that.

    Believe me, I look forward and welcome the day that true self driving becomes available and mainstream. I just don't know when that will be. Might still be a while by the look of things. Meanwhile, I applaud the car manufacturers (well, most of them) for continuing to bring more and more driver assist features to their cars to dramatically improve car safety. Same with crash worthiness.
     
  5. interestedinEV

    interestedinEV Well-Known Member

    Here is where I have a problem with this statement

    Finally, when will these vehicles arrive?

    “We expect level-four vehicles to be feasible in small quantities within the next five years,” Urmson said. “What that means is you’ll probably see hundreds or maybe thousands of vehicles out either delivering packages or moving people through neighborhoods, or maybe hauling goods on our freeways.”


    In my part of Phoenix (Tempe, Chandler), Waymo is already offering a driverless (no driver in the car) taxi service. I had posted the email I received from Waymo about it, and I honestly have not taken them up on that offer, but I see their cars all over the place. I drove just 5 miles for an errand today and saw 4 of their cars (not sure if they still had a driver or were completely driverless; Waymo's posts on this site indicated only some would be totally driverless).

    Yes, it is a geo-fenced area; we have wide roads, clear weather and well laid out intersections. Waymo is already at level 4 here, or so they claim. Are they there everywhere? They do not claim that, but it is a lot closer than small quantities in 5 years.

    Here is a screenshot of the email:

    [attached image: Waymo email]
     
  6. marshall

    marshall Well-Known Member

    Clearly, Tesla doesn't agree, and the data seems to suggest that folks are safer driving with a beta version of autopilot than without using it.
     
    bwilson4web likes this.
  8. You seem to want to disagree with me even when we agree..., LOL. Please reread my post, where I say that driver assist features, incl those in Tesla's Autopilot, do make cars safer. Both of my cars have them, too, minus the Tesla type bugs. I wouldn't buy a car now without them.

    Re the previous post about Waymo: yes, there is lots of testing going on, but in a very controlled environment with confined scripts, as it should be, and not on just any public road by untrained novice car geeks playing with technology they don't fully understand. Hyundai is testing self driving too, but with two human operators onboard. That's how it should be done.
    https://www.cnet.com/roadshow/news/hyundai-kona-electric-self-driving-botride/
     
  9. bwilson4web

    bwilson4web Well-Known Member Subscriber

    My understanding is a pending version will keep Autosteer even if the brake is touched. But in reality, the car would have braked on its own to avoid running into the silly car in front. I should have left the car alone.

    My two mistakes: (1) having the audio alerts turned down, and (2) failure to notice autosteer had not engaged. There is a thin sliver where the human and automated systems can screw up (see 737 MAX). My fault.

    The alignment is OK but I need to check the wheel bearings. A funny thing is when I ‘turn a wrench’, it makes that car mine.

    Bob Wilson
     
    electriceddy likes this.
  10. Francois

    Francois Active Member

    Not sure where I stand yet on whether fully automated driving is the best way to go.

    But I am fully embracing assisted driving. The lane keeping assistance and the blind spot alerts are definitely blessings that help reduce the risk of collisions.
     
    bwilson4web, electriceddy and R P like this.
  11. gooki

    gooki Well-Known Member

    There's a big risk with only testing in virtual environments. You are only testing against what the environment was programmed to do and the data you have.

    Realistically you need a mix of real world and virtual testing. Virtual testing is also beneficial to ensure your system hasn't regressed.
     
    Last edited: Dec 4, 2019
  13. gooki

    gooki Well-Known Member

    Time will tell who will be fastest to market with an FSD solution. My guess is Hyundai will not be first.

    Here's why mass real world testing is the most socially responsible thing to do. My assumptions:
    • Mass real world testing will bring FSD to market 5 years earlier than controlled trials and simulation.
    • Mass real world testing will cause 10 deaths before it becomes safer than human drivers.
    • Year 1 = FSD safe as humans
    • Year 2 = FSD 2x safer than humans
    • Year 3 = FSD 4x safer than humans
    • Year 4 = FSD 8x safer than humans
    • Year 5 = FSD 16x safer than humans
    • Humans cause 1.16 deaths per 100 million miles traveled in the USA (37,133 deaths in 2017)
    • Annual miles driven per FSD vehicle: 150 miles x 365 days ≈ 55,000 miles per year
    • FSD fleet size:
    • Year 1: 1,000,000 vehicles
    • Year 2: 2,000,000 vehicles
    • Year 3: 4,000,000 vehicles
    • Year 4: 8,000,000 vehicles
    • Year 5: 16,000,000 vehicles

    The resulting road deaths in a rapid FSD development scenario:
    • Development phase deaths + 10 deaths
    • Year 1 + 0 deaths
    • Year 2 - 638 deaths
    • Year 3 - 2,552 deaths
    • Year 4 - 10,088 deaths
    • Year 5 - 40,352 deaths

    Lives saved by rapid FSD development: 53,630 (it's actually significantly more than this, but I cannot be bothered explaining why right now).

    Do you want to be responsible for the deaths of 53,630 people by promoting the slow development of full self driving vehicles?

    If you don't agree with my assumptions, feel free to assume some other numbers and let's see what results you get.

    **edited to correct the results, forgot to multiply deaths saved by annual fleet size increase**
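    For anyone who wants to take up the invitation to plug in other numbers, here is a minimal Python sketch of the arithmetic above. Every parameter value is taken from the post's own assumptions (fleet and safety multiple both doubling each year, 1.16 deaths per 100 million miles, 150 miles a day per vehicle); a straightforward reading of those assumptions gives year-by-year totals in the same ballpark as, though not identical to, the bullet figures above.

```python
# Lives-saved arithmetic under the post's assumptions (all values from the post):
#   - human drivers: 1.16 deaths per 100 million miles (37,133 US deaths, 2017)
#   - each FSD vehicle drives 150 miles/day, ~55,000 miles/year
#   - fleet doubles each year; FSD safety multiple doubles each year

HUMAN_DEATHS_PER_100M_MILES = 1.16
MILES_PER_VEHICLE_PER_YEAR = 150 * 365  # ~55,000 miles

def lives_saved_per_year(years=5, initial_fleet=1_000_000):
    """Per year: deaths human drivers would cause over the FSD fleet's miles,
    minus deaths FSD causes at its current safety multiple."""
    results = []
    fleet, safety_multiple = initial_fleet, 1.0  # year 1: FSD only as safe as humans
    for year in range(1, years + 1):
        miles = fleet * MILES_PER_VEHICLE_PER_YEAR
        human_deaths = miles / 100_000_000 * HUMAN_DEATHS_PER_100M_MILES
        fsd_deaths = human_deaths / safety_multiple
        results.append((year, round(human_deaths - fsd_deaths)))
        fleet *= 2            # fleet size doubles annually
        safety_multiple *= 2  # FSD becomes 2x safer annually
    return results

for year, saved in lives_saved_per_year():
    print(f"Year {year}: ~{saved:,} lives saved")
```

    Swap in your own fleet sizes, mileage, or safety curve and see what results you get.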
     
    Last edited: Dec 4, 2019
  14. Sorry, but I would not want to gamble people's lives on assumptions that are not based on anything tangible. And that is not the point of the article anyway.

    Having said that, I do believe the most responsible and fastest way to bring full self driving capabilities to fruition is with AI and controlled tests with onboard operators. Once that is perfected, then start making it available, still on a controlled basis, to the public. And we are a long, long way from that today.

    It is not unlike developing and testing a new airliner. There is a lot of virtual testing followed by live testing with highly trained and experienced test pilots, and then slowly graduating step by step to full certification and pax carrying. Even then, as we know, they are still not totally bug-free. But they certainly don't include passenger carrying in their tests before final certification. I think we may need something like this with cars to protect the general public from being duped into testing a car that might not be ready for that.

    And just to be clear, I am not talking about driver assist features, but true self driving with no hands on the wheel and not watching the road for extended periods of time.
     
    Francois likes this.
  15. gooki

    gooki Well-Known Member

    And the most effective way to have a fully developed AI today is by training it with a massive amount of data. And the only way to get real world data is by logging miles.

    If you use advanced driver assist mode with human supervision as the test bed for FSD, you have real data points to know when it's ready to switch from assist to full self driving. This is the Tesla approach. It's controlled testing on a mass scale.

    Controlled testing on a small scale will delay development.
     
  16. Well, I do agree with you about utilizing massive amounts of data, but for the virtual AI learning and testing. There are a lot of cameras (used for the driver assist features) on the road now (incl non-Tesla), and they need to capture data on every possible driving scenario, incl accidents. I know Tesla is working on that, as are others. Tesla may be ahead, although I am not sure about that either. I really don't know what or how Waymo (and others) use, but I have to assume they are a lot smarter than we are here on this forum, trying to speculate about the best way to do this.

    But I can see the challenges more than I can see the solutions, and also the risks with live human testing before the technology has been thoroughly developed and tested virtually. And I don't think you can rush it. And of course that is just my opinion, and feel free to differ.
     
  17. gooki

    gooki Well-Known Member

    Ok, time to explain why the number of lives saved by rapid development is significantly higher.

    When you're delayed by 5 years, you are always behind until 5 years after the market saturation point. I'll put the saturation point at 80% of miles driven autonomously.

    So let's carry on with the figures.
    • Year 1 + 0 deaths
    • Year 2 - 638 deaths
    • Year 3 - 2,552 deaths
    • Year 4 - 10,088 deaths
    • Year 5 - 40,352 deaths
    • Year 6 - 161,408 deaths
    • Year 7 - 645,632 deaths
    • Year 8 - 2,582,528 deaths
    Globally there are 1,250,000 road deaths per year. So we reach the saturation point somewhere between years 7 and 8.

    So by year 8 we're preventing 1 million traffic deaths per year if we follow the rapid development path. If we slow-walk development, we're only saving 2,552 lives in that same year.

    In the 5 years it takes for the slower development solution to reach the saturation point we could have saved 4.5 million lives.

    Is it socially acceptable to know 10 deaths will occur to save 4.5 million lives? I believe it is.

    Disclaimer: at some point the exponential growth will flatten, but the effect is still the same, knowingly sacrificing a few to save millions.
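    The same caveat applies here: this is a rough sketch, not anyone's official model. It carries the post's curve forward (lives saved roughly quadrupling per year, starting from the 638 of year 2), caps it near the stated 80% saturation point of the 1,250,000 annual global road deaths, and compares the rapid path against one delayed by 5 years.

```python
# Rapid vs delayed FSD rollout, under the assumptions in the posts above:
#   - lives saved per year quadruple annually (fleet 2x, safety 2x),
#     starting from 638 lives saved in year 2 of deployment
#   - capped at 80% of the ~1,250,000 global road deaths per year
#   - the slow path runs the same curve, just 5 years later

GLOBAL_ROAD_DEATHS_PER_YEAR = 1_250_000
SATURATION = 0.8 * GLOBAL_ROAD_DEATHS_PER_YEAR  # 80% of miles driven autonomously

def lives_saved_curve(years, delay=0, year2_saved=638):
    """Yearly lives saved: zero through year 1 of deployment (and during any
    delay), then quadrupling each year until capped near saturation."""
    curve = []
    for year in range(1, years + 1):
        t = year - delay  # years since this path's deployment began
        curve.append(0 if t < 2 else min(year2_saved * 4 ** (t - 2), SATURATION))
    return curve

rapid = lives_saved_curve(10)
slow = lives_saved_curve(10, delay=5)
print(f"Extra lives saved by the rapid path over 10 years: ~{sum(rapid) - sum(slow):,.0f}")
```

    Under these assumptions the 10-year gap between the two paths lands in the same few-million range as the 4.5 million figure above.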
     
  18. interestedinEV

    interestedinEV Well-Known Member

    Yes, exactly. The analogy is life saving drugs. You can do computer simulations, you can test on animals, but the only way to know if it works on humans is by testing it on them. And you have the same dilemma: do you want to test it for a decade to be sure, or if it is promising, do accelerated testing? Here is a great article from Wired magazine on this very subject.


    https://www.wired.com/story/lose-lose-ethics-self-driving-public/

    ......The unfortunate truth is that there will always be tradeoffs. A functioning society should probably create space—even beyond the metaphorical sense—to research and then develop potentially life-saving technology. If you’re interested in humanity’s long-term health and survival, this is a good thing. (Even failure can be instructive here. What didn’t work, and why?) But a functioning society should also strive to guarantee that its citizens aren’t killed in the midst of beta testing. We’ve made this work for experimental drugs, finding an agreeable balance between risking lives today and saving them tomorrow.

    An analysis from the Rand Corporation published last year suggests the living lab will be worth it, finding that more than half a million lives might be saved by putting imperfect tech on the road instead of waiting for it to be flawless before deployment.

    Still, it's uncomfortable. “Society tolerates a significant amount of human error on our roads,” Gil Pratt, who heads up Toyota’s research institute, said last year. “We are, after all, only human. On the other hand, we expect machines to perform much better.”

    Maybe that’s a fair thing to expect—but only if we’re willing to let the things learn, alongside and among us.


    One thing I will add is that there is a certain ethical path that the developers need to follow. And I live in a place where a lot of this testing is done. Our Governor decided to loosen regulations, and many companies moved their testing from California to Arizona. I have seen Waymo testing for years on the roads I drive on every day, and it is sometimes annoying to be stuck behind a Waymo vehicle: they follow every traffic rule :). They seem to be doing it right. Uber, on the other hand, decided that they would not comply with California regulations and moved their testing to Arizona, even though their technology was not as advanced as Waymo's. What was worse is that they rushed things trying to catch up with Waymo. An unfortunate woman was killed in my town of Tempe due to this recklessness.

    So yes, we need to advance real-life testing, but we still need to make sure that reasonable precautions are being taken. And while I think Tesla is doing it right, some of Elon's comments may give the impression that he is jumping the gun. You never hear pronouncements from Waymo. (For the record, I have no interest in Waymo, but if you live in the Tempe, Chandler, Mesa, Gilbert suburbs of Phoenix, you cannot miss these vehicles. They are there on the streets 24 hours a day.)

    And, while I am hearing about Hyundai testing in these forums, I cannot imagine that they are ahead of Waymo, Tesla, Cruise, Mobileye and even possibly Uber, who got their technology from Carnegie Mellon. If you notice, the companies that I mentioned have a very strong software pedigree rather than a long automotive tradition. I guess it makes sense that software companies lead the charge (as they know more about AI development and testing) before auto manufacturers either buy or absorb their technology. And yes, I consider Tesla a software company that manufactures cars rather than the other way around.
     
  19. interestedinEV

    interestedinEV Well-Known Member

    @gooki Here is a great article which sort of backs up your numbers

    https://www.wired.com/story/self-driving-cars-rand-report/


    Presumably, though, there will be some moment where it makes sense, public safety-wise, to let autonomous vehicles own the road. But when is that? The RAND researchers used an analytic method called robust decision making to try to put some intellectual rigor into the question.

    Their conclusion sounds clichéd: Don’t let the perfect be the enemy of the good. But it’s meaningful, too. They conclude that tens or even hundreds of thousands of lives could be saved by self-driving cars, even if regulators allow less-than-perfect cars on the road. As Groves puts it, “Even though we can’t predict the future, we found it’s really hard to imagine a future where waiting for perfection doesn’t lead to really big opportunity costs in terms of fatalities.”
     
  20. Yes, fun to guesstimate and speculate about lives saved and how long it will take to get to FSD. No lack of opinions on this subject. But the reality remains there are major challenges to overcome, and we are not going to get there as fast as some (like Elon Musk) have forecast. If we can't get Tesla Summon to work flawlessly yet in a parking lot, which should be very, very easy, how can we expect it to handle all the possible public roadway situations?

    We agree that FSD can and should save lives. Just not how fast and the best way to get there. Again, just my opinion, based on what I have seen and read so far. I don't expect you to agree, but you have not given me anything yet to change my mind either.
     
    Francois likes this.
  21. interestedinEV

    interestedinEV Well-Known Member

    I agree with you, no question. We are not close to level 4 universally. Waymo is the only one who has talked about level 4, but in a narrow geo-fenced area and only under the circumstances they agree to. If there is a dust storm, they can pull their service. A person who has to get home may not have that luxury. So I agree with you totally that Elon is being overly optimistic; we are several years away from that, maybe a decade or more.

    That is the 64 million dollar question. There are ethical implications that need to be taken into account, and people will differ on the ethics. So I can see an honest difference of opinion on this subject. In these cases it becomes a societal response, rather than an individual response. There is a whole slew of work on the ethical challenges around driverless cars.

    The everyday ethical challenges of self-driving cars

    https://theconversation.com/the-everyday-ethical-challenges-of-self-driving-cars-92710
     
    Francois likes this.
  22. Yes, also fun to philosophize about this. Logic vs emotion vs ethics (whose?). Was it right to sacrifice 100's of thousands of lives in Japan to stop a war? All I can say, is that I am not qualified to make those kinds of assertions.

    Another take on this: Maybe we should also focus more on self driving buses. Seems that might be easier and would get more cars off the streets sooner, which would also save more lives. Buses could utilize road sensors on the very confined routes they travel now anyway. If you could eliminate bus drivers (a high cost), that would not only improve safety of passengers, but save costs, allowing for more buses.

    And if you think bus drivers are safe, I could tell you a few stories about that. I have a bus driver friend who tells me about how they hire and train them, and then send them out on the road. My wife got hit by a bus (100% bus driver's fault) a few years ago. Luckily, it was at slow speed (wife's car was actually stopped at red light) at an intersection, and no injuries. While no cost to us with our insurance, it was still a pain to go through the accident reporting and repairs process.
     
  23. interestedinEV

    interestedinEV Well-Known Member

    Absolutely good question. British common law (which the US and many other countries follow) follows the English jurist William Blackstone's ratio, "Better that ten (100, per Ben Franklin) guilty persons escape than that one innocent suffer," which is sort of what you are enunciating here: no innocent bystander should be hurt by this.

    To add to what you're saying, here is a parable from "n Guilty Men" by Alexander Volokh http://www2.law.ucla.edu/volokh/guilty.htm#238

    The story is told of a Chinese law professor, who was listening to a British lawyer explain that Britons were so enlightened, they believed it was better that ninety-nine guilty men go free than that one innocent man be executed. The Chinese professor thought for a second and asked, "Better for whom?"
     
    Francois likes this.

Share This Page