Navigant about to look properly and accurately stupid?

Discussion in 'General' started by 101101, Apr 12, 2019.


  1. 101101

    101101 Well-Known Member

    Now, I have a belief about Navigant: I believe it is basically a fraud in the worst sense of the word. But I base my entire opinion of Navigant on one data point: the map and analysis it recently produced showing Tesla in second-to-last place and GM in second place. I think that was a straight-up lie that GM paid for in order to deceive the public about GM's prospects. And that stupid map has not only been used to inflate GM's stock price with a lie, it's been used to try to deflate Tesla's. So in this case Navigant is just a paid liar; that is all it is, and I suspect this case is definitive. Of course, I haven't researched this company. Alternatively, it could have done straight research and then taken a bribe, or been blackmailed, or both, into lying to the public. I don't know; we probably don't know and may never know. But I think its reputation is about to be destroyed by reality.

    In a brand-new interview on an AI podcast associated with MIT (hosted by Lex Fridman), Musk stated that he believed Tesla was so far ahead of everyone else on self-driving that they have basically won the game: "Point, set, match." Musk just declared victory (with only a very small reservation). In that interview he also quipped that GM's Super Cruise was ADD, as in attention deficit; by implication, not what you'd want on the road. And reading between the lines on Cruise Automation (Chevy Cruze... Ted Cruz...), I've felt that GM's effort on self-driving, like most of the rest of what it does, was a rent-seeking exercise in "failing convincingly."

    Now it seems Tesla is about to demonstrate that it's won the game. That will destroy a lot of stupid narratives about self-driving tech and the future, and it will probably generate a bunch of sour-grapes suits against Tesla as a result. But when all this comes out toward the end of the month, remember what a piece of sht Navigant was in deluding and deceiving the public, and be on the lookout for the next Navigant, because such entities just change their names to recycle the same bs, or the next dime-a-dozen firm comes along to replace them in the same task. Does it come back as a critic? Does anyone think David Einhorn is a critic?
     

  2. Pushmi-Pullyu

    Pushmi-Pullyu Well-Known Member

    Many articles about Navigant Research's reports on the EV industry have been published at InsideEVs. The purpose of InsideEVs in publishing them appears to be entertainment rather than information. Or to put it more bluntly, I think InsideEVs publishes those articles only so knowledgeable EV advocates can point and laugh at how uninformed Navigant's so-called "research" is. It's especially fun to look back at older Navigant reports and see how often they wildly contradict their own previous so-called "research"!

    Navigant is one of many so-called "marketing research" firms whose main income seems to come from creating faux (or at least very poorly researched) "reports" written to support the marketing claims of one or more large companies, whom the firm hopes to persuade to buy the "report."

    The title of this thread asks "Navigant about to look properly and accurately stupid?" Well, I'm not sure what "accurately stupid" means, but as far as "about to" look stupid... I think they have sailed well past that point!
    :cool:

    Elon Musk is one of my heroes, and I greatly admire his futurist vision, his inventiveness, his optimism, and his indefatigable work ethic.

    But unfortunately, he also has a great tendency toward hype and unrealistic aspirations. Tesla is only at Level 2, or at best 2+, of driving autonomy, when the goal is Level 4/5. Furthermore, the evidence seems to point to Waymo being substantially ahead of Tesla in progress toward Level 4. Waymo appears to be, at the least, far closer to Level 3 than Tesla is; perhaps Waymo has even achieved Level 3, or at least some aspects of it. Tesla isn't anywhere close to Level 3.

    Elon declaring that Tesla is "so far ahead of everyone else on self driving that they have basically won the game" is like a baseball coach declaring victory at the end of the second inning of a game.

    As a reminder, back in 2016 Elon said they'd be doing a coast-to-coast driving demo of a Tesla car using full autonomy by the end of 2017. That demo has yet to happen. I think that's pretty clear evidence that Tesla's progress toward Level 4, or even Level 3, autonomy has stalled out. All we've seen from Tesla's self-driving tech development over the past year or two has been rather minor improvements to driver-assist features already in production.

    * * * * *

    What follows here are my opinions, and only my opinions rather than facts:

    It's been my belief for some time now that Tesla will be unable to make substantial progress towards full autonomy so long as Elon sticks to his stubborn belief that they can depend on cameras as the primary self-driving sensors. The limitations of that approach have been quite clearly demonstrated by accidents, even fatal accidents, where the Tesla car failed to "see" stationary obstacles when driving at highway speed. The solution is what is called a SLAM (Simultaneous Localization And Mapping) system: a real-time 3D "map" of the environment built up in the car's computer software, a map which would allow the car to "see" all obstacles, both moving and stationary, and avoid colliding with any of them.
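    To make that concrete, here is a toy sketch of the occupancy-grid idea behind such a map. This is purely illustrative; every name and number here is invented, and a real system would fuse ego-motion estimates and far more data:

    ```python
    import numpy as np

    # Toy 2D occupancy grid: the "real-time map" idea in miniature.
    GRID_SIZE = 200      # 200 x 200 cells
    CELL_M = 0.5         # each cell is 0.5 m, so the grid spans 100 m

    grid = np.zeros((GRID_SIZE, GRID_SIZE))  # 0 = unknown/free, higher = occupied

    def integrate_return(car_x, car_y, bearing_rad, range_m):
        """Mark the cell where a single range return (e.g. one lidar beam) landed."""
        hit_x = car_x + range_m * np.cos(bearing_rad)
        hit_y = car_y + range_m * np.sin(bearing_rad)
        ix = int(hit_x / CELL_M) + GRID_SIZE // 2
        iy = int(hit_y / CELL_M) + GRID_SIZE // 2
        if 0 <= ix < GRID_SIZE and 0 <= iy < GRID_SIZE:
            grid[iy, ix] += 1.0  # evidence accumulates scan after scan

    # A stationary obstacle keeps accumulating evidence in the same cells,
    # which is exactly how such a map lets the planner "see" a stopped
    # fire truck that a Doppler-only radar would throw away.
    ```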

    Cameras are simply inadequate for that. Computers don't perceive the world like humans do, and cars don't have human brains to process visual data. Autonomous vehicles need active sensors such as lidar or phased-array (so-called "high-res") radar. Cameras aren't adequate sensors for a SLAM system, and aren't going to be, no matter how much effort Tesla puts into it.

    It has been my belief for some time now that Tesla will remain stuck at Level 2+ autonomy until such time as Elon recognizes that cameras always will be inadequate, and Tesla starts using lidar and/or phased-array radar in its driving autonomy testing vehicles, as Waymo has already been doing for some time now. Nothing I have seen over the last couple of years or more has made any substantial alteration to my opinion on that point.

    Again, my comments in this section are opinions -- hopefully informed opinions, but just opinions... and not fact. Differing opinions are invited.
    :)
     
  3. bwilson4web

    bwilson4web Well-Known Member Subscriber

    In the past three weeks, I have driven hours and hundreds of miles in basic AutoPilot and found these edge cases:
    • traffic turning off our lane or crossing it - these vehicles are seen as a fixed object, leading to hard braking. This is a problem shared with the Toyota TSS-P system and the BMW i3-REx. I manually add accelerator to solve the problem.
    • one lane becomes two - if there is a lead vehicle, it mostly works. But if there is no lead vehicle, the car has to 'choose', and this leads to some rapid steering. It happens when approaching an interstate exit or where lanes are added; it is easier to handle exits manually. The Toyota and BMW do not have lane keeping.
    • poor road lane markings - these can lead to 'hunting': not desperate, but less smooth driving.
    I do not have a LIDAR test vehicle, so I can't try to replicate the problems I've seen with AutoPilot. And it isn't clear that a LIDAR would detect turning or crossing traffic any better than a camera does. IMHO, it is not the detector but the control computer that needs to handle traffic turning off or crossing the lane.
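    To illustrate what I mean about the control computer, here is a toy decision rule for turning/crossing traffic. The thresholds are made up; this is nothing from any actual system:

    ```python
    # Toy planner logic: the same detection could come from radar, lidar, or
    # camera; what matters is whether a turning/crossing car is treated as a
    # fixed in-path obstacle. All thresholds invented for illustration.

    LANE_HALF_WIDTH_M = 1.8  # assumed half-width of our lane

    def should_brake(lateral_speed_mps, lateral_offset_m, range_m, closing_speed_mps):
        """Brake only if the object will still be in our path when we arrive."""
        if closing_speed_mps <= 0:
            return False                          # not closing on us at all
        time_to_reach = range_m / closing_speed_mps
        if abs(lateral_speed_mps) > 0.5:          # object is moving out of/across the lane
            dist_to_clear = LANE_HALF_WIDTH_M - abs(lateral_offset_m)
            time_to_clear = dist_to_clear / abs(lateral_speed_mps)
            return time_to_clear > time_to_reach  # brake only if it won't clear in time
        return abs(lateral_offset_m) < LANE_HALF_WIDTH_M  # truly fixed in-lane object

    # A car turning off our lane clears in 0.5 s; we arrive in 3 s -> no hard braking.
    print(should_brake(lateral_speed_mps=3.0, lateral_offset_m=0.3,
                       range_m=60.0, closing_speed_mps=20.0))  # -> False
    ```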

    I have the built-in Tesla dash cam installed and working. If you are interested, I could make YouTube examples showing the problems; then we could do a detailed analysis where LIDAR's advantages could be compared. I am thinking about doing this to share these edge cases with Tesla, and perhaps to make 'warning Will Robinson' posts.

    Bob Wilson
     
    Last edited: Apr 13, 2019
  4. Harvey

    Harvey Member

    he's also going for better, more accurate mapping, which would make a difference in following lanes. that's how large mining trucks are auto-driven.
    they might have a camera for safety, but they are pretty much "get out of the way," whether auto or human driven. and they run no matter the weather. no lines, just follow the gps map.
    isn't there already some kind of radar to gauge distances in teslas?
     
  5. Pushmi-Pullyu

    Pushmi-Pullyu Well-Known Member

    In formal logic, we differentiate between what is sufficient and what is necessary to reach a conclusion or a solution. If we need an example of the difference, consider a restaurant with a "No shirt, no shoes: No service" sign. Wearing a shirt is necessary to get served at the restaurant, but wearing a shirt alone is not sufficient. You need shoes, too.

    Let me be clear: I think that active high-resolution sensors (such as lidar and/or phased-array radar) will be necessary for Level 4/5 autonomy, but improved sensors alone are not sufficient. A reliable SLAM system will likely require more data-processing power than is found in any mass-produced car today. Processing the data for a medium-resolution 360° 3D scan of the landscape around the car, out to at least 75 yards in all directions and perhaps 150-200 yards ahead of the vehicle, and updating that every half-second or so, will require a lot of processing power.
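    As a rough back-of-envelope check on that claim, here's the arithmetic; the angular resolution and bytes-per-point are assumed values of my own:

    ```python
    # Back-of-envelope data rate for the scan described above: 360 degrees,
    # refreshed every half-second. Resolution and point size are assumptions.

    horizontal_res_deg = 0.2   # assumed beam spacing
    vertical_channels = 64     # assumed number of lidar channels
    bytes_per_point = 16       # assumed: x, y, z, intensity as 4-byte floats
    scans_per_sec = 2          # one update every half-second, per the post

    points_per_scan = int(360 / horizontal_res_deg) * vertical_channels
    rate_mb_s = points_per_scan * scans_per_sec * bytes_per_point / 1e6
    print(f"{points_per_scan:,} points/scan, ~{rate_mb_s:.1f} MB/s of raw points")
    # -> 115,200 points/scan and ~3.7 MB/s -- and that's just ingesting points,
    # before any of the mapping, tracking, and planning work that consumes them.
    ```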

    And there will be many, many "edge cases" or "corner cases" for programmers and software engineers to deal with, even while recognizing that it's impossible to program for every possible circumstance and that 100% safety is an impossible goal.

    We will need a lot of work by programmers and software engineers to get to reliable Level 4/5 autonomy, even when the necessary hardware is there.
     
    Last edited: Apr 13, 2019

  6. Pushmi-Pullyu

    Pushmi-Pullyu Well-Known Member

    Tesla cars have multiple very short-range ultrasonic sensors, but those are only good for detecting what's within a few feet of the car. For longer distances, even the most modern cars have only low-res Doppler radar, which is designed to detect differences in speed. That is, it can detect moving objects, but not stationary ones.
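    In sketch form, the reason a Doppler-based filter drops stationary obstacles is that a stopped object closes on the car at exactly the car's own speed, the same signature as overpasses, signs, and guard rails. A toy version (values purely illustrative):

    ```python
    # Why "moving objects, but not stationary ones": a naive Doppler filter
    # keeps only returns whose ground speed is meaningfully non-zero.

    def is_tracked(ego_speed_mps, closing_speed_mps, tol_mps=1.0):
        """Return True if a naive Doppler filter would keep this radar return."""
        ground_speed = closing_speed_mps - ego_speed_mps  # object's own speed toward us
        return abs(ground_speed) > tol_mps  # ~0 looks like roadside clutter: dropped

    print(is_tracked(ego_speed_mps=30.0, closing_speed_mps=30.0))  # stopped fire truck -> False
    print(is_tracked(ego_speed_mps=30.0, closing_speed_mps=10.0))  # slower car ahead  -> True
    ```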

    For more on that subject, see the Wired article: "Why Tesla's Autopilot Can't See a Stopped Firetruck". Note that despite the headline, the article covers a lot more than just Tesla's cars.

    * * * * *

    More generally on this subject:

    I started a thread entitled "Self-Driving / Autonomous Cars: General discussion". I spent a lot of time composing comments and finding useful and informative graphics for that thread, so I hope everyone will understand why I don't want to repeat all that here. Please visit that thread, particularly my post here, to get up to speed about why -- in my opinion -- neither cameras nor low-res Doppler radar are going to be adequate, or sufficient, for fully autonomous cars.

     
  7. Pushmi-Pullyu

    Pushmi-Pullyu Well-Known Member

    Autonomous large vehicles restricted to short-distance driving, and never on public roads, could well be an interesting subject of discussion, but I'm not sure it has much relevance to autonomous passenger vehicles operating on public roads in close proximity to other passenger vehicles, any and all of which may be moving at highway speeds. I think your average Roomba can probably navigate in a more sophisticated manner than such mining trucks, which just follow a pre-programmed route from A to B and back to A. Plus, as you say, they are pretty much "get out of the way," so it's not like they need to "see" the environment around them.

    Re "more accurate mapping which would make a difference in following lanes":

    Well now, here I think there is a subject worthy of real roll-up-your-sleeves discussion. What's the best approach for designing safe autonomous cars of the future? Is it Google/Waymo's approach, sending their vans around every city and country road, making detailed scans of the roads and taking pictures all the while?

    Or should it be to design a truly autonomous car, one that needs no detailed scans of individual streets and roads? After all, "autonomous" means "having the freedom to govern itself or control its own affairs." Shouldn't a truly autonomous car be capable of using its SLAM system to "see" for itself where the roads and the lanes are located? Shouldn't it be able to operate without the crutch of the very detailed scans which Google is, apparently, trying to make of every single public road in the USA, and probably in EU countries and other places too?

    Of course, in the real world we aren't restricted to a binary solution. It will almost certainly be helpful if the car is provided with data on, if nothing else, certain places where the intersections and/or lane markings are confusing. One of the fatal accidents in a Tesla car under the control of Autopilot was at least partially the result of a place on the highway where the lane markings weren't clear: a lane split in two, one of the new lanes became an exit lane, and a concrete divider stood between them. The Tesla car, to put it in anthropomorphic terms*, "got confused" about where the lanes were, and traveled between the lanes directly into the beginning of that concrete barrier at highway speed.

    But altho a detailed scan of every public road in the land would be helpful, it's still not sufficient for safe driving. It's not going to, for example, stop Tesla cars controlled by Autopilot from running into fire trucks stopped in highway traffic lanes. Nor is the detailed scan made yesterday going to help when the road construction crew comes along and puts out cones to temporarily divert traffic into a different lane while it does its work.

    Here's yet another possible factor to consider: one thing that gets a lot of discussion, regarding the difficulties of completely autonomous cars, is what to do about construction zones. Can an autonomous car, even with a well-developed SLAM system and pretty sophisticated decision-making and navigation software, reasonably be expected to recognize and properly react to, for example, a construction worker standing in the road holding a "stop" sign that isn't the typical octagonal red sign the car is programmed to respond to?

    Here's an idea: We humans will have to alter our behavior to assist autonomous cars in difficult situations, such as road construction zones. Perhaps in the future, the traffic cones that construction workers set out on roads and highways will have some sort of special reflector that will signal autonomous cars that there's a temporary traffic diversion or detour. Perhaps the "stop" sign the construction worker is carrying will be equipped with a short-distance wifi transmitter which will transmit a "STOP!" or a "GO!" signal to nearby autonomous vehicles. (Altho I can immediately see that would be problematic; it would need to be a highly directional signal. So a special reflector would likely work better there, too.)
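    Just to make the idea concrete, here's what such a broadcast might look like as a data structure. This is purely hypothetical; no such standard exists that I know of, and every field name here is invented:

    ```python
    from dataclasses import dataclass
    from enum import Enum

    # Hypothetical "smart work zone" broadcast, sketching the idea above.

    class WorkZoneCommand(Enum):
        STOP = "stop"
        GO = "go"
        DIVERT = "divert"    # e.g. cones rerouting traffic into another lane

    @dataclass
    class WorkZoneMessage:
        command: WorkZoneCommand
        lane_mask: int       # one bit per affected lane, so only targeted lanes react
        heading_deg: float   # direction of travel this applies to -- one answer
                             # to the "highly directional signal" problem
        expires_s: float     # messages time out, so a lost "GO" can't strand cars

    msg = WorkZoneMessage(WorkZoneCommand.STOP, lane_mask=0b01,
                          heading_deg=90.0, expires_s=5.0)
    print(msg)
    ```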

    Or, perhaps in the future, traffic will be overseen and individual vehicles routed using traffic computer nodes. The node computers might be located in cell towers, since they'd be using cell phone-type radio communication with the cars anyway. The computer nodes might interact with cars like a cowboy herding sheep, directing them in the right direction, and speeding or slowing them as necessary to assist with smooth traffic flow. Road construction crews would then have the responsibility of providing, to the local traffic node controller, detailed info about temporary traffic diversions or detours.

    Rollin' rollin' rollin'
    Rollin' rollin' rollin'
    Rollin' rollin' rollin'
    Rollin' rollin' rollin'
    Rawhide


    Keep movin', movin', movin'
    Though they're disapprovin'
    Keep them dogies movin', rawhide
    Don't try to understand 'em
    Just rope 'em, throw, and brand 'em
    Soon we'll be livin' high and wide...


    Move 'em on, head 'em up
    Head 'em up, move 'em on
    Move 'em on, head 'em up, rawhide
    Cut 'em out, ride 'em in
    Ride 'em in, cut 'em out
    Cut 'em out, ride 'em in, rawhide


    *the computer programmer side of my brain shudders at the gross misrepresentation, but the more pragmatic side knows it's more easily explained that way
     
    Last edited: Apr 13, 2019
  8. Harvey

    Harvey Member

    i did mean the gps as more of an aid to the computer, to figure out its surroundings and negotiate them better. like a continuous 5-block by 5-block grid. it could work in many areas to also "see" through snow on the road and on poorly marked roads.
    i think i'm a bit add sometimes, in that i don't quite express exactly what i mean.

    but i can see the city thing, with interactive infrastructure, like lights signalling the cars, etc., and other cars announcing speed/direction.
    once society is fully autonomous, you could remove all the lights, as each car would adjust speed and timing to go through without touching.

    the smart infrastructure is already in use on some small autonomous track somewhere. some phone-booth-type vehicle with a central door and seating for about 6 is the image that comes to mind.

    as for construction: that could be broadcast by the smart infrastructure, with any blocked lanes or detours.
    as for right now, it is a bit of a dilemma.
     
  9. Pushmi-Pullyu

    Pushmi-Pullyu Well-Known Member

    Well, regarding Tesla stalling out on its advancement toward Level 4/5 driving autonomy, I may have to eat my words:

    From InsideEVs News: "Elon Musk Announces April 22 As Start Of Tesla Full Self Driving"

    Tesla announced a new event – Tesla Autonomy Investor Day – on April 22 at its headquarters in Palo Alto.

    There are not many details about the event, besides the topic of autonomous driving. According to Elon Musk, investors can expect a demo of full autonomous driving...

     

  10. Pushmi-Pullyu

    Pushmi-Pullyu Well-Known Member

    Yes, the problem with snow on the road is going to be quite a challenge. And frankly I don't see an easy solution to that problem. Ditto for autonomous cars driving on dirt roads.

    My guess is that the first generation, at least, of Level 4 autonomy will be restricted to paved roads, and quite possibly won't work, or at least won't work reliably, when the roads are covered with snow. But maybe I'm thinking inside the box too much. Maybe some engineer will come up with a brilliant method of using sensors to "see" through snow. If there is some frequency of radar that can easily penetrate snow, that might let the car "see" where the road is, altho it probably wouldn't be able to see the lane markings that way.

    I think it's called being "human". :) Or at least, I'm a member of that club too.

    Yes, that's one thing that has me optimistic about the future of autonomous driving. There won't be any need for stop signs or yield signs, or stop lights. Cars will communicate with each other to assign priority in entering an intersection, so there should never be any need to stop or even slow down abruptly.
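    In toy form, that negotiation could be as simple as reserving time slots. This is just my own sketch of the concept, not any real protocol:

    ```python
    # Toy "reservation" intersection: each approaching car requests a slot,
    # granted first-come-first-served, so no two cars occupy the box at once.

    class IntersectionManager:
        SLOT_S = 2.0                       # assumed time one car needs to cross

        def __init__(self):
            self.next_free_s = 0.0

        def request_slot(self, arrival_time_s):
            granted = max(arrival_time_s, self.next_free_s)
            self.next_free_s = granted + self.SLOT_S
            # The car then adjusts its speed to arrive exactly at `granted`,
            # ideally without ever stopping -- no stop signs or lights needed.
            return granted

    mgr = IntersectionManager()
    for car, eta in [("A", 10.0), ("B", 10.5), ("C", 14.0)]:
        print(car, "crosses at t =", mgr.request_slot(eta))
    # A at 10.0, B at 12.0 (waits for A's slot to end), C at 14.0
    ```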

    Perhaps I'm being overly optimistic, but I envision a future in which traffic jams will be a thing of the past, except in certain cases such as parking lots emptying out after a game at a pro sports arena.

     
  11. gooki

    gooki Well-Known Member

    Dirt roads won’t be difficult, as the edge between the road and the “kerb” is easily identified. Lack of lane marking will simply mean the car keeps to the correct side.


    Snow will be interesting. I’d be happy for AVs not to travel in heavy snow in the first instance. It’s unsafe for many human drivers. I expect the AV will have sufficient judgement to stay home, when it’s unsafe.
     
  12. Pushmi-Pullyu

    Pushmi-Pullyu Well-Known Member

    This^^

    It amazes me that some people continue to drive in a downpour so heavy that you can't see clearly more than about a car length in front of you, or perhaps even less. Anyone who's not an idiot will pull over to the shoulder and wait for the rain to abate somewhat, yet some just keep driving.

    I feel pretty confident that under those conditions, the autonomous car will be programmed to pull over and wait.

    There's a similar situation with driving on the highway into a dense fog bank. Anybody exercising rational thought would pull over to the shoulder, or at least crawl ahead at slow speed. Yet some continue driving at full highway speed! As a result, there are sometimes multi-car pile-ups in fog banks; I've seen news reports of as many as 15-30 cars and trucks involved in a single pile-up.

    Again, I'm pretty sure autonomous cars will be programmed to drive in a safer and more cautious manner under those circumstances.

     
  13. gooki

    gooki Well-Known Member

  14. Harvey

    Harvey Member

    ones with accurate highway maps and some form of radar shouldn't have a problem. they don't need to "see" so much as sense.
    similar to the tar-sands truck example above, although radar on the truck would be just secondary safety. even the human drivers run over things with them that fall out of sight.:D


    a couple of different ways it's going now, each relying on different things. which is good. it'll get production numbers up enough to get costs down some, and in a few years the best tools will finally come together in packages for autonomy. kinda like airbags and how many different makers there are of them (one, it almost seems).

    well, monday's announcement might have something from tesla.

    just a thought: i wonder if the glitches in autopilot come from trying to hold back certain aspects of the self-driving until they can be fully enabled?
    although i think full self-driving has to work like caddy's does, on highways, with the on/off choice already.
    especially like i said about accurate maps and gps mapping.
    the car knows where to go; it just needs to look out for obstacles.
     
    Last edited: Apr 18, 2019
  15. Pushmi-Pullyu

    Pushmi-Pullyu Well-Known Member

    I guess I need a better term. I was using the word "see", in scare quotes, to indicate detection and location. I guess we could say "sense". Speaking as a computer programmer: The computer doesn't perceive the world, in any sense, not even as dimly as (say) a tree frog does. A computer is just running software, just a bunch of binary bits being flipped back and forth. A self-driving car is just executing its pre-programmed instructions. To say that it "sees" or "senses" or "perceives" anything is a useful oversimplification to explain things in human terms; but strictly speaking, that's an anthropomorphic fallacy.

    At the moment, Tesla's Autopilot + Autosteer lane-following program (and GM's Super Cruise, and other similar systems) are just following the lane, while using Doppler radar to watch the movements of the car in front and, to some extent, follow its actions. That fools the average human driver into the illusion that Autopilot and/or Super Cruise can actually perceive the world around the car. This is a dangerous illusion, and it leads to complacency on the part of the human driver. The average driver doesn't understand that Autopilot and Super Cruise are not even designed to react to the presence of stationary obstacles, at least not at highway speed. (At very low speeds, let's say parking-lot speeds, the very short-range ultrasonic detectors and radars may be adequate to prevent a collision.) With the current state of development, trying to detect stationary obstacles when traveling at highway speed would result in so many false positives that the car would never go anywhere. That's why such semi-self-driving systems are designed to ignore stationary obstacles.

    * * * * *

    I look forward to the time when AutoPilot actually includes a SLAM (Simultaneous Localization And Mapping) system; that is, a real-time 3D scan of the area surrounding the car, which would enable AutoPilot to detect and avoid stationary obstacles. That is lacking in the current Level 2+ autonomy that Tesla and GM and other automakers are using, but development of a functional SLAM system will be required to get to Level 4 autonomy, and quite possibly even to Level 3.

    From what is widely reported, it certainly appears that Waymo has developed a SLAM system that it's using in its experimental self-driving cars. I don't know how well that's developed; from various reports, it seems pretty clear that it doesn't work everywhere. But at least Waymo has a prototype SLAM system, and they are working to improve the functionality of that. In my opinion, that puts them at least one or two rather large steps ahead of anybody else working to develop a system for self-driving cars.

    I doubt that has much to do with it. Elon talks a lot about "edge cases" and "corner cases", which I think is what's going on with what appears to be erratic behavior on the part of Autopilot, such as Bob Wilson has highlighted in his "Testing Autopilot" thread. (Altho to be fair, as a human driver, I find traffic circles, or "roundabouts", to be challenging too!)

    As a computer programmer, I certainly understand the problem with "edge cases"; it's a case where a behavior or action seen in the real world exceeds the parameters of what the programmers allowed for in the self-driving software program. Since the program wasn't designed to handle that case, the action taken by the semi-self-driving car won't be relevant to (or appropriate for) the circumstances, and therefore the action of the car will appear erratic to a human observer.

    To some extent, this problem won't ever go away. No team of programmers could ever possibly envision everything that could happen when driving a car. But certainly we should expect to see improvement in reliability, to the extent that cars should be able to drive down normal roads, following normal lanes, without exhibiting behavior which human drivers consider "erratic". If there are "edge cases" and "corner cases", they should crop up only rarely, and only under unusual circumstances. The fact that some people are seeing what appears to be erratic behavior in circumstances which any human driver would find completely normal is a sign that the self-driving software needs more development.
     
    Last edited: Apr 18, 2019
  16. gooki

    gooki Well-Known Member

    Thanks for the thoughtful response.
     
  17. interestedinEV

    interestedinEV Well-Known Member

    Right: there are too many variables that could cause problems: an accident, road repair, weather, a dust storm that obscures the lane markings, etc. At this stage, it is not possible to envisage all these scenarios, and even if they could be enumerated, we do not have the capability to solve all of them today. The question is whether the vehicle is capable of recognizing that it is struggling, and then either demanding that a human take over or parking itself in a safe spot until a human can. In my opinion, we will still need humans capable of driving cars for quite a while; it is not going to become a lost skill, at least for now.
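    That takeover logic could be sketched as a small state machine; the thresholds and states below are purely illustrative, not any vendor's actual design:

    ```python
    from enum import Enum

    # Toy handover state machine: the car monitors its own confidence and
    # either asks the human to take over or pulls over safely.

    class Mode(Enum):
        AUTONOMOUS = 1
        REQUEST_TAKEOVER = 2
        MANUAL = 3
        SAFE_STOP = 4

    def next_mode(confidence, driver_has_hands_on, seconds_since_request):
        """Pick the next driving mode from self-assessed perception confidence."""
        if confidence >= 0.9:
            return Mode.AUTONOMOUS        # system believes it is coping
        if driver_has_hands_on:
            return Mode.MANUAL            # human has taken over
        if seconds_since_request > 10.0:
            return Mode.SAFE_STOP         # no response: park in a safe spot and wait
        return Mode.REQUEST_TAKEOVER      # struggling: demand the human take over

    print(next_mode(confidence=0.4, driver_has_hands_on=False,
                    seconds_since_request=12.0))  # -> Mode.SAFE_STOP
    ```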
     
  18. Pushmi-Pullyu

    Pushmi-Pullyu Well-Known Member

    Absolutely! I wholeheartedly agree. I find it rather naive of people to suggest that we'll start seeing cars made without driver controls (steering wheels, brake pedals and "go pedals", etc.) shortly after we get to Level 4 autonomy. Hmmm, no... I'm pretty sure most people who have learned to drive are gonna want to be able to take over driving in some circumstances, if nothing else for offroad driving. Even ordinary sedans and hatchbacks occasionally need to be driven offroad, into a yard or an empty lot, for one reason or another; event parking, for example, often involves parking on the grass.

    Waymo is using experimental self-driving cars without driver controls, but that doesn't mean they would be practical for the average person to use as their daily driver.

     
  19. interestedinEV

    interestedinEV Well-Known Member

    There are times when I am driving through a dust storm (we have those in Arizona) and I cannot see the road markings very clearly. In those circumstances, I do not know if I would trust onboard cameras and the software to drive my car. Planes are extremely automated, yet pilots need to be there for the unforeseen.
     
  20. bwilson4web

    bwilson4web Well-Known Member Subscriber

    Tested in dense fog, the Toyota TSS-P radar worked perfectly at detecting traffic in front. Our SR+ Model 3 is so far untested in fog. However, CAVU incidents with crossing traffic show a ~300 m detection range.

    Bob Wilson
     
    Last edited: Apr 20, 2019
