Musk Right on Robots, Right-Side Welfare Petrol Advocates Wrong!

Discussion in 'General' started by 101101, Feb 22, 2018.


  1. 101101

    101101 Well-Known Member

    These robots in the links below are scary, but you can see from them that the ability to have robots that repair robots (instead of robot-repair jobs), and robots that move so fast wind resistance becomes a limit, is very real.

    Petrol welfare advocates talk this down because it highlights how obsolete their infrastructure and retail arms are, which only compounds the dawning reality of their permanent stranded-asset problem - a problem of physics-based fundamental economics even more than of politics. It means their too-big-to-fail delusion itself fails. There would never be a better advantage than replacing all that right-side welfare (including the fake 'profits') as fast as humanly possible.

    [Embedded robot videos]

    This last video link below (no associated picture) becomes very interesting at about the 1-minute mark.

    https://www.digitaltrends.com/cool-tech/boston-dynamics-new-atlas-robot-best-humanoid-yet/
     
    Roy_H likes this.

  3. Feed The Trees

    Feed The Trees Active Member

    wut
     
  4. Martin Williams

    Martin Williams Active Member

    I will start worrying about robot intelligence when they are autonomous and display volition. They'll need to carry a lot more energy than they do, too, if they are to be of any practical use.
     
  5. 101101

    101101 Well-Known Member

    Narrow AI is enough to utterly disrupt the economy. And in a factory, robots can be wired if need be. You could even do wireless power in various ways, i.e., inductive lines in the floor of the plant. Or, if you want to get exotic, there is wi-power - it could be something like crystal-radio-style power transmission at scale; Tesla himself long ago, and recent research and products, have shown various ways to do it. But again, even a hanging or overhead wire isn't much of a limit. That Handle robot currently has power for about 15 miles of transport. It could easily swap out battery bricks throughout the day - human workers need lunch breaks; it just grabs a new brick.
     
  6. Martin Williams

    Martin Williams Active Member

    The point I am making is that you can displace men with machines up to a point. You can do this easily if the task is simple, and better still repetitive. Much of this has already been done without the need for much, if any, 'AI'. You design the product and the factory so that production consists of easily mechanised repetitive tasks.

    The point where it becomes much more difficult is where independent volition is needed. You need a robot which will recognise unexpected problems and act appropriately, and I believe that requires a degree of consciousness. You cannot really expect the designer of a robot to anticipate all possible problems which might occur. Expected problems are easy to handle, and usually don't require AI to solve. You can allow a really dumb robot that carries heavy loads around to get through doors simply by removing the latch and designing the door so it can be barged open, for instance.

    Machine consciousness is something that we are a million miles away from. We have yet to define in precise, unambiguous form what 'consciousness' is, let alone how to achieve it in a machine. For this reason, I suspect 'driverless' cars will remain as they are now and will always require a human on standby for situations they cannot handle. Unfortunately, in this world **** happens, and machines are hopeless at handling it. People ARE quite good at handling it.

    I don't take 'wireless power' very seriously either, except for very tiny amounts of power. In the end, for any radiated power you are up against the inverse-square law. The best that can be done is many orders of magnitude less than is needed to power a robot. Swapping batteries is better, but I can easily envisage a number of events that would strand the robot away from its stash of charged batteries. So, I am sure, could you.
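
    To put rough numbers on the inverse-square objection, here is a toy estimate (all figures are illustrative assumptions, not measurements; it assumes an isotropic transmitter and a simple receiver aperture):

        import math

        # Toy inverse-square-law estimate for radiated wireless power.
        # Assumptions (illustrative only): isotropic transmitter, lossless air,
        # receiver collects exactly the flux crossing its aperture.
        P_tx = 1000.0      # transmitted power, watts
        r = 10.0           # distance to the robot, metres
        aperture = 0.05    # receiver area, square metres (~22 cm x 22 cm panel)

        flux = P_tx / (4 * math.pi * r**2)   # W/m^2 at distance r
        P_rx = flux * aperture

        print(f"Flux at {r} m: {flux:.3f} W/m^2")   # ~0.796 W/m^2
        print(f"Power collected: {P_rx:.3f} W")     # ~0.04 W from a 1 kW source

    A robot drawing hundreds of watts would be short by roughly four orders of magnitude in this scenario, which is the point.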
     

  8. Pushmi-Pullyu

    Pushmi-Pullyu Well-Known Member

    This whole thing with robot boogeymen is just silly. The only place robots want to take over the world, or even have a desire to preserve their own existence, is in movies and TV shows. It's a projection of human fear onto machines, not anything that need concern us.

    The only way robots will ever become a threat is if humans program them to be. That is a very real possibility, but the robot is just a tool, and like any tool it can be used for a good purpose or a bad one. Fear the person who controls the robots, not the robots themselves!

    Watching a DARPA challenge in which a robot was supposed to perform a few simple tasks, such as walking up a few stairs, opening a door by using a doorknob, and picking up a power drill and drilling a hole with it, left me embarrassed over the very poor state of the art of practical robotics. Several of the entrants failed or even broke during the challenge, and the few which did succeed did so with excruciating slowness. The idea that one of these sad-sack robots could ever be a threat is almost laugh-out-loud ridiculous.

    Of course, robots and "expert systems" computer programs are already putting real people out of work, and that is a real threat. Or rather, not merely a threat, but an ongoing economic destruction of good paying jobs. But don't blame the robot; blame the super-rich business owner who destroys good paying jobs wherever and whenever he can, in the name of "efficiency" and maximizing profits, while at the same time calling himself a "job creator" and claiming that should give him some special privileges that ordinary mortals are not entitled to.

    The super-rich -- by no means all of them, but most of them -- would re-write the preamble of the Declaration of Independence to read:

    “We hold these Truths to be self-evident, that all Men are created equal -- unless they are rich job creators, in which case they are entitled to special privileges -- and that they are endowed by their Creator with certain unalienable Rights..."​

    I admire Elon Musk for a lot of things. One thing I definitely do not admire him for is his plan to make manufacturing an entirely automated process, with no human workers on the assembly line at all. If he succeeds with that, how many millions -- eventually probably scores or hundreds of millions -- of people around the world are going to lose good paying jobs? Of course, if Musk can do it then eventually someone will even if he doesn't, but we don't need Musk speeding up the process -- in both senses of that phrase!
    -
     
    Last edited: Feb 25, 2018
  9. Martin Williams

    Martin Williams Active Member

    The central problem in AI is not a technological one but a philosophical one. In simple terms it can be expressed as: "What is consciousness?"

    You can do a lot of clever things once you define what you are trying to achieve, but you need that definition before you start, or you will get nowhere.

    I happen to have a four-month-old granddaughter. At that age, she cannot do very much. She struggles with even simple tasks like reaching to grab something. But the wish to do it is clearly there. She is conscious of her existence and has - without any external programming - clearly developed the will to do things and investigate the world around her. She is capable of pleasure and pain and reacts to others according to how she feels about them. You simply don't see anything like that in any robot. Tricks like somersaults or walking or door opening are just that: tricks. In time, no doubt, my granddaughter will master the machinery of the body she finds herself in. The important thing that makes her a human is her awareness of her existence.

    Tricks may well prove necessary to a human-replacing robot, but they are far from sufficient. For that you need consciousness, and to the best of my knowledge no sensible 'roboticist' is even attempting that, for the excellent reason that they have no idea what it is, let alone how to achieve it.

    To replace the full flexibility and common sense of a human being, you need to have some form of consciousness, and if you can't do that, then the 'disruption' will be minor if it happens at all.
     
  10. Pushmi-Pullyu

    Pushmi-Pullyu Well-Known Member

    The last time I made an effort to survey the field of robotics, researchers had succeeded in making a robot with what I'd describe as the approximate intelligence of a not terribly smart insect, and were trying hard to make them as smart as a rather smart one. I have not seen anything since to suggest that much progress has been made on that front.

    So it astonishes me that, for example, we see people writing comments to InsideEVs news articles asserting that robots and computers must be self-aware, and capable of understanding and reacting to the world as humans do, before we'll ever get proper self-driving cars.

    Speaking as a computer programmer, I see this as a complete failure to understand both the current state of the art of robotics and the difficulty of programming a car to drive at least as well as a human. They are vastly overestimating both.

    I have absolutely no doubt that we'll get reasonably reliable Level 4 or even Level 5 autonomous cars long, long before we ever get robots which are capable of even dimly and inadequately understanding the world on the human level -- or, as Martin puts it, robots which display conscious behavior.
     

  13. Martin Williams

    Martin Williams Active Member

    My bet is that this will take a lot longer than you think. Nobody trusts these things on the road without people inside, at the wheel, ready to take over when trouble appears. They are right. We are already seeing near-accidents, accidents, even a death from these things. They cannot handle unexpected events. Often they cannot handle EXPECTED events such as these:

    [Embedded collision videos]

    These are all due to failures in the detectors, the electronics, the software, or something else. But ask yourself whether any human being not intent on homicide, blind drunk, or deranged would dream of involving themselves in any of these collisions. Why anyone in their right mind believes these things are SAFER than humans baffles me.

    We are seeing the same thing in the arthritic behaviour of robots falling over, trying to walk through walls, etc. I don't believe it's possible to build a machine programmed to handle everything the world can throw at it - every possible event; just think of that! - even in the limited environment of an open road. I doubt very much whether those attempting it have understood the problem, let alone how to solve it.

    Even when you DO know how to solve the problem, you have the quality of software to worry about.

    In every project I have ever worked on involving hardware and real-time software (dozens, probably), the software has - as well as being late - been less than perfect. When debugging these programs, the software engineers reach a stage where there are known faults, but they realise that attempting to put them right would introduce further, possibly more serious, faults. If a fault can be tolerated, it will be accepted. Of course there are also faults they have not found, and they have no idea how serious those are. All that can be said is that if it hasn't been found, perhaps it won't show up!

    I don't blame the people writing this code. It is, in practice, impossible to test how a computer - a machine capable of existing in a truly astronomical number of states - will react when it has, say, 200 digital inputs from sensors, which could be in any of 2^200 states (about 1.6 x 10^60). It simply cannot be done.
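
    For concreteness, that state count is easy to reproduce (a toy calculation; nothing beyond the figure of 200 inputs comes from the paragraph above):

        # Number of distinct combinations of 200 binary sensor inputs.
        n_inputs = 200
        states = 2 ** n_inputs
        print(f"2^{n_inputs} = {states:.3e}")   # ~1.607e+60 distinct input states

        # Even checking a billion states per second would take ~5e43 years:
        seconds_per_year = 3.156e7
        years = states / 1e9 / seconds_per_year
        print(f"{years:.1e} years at 1e9 states/second")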

    The problem gets worse and worse with increasing complexity, but this is the sort of technology you propose to trust with your life.

    Well, I strongly advise you to think carefully before doing so. The goals of the exercise are ill-defined or non-existent and the technology to get there fundamentally flawed.
     
  14. 101101

    101101 Well-Known Member

    @Martin Williams Philosophically, especially post digital physics (a new field), idealism has won out: it's all consciousness in one form or another. So the issue of consciousness flies out the window for me. Also, I presume the same function behind man is behind the patterns associated with man or tech. To put it bluntly, then, an entity doesn't have to be alive, or conscious, or able to feel, or creative, or very intelligent or adaptive, to be totally disruptive. Daniel Suarez is a programmer who worked in defense and is now a sci-fi writer; in my opinion he does a good job of addressing your points in his fiction works "Daemon," "Freedom" and "Kill Decision." The books go back about a decade now; I would have raised similar issues, but those books mooted them for me.

    You two seem to be about a decade behind in your take on things. A lot has happened. You seem to be making the kinds of arguments that show up in books like "The Second Machine Age" (MIT Press) or Martin Ford's "Rise of the Robots" - middling books that are only 3-4 years old but read like they came from a decade ago. Erik Brynjolfsson and his coauthor at MIT kept making foot-dragging arguments at Davos. Ford makes the asinine assertion that GAIs shouldn't be indexed, kind of putting him on the bad-guy side of things. "The Second Machine Age" was all about going to freestyle-chess-type setups for the foreseeable future of work and society, but in retrospect the authors were wrong or politically motivated.

    Published around the same time as those books was Nick Bostrom's "Superintelligence," which scared the **** out of policy makers and kicked off a global arms race, because of the notion that whoever has S.I. will rule and can do stupid things like win nuclear wars. That is a frightening book. It isn't A.I. that scares people, it's S.I. Couple that book with Tom Campbell's "My Big TOE" from 2003. By 2011 there were hints of what Campbell was talking about in popular books like Brian Greene's "Hidden Reality," but 15 years later most people have still never heard of digital physics.

    Something bizarre has happened in tech. All that neural-net stuff never used to work, and emergentist arguments about throwing more calculation and memory at the problem were clear dead ends, but now the stuff seems haunted and works for no apparent reason. Science runs backward now: show the machine the data set and get answers to questions we couldn't even ask, yet somehow can recognize the corresponding answers to. It's like the script is changing on the world - digital physics, for instance, is a sign that we are in a post-science world. Science doesn't survive digital physics.

    Boeing Dreamliners are already drones; they fly essentially automated. You don't trust self-driving cars but trust self-flying jet liners?
    Machines are better at recognizing animal breeds than humans now.
    You can snap a photo of a scene and a machine can instantly tell the story of what is going on in the picture.
    Machines can beat not just the best chess player but the best Go player now - a much taller order.
    Machines seem increasingly creative now, designing novel circuits and quickly deriving Newton's laws.
    Watson beat the Jeopardy champs in 2011, and could have done it in real time in speech, but they didn't want to scare people to death.
    Google Translate is in many ways the best translator in the world (no human has remotely the scope) - of course a human would trounce it right now across a couple of languages - but look at the Duolingo app for teaching language: no human is going to perceive the patterns that allow that magic to happen.

    Some years back, a former chair at one of the big Ivy Leagues, who had moved on to head one of the big tech firms' AI programs, said at a bar after a conference: I had said there were 20-30 areas that had to be integrated and it would take decades, but just the other day I realized, to my utter shock, that all the areas were present now...

    Now they want to match the quantum computer's strength in brute force (more power, in important ways, in a small one than you would get by converting the entire physical universe into computronium) with the new pattern-recognition powers of neural nets. Just solving the Hamiltonians, which the Q computer can do, would revolutionize our world. Imagine what happens to materials with that level of optimization (I know optimization is not a cure-all). But it's more bizarre than this. There is, for instance, fundamental research, again from the early 2000s, showing that it is possible to store information in an electron. That research seriously suggested, per its authors, that the amount of information storable in a single electron might be infinite. This is the digital-physics angle slipping in behind the scenes even back then. It's as if a particle like an electron were not only programmable matter (energy) but like a software object, or more like a software interface itself. Look at the bus in a quantum computer: if the qbits are entangled at a distance, that's at least a chunk of the bus. Nothing says the neural net has to stay on the classical side of the machine.

    In order, the revolution we face involves at a minimum newly haunted neural nets, quantum computers (D-Wave has some stuff that works to a degree, even if not fully general-purpose), S.I., and digital physics. S.I. is runaway logic improvement - the logic equivalent of a fission chain reaction - leading to speed, parallelism, and qualitative improvements that could just run away from us. S.I. was predicted by one of the people who worked with Turing; I think it was in the 40s. Feynman's quantum-computer paper was 36 years ago. Neural nets are from the 50s. Only digital physics is new, and even there papers go back to 1997, but it just might be the driver behind this stuff - every human discovery since the beginning rolled together might not be as radical as digital physics, and its influence has been felt for at least 20 years.

    But back to the argument. Yes, those early DARPA challenge robots sucked. But not anymore. Coordination was a huge problem, as was training such robots. Now you can just walk such robots through tasks manually and they pick them up and refine them. A few years ago Kuka had an ad series in which its robot beat the world's best ping-pong player at ping pong. That appeared to be some ad hyperbole, but the point was to say it's plausible. It is very likely possible now.

    The argument Martin was making, essentially about DFMA, or design for manufacturing and assembly, is something Musk has picked up on heavily. But to me, what Musk wants to do seems to be to automate auto manufacturing to the level Intel has reached in its US plants. You largely obviate the labor-cost differential when you do that. Intel builds a lot of its chips in vacuum chambers - how many people are walking around inside that setup, or in its most automated plants generally? But this is not alien; this is what Silicon Valley has always done. When Silicon Valley was cutting its teeth on aerospace (sound familiar?), how many people did it want to populate its early vehicles with, and wasn't the limiting factor the weight of whatever could be made electronic? Fully automated factories were doable in the 70s - that's just the reality of it. They might have been gold-plated then, but not now; the cost of doing it has probably come down 17,000x, just as with solar cells.
    Machines smart enough to drive across the country largely unaided are good enough to fix other machines and fully automate a plant.

    It takes more skill to drive to work than to do 90% of jobs. Think about machines 10x better than human drivers. Think about a Q-powered Watson in the cloud, time-slicing questions.

    But you know what would be radical beyond belief? Simple, open, honest, non-sponsored search. Sponsored search, which is what we've moved to, is stupid beyond belief, but honest, accurate search would kill the attention-stealing, time-stealing, privacy-stealing ad industry. It would reduce what is left to good voluntary product education and align buyers' and sellers' interests, instead of letting the sponsor contingent pit them against each other with sponsor-captured media and its resultant perversion of law. Even look at Amazon - how pathetically and incredibly foolishly it is accepting sponsor bribery in its own internal search, sowing the seeds of its own destruction. How simple it would be to recognize who its actual customers are, instead of trying to pimp them out and reap the unconscious cumulative wrath of those customers and a permanent loss of goodwill. Guess that's what happens when they hire psychopaths from places like AOL and Amway.
     
  15. Pushmi-Pullyu

    Pushmi-Pullyu Well-Known Member

    Human drivers also can't handle everything - every possible event - that the world can throw at them. If they could, then nobody would ever die in an automobile accident. Demanding that self-driving cars be able to handle "every possible" event before we start using them would be profoundly foolish. It would be an extreme case of the perfect driving out the good.

    We should start using self-driving systems, even partially functional systems, just as soon as they would start saving lives by reducing the overall accident rate. Arguably that situation already exists, since the NHTSA says Tesla cars with Autopilot + AutoSteer installed have an ~40% lower accident rate than Tesla cars without that combination installed. Of course, that one statistic does not mean that every semi-autonomous driving system from every auto maker will make driving safer, but it does show how remarkably fast improvements have been made in the field.

    Compare with the airbag situation. We know that the air bags installed in millions of cars are defective, and might explode at any time, injuring or even killing someone. Yet we continue to leave those systems active in cars, because they can't all be replaced at once. Nobody is calling for all older air bags from the manufacturer to be disabled, pending replacement. Why? Because even with the (small) danger of explosion, you are still safer in a car with defective air bags than in a car with no operational air bags!

    And even before self-driving systems become 99.99%+ reliable, they will be saving lives every day, even if - just like malfunctioning air bags - they do cost a few lives now and then. It's true that self-driving cars will never be able to prevent 100.0000% of accidents. That's an impossible, unrealistic goal. But they can already be used to save lives, so they should be put into general use now - not at some nebulous time in the future when EV bashers like Martin finally, grudgingly, admit that there might be some value to them!

    “The thing to keep in mind is that self-driving cars don’t have to be perfect to change the world. They just have to be better than human beings.” -- Deepak Ahuja, CFO of Tesla Inc.​
    -
     
  16. Martin Williams

    Martin Williams Active Member

    So many points have been raised by 101101 that I will pick out only what I think are the most important ones, and chuck in one or two others of my own. The issue of consciousness may not worry you, but I assure you it worries many others. One well-known AI researcher was recently reported as saying he worries about AI taking over about as much as he worries about overpopulation on Mars.

    I read 'Superintelligence' some years back, as I like some of Bostrom's ideas (his 'Are we living in a simulation?' paper, for instance). I thought it rather over-optimistic in terms of actual progress, but its biggest failure for me was the underlying assumption - which seems common with a lot of writers on the subject - that intelligence can be increased without limit. Like the assumption that robots will be better drivers than you or I, there is absolutely no proof of this, and Bostrom all but ignored the point. There may well be a limit, and for all we know we may already be bumping up against it. This is nothing to do with speed of computing. It would be a natural limit imposed by the random influences which affect things. No matter what computing power you may have, it is not possible to forecast weather very far ahead, for instance. Randomness - which by its very nature cannot be predicted - screws you up after a day or two.

    The idea of a more-than-human intelligence promptly developing an even more intelligent one, which does the same, depends on this assumption of course, and it also depends on the new entity possessing curiosity. I find it hard to conceive of a machine or anything else possessing curiosity without a degree of self-awareness. I have a dog who is as thick as two short planks, but she certainly has curiosity, and from that one deduces she has self-awareness too. One might also ask whether a machine - having replaced a species less intelligent than itself - would want to see itself replaced in the same way. It would have to possess curiosity greater than its wish to survive, certainly.

    A far better, albeit older book, in my opinion, is Roger Penrose's "The Emperor's New Mind". I won't spoil your pleasure in reading this if you haven't done so, but he makes an excellent case against machine intelligence comparable to ours ever emerging. I'm not entirely convinced, but it is a powerful argument.

    A further interesting consideration is whether human beings would recognise artificial intelligence (or more accurately non-human intelligence) were it to exist. I suggest that such entities may well have existed in our society for a very long time in the form of institutions like the churches, the law, banks, etc. These organisations survive and breed, and seem to be far more long-lived than you or I, adapt to a changing environment, and fight for survival and growth. The internet may well prove to be another of these 'creatures'.

    A third book, which is older still, is Immanuel Kant's "Critique of Pure Reason", although I warn you it is a very hard read! He makes the case, first, that what we can think about is governed by what we can sense (we were unable to think about bacteria before the microscope was developed), but more importantly, that our way of thinking is also governed by what we can sense. In other words, the underlying reality of the universe - the 'Ding an sich' (the thing-in-itself) - may be bizarrely different from anything we can conceive of. Concepts such as time and causality may turn out not to be real at all, but mere artefacts of our imagination. Reason itself, he argues, may be an illusion, which means intelligence may well be an equally false notion.

    Coming back to self-driving cars, there are a number of unsolved ethical problems in the design of these things - even if they work. Whose safety do you prioritise in an emergency, for instance? The driver's or pedestrians'? Should you prioritise the young over the old, or simply minimise deaths? Are three likely serious injuries worse than one almost certain death? There are no clear answers to these questions for human drivers, but in people we can ignore that. When you have to build these decisions into software, however, you need clear answers. There may be different answers in different states. And who takes responsibility if someone gets killed anyway? The driver? The car manufacturer? The software engineer?

    Also, I would point out that although manufacturers seem keen to build self-driving cars, I wonder how enthusiastic the public is for them. Speaking for myself, I quite like driving and would find just sitting in the car for an hour pretty boring. I don't like reading in cars even as a passenger and many people find it causes motion sickness anyway. Much of my driving is just local pootling to the shops or a restaurant, and I tend to make instant decisions as to where to park. It would be more trouble to communicate this to a smart car than to do it myself so I wouldn't bother unless I wanted to get into a tight spot and the car could do it better than me. Nor am I convinced that it would be particularly 'disruptive' if everyone just sat in their cars rather than driving them. I don't really see this as 'world-changing'.

    Finally, I think it's worth pointing out how good people are at driving. In my case, I haven't had even a minor bump for about eight years (after I backed into an unseen tree-trunk!), and in the past, when I was doing a great deal of driving, I did a million miles without a scratch and have never been in a serious accident. Much of it was done in countries driving on the 'wrong' side of the road. I am not boasting, by the way. This is by no means unusual, and I don't consider myself a particularly brilliant driver. To make a self-driving car that does as well is going to be a huge challenge, and we are nowhere near it, as the performance of self-driving cars in the real world so far demonstrates. Designing a vehicle that 'only has to be better than a human driver' sounds easy, but if you think about it a little more carefully you realise it isn't.
     
    101101 likes this.
  17. Pushmi-Pullyu

    Pushmi-Pullyu Well-Known Member

    Thanks for raising the point about self-awareness in animals. It was a point I was tempted to dive into in a previous comment, but when we try to cover too much in a single comment, we lose our audience. (I'll probably lose a lot of readers here, since my comments were so lengthy I had to break them into two posts! :eek:)

    I have an interest in animal behavior, and I've been fascinated by all the studies over the past few decades about whether or not various animals exhibit self-aware behavior. I think the answer is pretty clear: To a limited degree, some of them do. Perhaps not dogs or cats, but smarter animals such as primates do, and some animal behaviorists argue that some birds do too, as well as some other animals such as dolphins.

    So I'm convinced that self-awareness and consciousness are not binary, either/or situations. They are a matter of degree. Therefore, the question of machine intelligence, or machine consciousness, doesn't particularly bother me. Roboticists are trying very hard to get robots to mimic human behavior and thinking, so I don't doubt that they will eventually achieve some degree of mimicking human thought, just as they have already gotten some robots to mimic some limited amount of human behavior. Again, it's not going to be an all-or-nothing thing. They will gradually get machines to exhibit some behaviors (but not others) which we associate with consciousness; and beyond that point roboticists will continue to gradually improve the ability of machines to mimic human thought and human consciousness. And someday, altho probably not until several decades from now, someone will ask a highly advanced machine "Are you actually thinking like people do, or are you just programmed to imitate human thought?" And the machine will be able to truthfully respond "A difference which makes no difference, is no difference." That's the ultimate answer, the final answer, to the Turing Test!

    That's a rather outdated way of looking at things. I'm far more interested in recent studies (or philosophical reasoning; in this subject, there's no clear line between the two) which suggest that what we think of as consciousness is a five-or-more-sensory "movie" -- let's call it a gestalt -- which our brains build up of the outside world, but one which ignores much or most of the sensory input, concentrating on just a small fraction of it which is mostly familiar. This process of filtering out all but the most relevant sensory input allows the brain to build up a gestalt which is mostly based on the individual's previous experiences. Thus the well-known phenomenon that people often see what they expect to see, so sometimes we don't notice what is really there.

    This means we as individuals are not actually interacting with the world as an objective reality, but rather we're interacting with this gestalt, which has some relation or correspondence to reality, but only to a certain degree. Obviously, since we can function in a reality which we don't perceive directly, that degree of correlation to reality must be reasonably strong, at least in certain respects related to our survival, our ability to feed ourselves, and our ability to reproduce and so continue the existence of our species. But our brains do reject an awful lot of what our senses bombard it with, to avoid sensory overload. The gestalt we see is a simplification of the real world. (It may be that some people who suffer from Autism have brains which are not so adept at this sort of simplification; whose brains don't reject the majority of sensory input in favor of small amount which seems most familiar to the brain. It may be that some Autistic people suffer from constant sensory overload. But it would be a mistake to suggest that everyone who suffers from Autism has the same underlying cause.)

    However, designing and/or programming a machine to perceive the world using the same gestalt we do... well, I suspect that like many or most engineering goals, that will be an ideal to work toward, rather than something which can be achieved with 100% satisfaction.

    As a computer programmer, I regard discussions on this topic to be about as useful as arguing over how many angels can dance on the head of a pin. It has no relevance to any practical matter, such as programming self-driving software.

    The goals of a team designing software for a self-driving car will be, in order (a rough sketch in code follows the list):

    1. Use the car's sensors to build up in the car's computer software a real-time virtual-reality 3-D "picture" of the environment (aka SLAM), which the car can use for such things as detecting obstacles (both stationary and moving) and detecting where the traffic lanes of the road are. (SLAM stands for Simultaneous Localization And Mapping, a process whereby a robot or a device can create a map of its surroundings and orient itself properly within this map in real time.)

    2. Using the SLAM, to avoid collisions with anything larger than (let's say) a small adult cat. (We could of course choose to limit this to larger objects, but ideally the car should not ignore anything large enough to be a fairly small human baby. Running over squirrels may be a small tragedy, but we have to draw the line somewhere.)

    3. To obey the rules of the road, both written and "unwritten". (Minimizing abrupt actions which would be unexpected by other drivers will help prevent accidents.)

    4. Within reason, to maximize the safety of the journey. (People will of course argue over what is "within reason". They're already complaining about self-driving cars stopping for taco trucks! :p)

    5. Within the constraints of #3, to get the vehicle (along with its passengers and cargo) safely to its destination in the shortest amount of time possible. (Speed demons won't like self-driving cars. That's a good thing; we need to get rid of drivers who are speed demons.)
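
    Here is a minimal sketch of how such a priority ordering might be arbitrated in code. Everything in it is hypothetical - the rule names, the "world" fields, the Action type - and it illustrates the ranking above, not any manufacturer's actual stack:

        from dataclasses import dataclass
        from typing import Callable, Optional

        @dataclass
        class Action:
            steering: float   # radians, + = left (hypothetical convention)
            braking: float    # 0.0 .. 1.0

        # Each rule inspects the SLAM-derived world model and either returns an
        # Action (overriding everything below it) or None to defer downward.
        Rule = Callable[[dict], Optional[Action]]

        def avoid_collision(world: dict) -> Optional[Action]:
            # Priority 2: brake hard for any sizable obstacle in the planned path.
            if world.get("obstacle_in_path"):
                return Action(steering=0.0, braking=1.0)
            return None

        def obey_road_rules(world: dict) -> Optional[Action]:
            # Priority 3: e.g. stop for a red light.
            if world.get("red_light"):
                return Action(steering=0.0, braking=0.6)
            return None

        def follow_route(world: dict) -> Optional[Action]:
            # Priority 5: default behaviour - track the planned route.
            return Action(steering=world.get("route_steer", 0.0), braking=0.0)

        RULES: list[Rule] = [avoid_collision, obey_road_rules, follow_route]

        def decide(world: dict) -> Action:
            for rule in RULES:        # first (highest-priority) rule that fires wins
                action = rule(world)
                if action is not None:
                    return action
            return Action(0.0, 1.0)   # fail safe: stop the car

        print(decide({"obstacle_in_path": True}))   # Action(steering=0.0, braking=1.0)

    The design point worth noting is that higher-priority rules veto lower ones unconditionally, which is exactly the "in order" structure of the list above.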

    All those armchair-philosopher questions, such as asking whether the self-driving car should prioritize the safety of a bus full of children over the lives of the people in the car itself, are utterly irrelevant to the software engineer or computer programmer. Again, we're not dealing with a self-aware machine capable of recognizing what a child (or even a school bus) is; we're dealing with a machine programmed to do the things on that short list above, as reliably as possible. The self-driving car, even a very sophisticated one capable of driving with a 10x or 20x lower accident rate than the average human being, cannot recognize what a human being is, nor reliably distinguish it from non-human objects! And it certainly won't be able to distinguish between a school bus full of kids and one that is empty (except for the bus driver).

    Oh sure, the team could design a self-driving software which would recognize the shape of a typical adult pedestrian walking normally. But what about an amputee on crutches? What about a child on a tricycle? What about a person in a costume which changes the outlines of his/her body? And would a car be able to recognize that a mannequin isn't a real live person? Of course not!

    I submit there is no point to trying to prioritize the safety of a general class of "human beings", since self-driving cars won't be able to reliably recognize that class of objects. And anyway, what difference does it really make? Whether it's a deer or a child or some idiot concentrating on his cellphone rather than looking where he's walking, any large object (living or otherwise) which moves out into lanes of traffic is something the car should be programmed to avoid colliding with. It would be ridiculous for the programming team to, for example, program the car to avoid running over a man (or a child chasing a ball) but ignore colliding with a deer. Collision with a deer (or any other large and quite possibly massive) obstacle in the road could result in severe injury to the vehicle and its passengers.
    -
     
    Last edited: Feb 26, 2018
  18. Pushmi-Pullyu

    Pushmi-Pullyu Well-Known Member

    Back to the philosophical problem of the school bus: The correct answer for the programmer is to avoid collisions with other vehicles, pedestrians, or any other sizable object in the road, period. Doesn't matter if it's a tiny motor scooter or a school bus or a kid chasing a ball or the neighbor's cat. Since the self-driving car won't be able to tell how many people are in the other vehicle -- school bus or otherwise -- there's really no ethical or moral question which the software team needs to consider. The idealized behavior of the self-driving car should depend on the layout of the environment, not the exact type of vehicle causing a danger of collision. The self-driving car should be programmed to prioritize avoidance of collision, period. In an emergency situation that does mean active avoidance, such as steering into an empty lane, or if necessary off-road onto the shoulder or even into a ditch, so long as the ditch itself is free of obstacles. If this is physically impossible, then the correct solution would be to sound the horn while braking toward a stop as quickly as possible, in the hope that the other driver will recognize the danger and avoid collision; or if not, to minimize the speed (and thus the danger) of the impending collision.
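
    As a sketch, the fallback policy just described reduces to a short decision ladder (illustrative only; the "world" field names are hypothetical placeholders for perception outputs):

        def emergency_maneuver(world: dict) -> str:
            # Collision imminent: try active avoidance first, in order of preference.
            if world.get("adjacent_lane_clear"):
                return "steer_into_adjacent_lane"
            if world.get("shoulder_or_ditch_clear"):   # off-road escape, if unobstructed
                return "steer_off_road"
            # No escape path: warn the other driver and shed as much speed as possible.
            return "sound_horn_and_max_brake"

        # Example: boxed in on both sides -> horn plus maximum braking.
        print(emergency_maneuver({"adjacent_lane_clear": False,
                                  "shoulder_or_ditch_clear": False}))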

    While these are important questions, they are not something the self-driving car designer or programmer should concern himself with. The courts will sort them out. And let us face facts here: The proper question isn't "Who takes responsibility if someone gets killed anyway?"; the proper question is "Who takes responsibility when someone gets killed anyway?" It's going to happen sooner or later - arguably, depending on your viewpoint, it already has happened at least once. We can reasonably hope self-driving cars will significantly reduce the accident rate, but nothing short of suspending the laws of momentum and inertia is going to make hurtling down the road at 60-80 MPH entirely safe.

    From the practical viewpoint, the auto maker will have to stand behind the self-driving car's systems and software, accepting responsibility in case of litigation. Refusing to do so would be a strong argument for any car buyer to avoid that car. And in fact, more than one auto maker has already declared it will accept liability in such cases. However, from long experience with previous product liability cases, we can be pretty sure that some lawsuits will name the software developer(s) and/or the manufacturers of the car's self-driving sensors right along with the vehicle manufacturer. IMHO this is why MobilEye pulled out of its deal with Tesla; MobilEye did not want to be exposed to such lawsuits.

    And in a previous transportation revolution, many people chose to stick to their horses and/or their horse-and-buggies, rather than to buy an automobile. The fact that you have convinced yourself that you don't want one, won't stop progress.

    You're not thinking things through. The self-driving car would drop you off at the door of the shop or restaurant, and then go find a place to park. You wouldn't need to park the car yourself. Of course, you would summon the car via smartphone when you wanted to be picked up, so no need to go hunting for it.

    Maybe not for you. But it certainly would be world-changing for that elderly man or woman who has had to give up driving because of advanced age, or the blind person who is now dependent on others to play chauffeur for him or her.

    And it would definitely be world-changing in that those who now suffer the loss of a loved one or close friend, because some idiot was driving drunk or texting while driving or driving while half-asleep... tens of thousands of people every year would no longer suffer that loss, if self-driving cars reduce the accident rate by 10x or better!

    If people were that good at driving, then we wouldn't be having this discussion. And is your lack of accidents over the past eight years actually skill, or just a combination of perhaps not driving that much, plus either luck or using mass transportation a lot? You live in the UK, right? And mass transportation is pretty widespread and dependable there. Not so much in the USA, "the land where the automobile is king".

    Maybe you actually do enjoy driving in all conditions, Martin. Personally I get stressed out by driving in rush hour traffic, especially when I was still commuting back and forth to work every day. From many comments online, I know that I'm not the only one. And since I suffer from poor depth perception, it's a real nightmare for me personally to drive at night in the rain, when I can't see the road and find it almost impossible to tell how far away oncoming cars are. I'm no longer driving, but if I was, I'd be very grateful to turn driving in such conditions over to a "robot driver"!

    Just because you don't think self-driving cars are a worthy goal, Martin, doesn't mean that most people will agree with you. You're free to keep riding your horse as long as you want to, altho you may be disappointed to see the hitching posts your horse needs disappear from in front of stores in town, replaced by EV "hitching posts". ;)
     
    Last edited: Feb 26, 2018
  19. Martin Williams

    Martin Williams Active Member

    http://www.jdpower.com/cars/articles/car-news/study-says-minority-drivers-want-autonomous-cars
    http://analysis.tu-auto.com/autonomous-car/what-if-no-one-wants-driverless-car
    http://www.wbur.org/bostonomix/2017/05/25/mit-study-self-driving-cars
    https://finance.yahoo.com/news/tesla-google-cadillac-self-driving-cars-160441498.html
    http://fortune.com/2018/01/24/aaa-drivers-fear-self-driving-cars/

    I look forward to any evidence that anyone wants self-driving cars, 101101.

    Incidentally, I am not just a decade behind the times. I am at least 2,000 years behind the times. I still use theorems due to Euclid - shamelessly - because they were the best way of solving a particular problem. I also use the Chinese Remainder Theorem, which goes back even further, I believe. Again, it was the best way to do large-number multiplications fast. I have even built these ancient tricks into a 5-million-gate microchip, where they worked happily in conjunction with tricks developed only six months earlier.
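
    For readers unfamiliar with the trick: the Chinese Remainder Theorem lets you do one big multiplication as several small, independent ones and then recombine the residues. A minimal sketch (my own toy illustration, not the microchip's actual design):

        from math import prod

        def crt(residues, moduli):
            # Reconstruct x mod prod(moduli) from x mod m_i (moduli pairwise coprime).
            M = prod(moduli)
            x = 0
            for r, m in zip(residues, moduli):
                Mi = M // m
                x += r * Mi * pow(Mi, -1, m)   # modular inverse; gcd(Mi, m) == 1
            return x % M

        # Multiply 123 * 456 in small residue channels, then recombine:
        moduli = [251, 253, 255, 256]   # pairwise coprime; product exceeds the result
        a, b = 123, 456
        residues = [(a % m) * (b % m) % m for m in moduli]
        print(crt(residues, moduli), a * b)   # both print 56088

    In hardware, each residue channel is a small, fast multiplier, and the channels all run in parallel, which is what makes the ancient theorem useful on a modern chip.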

    You may dismiss Bishop Berkeley, who claimed nothing exists until it is perceived (esse est percipi), but using nothing but his brain he managed to anticipate weird quantum-mechanical phenomena by 300 years! Or James Clerk Maxwell, whose famous set of equations made radio communication - including all the current whizz-bang stunts like Wi-Fi, Bluetooth, GPS, etc. - possible a hundred years before the technology to do it existed.

    You seem to think that because something is new and fashionable it is automatically better, or that the mere fact that something CAN be done is sufficient reason to do it. Good luck to you. I wish you well. But I suspect you are well beyond listening to people like me, who use any clever ideas they can find to solve hard problems, irrespective of their antiquity. I think we should all tip our hats in respect to the long-dead geniuses who have given us what we have today, and the best way of doing so is to put their developments to good use.
     
  20. 101101

    101101 Well-Known Member

    @Pushmi-Pullyu But on the collision topic: if utilitarians get involved, they will want vehicles to signal how many lives they contain, just like the signals on a car now, even if done unobtrusively. The basic argument for self-driving cars is already utilitarian.
     
  21. 101101

    101101 Well-Known Member

    @Martin Williams I agree with pretty much all you said, but I am more pro self-driving cars, because while I too like to drive, I also want to be free of it, and would rather not rely on the person behind me to bother to depress the brake pedal.

    I am generally a lot more interested in what poets and mystics have to say than in more rational thinkers. Not sure we can even say science is a "map of a dream." To me it's consciousness all the way down. Consciousness is irreducible, as Penrose seemed to be getting at in that likely-to-be-classic book - or that was the point Chalmers followed up with. What Hoffman and others seem to be getting at, as referenced by @Pushmi-Pullyu, doesn't seem that different; it aligns with the new Gestalt-type viewpoint, which almost seems Kantian in a way. It's some sort of miracle that correspondences can survive the leaps - it's like a bunch of mirroring going on everywhere.

    If, as the Vedas suggest, we aren't ultimately body or mind but something that can't be pointed to, what would something like AI be? Kant was pointing at something like this in his Critique, and I buy it, even if Nietzsche and others were merciless with the "thing in itself" perspective. To me science is just another story of consistency. But as the contemporary mystic Byron Katie has been saying, when you believe a story you suffer. Supposedly, per the Buddha (to paraphrase): my teaching is an accurate representation of what is, but it is neither true nor not true - and don't bother writing down what I have said, because you have to discover it for yourself, and when you do, throw it away as soon as it has served its purpose.
     
  22. Martin Williams

    Martin Williams Active Member

    Well, it seems your enthusiasm for driverless cars is not generally shared by the public, if you accept the findings of various polls across the world. It may change, of course. People get used to their ways and resist change unless there is an immediate benefit in changing. Speaking for myself, I get a lot less bored driving than being driven, whether by a human, a robot, or the train driver I never even see.

    One of the benefits of having a human driver is that - like all of us - he fears pain, death, and disabling injuries. If he crashes the car, he is as likely to suffer as anyone else, so there is a strong incentive for safe driving. A machine, lacking consciousness, also lacks this fear of injury. But even if it works perfectly safely and never develops a fault, you are putting yourself in the hands of a programmer who will not suffer personally if things go wrong. You may have more faith in software than I do, but I don't have much, having seen the sheer impossibility of producing millions of lines of code with any guarantee of them being error-free. It is worth noting that passenger aircraft generally get round this by using several computers, built to different designs, with code produced in different high-level languages, written by separate teams kept isolated from each other. Some even insist on the team members having been educated at different universities, in an attempt to eliminate common errors. A 'majority vote', in effect, is taken from them all to control safety-critical systems. This is horrendously expensive, of course, and will not be done for cars.
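
    That avionics arrangement is essentially N-version programming with a voter. A toy sketch of the voting stage (assuming three independently developed channels each emit a command per control cycle; the command strings are made up):

        from collections import Counter

        def majority_vote(commands):
            # Return the command agreed by a strict majority of channels,
            # or fall back to a safe state if the channels disagree entirely.
            winner, count = Counter(commands).most_common(1)[0]
            return winner if count > len(commands) // 2 else "fail_safe"

        # Three independently written flight-control channels (toy values):
        channel_outputs = ["pitch_up_2deg", "pitch_up_2deg", "pitch_up_3deg"]
        print(majority_vote(channel_outputs))   # -> pitch_up_2deg

    The expense comes from everything upstream of this trivial voter: three separate teams, designs, toolchains, and test programmes.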

    You are, in effect, putting yourself in the hands of a complete stranger, of unknown competence, who is designing a system which will be full of bugs and who will not suffer the consequences if it goes wrong! Such a system is unlikely to handle the unexpected well either. You are welcome to trust it. I would prefer not to!

    Having an interest in the philosophy of AI (should it ever emerge) and in safety-critical systems, I recently attended a workshop on the ethics of self-driving-car control systems, given by a professor of philosophy. I thought she got it quite wrong in wanting to solve these moral and ethical questions a priori - well before a line of code had been written.

    My personal preference would be to treat such systems (which are likely to contain unanalysable features such as neural networks and genetically derived algorithms) as we do people, and - after they have been designed - to put them through a 'driving test'. This can be done quite safely in simulation, and at high speed, so many thousands of hours of unexpected situations can be thrown at the system in a relatively short time, with its performance examined and assessed by human examiners. This at least gets us past reliance on a distant code-writer: his work will be subject to independent critical examination.
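
    A toy sketch of that kind of simulated driving test (the interfaces are entirely hypothetical; the point is the volume of randomised scenarios, not the detail):

        import random

        def random_scenario(rng):
            # Hypothetical scenario generator mixing routine and rare events.
            return {
                "obstacle_in_path": rng.random() < 0.05,
                "adjacent_lane_clear": rng.random() < 0.5,
                "sensor_dropout": rng.random() < 0.01,   # the 'unexpected' category
            }

        def run_driving_test(controller, n_scenarios=100_000, seed=0):
            rng = random.Random(seed)
            failures = sum(
                not controller(random_scenario(rng)) for _ in range(n_scenarios)
            )
            return failures / n_scenarios   # failure rate, for examiners to assess

        # A trivially cautious stand-in controller: "safe" unless the sensors drop out.
        rate = run_driving_test(lambda s: not s["sensor_dropout"])
        print(f"failure rate: {rate:.4%}")

    An independent examining body, not the system's designers, would set the scenario mix and the pass mark, which is the point of the proposal.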

    Unfortunately, it would appear that what has been produced has been the result of a rather simplistic view of the problem, and any testing has been done only by the designers of the system, which - without criticising the integrity of these people - can lead to an overoptimistic view of what is safe and what isn't.
     
  23. Pushmi-Pullyu

    Pushmi-Pullyu Well-Known Member

    I "accept" that people tend to dislike change.

    A few years back I read an account (I wish I had bookmarked it) from a reporter who took a ride in an experimental self-driving car. He said that he suffered from some anxiety at first at not being able to control the car, but after a few minutes he got over his fear, and from then on it was actually more relaxing to let the car drive itself than to be in control.

    That's just human nature, and I think the various surveys you're pointing to are a reflection of the same thing. That is, many or most people will not like the idea of giving up control to a self-driving car until they have actually tried it out, and after that they'll be fine with it. No doubt a small percentage still won't like it after they try it. But then, no doubt back at the end of the horse-and-buggy era, there were people who refused to exchange their horse for a motorcar, too.

    People swimming against the tide don't actually change the tide.
    -
     
