Ethical challenge

Discussion in 'General' started by bwilson4web, Mar 21, 2019.


  1. bwilson4web

    bwilson4web Well-Known Member Subscriber

    Self driving cars, a technical and/or ethical challenge?

    Source: https://www.automotive-iq.com/autonomous-drive/columns/the-only-way-is-ethics-for-self-driving-cars?utm_campaign=AUIQ-NL-190321&utm_medium=email&utm_source=internalemail&MAC=|1-OKOHSCL&elqContactId=13876107&disc=&elqCampId=44241

    . . .
    However, the issue of ethics – and exactly who should take responsibility for what’s right and wrong in a given situation – remains an area that’s up for discussion, and one that shouldn’t be downplayed just because the technology has the potential to drastically reduce accidents. Humans have never allowed an artificially intelligent machine to take the decision as to whether a person lives or dies, and cars are likely to become the first to do it – so ethics can’t be ignored.
    . . .
    So while today’s engineers might develop a specific NVH tune for different markets, they could also tweak algorithms that decide whether the old or the young; fat or thin pedestrian takes the hit. Given advances in in-car sensor tech, the passengers of the vehicle aren’t immune from such decisions either, as the vehicle could decide that you’re the one who has the least to contribute to society.
    . . .
    The response to the fatal crash in Florida involving a Level 3 autonomous Volvo equipped with Uber’s self-driving sensors shows how that opinion might be factually correct, but hard to justify to an end consumer. If customers have no confidence in the decisions that artificial intelligence makes on their behalf, the self-driving vehicle revolution can only reach part of its full potential.
    . . .

    Two Boeing 737s recently crashed, killing all aboard, yet Boeing has proposed a software fix. So too, Tesla is being sued because some teenagers in Florida, driving at a speed in excess of 100 mph, crashed, burned, and died.

    Bob Wilson
     
    Paul K and interestedinEV like this.

  3. interestedinEV

    interestedinEV Well-Known Member

    Good topic. Let me add a wrinkle to it
    https://www.nytimes.com/2019/03/21/business/boeing-safety-features-charge.html
    Doomed Boeing Jets Lacked 2 Safety Features That Company Sold Only as Extras
    As the pilots of the doomed Boeing jets in Ethiopia and Indonesia fought to control their planes, they lacked two notable safety features in their cockpits.

    One reason: Boeing charged extra for them.


    The idea is to make cars as safe as possible. There are limitations in the technology. More can be done, but it also costs money to develop such systems. Manufacturers may be incentivized to keep some safety features as premium products and charge more for them. Is the manufacturer responsible if a feature that could have saved a life was not purchased because of the extra cost? On the other hand, is the manufacturer responsible if the user was not reasonable and prudent? It is not always black and white; there are many shades of grey. Clearly, if the manufacturer knew of a flaw which they could correct but refused to, they could be held responsible (it took a lot of prodding to get the Big 3 to adopt seat belts). If the user did something unreasonable and imprudent (like driving at 100 miles per hour when it is not safe), then the blame should be directed towards the user. What happens if it is a bit of both, or things are not that clear? That is going to become more important as such cars become more commonplace. At Level 5 there is no steering wheel or other controls for the user; they just give a command and the car does everything. But at Level 3, there are warnings which, when ignored, can lead to disaster. If the user chooses to ignore them, then they need at the minimum to share part of the blame. What are the responsibilities of the manufacturer to prevent stupidity? It could be an ethical debate, but I would personally side with the manufacturer.

    This discussion reminds me of the movie "Sully". From what I hear, the movie was not completely factual, as the NTSB was made out to be a villain when they were actually supportive. However, the issue there was that the simulations showed that if a decision had been made to return to LaGuardia immediately after the bird strike, the plane could probably have made it. Once a 30-second delay was introduced to allow for decision making by the pilot, Sully's decision appeared to be the right one. So even with the most sophisticated controls, you still need humans for now. People need to understand and appreciate that, and factor it in when they do something. They cannot wait until the last moment, hoping the systems will prevent them from doing something stupid.
     
  4. bwilson4web

    bwilson4web Well-Known Member Subscriber

    What were the safety features?

    I'm aware of an angle of attack 'mismatch' display or error light.

    Bob Wilson
     
  5. interestedinEV

    interestedinEV Well-Known Member

    Boeing’s optional safety features, in part, could have helped the pilots detect any erroneous readings. One of the optional upgrades, the angle of attack indicator, displays the readings of the two sensors. The other, called a disagree light, is activated if those sensors are at odds with one another.
     
  6. 101101

    101101 Well-Known Member

    The premise is untrue: the US has been using unmonitored drones (situations where the remote pilot is away) to kill people in other countries, and the drone AI makes "kill decisions," with predictably bad results.

    Notice how it's always Florida where these claims are made? That's the state where a judge ruled it was OK for for-profit media firms to be paid to lie to the public about matters of public health and the public interest, and also OK to fire journalists who tried to expose it.

    What is the point of what is being said here? Because insurance companies not only insure stranded-asset petrol firms but also pass along the regular losses those firms incur as part of the terrible business dynamics of fossil fuels (even when massively subsidized), and because they don't want self-driving electric autonomy, we are supposed to let only China build it and leave us in the stone age? I don't think so.
     

  8. Pushmi-Pullyu

    Pushmi-Pullyu Well-Known Member

    The discussion about developing self-driving cars should be about the difficult technical challenges: making them reliable enough to be as safe as possible and to avoid as many accidents as possible, without making them so overly cautious that people get impatient waiting for them to make a decision and shut off the self-driving system.

    This isn't the first time I've seen an attempt to turn the discussion away from the very real technical challenges into a pointless, time-wasting ethical or philosophical discussion, as if the car (or the programmers designing the software for the car) will be faced with moral dilemmas, rather than technical challenges. That's about as meaningful as the proverbial argument over how many angels can dance on the head of a pin.

    However, the example cited above is perhaps the most absurd case I've ever seen! One of the most serious challenges with current self-driving or semi-self-driving cars is to get them to "see" pedestrians at all. And we're supposed to worry about the software making "decisions" based on which pedestrian appears to be older or younger?

    I think we should give whatever source that quote came from a "Doofus of the year" booby prize!

    One Uber semi-self-driving car has already run down a pedestrian who was walking her bicycle across a road late at night, in complete darkness. This was referred to in the OP above, but wrongly described as if it was an ethical problem -- rather than the technical challenge of a self-driving car "seeing" a low-reflection pedestrian (she was dressed entirely in dark clothing, I think?) with a lidar scanner soon enough to react properly by stopping.

    Speaking as a computer programmer, the challenge is to get self-driving cars to reliably spot obstacles (either moving/living ones or stationary ones) and avoid colliding with them. Nobody is trying to design a self-driving system sophisticated enough to differentiate between human obstacles and non-human obstacles, let alone to differentiate one type of human being from another! Practically speaking, fat pedestrians have a larger cross-section, so they will be easier to spot. So I suppose if one wanted to take this absurdity to an even higher level than it's already at, one could claim that self-driving cars will "prefer" to hit skinny pedestrians rather than fat ones.
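
    To make that concrete, here is a minimal, purely hypothetical sketch of what I mean (the names and the size threshold are my own inventions, not any real system's code): the detection rule keys only on the size of whatever the sensors return, with no notion of who or what it is.

    ```python
    # Hypothetical sketch: flag obstacles purely by detected size, never by "who" they are.
    from dataclasses import dataclass

    @dataclass
    class DetectedObject:
        width_m: float    # cross-section reported by the sensor cluster
        height_m: float
        range_m: float    # distance from the vehicle

    # Assumed threshold: anything roughly crawling-infant sized or larger counts.
    MIN_OBSTACLE_SIZE_M = 0.3

    def is_obstacle(obj: DetectedObject) -> bool:
        """Size alone decides; there is no age, weight, or 'contribution to society' input."""
        return max(obj.width_m, obj.height_m) >= MIN_OBSTACLE_SIZE_M

    if __name__ == "__main__":
        for obj in (DetectedObject(0.6, 1.7, 25.0), DetectedObject(0.1, 0.1, 8.0)):
            print(obj, "->", "obstacle" if is_obstacle(obj) else "ignore")
    ```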
    :rolleyes:
     
    Last edited: Mar 22, 2019
  9. Pushmi-Pullyu

    Pushmi-Pullyu Well-Known Member

    But if we are going to discuss the problem of ethical decision making by machine intelligence, let us consider Isaac Asimov's classic "Three Laws of Robotics":
    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

    2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

    As a computer programmer, working with the real-world computers we use today, and not the much more advanced "positronic brain" of Asimov's classic science fiction, I come to the phrase "...a human being", and stop right there.

    How does a robot (and a self-driving car is a robot) recognize what a "human being" is? Can it reliably recognize a human being in a wheelchair, or an amputee hobbling along on crutches? How about a child riding a tricycle, which will certainly change the outline of the shape it's seeing? How can it tell which objects (including stationary vehicles) do or don't have humans inside? Will a robot be able to recognize there's a human inside that "funny animal" costume, or the pedestrian wearing an advertising signboard? How about on Halloween: how can it tell the difference between a child dressed as a witch and a dummy of a witch placed in someone's yard? How could a robot possibly differentiate between a living, breathing human being and a mannequin?

    Thinking about the problem, I soon came to the conclusion that it's impossible for any robot or machine intelligence to be 100% reliable in identifying which objects are humans, or contain humans... and which objects aren't and don't. Given that reality, it's pointless to argue about which type of human being should be given "survival preference" in programming self-driving cars. The challenge isn't in making moral choices; the challenge is in reducing the probability that the car will hit an obstacle: any obstacle that's large enough to have a reasonable possibility of being a baby big enough to be crawling around on its own... and perhaps even a bit smaller, for safety's sake.

    Self-driving cars need to be designed to come as close as possible to eliminating the chance of colliding with anything large enough to be (or to contain) a human being, and if that collision is unavoidable, to reduce speed as much as possible before that collision happens.
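
    As a back-of-the-envelope illustration of "reduce speed as much as possible" (my own assumed numbers, not any manufacturer's control law), the familiar relation v_impact^2 = v0^2 - 2*a*d shows why detection range matters so much more than any "who to hit" question:

    ```python
    import math

    def impact_speed(v0_mps: float, distance_m: float, max_decel_mps2: float = 7.0) -> float:
        """Residual speed at the obstacle under constant full braking.

        v_impact^2 = v0^2 - 2*a*d; returns 0 if the car stops in time.
        The 7 m/s^2 deceleration is an assumed dry-pavement figure, not a spec.
        """
        v_sq = v0_mps ** 2 - 2.0 * max_decel_mps2 * distance_m
        return math.sqrt(v_sq) if v_sq > 0 else 0.0

    # 50 km/h (13.9 m/s) with an obstacle first "seen" 10 m ahead vs. 20 m ahead:
    print(round(impact_speed(13.9, 10.0), 1))   # ~7.3 m/s residual impact speed
    print(round(impact_speed(13.9, 20.0), 1))   # 0.0, stops in time
    ```

    Doubling the distance at which the obstacle is spotted is worth far more than any philosophizing about whom the car should "choose" to hit.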

    No moral or ethical choices need to be considered. The need to make moral or ethical choices would require the robot to have a near-human level of perception and understanding of the world around it. That is far, far beyond the level of sophistication possible with today's robots, which at best are about as "smart" as a middling-smart insect. Maybe if roboticists work very hard for a few more decades, they can get them up to the level of a really smart insect, such as a honeybee.

    Anybody who thinks that robots (and again, that includes self-driving cars) have the perception and true thinking ability of even a tree frog, let alone the perception and thinking ability of your pet cat or dog, has been watching far too many science fiction movies and TV shows. I personally am a science fiction fan, but I know the difference between that and reality.

    [Image: "Smarter than any robot in existence today!"]

     
    manybees likes this.
  10. Pushmi-Pullyu

    Pushmi-Pullyu Well-Known Member

    101101 has once again been reading websites disseminating conspiracy theories, with predictably bad results.
    ;)

    In the real world, drones don't make "kill decisions". The people operating those drones, or the officers overseeing them, make the kill or no-kill decisions.

     
  11. bwilson4web

    bwilson4web Well-Known Member Subscriber

    I'm surprised no one mentioned:
    [Images: the movie "I, Robot"]

    The Will Smith character hated robots because one saved him instead of a drowning child. Thoroughly entertaining movie.

    Bob Wilson
     

  13. Pushmi-Pullyu

    Pushmi-Pullyu Well-Known Member

    Call me a curmudgeonly stick in the mud, but I prefer thinking man's science fiction to not be turned into a movie with comic book physics, in which ordinary mortals perform superhuman acrobatics and jump around as if they are Spider-Man. I can enjoy movies about comic book superheroes on their own level, if done well; but this wasn't one of those.

    My favorite character from the original stories, Dr. Susan Calvin, wasn't in this movie at all. Even worse, they gave her name to a character who was just another pretty face, in a mere supporting role.

    Sadly, Asimov's classic science fiction series of Robot short stories was turned into just another Will Smith action vehicle.
    :(
     
    Last edited: Mar 22, 2019
  14. gooki

    gooki Well-Known Member

    I agree with Pushmi. There is no moral dilemma for self driving cars.

    Human drivers don't seem to face this dilemma, so why would a robot? The goal is to minimise accidents and their impact.

    This whole topic smells like academics trying to secure funding, so they can stay employed.
     
  15. bwilson4web

    bwilson4web Well-Known Member Subscriber

    Speaking as a retired engineer: our ethical failures are too often documented in blood.

    Bob Wilson
     
    interestedinEV likes this.
  16. interestedinEV

    interestedinEV Well-Known Member

    I respectfully disagree. When humans are involved, there are always going to be some areas of grey. Let me give you a few hypothetical situations. I am looking at it holistically; for example, medical ethics covers patients, doctors and providers, pharmaceutical companies, insurance companies, social media, etc. Facebook, for example, is being asked to regulate the anti-vaxxers, which some claim is a violation of free speech, even though there is no scientific proof that vaccines cause autism.

    What could be some ethical dilemmas? Here are some examples:

    1. Manufacturers trying to influence regulators or going around rules. Uber, for example, decided that they would not comply with the rules of the state of California, and the Governor of my state of Arizona welcomed them with open arms and no regulation. An old lady died, killed by a self-driving Uber car. What is Uber's culpability? It is a legal question, but also an ethical one. Uber was in compliance with Arizona law, where there were no reporting requirements, but there would have been red flags based on their performance in California, where Uber had the highest rate of human intervention (about one every 1.3 miles, compared to Waymo, whose rate was far lower), and Uber might not have been allowed to continue testing. Uber deliberately left California because they did not want to comply with the CA rules or report on their performance, which would have stuck out like a sore thumb. Uber is known to push the envelope and take risks, and they did take a risk, which turned out badly for an old woman. Yes, Uber was found not to be criminally liable in Arizona, because they were in compliance with Arizona law. The driver of the car may be charged, but the question is: did Uber act ethically?
    2. The case of Boeing above, where manufacturers try to capitalize on features that could save lives but are considered premium. This can very well happen with cars, where the manufacturer says, "I have standard safety features and this additional one for which I will charge you an arm and a leg." Should manufacturers give up their work for free just because it is a safety feature? No, in which case they would not invest in developing safety features. But should they price it in a way that more people buy it and there is a reasonable profit, or should they maximize profits? That is the ethical dilemma; for non-safety features it would be a pure business decision. (May I remind readers of Lee Iacocca, who argued against airbags for many years and then made a U-turn.)
    3. Should a person who has the controls and the ability to take over the car drive it while under the influence? Let us say I get into my car with a BAC of 0.20 and then tell the car to take me home; in a Level 3 or even Level 4 self-driving car, it should be no big deal, right? What happens if the car issues a warning and asks for human intervention, but the human is incapable of intervening due to a BAC of 0.20? What happens if the driver is not impaired but passes out due to a medical problem? In that case we would say it is an "act of God," but in the first case there could be some ethical issues, as the driver was impaired. In a non-self-driving car, if your BAC is greater than 0.08, you are responsible if you start driving, even if there is no accident. In a self-driving car, you are not technically driving, and the chance you would have to take over could be very small.
    4. Also a broader question: who should develop rules and laws for self-driving cars? Government officials, an independent body, or should the manufacturers self-certify? Boeing was allowed to self-certify. With so many manufacturers and technologies/protocols, how are you going to regulate self-driving cars? There are some ethical questions here that might not be obvious, like conflicts of interest. And many of them are not unique to self-driving cars; they apply to many other situations as well.

    I can say with certainty that we are going to continue to run into ethical issues (others may disagree, but to me the Uber and Boeing cases above had clear ethical dimensions). Wherever there are human interests (money, which is important for business; power for politicians; self-preservation for users; etc.), you are going to have ethical questions. You cannot stop progress because ethical questions may arise later. It is important that we be open to the fact that there can be conflicts of interest between the various stakeholders, and we should discuss them as they happen.
     
  17. interestedinEV

    interestedinEV Well-Known Member

    The more I think of it, @Pushmi-Pullyu and @gooki are trying to answer the question, "Is there something ethically wrong in the development of self-driving cars?" I do not think so, just as there is nothing wrong in advances in medicine. However, as we advance in medical science, different types of ethical questions come up. In theory, if I get into the car drunk and direct it to go to a bar, the car may sense my drunken state, overrule my request, and take me home. That would be a case of the machine deciding what is best for man, justified or not. But I am not going there; that is a different discussion.

    @bwilson4web (if I can speak for you) and I are approaching it from a more tactical level: what are some current and future ethical considerations that come out of the development and use of this self-driving technology, including issues that may not be applicable today but may be in the future. There are different paradigms. There are decisions that humans make that affect the performance of these self-driving cars, and there may be ethical considerations in those decisions. At least for me, that is where I am coming from. Let us say an engineer has to decide whether to add a second sensor, at a cost of $5 per car or $5 million a year, and the simulations show that the second sensor will only help in 1 out of 1 billion cases; should s/he insist on it? You may argue that this situation exists today anyway, but with more software these decisions become more complex and difficult. The stakes are higher in a self-driving car, as you are trusting the machine more than in a regular car.
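
    To put some toy numbers on that $5 sensor question (every figure below is an assumption of mine, purely for illustration, not data from any real program):

    ```python
    # Toy cost-effectiveness sketch for the hypothetical second sensor.
    ANNUAL_COST = 5_000_000                  # $5M/year, from the example above
    TRIPS_PER_YEAR = 1_000_000 * 1_000       # assumed: 1M cars, ~1,000 trips each per year
    P_INCIDENT_AVOIDED_PER_TRIP = 1e-9       # "1 out of 1 billion" read as a per-trip chance

    expected_incidents_avoided = TRIPS_PER_YEAR * P_INCIDENT_AVOIDED_PER_TRIP
    print(f"Expected incidents avoided per year: {expected_incidents_avoided:.1f}")
    print(f"Cost per expected incident avoided: ${ANNUAL_COST / expected_incidents_avoided:,.0f}")
    # With these assumptions the sensor works out to ~$5M per expected incident avoided,
    # which is below the ~$10M 'value of a statistical life' often used in US rulemaking,
    # so "insist on it" is defensible; change the assumptions and the answer flips.
    ```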
     
  18. bwilson4web

    bwilson4web Well-Known Member Subscriber

    I would observe that a single AoA sensor would have been safer: if it failed, the failure would have been unambiguous. The two AoA sensors gave only an illusion of redundancy. Triple redundancy with electronic voting is what mission-quality systems use.
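
    A minimal sketch of that triple-redundancy-with-voting idea (made-up readings and tolerance; real flight systems are far more involved):

    ```python
    import statistics

    def vote_aoa(readings, tolerance_deg: float = 5.0):
        """Median voter for three angle-of-attack readings.

        With three sensors a single bad unit is both outvoted and identified;
        with only two, a disagreement says something is wrong but not which
        sensor to believe.
        """
        assert len(readings) == 3, "triple redundancy expects exactly three inputs"
        voted = statistics.median(readings)
        failed = [i for i, r in enumerate(readings) if abs(r - voted) > tolerance_deg]
        return voted, failed

    # Example: sensor 2 is stuck at a wild value.
    print(vote_aoa([4.8, 5.1, 74.5]))   # (5.1, [2])
    ```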

    Bob Wilson
     
  19. Pushmi-Pullyu

    Pushmi-Pullyu Well-Known Member

    Well of course there are ethical/ moral considerations to humans deciding whether or not to use "robot drivers" instead of human drivers for cars, and you cite some. My point is that those considerations are utterly irrelevant to those designing self-driving cars, and the software needed to control them.

    Here's a hard fact that many aren't willing to face up to: Self-driving cars will never be 100% safe. We can expect them to reduce the accident rate, and in fact I expect regulators won't allow them to be used by the general public until they have been proven to have a lower accident rate than human drivers.
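
    Just to show how big "proven" gets, here is a rough sketch using the statistician's rule of three and an assumed human benchmark of about 1.1 fatalities per 100 million miles (a commonly cited US figure; the numbers are illustrative, not a regulatory standard):

    ```python
    import math

    HUMAN_FATALITY_RATE_PER_MILE = 1.1e-8   # assumed benchmark, roughly the US average

    def miles_to_demonstrate(benchmark_rate: float, confidence: float = 0.95) -> float:
        """Fatality-free miles needed before the upper confidence bound on the
        fleet's rate falls below the human benchmark.

        With zero events in N miles, the one-sided upper bound on the rate is
        -ln(1 - confidence) / N (about 3/N at 95%, the 'rule of three').
        """
        return -math.log(1.0 - confidence) / benchmark_rate

    print(f"{miles_to_demonstrate(HUMAN_FATALITY_RATE_PER_MILE) / 1e6:.0f} million fatality-free miles")
    ```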

    But since fully autonomous cars will never be 100% safe -- that's physically impossible -- we must accept the reality that some people are going to be killed by them. That follows as surely as 1 + 1 = 2. And even worse, it will happen that a person will be killed in circumstances when it's very unlikely that a human driver would have caused the accident, because autonomous cars won't drive exactly like human drivers do. Not that we'd want them to -- we want them to be far safer -- but it's, again, physically impossible to program them to operate exactly like a human driver.

    So let's accept the reality that neither human nor robotic drivers will ever reduce the accident rate to zero, and face the fact that the legality and the liability are going to have to be worked out by society and the courts.

    May I respectfully suggest this is waaaaaaaaay off-topic.

     
  20. interestedinEV

    interestedinEV Well-Known Member

    Again, there are best practices in engineering as in everything else. I was being hypothetical to show that there are ethical decisions engineers can make that could affect EV safety. And there are things that management, politicians, regulators, the community, social media, the press, etc. can do to influence EV design, safety, and operation through unethical behavior.
     
  21. Pushmi-Pullyu

    Pushmi-Pullyu Well-Known Member

    But it's not the engineer who makes that decision. That's a decision for the bean-counters, the marketers, and ultimately the executives running the company. The engineer may develop, or will be part of a team which develops, a feature or bit of software code to improve the safety of self-driving cars. But it's not the engineer who will make the decision about whether or not to include the feature in mass produced cars, and whether or not to make it an option that the buyer has to pay extra for.

    Nor is this in any way a new ethical/ moral dilemma. For example: There was a time when seat belts were optional in cars, not required by law. Was it an ethical consideration for auto makers to charge extra for installation of seat belts? Of course not. Extra safety is worth paying for. We can't expect auto makers to make their cars as safe as possible, regardless of cost. If they did, we'd all be driving tanks or armored cars instead of automobiles! That is, those few who could afford them; the rest of us would have to do without.

    Well here is something that I think is on topic, so I'll respond directly.

    If the car is equipped for at least Level 3 self-driving, then it should be able to safely pull to the side of the road and call for help if the human (non-) driver doesn't respond. If it's Level 4 self-driving, then it shouldn't ever need human intervention. If it does need such intervention, then by definition it has failed to reach Level 4 autonomy.
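
    A toy sketch of that "pull over and call for help" fallback (the state names and timeout are hypothetical, not any manufacturer's actual logic):

    ```python
    from enum import Enum, auto

    class DriveState(Enum):
        AUTONOMOUS = auto()
        TAKEOVER_REQUESTED = auto()
        MANUAL = auto()
        MINIMAL_RISK_MANEUVER = auto()   # pull over, stop, call for help

    def next_state(state: DriveState, driver_responded: bool,
                   seconds_since_request: float, timeout_s: float = 10.0) -> DriveState:
        """If a Level 3 takeover request goes unanswered past the timeout,
        fall back to a safe stop instead of pressing on."""
        if state is DriveState.TAKEOVER_REQUESTED:
            if driver_responded:
                return DriveState.MANUAL
            if seconds_since_request >= timeout_s:
                return DriveState.MINIMAL_RISK_MANEUVER
        return state

    print(next_state(DriveState.TAKEOVER_REQUESTED, driver_responded=False,
                     seconds_since_request=12.0))
    ```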

    Of course, you can think of an edge case where pulling over safely and stopping isn't an option. For example, on a bridge where there is no shoulder or parking lane at all. But one can always come up with edge cases where things are going to go wrong. The same is true of human drivers, too. And again I think it's wrong to approach that as an ethical/ moral problem, but rather as a technical problem which will be a challenge for self-driving system designers and programmers to deal with. You can argue 'till you're blue in the face just who has responsibility, and who should be making decisions about what safety features should or shouldn't be in self-driving cars (or in modern airliners), but that's not going to reduce the accident rate with self-driving cars (or the incidence of airplane crashes) in the slightest.

    In fact, this entire subject seems to be a symptom of the unfortunate human obsession with finger-pointing and finding someone to blame when things go wrong. Well, after you have agreed on who is to blame, which may or may not be a case of finding a scapegoat, you still have to deal with the actual problem.

    For example, in the case of the Uber car which ran down a pedestrian, we can argue 'till doomsday how much of the blame should be assigned to the victim and/or the operator of the Uber car and/or the designer of the car's semi-self-driving system, but figuring out "who is to blame" won't stop another Uber car from running down another pedestrian in similar circumstances.

    The rational approach would be to concentrate on the problem of using sensors to "see" a low optical reflectivity object in the roadway at night. Perhaps more reliance on phased-array radar would help? Or using multiple lidar scanners with different wavelengths of light; would dark clothing have better reflectivity with certain wavelengths? Or perhaps the solution would be merely to use higher power lidar scanners, which would scan the road further ahead and thus give the self-driving car more time to react.
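
    In code, that suggestion amounts to something like an "any modality wins" fusion rule (the sensor names and confidence numbers below are purely illustrative):

    ```python
    def obstacle_present(detections: dict, threshold: float = 0.5) -> bool:
        """detections maps sensor name to a confidence in [0, 1].

        Dark clothing may defeat one lidar wavelength yet still return strongly on
        radar or a thermal camera, so accept the obstacle if ANY modality is confident.
        """
        return any(conf >= threshold for conf in detections.values())

    night_scene = {"lidar_905nm": 0.2, "radar_77ghz": 0.8, "thermal_cam": 0.6}
    print(obstacle_present(night_scene))   # True even though the lidar barely sees her
    ```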

    * * * * *

    I really appreciated something from when I was working for Sprint: the team leader talked about the Japanese approach to resolving the situation when something goes wrong. According to my team leader, there should be an assumption that the situation was 80% the fault of process or procedure, and only 20% the fault of any person or persons involved. I think that's a much healthier and much more productive way to approach significant problems: focus on what went wrong, rather than trying to pin all the blame on one person or a small group of people.

     
  22. interestedinEV

    interestedinEV Well-Known Member

    You have missed the whole point. Robot drivers were just one hypothetical issue. The Uber example was a different ethical issue: Uber was determined to catch up with Waymo, and they went to the place with the least regulation, taking a risk that resulted in a fatality. Engineers taking shortcuts is another one. There could be issues similar to Boeing's. I will elaborate further below.




    I agree with you 100%. However, here is a possible situation, which I had posted earlier and will now put in software terms. In a set of simulations, an engineer comes across a situation that could cause a fatality, and the probability is less than 1 in 100 million. The cost of fixing that defect is a 3-month delay and $3 million extra, because the entire software has to be retested. I decide to fix it in the next release so there are no delays and it gets tested along with other fixes. I know of the defect, I know the fix, and I know the cost. Hopefully nothing happens before the next release. What if something does happen? What if the probability was 1 in 5 million? Where do you draw the line? That is the ethical question. I have been involved in situations where software cannot have a single known flaw. A medical device manufacturer that we worked with decided to postpone a launch when they found a situation that could cause the software to fail due to a user error. Management felt that it was important to fix the software to prevent user misuse, for liability reasons: lawyers could go after the device manufacturer, stating that they knew about this and did nothing. I do not think it is wise to generalize that there are no decisions at the software design level that can affect safety.
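
    To show how much the "where do you draw the line" question turns on that probability, here is a toy calculation with an assumed fleet exposure (the exposure number is mine, purely for illustration):

    ```python
    # Expected incidents if the known defect ships and the fix waits one release.
    TRIPS_DURING_DELAY = 5_000_000       # assumed fleet exposure over the 3-month window

    for p_per_trip in (1e-8, 2e-7):      # "1 in 100 million" vs. "1 in 5 million" per trip
        expected = TRIPS_DURING_DELAY * p_per_trip
        print(f"p = {p_per_trip:.0e}: ~{expected:.2f} expected incidents before the next release")
    # Under these assumptions the 1-in-100-million defect expects ~0.05 incidents during
    # the delay, while the 1-in-5-million defect expects ~1.0; the same deferral decision
    # reads very differently depending on where the probability actually sits.
    ```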

    Again, my friend, here is a fact of life. Lawyers can and will argue that some decisions in software development put human life behind profits. People have sued Tesla already, even when the facts seem to support user error rather than Tesla negligence. There may yet be a case where negligence is the culprit. Take the simple case of the VW emissions scandal: VW altered the software to show compliance when there was no compliance. Could a situation where software is used to cover up non-conformance happen in self-driving vehicles? Yes, it could.


    Possibly, but this is to illustrate that social media could influence the usage of autonomous vehicles. What if someone started a group that disseminated wrong information about self-driving cars on Facebook? Should they be stopped, or are they just exercising their First Amendment rights? That, to me, could be an ethical question.
     
  23. Pushmi-Pullyu

    Pushmi-Pullyu Well-Known Member

    Well okay, fair point. I forgot the thread title was "Ethical challenge."

    Mea culpa.
    :oops:
     
