Ethical challenge

Discussion in 'General' started by bwilson4web, Mar 21, 2019.


  1. Pushmi-Pullyu

    Pushmi-Pullyu Well-Known Member

    Okay, but how many people could that medical device have saved if it had gone into manufacture sooner, even with a software flaw which would fail in rare conditions?

    This is very relevant to self-driving cars. A large number of comments have been posted to InsideEVs News articles asserting that self-driving systems should not be enabled in cars sold to the public until they are 100% safe. But as I pointed out, this is an impossible goal.

    My argument is that waiting for 100% safety, or even 99.99% safety, is counter-productive. It's a particularly bad case of "the perfect driving out the good." It's like arguing that since some air bags explode and injure or even kill passengers, we should remove all air bags from cars. That's not an ethical or moral argument; it's merely stupid. It's the lizard hindbrain responding with fear instead of reason.

    But the situation with semi-self-driving cars isn't that black-and-white. There is IMHO a real issue there which -- unlike most issues raised in this thread -- is actually worthy of discussion. Semi-self-driving cars lull the driver into a false sense of security, allowing him/her to relax, lose focus, and stop watching the road properly. This can actually raise the probability of an accident under certain circumstances, as has already been shown with Tesla cars using Autosteer.

    Should we quit using systems such as Tesla Autosteer or GM Super Cruise, automated lane-keeping systems which allow the driver's mind to drift away from watching the road? Arguably, if Tesla Autosteer and GM Super Cruise reduce the overall accident rate, then that additional risk is worth accepting.

     

  3. Pushmi-Pullyu

    Pushmi-Pullyu Well-Known Member

    There will always need to be some limits to free speech. It's rather difficult to refute the entirely valid observation that free speech doesn't give you the right to shout "FIRE!" in a crowded theatre when there isn't any fire.

    Is the situation with anti-vaxxers spreading their science-made-stupid fear-mongering on social media the equivalent of yelling "FIRE!" in a crowded theatre? I won't claim it's as clear-cut, but I can see a pretty good argument that it's a similar case. The anti-vaxxer movement does represent a clear and present, easily demonstrated, risk to public health. As was pointed out in a discussion of the subject I saw recently on a TV news/commentary show, those supporting vaccination tend not to see it as a high-priority issue. People generally don't get all that excited about the benefits of vaccination. So those shouting loudest and most persistently about vaccinations on social media tend to be the anti-vaxxers, who unfortunately have made a surprisingly high number of converts to their cause, to the point that a significant percentage of parents in some areas of some States are now refusing to vaccinate their kids.

    Personally, I think the rational response to such actions would be to designate some islands somewhere, where families refusing to vaccinate their kids and adults refusing to vaccinate themselves against infectious diseases, would be invited to move if they want to continue an activity which endangers everyone around them. And they shouldn't be given the choice of refusing such an "invitation."

    Those who live in mainstream society have to abide by the rules of society, even when they find it inconvenient or have some invented or pretended "ethical or moral" objection to following the rules. That's why we have policemen, and it's why vaccination should be mandatory everywhere.

    And there will come a day when people will be forced to use self-driving cars, because riding in a human-driven car is probably the most dangerous thing which most people now do on an everyday basis. There will come a day when a human driving a car on public roads is illegal, although quite possibly with some exceptions.

    * * * * *

    Here's an excerpt from Larry Niven's short story "Flatlander", set on a future Earth where everybody travels by teleportation booths or, for sightseers, robotically piloted flying cars.

    * * * * *

    I remember the freeways.

    They were the first thing that showed coming in on Earth. If we'd landed at night, it would have been the lighted cities, but of course we came in on the day side. Why else would a world have three spaceports? There were the freeways and autostradas and autobahns, strung in an all-enclosing net across the faces of the continents.

    From a few miles up you still can't see the breaks. But they're there, where girders and pavement have collapsed. Only two superhighways are still kept in good repair. They are both on the same continent: the Pennsylvania Turnpike and the Santa Monica Freeway. The rest of the network is broken chaos.

    It seems there are people who collect old groundcars and race them. Some are actually renovated machines, fifty to ninety percent replaced; others are handmade reproductions. On a perfectly flat surface they'll do fifty to ninety miles per hour.

    I laughed when Elephant told me about them, but actually seeing them was different.

    The rodders began to appear about dawn. They gathered around one end of the Santa Monica Freeway, the end that used to join the San Diego Freeway. This end is a maze of fallen spaghetti, great curving loops of prestressed concrete that have lost their strength over the years and sagged to the ground. But you can still use the top loop to reach the starting line. We watched from above, hovering in a cab as the groundcars moved into line.

    "Their dues cost more than the cars," said Elephant. "I used to drive one myself. You'd turn white as snow if I told you how much it costs to keep this stretch of freeway in repair."

    "How much?"

    He told me. I turned white as snow.

    They were off. I was still wondering what kick they got driving an obsolete machine on flat concrete when they could be up here with us. They were off, weaving slightly, weaving more than slightly, foolishly moving at different speeds, coming perilously close to each other before sheering off -- and I began to realize things.

    Those automobiles had no radar.

    They were being steered with a cabin wheel geared directly to four ground wheels. A mistake in steering and they'd crash into each other or into the concrete curbs. They were steered and stopped by muscle power, but whether they could turn or stop depended on how hard four rubber balloons could grip smooth concrete. If the tires lost their grip, Newton's first law would take over; the fragile metal mass would continue moving in a straight line until stopped by a concrete curb or another groundcar.

    "A man could get killed in one of those."

    "Not to worry," said Elephant. "Nobody does, usually."

    "Usually?"

    The race ended twenty minutes later at another tangle of fallen concrete. I was wet through. We landed and met some of the racers. One of them, a thin guy with tangled, glossy green hair and a bony white face with a widely grinning scarlet mouth, offered me a ride. I declined with thanks, backing slowly away and wishing for a weapon. This joker was obviously dangerously insane.
     
    Last edited: Mar 23, 2019
  4. bwilson4web

    bwilson4web Well-Known Member Subscriber

    In 1980-81, I wrote a VAX/VMS device driver for the Philco-Ford "A Channel", a NASA network device that handled packet communications for satellite missions. One part of the code, error handling, was nearly impossible to test because you had to "break" either the hardware or trick it into reporting an error. So I missed something.

    About 6-8 months later, I was testing an unusual configuration of two VAXes that involved sharing disk drives between the processors. A VAX crashed and the dump pointed to my device driver. I saw where the error handling routine was not saving the processor context properly. I figured out the code to fix the problem and took it to the Configuration Change Board ... they rejected it because 'it was not a normal operational configuration.'

    Ten years later, I was changing companies and my hiring manager asked, 'Do you know anything about VAX channels?' I took a busman's holiday and visited with the tech team looking at the problem, who had the source code and dump. Sure enough, it was my device driver and the error appeared to be associated with the error handling routine. I proposed a fix and the next week started my new job ... silence. After a couple of days, I contacted my technical peer on the tech team and they had implemented the fix, which worked.

    I was and remain truly sorry that I'd missed that problem in the original device driver. But when the Configuration Control Board turned down my fix, there was nothing more I could do. Ten years later, resolving the problem 'made my reputation'. It was not by design nor deliberation ... an accident of fate. But ethically, I never felt right about it.

    Bob Wilson
     
  5. Pushmi-Pullyu

    Pushmi-Pullyu Well-Known Member

    Doesn't seem to have anything to do with self-driving cars. As I recall, Tesla Autopilot + AutoSteer won't allow the speed to be set more than 5 MPH above the existing speed limit. So if they were indeed driving at 100+ MPH, then the car couldn't have been under control of Autopilot + AutoSteer.

     
  6. Pushmi-Pullyu

    Pushmi-Pullyu Well-Known Member

    Sometimes chance screws you over. Sometimes chance favors you. Don't feel guilty when good luck happens to you; it will likely be balanced by bad luck at other times!

    Finagle's Law: The perversity of the universe tends towards a maximum.

     
    bwilson4web likes this.

  8. gooki

    gooki Well-Known Member

    I thought we were addressing the above.

    Those scenarios aren't really unique to self-driving cars; our society handles them today, and will handle them in the future.

    To be clear, I'm coming from a functional perspective, where I do not believe there are ethical problems to overcome for self-driving vehicles when in operation. I.e., in the event of a predicted impact, should the car hit this person/vehicle or that person/vehicle because of ethics? That problem doesn't exist. The problem is which available action has the lowest risk probability - then do that action.
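
    A minimal sketch of that framing, purely illustrative (the maneuver names, probabilities, and severities are hypothetical, not from any vendor's planner): score each available maneuver by its estimated probability of impact times the expected severity of that impact, then pick the lowest.

        from dataclasses import dataclass

        @dataclass
        class Maneuver:
            name: str
            collision_probability: float  # estimated chance this action ends in an impact
            expected_severity: float      # estimated harm if that impact happens (0..1)

        def expected_risk(m: Maneuver) -> float:
            # Expected harm = probability of impact x severity of that impact.
            return m.collision_probability * m.expected_severity

        def choose_maneuver(options: list[Maneuver]) -> Maneuver:
            # Take whichever available action carries the lowest expected risk.
            return min(options, key=expected_risk)

        options = [
            Maneuver("brake hard in lane", 0.9, 0.2),
            Maneuver("swerve left", 0.4, 0.7),
            Maneuver("swerve right", 0.3, 0.9),
        ]
        print(choose_maneuver(options).name)  # -> "brake hard in lane"

    The point of the sketch is that nothing in it asks an ethical question; it just minimizes a risk estimate.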
     
  9. interestedinEV

    interestedinEV Well-Known Member

    Exactly the type of experience I am talking about. I had a similar experience where there was a question as to whether or not to delay. The general consensus was no, do it next time, but there was a business person who insisted on taking it to someone in finance, who got legal involved, and it was the potential liability issue that swayed the decision to delay.

    @Pushmi-Pullyu, as @bwilson4web states, even though the decision was made by someone else, he felt regret over it. Again, when I say Engineer, I mean the whole collective organization that makes a decision. The problems GM had with the ignition switch were known to a lot of people, yet they failed to act until forced.



    Q.E.D., or "quod erat demonstrandum", something I learned in a math class many years back. Exactly my point: this is the type of ethical decision that could come up. If the company sent out a product with a known flaw and something went wrong, you can imagine the business consequences, including possible criminal charges, if that were to become known. Lawyers could dramatize the situation by talking about how corporate greed led to that decision. But there is also a cost to delaying. So do you err on the side of caution, or do what might be best overall? This company felt that they would rather be safe than sorry, as their investors would want it that way. Uber could have been less aggressive in trying to catch up with Waymo and proceeded more cautiously. Would that have avoided an accident like the one that happened? Your guess is as good as mine. They tried to get to market faster and their product was not at the maturity level of their competitor's (their rate of human intervention was much higher at that point in time compared to Waymo's, but they had more cars on the street than Waymo; one can do the math about the probabilities of who is likely to have an accident).



    Pardon me if I am wrong, but you seem to argue that software cannot be blamed if there are accidents. My take is that there are cases where wrong decisions are made knowingly that can cause death and other liability, and some of these decisions may have ethical implications.
     
  10. interestedinEV

    interestedinEV Well-Known Member

    I believe we are arguing the same thing. You cannot hinder progress just because there may be problems down the road. As @gooki, quoted below, says, we will handle them in the future. I would add that they should be handled in a fair and just manner.


    In all fairness, I did point out that not all of these are exclusive to self-driving cars. That said, there could be nuances particular to self-driving car technology that will crop up in the future.


    I think we are saying the same thing. There is no ethical problem in developing and implementing self-driving car technology. At the spur of the moment, man or machine has to make a decision based on the facts at hand and their judgement. It has been done for centuries (I am sure the same issues arose with chariots and horse-drawn carriages, though perhaps less intensely due to lower speeds). There is no ethical issue (IMHO) in putting out a product that has been well tested and is, by any reasonable interpretation, safe, even if it is not 100% safe or cannot be proven to be 100% safe. The ethical issues come when a group of people make a decision where safety issues take a deliberate back seat to other considerations. It does not mean that the decision is wrong; it just points to an ethical dilemma that needs to be resolved. I am all for self-driving car technology and would like to see its widespread acceptance.

    And just to point out, in the town I live in, I have to share the roads with the Waymo cars every day (and had to with the Uber cars and the Intel cars for a short time). So I have a healthy respect for them, and have never seen them behave in a manner that would cause concern, unlike some of the other drivers. They have neither cut me off, nor have I seen them fail to follow posted instructions. That does not mean that I do not get frustrated at times when I am behind a Waymo car and it follows all the rules to a T, even when there is room for some leeway.:D
     
  11. gooki

    gooki Well-Known Member

    Cool, so we're in agreement.

    1. The software that controls self driving cars doesn't need to make ethical judgements.

    2. The ethical debate is around the human aspects of creating it, and the social implications of self-driving vehicles.
     
    Walt R likes this.

  13. interestedinEV

    interestedinEV Well-Known Member

    Correct. Maybe a day will come when we have robots that can make decisions using ethical considerations, but today and for now, there is no expectation that the software in cars will make ethical judgements, or is even capable of doing so. The ethical issues revolve around the people who create, manage, regulate, or use not just the software, but all aspects of the self-driving vehicle.
     
  14. interestedinEV

    interestedinEV Well-Known Member

    That is what happened with the Uber vehicle that killed the lady in Tempe. The driver was distracted, watching a movie or something like that. The argument is that if that driver had been watching the road, she could have reacted and prevented the tragedy. The software could not perform effectively in low-light conditions, and Uber had not been cleared for hands-free driving. There was a backup driver for exactly these types of situations. However, Uber knew that their drivers were not being careful, and so while they may not be criminally liable, in a civil trial the result could have been different (the case was resolved out of court). So the ethical question is: "Should Uber have continued testing when they had an accident nearly every other day (see reference below), even though only one was fatal?" You might argue that, as tragic as it was, Uber was right to continue testing, since without testing you cannot improve further. Others might disagree, saying Uber was not right to continue, especially when they were aware of this very issue. This is the essence of the ethical debate. Even if I said Uber was wrong, I am not saying self-driving vehicles should not be tested. Waymo is also in my town; they have been there much longer and have not had such problems. There is a right way to do things, and Waymo seems to be doing it the right way. I am sure others are also doing it the right way.



    http://fortune.com/2018/12/11/uber-self-driving-car-accident/

    Just days before a self-driving Uber vehicle killed a pedestrian earlier this year in Tempe, Ariz., a manager in the ride-sharing company’s autonomous vehicle division sent executives an 890-word email warning of safety concerns, TechCrunch reported.

    The email was sent to the division’s head, along with other top executives and lawyers. The warning was made public by The Information, which verified the claims after speaking with current and former Uber employees.


    Robbie Miller, who was an operations manager for the unit’s testing operations at the time, wrote in his email the unit needed to “work on establishing a culture rooted in safety.” Miller warned that “cars are routinely in accidents resulting in damage” and added that backup drivers, who sit behind the wheel in case of an emergency, were not properly trained to handle the vehicles when there were safety issues.


    According to Miller, “a car was damaged nearly every other day in February” and was “usually the result of poor behavior of the operator or the AV technology.” He also said repeated infractions would rarely result in terminating backup drivers, several of whom “appear to not have been properly vetted or trained.”


    Just five days after Miller alerted executives to these safety concerns, an autonomous car operated by Uber, which also had a backup driver, killed a woman while the company was testing the program in Arizona, the New York Times reported. Following the incident, Uber suspended its driverless car testing operations in Tempe and Pittsburgh.
     
  15. Pushmi-Pullyu

    Pushmi-Pullyu Well-Known Member

    So why don't companies go ahead and ship the product with a clear warning about the flaw, with a promise it will be fixed as soon as possible?

    This ethical problem has implications way, way beyond just the medical device market. How about prescription drugs? How many potentially beneficial drugs has the FDA blocked from entering the market just because a small percentage of the population has an allergic reaction to it? We don't ban food from the market just because some people have food allergies. With proper labeling, I see no rational or logical reason that a drug should be blocked from the market just because some people are allergic to it. Wouldn't it be better, wouldn't it relieve far more suffering and possibly even save lives, to issue such medicines with a warning to try it initially with a small dose to check for allergic reactions?

    Heck, some people even have an allergy to penicillin. Should we ban all varieties of penicillin just because a small percent of people are allergic to it? Ridiculous.

    The fatal accident under discussion here involved a homeless person walking her bicycle across the road in the dead of night, while wearing dark clothing. I'm not sure, but I think she was a "person of color"; that is, she had dark skin, so again that was a low reflectivity situation, making her hard to spot.

    If a human had been driving the car, we wouldn't be having this conversation. Anyone walking across the road when there is traffic, in the middle of the night, wearing dark clothing, is risking being hit and killed by a car. So who is responsible? The victim is!

    The only part of that situation that I think is worthy of discussion is the failure of the lidar system to detect the pedestrian in time for the car to stop. I'm shocked how the Politically Correct Police have tried to make this about how society treats the homeless. I'm not at all defending how our society refuses to provide housing for everyone; I'm just saying that the fact she was homeless is utterly irrelevant to the debate about self-driving cars. It is completely and entirely beside the point.

    Waymo has certainly been far more cautious about exposing "civilians" to risks from their semi-autonomous vehicles. I applaud Waymo for taking more care in using tests on closed courses, and being more incremental in making advancements in their test vehicles.

    But the flip side of that is that Waymo's advancements are not significantly reducing the accident rate, and are not saving lives. Waymo's self-driving systems are seen only in Waymo's fleet of test cars, not in any mass-produced cars. Tesla's systems (and GM's, and other auto makers') are reducing the accident rate by a meaningful amount every day, and thus are saving lives, because they are being used every day by ordinary drivers; by "civilians".

    Which approach is actually more beneficial to society? Perhaps in the long run Waymo's is, if their system is eventually licensed out to major auto makers, and is actually used. But in the short term, Waymo's advances are not of any help to society at all.

    The only time I think an ethical question should be involved is when a company intentionally hides a safety-critical problem, or if it fails to properly test things which need to be tested. The classic example is, of course, GM hiding the ignition switch problem which it knew was causing accidents and even costing lives. But it's unfair to single out GM; Ford was just as bad in not making changes or even acknowledging a safety issue in the Pinto, simply because their bean-counters said it would be cheaper to pay off liability claims (including wrongful death lawsuits) than to correct the problem in production.

    As for arguing that software either can or cannot be "blamed"... the thing is that "blame" is an expression of irrational, non-scientific thinking. It's not looking for the cause of an event; it's merely looking for a scapegoat. If you are looking for someone or something to "blame", then you're doing exactly the same thing they did in medieval times when there was a witch hunt. "Something bad has happened, so we need to find someone to blame!" That's not a rational, cause-and-effect way to look at the world. It's a primitive, irrational, lizard-hindbrain way to look at the world, as if everything that goes wrong is the result of someone doing something bad. "Oh, a self-driving Uber car ran down a poor homeless pedestrian; who can we find to blame?" BZZZZZZZZZ! Wrong!

    The question should not be "Who or what is to blame?" Unless criminal activity (or criminal negligence) was involved, the question should be "What is the cause, and how can we prevent this from happening again?" As I said earlier: In an investigation following some sort of accident or unfortunate incident or catastrophe, the focus of the investigation should be on the procedure which led to the bad outcome, not the people involved.

    Inadequate software may be the cause of an accident with an autonomous or semi-autonomous vehicle. But asserting that software is to "blame" for the accident assumes that there was "something wrong with" the way the software was designed or coded. Wrongly assumes that it was a mistake on the part of some coder or some team. That might properly be a conclusion that an investigation might sometimes come to, but can never properly, rationally, or logically be a premise for an investigation.

    No computer program as complex as is needed for operating an autonomous car can ever possibly be designed to include every possible contingency that could ever happen in real life. Driving a car on public roads involves far, far too many variables. Trying to plan for all possible contingencies would be (a) impossible, because things happen every day that no one ever predicted in advance, and (b) counter-productive, because including logical decision paths for even the rarest of possible events would cause so much code bloat that the program as a whole would wind up running much too slowly for real-time decision making in an autonomous car.

    Or to put it another way: Attempting to design self-driving software which could properly react to every possible situation would wind up making the operation far less safe, because it would react far too slowly to prevent accidents. The objective of the software design team should be to find the happy medium which would provide the lowest accident rate. Note this means accepting the reality that with people riding on highways in vehicles moving at high speed in close proximity to other vehicles also moving at high speeds, accidents, sometimes fatal accidents, are inevitably going to occur. That's not an ethical/moral problem; it's just physics.

    That is the hard reality which far too many people are not going to be able to accept. Especially not in a society in which people increasingly see themselves as "victims" rather than adults who are responsible for their own behavior. A society in which a jury will award a woman millions of dollars when she put a hot cup of McDonald's coffee between her legs in a moving car, and pulled the lid off so it would cool faster... then sued McDonald's when her almost insanely foolish actions led to her being severely scalded.

    The Politically Correct Police characterize this as "blaming the victim", as if that was inherently wrong. Well, in the real world, "victims" are quite often the cause of their own misfortune or death. Just like that "poor homeless woman" walking across the street and into oncoming traffic in the dead of night, while wearing dark clothing. She could have waited until there was enough of a gap in traffic for her to safely cross. But she didn't; so why "blame" that on Uber, or the Uber driver, or the Uber car's lidar system, or the software controlling the Uber car?

    If you must waste time finding someone to "blame", rather than rationally, logically, and practically looking for the actual cause of the accident, then blame the victim. Because if blame must be assigned, it should be assigned to her.
     
    Last edited: Mar 24, 2019
  16. Pushmi-Pullyu

    Pushmi-Pullyu Well-Known Member

    I'm glad that we have a consensus on that, at least!
    :)
     
  17. Pushmi-Pullyu

    Pushmi-Pullyu Well-Known Member

    Yeah, I thought it was inevitable that there would be frustration expressed over autonomous cars scrupulously following traffic laws. Heavens, Waymo's self-driving taxis actually follow the speed limits, never going even 1 MPH above the posted limits! How infuriating!
    ;)

    But seriously, in the real world that can in rare cases lead to hazardous driving conditions. For example, in the Greater Kansas City area where I live, there is one section of freeway (part of the Interstate loop going around the city) where the average speed of traffic is 15 MPH over the speed limit! And if you think I'm exaggerating... then all I can say is, you haven't driven there, and I have. I used to drive it every day. If you try to drive the posted speed, or even 5 MPH above the posted speed, then almost everybody -- and I mean, like 98-99% of the vehicles on the road, including tractor-trailers -- will have to go around you, often at a fairly high difference in speeds. That's not safe.

    I think one can rationally argue that this indicates the posted speed limit on that section of Interstate is far too low, so that's the situation which actually needs to be corrected. But that doesn't address the fact that self-driving cars aren't going to go above the posted speed limit, or at least not much above it. (Tesla allows cars using AutoSteer to go 5 MPH above the posted limit, or at least they did the last time I read a report on that.)
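
    To make that behavior concrete, here is a trivial sketch of the kind of clamping being described. It is my own illustration, not Tesla's actual implementation; the 5 MPH allowance is just the figure from the report mentioned above.

        def autosteer_set_speed(posted_limit_mph: float, driver_request_mph: float,
                                max_over_limit_mph: float = 5.0) -> float:
            # Clamp the driver's requested cruise speed to the posted limit
            # plus a fixed allowance (5 MPH in this example).
            return min(driver_request_mph, posted_limit_mph + max_over_limit_mph)

        print(autosteer_set_speed(55, 70))  # -> 60.0, even though the driver asked for 70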

     
  18. Pushmi-Pullyu

    Pushmi-Pullyu Well-Known Member

    Well, there certainly is an ethical quandary there. Should civilians be exposed to test driving of semi-autonomous vehicles? Is it better for society as a whole to accept the inevitability of an occasional traffic accident resulting in permanent injury or death as part of developing self-driving cars?

    But I don't think the case in question is the right case. In the circumstances, Uber should not have been continuing the tests under those conditions. Either they should have restricted the testing to closed driving courses, without any risk to "civilians", or else Uber should have installed cameras inside the cars to monitor the (non-)drivers who were supposed to be watching the road, but weren't... and started firing them until they got some drivers who were actually doing what they were paid to do, and not texting or surfing the internet on their cell phones.

    I note that pilots are required to undergo hundreds or thousands of hours of training before they are allowed to fly solo, and that part of that is training them to maintain situational awareness, to keep "watching the skies", even when an autopilot is controlling the plane. Arguably Uber should be doing the same with those hired to monitor their experimental self-driving cars. I strongly suspect Uber doesn't want to take the time or spend the money for that level of training... and that most certainly is an ethical issue.

    It might be interesting to expand that particular ethical question to a more general one. Some people call Tesla's semi-self-driving system a "beta" system, and argue that ordinary (or "civilian", as I'm calling them) Tesla car drivers are being used as beta testers. They say that this is exposing the general public to unwarranted risk.

    I think there is a valid debate there. Is that risk to the public "warranted" or "unwarranted"? I can see reasonable arguments on both sides.

     
  19. interestedinEV

    interestedinEV Well-Known Member

    The answer, my friend, is simple: some businesses may decide the risk is not worth the "vagaries of the jury system". Others may be ready to push the envelope. It depends on the company, the management, the regulators and their past experience. I remember, several years ago, a lawyer convinced a jury that McDonald's was at fault when a lady put a cup of coffee between her thighs (her car did not have a cup holder at her seat) and the coffee scalded her.
     
  20. interestedinEV

    interestedinEV Well-Known Member

    I think we have beaten this topic to death, but at least we had an insightful and respectful conversation.
     
  21. Pushmi-Pullyu

    Pushmi-Pullyu Well-Known Member

    Or to put it another way: Because their lawyers won't let them, and/or because we live in a litigious society.

    So we could debate tort reform, if you like. ;) But again, that's nothing unique to the automotive industry or to the subject of self-driving cars.

     
  22. bwilson4web

    bwilson4web Well-Known Member Subscriber

    I would add that the Volvo accident avoidance system had been turned off. There were reports that the Volvo system might have avoided the fatal impact.

    Not meant as a last word: I appreciate the quality of this thread. I'm not sure it rates a sticky here, but it has value. Perhaps @Domenick might review this suggestion?

    Bob Wilson
     
    Last edited: Mar 25, 2019
  23. interestedinEV

    interestedinEV Well-Known Member

    Just when I thought this discussion was done, I chanced to see this article. The European Union has guidelines on Ethical AI (which would include driverless cars):

    https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai

    While it is only a start, it does raise some interesting questions which could become relevant. For example, what if a driverless taxi were programmed not to pick up a person because of race? It is good that someone is thinking about these issues.


    According to the guidelines, trustworthy AI should be:


    (1) lawful - respecting all applicable laws and regulations


    (2) ethical - respecting ethical principles and values


    (3) robust - both from a technical perspective while taking into account its social environment


    The guidelines put forward a set of 7 key requirements that AI systems should meet in order to be deemed trustworthy. A specific assessment list aims to help verify the application of each of the key requirements. (I have shortened the article, so you may want to read the original; a small illustrative sketch follows the list.)


    • Human agency and oversight: AI systems should empower human beings, allowing them to make informed decisions and fostering their fundamental rights.
    • Technical Robustness and safety: AI systems need to be resilient and secure.
    • Privacy and data governance: besides ensuring full respect for privacy and data protection, adequate data governance mechanisms must also be ensured, taking into account the quality and integrity of the data, and ensuring legitimised access to data.
    • Transparency: the data, system and AI business models should be transparent.
    • Diversity, non-discrimination and fairness: Unfair bias must be avoided, as it could have multiple negative implications, from the marginalization of vulnerable groups, to the exacerbation of prejudice and discrimination.
    • Societal and environmental well-being: AI systems should benefit all human beings, including future generations. It must hence be ensured that they are sustainable and environmentally friendly.
    • Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.
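
    As promised above, here is a toy sketch (my own illustration, not the EU's assessment format) of how those seven requirements could be turned into a simple checklist that flags which ones still lack supporting evidence:

        # The seven requirements listed above, as a simple checklist.
        TRUSTWORTHY_AI_REQUIREMENTS = [
            "Human agency and oversight",
            "Technical robustness and safety",
            "Privacy and data governance",
            "Transparency",
            "Diversity, non-discrimination and fairness",
            "Societal and environmental well-being",
            "Accountability",
        ]

        def assessment_gaps(evidence: dict[str, bool]) -> list[str]:
            # Return the requirements for which no supporting evidence was recorded.
            return [r for r in TRUSTWORTHY_AI_REQUIREMENTS if not evidence.get(r, False)]

        # Hypothetical example: a system documented only for transparency and accountability.
        print(assessment_gaps({"Transparency": True, "Accountability": True}))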
     
