If the company sent out a product with a known flaw and something went wrong, you can imagine the business consequences, including possible criminal charges, if it came out that the flaw had been known.
So why don't companies go ahead and ship the product with a clear warning about the flaw, with a promise it will be fixed as soon as possible?
This ethical problem has implications way, way beyond just the medical device market. How about prescription drugs? How many potentially beneficial drugs has the FDA blocked from entering the market just because a small percentage of the population has an allergic reaction to them? We don't ban food from the market just because some people have food allergies. With proper labeling, I see no rational or logical reason that a drug should be blocked from the market just because some people are allergic to it. Wouldn't it be better, wouldn't it relieve far more suffering and possibly even save lives, to issue such medicines with a warning to try them initially with a small dose to check for allergic reactions?
Heck, some people even have an allergy to penicillin. Should we ban all varieties of penicillin just because a small percentage of people are allergic to it? Ridiculous.
Uber could have been less aggressive in trying to catch up with Waymo and proceeded more cautiously. Would that have avoided an accident like the one that happened? Your guess is as good as mine.
The fatal accident under discussion here involved a homeless person walking her bicycle across the road in the dead of night, while wearing dark clothing. I'm not sure, but I think she was a "person of color"; that is, she had dark skin, so again that was a low-reflectivity situation, making her hard to spot.
If a human had been driving the car, we wouldn't be having this conversation. Anyone walking across the road when there is traffic, in the middle of the night, wearing dark clothing, is risking being hit and killed by a car. So who is responsible? The victim is!
The only part of that situation that I think is worthy of discussion is the failure of the lidar system to detect the pedestrian in time for the car to stop. I'm shocked at how the Politically Correct Police have tried to make this about how society treats the homeless. I'm not at all defending how our society refuses to provide housing for everyone; I'm just saying that the fact she was homeless is utterly irrelevant to the debate about self-driving cars. It is completely and entirely beside the point.
They tried to get to market faster, and their product was not at the maturity level of their competitor's: their rate of human intervention was much higher at that point in time compared to Waymo's, yet they had more cars on the street than Waymo did. One can do the math about the probabilities of which company is more likely to have an accident.
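To make that "do the math" remark concrete, here is a minimal back-of-the-envelope sketch in Python. Every figure in it (fleet sizes, miles per car, per-mile incident rates) is a hypothetical placeholder rather than real Uber or Waymo data; the only point is that a higher per-mile risk multiplied across a larger fleet compounds quickly.

```python
# Back-of-the-envelope sketch of the "do the math" remark above.
# All numbers are hypothetical placeholders, not real Uber or Waymo figures.

def expected_incidents(fleet_size: int, miles_per_car: float, incidents_per_mile: float) -> float:
    """Expected number of incidents for a fleet over some period."""
    return fleet_size * miles_per_car * incidents_per_mile

# Hypothetical mature system: smaller fleet, lower per-mile incident rate.
mature = expected_incidents(fleet_size=100, miles_per_car=1_000, incidents_per_mile=1e-5)

# Hypothetical less mature system: larger fleet, ten times the per-mile rate
# (using the higher human-intervention rate as a rough proxy for risk).
less_mature = expected_incidents(fleet_size=300, miles_per_car=1_000, incidents_per_mile=1e-4)

print(f"Mature system, smaller fleet:     {mature:.1f} expected incidents")
print(f"Less mature system, larger fleet: {less_mature:.1f} expected incidents")
# With these made-up numbers, the less mature but larger fleet racks up
# roughly 30x the expected incidents over the same period.
```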
Waymo has certainly been far more cautious about exposing "civilians" to risks from their semi-autonomous vehicles. I applaud Waymo for taking more care in using tests on closed courses, and being more incremental in making advancements in their test vehicles.
But the flip side of that is that Waymo's advancements are not significantly reducing the accident rate and are not saving lives. Waymo's self-driving systems are seen only in Waymo's fleet of test cars, not in any mass-produced cars. Tesla's (and GM's, and other auto makers') systems are reducing the accident rate by a meaningful amount every day, and thus are saving lives, because they are being used every day by ordinary drivers; by "civilians".
Which approach is actually more beneficial to society? Perhaps in the long run Waymo's is, if their system is eventually licensed out to major auto makers, and is actually used. But in the short term, Waymo's advances are not of any help to society at all.
Pardon me if I am wrong, but you seem to argue that software cannot be blamed if there are accidents. My take is that there are cases where wrong decisions are made knowingly that can cause death and other liability, and some of these decisions may have
The only time I think an ethical question should be involved is when a company intentionally hides a safety-critical problem, or if it fails to properly test things which need to be tested. The classic example is, of course, GM hiding the ignition switch problem which it knew was causing accidents and even costing lives. But it's unfair to single out GM; Ford was just as bad in not making changes or even acknowledging a safety issue in the Pinto, simply because their bean-counters said it would be cheaper to pay off liability claims (including wrongful death lawsuits) than to correct the problem in production.
As for arguing that software either can or cannot be "blamed"... the thing is that "blame" is an expression of irrational, non-scientific thinking. It's not looking for the cause of an event; it's merely looking for a scapegoat. If you are looking for someone or something to "blame", then you're doing exactly the same thing they did in medieval times when there was a witch hunt. "Something bad has happened, so we need to find someone to blame!" That's not a rational, cause-and-effect way to look at the world. It's a primitive, irrational, lizard-hindbrain way to look at the world, as if everything that goes wrong is the result of someone doing something bad. "Oh, a self-driving Uber car ran down a poor homeless pedestrian; who can we find to blame?" BZZZZZZZZZ! Wrong!
The question should not be "Who or what is to blame?" Unless criminal activity (or criminal negligence) was involved, the question should be "What is the cause, and how can we prevent this from happening again?" As I said earlier: In an investigation following some sort of accident or unfortunate incident or catastrophe, the focus of the investigation should be on the procedure which led to the bad outcome, not the people involved.
Inadequate software may be the cause of an accident with an autonomous or semi-autonomous vehicle. But asserting that software is to "blame" for the accident assumes that there was "something wrong with" the way the software was designed or coded; it wrongly assumes that it was a mistake on the part of some coder or some team. That might properly be a conclusion that an investigation sometimes comes to, but it can never properly, rationally, or logically be a premise for an investigation.
No computer program as complex as is needed for operating an autonomous car can ever possibly be designed to include every possible contingency that could ever happen in real life. Driving a car on public roads involves far, far too many variables. Trying to plan for all possible contingencies would be (a) impossible, because things happen every day that no one ever predicted in advance, and (b) self-defeating, because including logical decision paths for even the rarest of possible events would cause so much code bloat that the program as a whole would wind up running much too slowly for real-time decision making in an autonomous car.
Or to put it another way: Attempting to design self-driving software which could properly react to every possible situation would wind up making the operation far less safe, because it would react far too slowly to prevent accidents. The objective of the software design team should be to find the happy medium which would provide the lowest accident rate.
Note this means accepting the reality that with people riding on highways in vehicles moving at high speed, in close proximity to other vehicles also moving at high speeds, accidents, sometimes fatal accidents, are inevitably going to occur. That's not an ethical/moral problem; it's just physics.
That is the hard reality which far too many people are not going to be able to accept. Especially not in a society in which people increasingly see themselves as "victims" rather than adults who are responsible for their own behavior. A society in which a jury will award a woman millions of dollars when she put a hot cup of McDonald's coffee between her legs in a moving car and pulled the lid off so it would cool faster... then sued McDonald's when her almost insanely foolish actions led to her being severely scalded.
The Politically Correct Police characterize this as "blaming the victim", as if that was inherently wrong. Well, in the real world, "victims" are quite often the cause of their own misfortune or death. Just like that "poor homeless woman" walking across the street and into oncoming traffic in the dead of night, while wearing dark clothing. She could have waited until there was enough of a gap in traffic for her to safely cross. But she didn't; so why "blame" that on Uber, or the Uber driver, or the Uber car's lidar system, or the software controlling the Uber car?
If you must waste time finding someone to "blame", rather than rationally, logically, and practically looking for the actual cause of the accident, then blame the victim. Because if blame must be assigned, it should be assigned to her.