Part 1
The tricky part about morality is that it varies from person to person. You and I almost certainly have different standards for it, and it varies from country to country too. In fact, there are many competing moral frameworks out there, such as cultural relativism, ethical egoism, and altruism, plus plenty more that I have never even heard of. So how would a computerized car handle all of this? Would it have an option to toggle which morality it follows? Would you pre-set the car to a specific moral code before shipping it to each market country? The problem is that there is no universal moral code that we humans follow, and we surely won't all agree on one in the next few years.

A while back, a professor of computing and information science named Iyad Rahwan set out to learn what people actually think about the ethics of cars. After conducting a survey, he concluded that people did want self-driving cars to protect pedestrians first, but the same respondents said they would not buy a vehicle programmed to do so. This seems pretty obvious. You don't buy a car knowing there's a good chance it will kill you on purpose. If you wanted to die, you wouldn't buy a ticket to your own grave. That's just absurd. You'd buy a ticket to Switzerland. But seriously, you wouldn't want to buy something you know is going to sacrifice you instead of the jaywalker playing Frogger across the highway. Think about it this way: I give you 100 Skittles and tell you that 3 of them are poisonous. Surely you wouldn't eat them now; even grabbing a random handful of ten gives you roughly a one-in-four chance of biting into a poisoned one. So the problem of car ethics ultimately falls into the hands of the car manufacturers.

My thoughts on this are as follows: any time humans are involved in moral dilemmas, there is always some level of subjectivity. So if a computer were in charge, it would need to be purely objective: look only at the facts, which in most cases means counting the number of people involved and choosing whatever minimizes the damage. To me, this whole car morality thing still sounds like a logistical nightmare though, since each company will end up with its own programming and ethical standards. Suppose the president is walking down the road and in the other lane is a van filled with five criminals. If the A.I. were as objective as possible, it would kill the president, and the criminals would live to see another day (there's a toy sketch of exactly this failure at the end of this part). That is just an awful compromise. You could also say the car's morality is really just the morality of the programmer responsible for designing its ethical dilemmas. All that can be said is that you simply cannot please everyone.

The aforementioned professor also ran an experiment called the Moral Machine, in which users from all around the world took surveys about ethical dilemmas. It attracted more than a million participants, and, unsurprisingly, people from each continent made different choices when it came to saving the elderly or saving children. Again, you simply cannot please everyone, and that is the major roadblock slowing autonomous vehicles from being mass produced.
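To make that "purely objective" rule concrete, here is a minimal toy sketch in Python. To be clear, every name in it is made up for illustration, and no manufacturer has published logic like this; it simply implements the headcount rule described above, pre-set with the kind of "moral code" policy toggle I wondered about earlier, and it reproduces the president problem exactly, because the rule counts heads and nothing else.

```python
# Toy sketch only: a hypothetical "objective" collision chooser.
# Nothing here reflects any real vehicle's software.

from dataclasses import dataclass

@dataclass
class Outcome:
    description: str      # what the car would do
    people_harmed: int    # the only "fact" this rule looks at

def choose_outcome(outcomes, policy="minimize_harm"):
    """Pick an outcome under a pre-set policy (the hypothetical
    per-market 'moral code' toggle). Only one policy is sketched here."""
    if policy == "minimize_harm":
        # Purely objective: count heads, ignore whose heads they are.
        return min(outcomes, key=lambda o: o.people_harmed)
    raise ValueError(f"unknown policy: {policy}")

# The president dilemma from above: a headcount rule cannot see rank,
# guilt, or anything else about the people involved.
dilemma = [
    Outcome("swerve and hit the lone pedestrian (the president)", 1),
    Outcome("stay in lane and hit the van of five criminals", 5),
]
print(choose_outcome(dilemma).description)
# -> swerve and hit the lone pedestrian (the president)
```

And the moment you try to add a second policy string, you are back to square one: somebody, most likely a programmer at one particular company, has to decide what that policy says.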
Part 2
The video starts off outdoors in a mall parking lot, where a couple of car enthusiasts are messing around with the Summon feature of their Tesla. Unbeknownst to the two giving the car its meaningless orders, Johnny Law sneaks up behind the vehicle with his lights of justice flashing red, white, and blue and pulls it over for running a stop sign. As the sheriff steps out and approaches the driver's seat, he is baffled to see no one in the car. He even checks under the seats to make sure he is sane. That got me thinking for a second: what if the car had hit someone right then and there? Who gets the blame? The manufacturer? The owner? The unlucky victim? Or perhaps Elon Musk?