An Algorithmic Morality: The Future of Driving
A critical step in mass-producing cars with self-driving technology is resolving the thorny ethical dilemmas the technology raises.
About 1.25 million people die each year in car accidents, according to the World Health Organization. Autonomous cars promise to drastically reduce that toll, which could make them a pivotal technological advancement for our generation.
Cars with self-driving capabilities are already cruising streets around the world. As the technology matures, self-driving cars could reshape the automotive and transportation industries in the coming years.
Self-driving cars promise to be safer, more efficient, and cleaner. But even if they outperform human drivers, they will never be perfect.
Many would agree that a car should prioritize human lives over parked cars, and should not hesitate when choosing between its passenger’s life and a small animal’s. For this reason, developers must answer ethical questions such as: how should the car be programmed to act in the event of an unavoidable accident? Should it minimize the loss of life, even if that means sacrificing the occupants, or should it protect the occupants at all costs?
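To make the choice concrete, here is a minimal, purely illustrative sketch in Python. Everything in it is hypothetical, the policy names, the action fields, and the harm numbers alike; it is not how any real vehicle is programmed, only the shape of the decision that developers would have to encode.

```python
from dataclasses import dataclass
from enum import Enum

class CrashPolicy(Enum):
    MINIMIZE_TOTAL_HARM = "minimize_total_harm"
    PROTECT_OCCUPANTS = "protect_occupants"

@dataclass
class Action:
    name: str
    occupant_harm: float   # expected serious injuries inside the car (hypothetical)
    external_harm: float   # expected serious injuries outside the car (hypothetical)

def choose_action(actions: list[Action], policy: CrashPolicy) -> Action:
    """Pick an action under a given moral policy (illustrative only)."""
    if policy is CrashPolicy.MINIMIZE_TOTAL_HARM:
        # Count everyone equally: minimize total expected harm.
        return min(actions, key=lambda a: a.occupant_harm + a.external_harm)
    # Protect occupants first; break ties by harm to others.
    return min(actions, key=lambda a: (a.occupant_harm, a.external_harm))

# Hypothetical numbers for an unavoidable-collision scenario.
brake = Action("brake straight", occupant_harm=0.0, external_harm=1.6)
swerve = Action("swerve into barrier", occupant_harm=0.4, external_harm=0.0)

print(choose_action([brake, swerve], CrashPolicy.MINIMIZE_TOTAL_HARM).name)  # swerve into barrier
print(choose_action([brake, swerve], CrashPolicy.PROTECT_OCCUPANTS).name)    # brake straight
```

Tellingly, the two policies differ by a single comparison: the objective being minimized. The engineering is trivial; the choice of objective is not.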
Although there is no definitively right or wrong answer to these questions, public opinion will play a strong role in how self-driving cars are accepted by society.
Consider the following scenario: an adult is riding in a self-driving car when two children run into the street, only a few yards ahead. There is no time for the car to stop; it is forced to choose between hitting the children or swerving out of the way and potentially harming its occupant.
In reality, most human drivers will never face such an agonizing dilemma. Nevertheless, “with many millions of cars on the road, these situations do occur occasionally,” Leon Sütfeld told Nexus Media.
Sütfeld is a researcher at the Institute of Cognitive Science at the University of Osnabrück and the lead author of a new study modeling ethics for self-driving cars.
In this scenario, the logical decision would be for the car to swerve and risk injuring the occupant: crashing while shielded by the car’s safety systems is far more survivable than being struck with nothing to brace the impact. The issue is, who is going to buy a car that knowingly puts them, and possibly their family and friends, in danger? The concept is much like airbags, only more consequential: an imperfect technology that would save far more lives than it costs, yet would occasionally have no choice but to cost some.
“It would be rather simple to implement, as technology certainly isn’t the limiting factor here,” said Sütfeld.
“The question is how we as a society want the cars to handle this kind of situation, and how the laws should be written. What should be allowed and what shouldn’t?”
Some may think it is ethical and reasonable for the car to sacrifice its occupants, yet would not buy such a car themselves. Therein lies the paradox. The concept can be put in perspective by comparison with problems our society has left unsolved for years, such as global pollution or world hunger: people acknowledge that these are major problems that need to be solved, yet few actually work to solve them.
Given how long those problems have gone unresolved, the ethics of self-driving cars could follow the same path.
Even with a solution to this ethical dilemma, more problems arise. Should the car make different decisions when children are on board? If a manufacturer offers different versions of its moral algorithm and a buyer knowingly chooses one of them, is the buyer to blame for the harmful consequences of the algorithm’s decisions?
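If manufacturers really did sell selectable moral algorithms, the buyer’s choice would become a matter of record. The toy sketch below (all setting names, weights, and fields are hypothetical assumptions, not any real product’s interface) shows how versioned “moral settings,” including one that weighs child passengers differently, and an audit trail of who selected which, might look.

```python
import json
import time

# Hypothetical "moral settings" a manufacturer might offer. occupant_weight is
# how much more an occupant's harm counts than a bystander's; child_multiplier
# raises that weight further when children are detected on board.
MORAL_SETTINGS = {
    "egalitarian-v1":    {"occupant_weight": 1.0, "child_multiplier": 1.0},
    "occupant-first-v1": {"occupant_weight": 3.0, "child_multiplier": 1.0},
    "child-first-v1":    {"occupant_weight": 1.0, "child_multiplier": 3.0},
}

def weighted_harm(occupant_harm, external_harm, setting, children_on_board):
    """Score an action's expected harm under a buyer-selected setting (illustrative only)."""
    weight = setting["occupant_weight"]
    if children_on_board:
        weight *= setting["child_multiplier"]
    return weight * occupant_harm + external_harm

def record_selection(buyer_id, setting_name):
    """The kind of audit record the liability question would hinge on."""
    return json.dumps({"buyer": buyer_id, "setting": setting_name,
                       "selected_at": time.time()})

# The same swerve option scores very differently depending on which setting
# the buyer chose and on whether children are in the car.
for name, setting in MORAL_SETTINGS.items():
    cost = weighted_harm(occupant_harm=0.4, external_harm=0.0,
                         setting=setting, children_on_board=True)
    print(f"{name}: weighted cost of swerving = {cost:.1f}")
print(record_selection("buyer-123", "occupant-first-v1"))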
As companies prepare to endow millions of vehicles with autonomous technology, the question of algorithmic morality has never been more pressing.