Ethical questions about self-driving cars

Posted June 25, 2016 07:21

Updated June 25, 2016 07:31

The brakes of a train traveling at 100 kilometers per hour have failed. If it keeps going, it will hit five workers on the track ahead. What would you do in this situation? One option is for the driver to divert the train onto a side track, killing one person there but saving the five workers. Another is to push a large bystander next to you onto the track to stop the train. This is the "trolley problem" posed by the British philosopher Philippa Foot in 1967. It became widely known after it was featured in Michael Sandel's bestselling book "Justice."

As self-driving cars powered by artificial intelligence are developed, the ethical questions surrounding them have come to the fore. Suppose a self-driving car is about to crash. If it stays on course, its passengers will die. If it swerves, it will save its passengers but kill 10 pedestrians. What is the right way to program an autonomous car? The journal Science recently published the results of a survey of 1,928 people conducted by a joint team of U.S. and French researchers.

Most respondents agreed that self-driving cars should be programmed to save as many lives as possible. When the situation involved themselves, however, they took a different view: they said they would not buy a self-driving car that would sacrifice them or their families to save a larger number of pedestrians. Could carmakers, then, produce autonomous cars that protect passengers at the expense of pedestrians? That is unlikely, because such cars would come under heavy criticism.

Does this sound unrealistic? Companies such as Google and Uber have already formed a lobbying group in the U.S. to shape laws on self-driving cars. The head of the Korean unit of auto-parts maker Bosch has declared that the company will release a fully automated "self-driving parking system" by 2018. It is questionable, however, whether artificial intelligence could ever possess the ethical intuition of humans, and even if it could, the prospect remains unsettling. If an artificial intelligence whose intellect surpasses that of human beings were also equipped with ethical intuition, it could create a bright future, but it could just as well cause an unexpected disaster.