Ethics of a killer robot

Posted July 29, 2015 07:16

Tony Stark, the protagonist of the Hollywood blockbuster "Iron Man," built a suit of armor that flies on its own and attaches itself to its owner. The millionaire scientist also built intelligent robots that can fight enemies on behalf of human beings. Such robots exist not only in Hollywood movies but also in reality. The U.S. Department of Defense developed a robot named "Big Dog" in collaboration with a robot manufacturer. True to its name, the robot walks on four legs and can carry heavy loads up and down mountainous terrain on search missions.

A "killer robot" is a robot equipped with artificial intelligence that can decide on its own to track and attack a target. Such robots are being developed in the U.S., Israel, the U.K., and Japan, and Korea is now joining their ranks. An American robotics expert recently drew attention by saying that killer robots had been deployed on a military mission in the Demilitarized Zone that divides the Korean Peninsula. The robot in question, developed for surveillance missions by Samsung Techwin (renamed Hanwha Techwin after its acquisition), can identify a moving object with four surveillance cameras. Reportedly, the robot is also equipped with offensive weapons.

Some argue that killer robots are the ultimate weapons, sparing soldiers' lives while carrying out critical missions such as assassination or the overthrow of a state. Others object that such robots are dangerous because they cannot distinguish a child from a soldier. The biggest concern is that a massacre could occur if killer robots fell into the hands of a terrorist or a dictator. In the movie "Terminator," a robot with artificial intelligence comes to understand human beings, but that is merely a dramatic plot device for a blockbuster film.

Around 1,000 prominent scholars, entrepreneurs and philosophers, including British physicist Stephen Hawking, real-life Iron Man and Tesla founder Elon Musk, and Apple co-founder Steve Wozniak, have called for a global ban on killer robots. Isaac Asimov's science fiction, beginning with the robot stories collected in "I, Robot," laid out the Three Laws of Robotics. First, a robot may not injure a human being. Second, a robot must obey the orders given by human beings, except where such orders would conflict with the First Law. Third, a robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. It is ironic that human beings first create killer robots and only then ponder the ethics of robotics. The issue at stake, it seems, is not the ethics of robots but the ethics of human beings.



shchung@donga.com