
Killer robots and murder by robots

Posted November 13, 2017 07:26

Updated November 13, 2017 08:41


A robot soldier is on duty at the inter-Korean demilitarized zone. The SGR-A1, developed by Hanwha Techwin, has been deployed at the DMZ since 2010. The robot monitors a radius of 4 kilometers, determines whether an approaching object is a human or an animal, and, if it is a human, demands the password. If it judges the object to be an enemy, it can open fire with its machine gun. It is one of the typical killer robots singled out by the BBC, yet the final order to open fire is given only by a "human soldier" after being briefed on the situation. The reason cited is that AI should not go so far as to make the decision to open fire.

Russia recently introduced an unmanned gun system that uses artificial intelligence to automatically identify targets for attack. The "Sea Hunter," an unmanned U.S. naval vessel, searches for enemy submarines while navigating the sea autonomously and launches attacks. Britain operates "Taranis," an AI drone capable not only of reconnaissance missions but also of aerial combat. The world has already entered the era of lethal autonomous weapons systems (LAWS), or killer robots that attack the enemy even without a human's order.

The theme of the United Nations Convention on Certain Conventional Weapons meeting, which opens in Geneva, Switzerland on Monday, is also killer robots. Debate will likely heat up over issues ranging from fears that AI will come to rule humans, as in the "Terminator" movie franchise, to concern over what could happen if terrorists come to possess killer robots. In August, 116 robotics and AI experts, including Tesla founder Elon Musk, urged the United Nations to ban killer robots. In contrast, supporters claim that killer robots are weapons that can actually minimize human casualties.

Brandon Bryant, a former U.S. military drone operator, confessed in 2013 that he had fired drone missiles at terrorists in Afghanistan from an airbase in Nevada, remotely killing more than 1,600 people over a period of five years. Having pulled the trigger based only on images on a monitor, he retired out of guilt that he might have killed innocent people. If he could have left even the selection of "targets for removal" to AI, would his sense of guilt have been weaker? He might instead have suffered a different kind of guilt: that the killer robot might have killed innocent people. We have to ask whether it is a good idea to let robots determine the fate of human beings, even in war.