Automating the Decision to Kill: The Programming of Death in the Future

The physical separation between a drone on the battlefield and its human controller is one of the most notable features of current drone technology—allowing a state to complete military missions without ever putting its soldiers in harm’s way. Still, despite the distance, the human element (and human emotion) remains in the decision to pull the trigger. But for how long? The U.S. already has fully automated weapons systems in various stages of research and development, and some observers argue that autonomy in lethal weapons is all but inevitable given the volume of data these systems will eventually collect, sort, and analyze to inform decision-making. As the Washington Post puts it, “Even when directly linked to human operators, these machines are producing so much data that processors are sifting the material to suggest targets, or at least objects of interest…In future operations, if drones are deployed against a sophisticated army, there may be much less time for deliberation and a greater need for machines that can function on their own.”
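The “sifting” the Post describes amounts to software that ranks raw sensor detections so that only a short list of suggested objects of interest reaches a human analyst. The sketch below is a hedged illustration of that idea only; the fields, weights, and threshold are invented for this example and do not describe any actual military system.

```python
# Illustrative sketch of automated "sifting": score detections and surface only
# the highest-priority ones as objects of interest. All cues and weights are
# hypothetical, chosen to make the ranking logic concrete.
from dataclasses import dataclass


@dataclass
class Detection:
    moving: bool             # is the object in motion?
    matches_watchlist: bool  # resembles a previously flagged vehicle or pattern
    near_friendly_forces: bool


def interest_score(d: Detection) -> float:
    """Combine simple cues into a single priority score (higher = more interesting)."""
    score = 0.0
    if d.moving:
        score += 0.3
    if d.matches_watchlist:
        score += 0.5
    if d.near_friendly_forces:
        score += 0.2
    return score


def suggest_objects_of_interest(detections: list[Detection],
                                threshold: float = 0.5) -> list[Detection]:
    """Return only detections scoring above the threshold, sorted by priority."""
    flagged = [d for d in detections if interest_score(d) >= threshold]
    return sorted(flagged, key=interest_score, reverse=True)
```

Even in this toy version, the machine is already shaping human attention: whatever falls below the threshold is never seen by an operator at all.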

Government figures tend to downplay the possibility of what Lieutenant General Larry James calls a “science fiction-y” future. Humans are still very much a part of the intricate process that ends with a lethal drone strike in Pakistan or Yemen—200 people may be involved, from pilots to maintenance crews—but their role is likely to shrink as the technology improves. Of course, there are particular advantages to using machines—even lethal ones—that can operate autonomously. Some observers believe human emotion can actually be a hindrance on the battlefield, a limitation a robotic system would not share. “They have the potential to process information and to act much faster than humans in situations where nanoseconds could make the difference. They also do not act out of fear, revenge or innate cruelty, as humans sometimes do,” writes Christof Heyns, the UN Special Rapporteur on Extrajudicial Executions. Others, however, consider this same feature an important failing: human emotion can act as a critical restraint on violence.

One of the more challenging philosophical questions concerning autonomous lethal drones is whether they could obey the requirements of just war doctrine. A key tenet of just war theory is distinguishing between combatants and non-combatants as lawful targets. In the guerrilla style of modern warfare, we struggle enough to identify who is a combatant and who is not—how can we program a computer to do it if we cannot do it ourselves?
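To see why that question resists a technical answer, consider what “programming” the principle of distinction might literally look like. The sketch below is purely hypothetical—every name, feature, and number in it is an assumption for illustration—but it shows how the hard moral judgments end up smuggled in as human-chosen parameters rather than solved by the machine.

```python
# Hypothetical sketch of encoding the principle of distinction as software.
# Track, its fields, and ENGAGE_THRESHOLD are invented for illustration; the
# point is that the decisive judgments remain human value choices.
from dataclasses import dataclass


@dataclass
class Track:
    carrying_weapon: bool        # sensor inference, easily wrong (a farm tool? a camera?)
    near_known_hostiles: bool    # proximity is not participation
    direct_participation: float  # a model's guess (0.0-1.0) that this person is a combatant


ENGAGE_THRESHOLD = 0.95  # who chooses this number, and on what moral basis?


def is_lawful_target(track: Track) -> bool:
    """Return True only if the system is confident the track is a combatant.

    The distinction requirement of just war doctrine is reduced here to a
    probability estimate and a threshold -- precisely the judgment the text
    says humans themselves struggle to make in guerrilla-style conflicts.
    """
    if not track.carrying_weapon and not track.near_known_hostiles:
        return False
    return track.direct_participation >= ENGAGE_THRESHOLD
```

The code does not resolve the philosophical problem; it only relocates it into a probability model and a cutoff that someone, somewhere, must defend.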

Could they be regulated?

If we accept that automated drones are inevitable in some capacity, what recourse might there be to at least control their operation? In a 2013 TED Talk, the science-fiction writer Daniel Suarez proposed an international treaty, similar to the chemical and biological weapons conventions, that would ban outright the use of lethal drones (or, in his broader term, “killer robots”) without a human director. A committee of engineers, lawyers, philosophers, and human rights activists created the International Committee for Robot Arms Control (ICRAC) to “call[] upon the international community to urgently commence discussions about an arms control regime to reduce the threat to humanity posed by [military robotic] systems.” Roboethicist Ronald Arkin believes regulation can be built into the machines themselves, by programming them with “rules of engagement” software. Suarez warns that the international community must cooperate now, before some dramatic future event sparks an irrational arms race in automated weaponry.
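One way to picture the kind of “rules of engagement” software Arkin has in mind is as a constraint checker that can only veto an engagement, never initiate one. The sketch below is a minimal illustration under that assumption; the request fields, the collateral limit, and the rules themselves are hypothetical examples, not Arkin’s actual design or any fielded system.

```python
# Minimal sketch of "rules of engagement" software as a veto layer: every rule
# must pass or the engagement is refused. All names and rules are hypothetical.
from dataclasses import dataclass


@dataclass
class EngagementRequest:
    target_confirmed_combatant: bool  # output of an upstream (imperfect) classifier
    inside_authorized_zone: bool      # geofence from the mission order
    estimated_collateral: int         # predicted civilian harm
    human_authorization: bool         # whether a human operator has signed off


MAX_COLLATERAL = 0  # example rule: no predicted civilian harm is acceptable


def rules_of_engagement_permit(req: EngagementRequest) -> bool:
    """Apply each rule in turn; any failed rule vetoes the strike."""
    rules = [
        req.target_confirmed_combatant,
        req.inside_authorized_zone,
        req.estimated_collateral <= MAX_COLLATERAL,
        req.human_authorization,  # keeping a human in the loop as one more rule
    ]
    return all(rules)
```

Notice that such software regulates only what its designers thought to encode: whether the rules are adequate, and whether the inputs feeding them can be trusted, remain the same contested questions an international treaty would have to answer.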