Killer Robots Are Not The Problem: Humans Are.

SECOND NEXUS PERSPECTIVE

A dramatic open letter was unveiled at The International Joint Conference on Artificial Intelligence (IJCAI) last month, signed by a number of high-profile scientists and technological pioneers – including Stephen Hawking, Elon Musk and Steve Wozniak – pleading with the powers of the world to ban autonomous war machines driven by artificial intelligence (A.I.), more prosaically known as “killer robots.”


It’s not the first time this has happened, either. In 2012, the Pentagon reassured the public that no matter how advanced our military technology gets, humans will always be involved in the final decision about what gets blown up. In 2013, the United Nations Human Rights Council recommended national moratoria on Lethal Autonomous Robots (LARs): weapon systems that can select and engage targets without human intervention. Earlier this year, a group of A.I. researchers signed another open letter expressing deep concern that if we are not careful about the way we develop artificial intelligence, sociopathic supercomputers might someday enslave mankind.

via Flickr user Debra Sweet

The politicians, public figures and their public relations agents who get behind these drives undoubtedly have good intentions, and they are correct about one thing: When we do develop autonomous decision-making military drones, we will need to be vigilant about preventing abuses and mistakes.

But they are wrong to push for a ban on killer robots. The reason goes beyond the simple argument that laws won’t stop bad people from building them anyway (although that argument has been made). Outlawing military A.I. is misguided for a deeper reason: it addresses the wrong problem.

Do autonomous military robots pose a special risk? Do they represent a strange new threat that must be regulated? Let’s look at the specific arguments concerning autonomous military A.I.

1. Robots can make mistakes. So can humans.

One of the big fears when it comes to autonomous killing machines is that they will glitch out somehow. There might be a bug in the system, an algorithm that doesn’t function as expected, or a radar that experiences some kind of interference or distortion. Suddenly, hundreds of innocent people are dead.

Of course, the same thing happens with people. People make mistakes, misread signals, and make bad decisions. Bombers miss targets, experience hardware malfunctions, and can get overtired or overstressed.

Is there a reason to believe that an artificial decision system will make more mistakes than a human? No reason at all. We already know that machines can operate faster on larger volumes of data than people can. We program the A.I. decision systems. We may even write decision systems that learn and improve themselves over time. While the programs may grow too sophisticated for the lay person to understand, it’s still humans who establish the basic mechanisms and – more importantly – the goals.

So what about accuracy? Robert Bateman, writing for Esquire Magazine, suggests the following scenario: “The artificial intelligence of some robotic systems is now becoming so refined that, in fact, it is the humans who tend to make more mistakes in targeting. So what happens when, in the near future, it is demonstrated that an AI system will only, on average, make a targeting mistake 1 time in 500, whereas humans make mistakes in targeting about 10% of the time? When that is the situation (and that day is not far off) is it not then un-ethical to keep an error-prone biological person in the loop, since human pain and suffering will actually be greater than it would be if the AI operated by itself?”

It’s a question worth taking seriously. Sure, machines can make mistakes. The important question is: will they make more mistakes than humans, or fewer?
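
To see what is at stake, here is a rough back-of-the-envelope sketch in Python using Bateman’s hypothetical numbers: a 10 percent human error rate versus one mistake in 500 for the machine. The strike count is invented purely for illustration.

```python
# Back-of-the-envelope comparison using Bateman's hypothetical error rates:
# humans err in roughly 10% of targeting decisions, the A.I. 1 time in 500.
# The number of strikes is arbitrary and purely illustrative.

HUMAN_ERROR_RATE = 0.10   # 10% of targeting decisions go wrong
AI_ERROR_RATE = 1 / 500   # 0.2% of targeting decisions go wrong

def expected_mistakes(error_rate, strikes):
    """Expected number of bad targeting decisions over a given number of strikes."""
    return error_rate * strikes

strikes = 1000  # illustrative campaign size, not a real figure
print(f"Human in the loop: ~{expected_mistakes(HUMAN_ERROR_RATE, strikes):.0f} mistakes")
print(f"Autonomous system: ~{expected_mistakes(AI_ERROR_RATE, strikes):.0f} mistakes")
# With these hypothetical rates, that is roughly 100 mistakes versus 2.
```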

2. Robots can be programmed to do evil. So can humans.

Paul Schulte, Honorary Professor at Birmingham University, provided a concise description of the most common concern about A.I.-driven drones in an interview last year with radio and television host David Pakman. He pointed out that international law prohibits weapons systems that cannot follow the rules of “distinction and proportionality,” and there is no way to be certain that a robotic drone has been programmed to follow those rules.

An educational module by the Red Cross on Justice and Fairness in war provides a good overview of these concepts. According to its summary, “the principle of distinction requires parties to a conflict to distinguish between civilians and combatants, and between civilian objects and military objects.” The requirement to distinguish between military and civilian targets is written into the Geneva Conventions and is part of international law. The principle of proportionality prohibits attacks in which the expected harm to civilians would be excessive in relation to the anticipated military advantage; international law also bans indiscriminate attacks and weapons that have indiscriminate effects.

Can an A.I. system be programmed to make these judgments and distinctions? Undoubtedly. Can an A.I. system be programmed to not follow these rules? Of course.
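
As a purely illustrative sketch (the data fields and the crude proportionality test below are invented, and real targeting law is far more nuanced), here is what encoding those two rules as explicit checks might look like; note that nothing stops a programmer from simply leaving the checks out.

```python
# Toy illustration only: the fields and the crude proportionality test are
# invented. The point is that "distinction" and "proportionality" can be
# written down as explicit checks, and that nothing forces a programmer to
# include them.

from dataclasses import dataclass

@dataclass
class Target:
    is_combatant: bool              # distinction: combatant vs. civilian
    expected_civilian_harm: float   # rough estimate, arbitrary units
    expected_military_advantage: float

def engagement_permitted(target: Target) -> bool:
    # Distinction: never engage civilians or civilian objects.
    if not target.is_combatant:
        return False
    # Proportionality (crudely simplified): expected civilian harm must not
    # be excessive relative to the anticipated military advantage.
    if target.expected_civilian_harm > target.expected_military_advantage:
        return False
    return True

print(engagement_permitted(Target(False, 0.0, 5.0)))  # False: civilian
print(engagement_permitted(Target(True, 1.0, 5.0)))   # True under this toy rule
```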

Schulte points out that a foreign country need only claim that the specific algorithms used by its weapons are military secrets, and we would have no way to verify the terms and parameters of the programs. In other words, the international community could never be sure that the artificial intelligence was, in fact, following the agreed-upon international rules for warfare.

Although what Schulte says is true, allowing a computer program to “make decisions” introduces no more and no less risk than having a human make these decisions. Again, this is being presented as a special risk associated with artificial intelligence when it is not: it is a risk that exists today, with human beings. A human commander may follow the guidelines of moral warfare, or may not. There is no way of “assessing” this ahead of time.

If you truly follow the logic of this argument, you must also outlaw human beings from making military decisions. There simply is no way of verifying that humans will follow the rules of distinction or proportionality either.

3. Robots lower the risk of attacking an enemy. Drones have already done that.

One of the most commonly cited passages of the open letter presented last month at IJCAI is this warning about the risks of an A.I. arms race: "If any major military power pushes ahead with [A.I.] weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow."

By removing the risk of our own soldiers being killed, we will lower our inhibitions about attacking others. This, people believe, will automatically lead to escalation: every nation will want to stockpile these weapons and will be less inhibited about attacking its enemies, and war will become ever more likely.

Except that has already happened, with drones. Today’s drones are still controlled by humans, but those humans are safely thousands of miles away from any actual combat.

Ethical arguments have been raging about drone warfare for years, and the lowered risk introduced by drone warfare is at the top of the list.

Last year, the nonpartisan Stimson Center released their Recommendations and Report of the Task Force on United States Drone Policy, which stated: “The seemingly low-risk and low-cost missions enabled by UAV [unmanned aerial vehicle] technologies may encourage the United States to fly such missions more often, pursuing targets with UAVs that would be deemed not worth pursuing if manned aircraft or special operation forces had to be put at risk. For similar reasons, however, adversarial states may be quicker to use force against American UAVs than against US manned aircraft or military personnel. UAVs also create an escalation risk insofar as they may lower the bar to enter a conflict, without increasing the likelihood of a satisfactory outcome.”

via Flickr user The National Guard

Moreover, this argument seems to have been correct, to an extent. As Slate reporter William Saletan has pointed out, President Obama sent military drones to combat terrorism in Yemen, knowing that Americans would not have approved sending troops into a new war in a new country. He was able to celebrate our “brave service men and women coming home” from Afghanistan in his 2013 State of the Union address, without mentioning that they were being replaced by drones.

For people who worry that lowering the risks and costs of war makes it too easy to wage, that ship has sailed. The argument against replacing human-controlled drones with A.I.-controlled drones adds nothing new. As Evan Ackerman wrote quite eloquently for IEEE Spectrum:

“I do agree that there is a potential risk with autonomous weapons of making it easier to decide to use force. But, that’s been true ever since someone realized that they could throw a rock at someone else instead of walking up and punching them. There’s been continual development of technologies that allow us to engage our enemies while minimizing our own risk, and what with the ballistic and cruise missiles that we’ve had for the last half century, we’ve got that pretty well figured out. If you want to argue that autonomous drones or armed ground robots will lower the bar even farther, then okay, but it’s a pretty low bar as is.”

4. Robots might make subjective judgment calls. But people already do.

Artificial intelligence simulates human decision-making, but faster and across far larger volumes of data. Of course, A.I. systems use a variety of advanced techniques and algorithms: some are rule-based expert systems, while others rely on adaptive pattern matching. But regardless of the technology and programming going on “under the hood,” artificial intelligence has always been inspired by attempts to replicate intelligent human behavior in machines.

We already have automated decision systems in place in our military. We have not yet given machines the responsibility of choosing strategic objectives in warfare, but Lockheed Martin has a guidance system, according to a recent report in The New York Times, that allows missiles to navigate terrain on their own, weaving in and out of obstacles and avoiding radar detection. The missile, in other words, makes decisions on the fly, without the aid of a human being.

via Flickr user Naval Surface Warriors

Navigating unknown terrain requires weighing hundreds of different factors in a very short timeframe. Making strategic decisions about what to bomb or when to attack in a war will surely be even more complex, involving a whole host of considerations, from the strategic to the moral. These are decisions that humans already make, and the artificial systems we build will be deliberately designed to simulate the process humans go through today, just as the automated navigation system in a missile simulates the decisions a pilot must make when flying.
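
As a schematic illustration of what “weighing factors” can mean in software (the factor names, weights, and values below are invented and have nothing to do with any real guidance system), a navigation routine might score candidate routes like this:

```python
# Schematic sketch of "weighing many factors": each candidate route gets a
# weighted score, and the highest score wins. Factor names, weights, and
# values are invented for illustration.

WEIGHTS = {
    "terrain_clearance": 0.40,   # higher is better
    "radar_exposure":   -0.35,   # lower is better, hence the negative weight
    "fuel_cost":        -0.15,
    "time_to_target":   -0.10,
}

def score(route):
    """Combine one candidate route's factors into a single weighted score."""
    return sum(WEIGHTS[factor] * route[factor] for factor in WEIGHTS)

def best_route(candidates):
    """Pick the candidate route with the highest weighted score."""
    return max(candidates, key=score)

routes = [
    {"terrain_clearance": 0.9, "radar_exposure": 0.7, "fuel_cost": 0.5, "time_to_target": 0.6},
    {"terrain_clearance": 0.6, "radar_exposure": 0.2, "fuel_cost": 0.4, "time_to_target": 0.5},
]
print(best_route(routes))  # prefers the route with far less radar exposure
```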

Will there be test cases where A.I. robots make bad decisions? Probably. That’s why we test software before we use it. Like the rogue automated shopping bot that suddenly started buying illegal items (it wasn’t programmed not to, after all), when our automated systems make bad decisions we will work with them, retrain them, and figure out what needs to be fixed to make them better.

Indeed, that’s what we do when we train human soldiers and pilots.

5. Killer robots will just do more of the bad things people already do. So why let them?

Just because people can also do these terrible things, why should we allow robots to do them as well?

The answer is simple: outlawing “killer robots” is attacking the wrong problem.

If you don’t trust the reliability of the machines, your real issue is that you don’t trust the people behind the machines. A malfunctioning machine is no more dangerous than a malfunctioning soldier. And as our technology improves, a malfunctioning machine will become far less likely than a human who is tired or irrational. Don’t ban robotic artificial intelligence: ban sloppy warfare, whether by humans or by robots.

Finally, if your problem is that going to war without risking the lives of your own soldiers is just “too easy,” then your issue is, again, not with artificially intelligent robots. Your complaint is with unmanned weapons. You could fight for a law that requires a soldier to be strapped into the seat with every A.I. bomber drone. That would solve your moral quandary, and it would still be safer than a bomber piloted by a human being.

via Flickr user U.S. Army Alaska (USARAK)

Getting the Argument Right

This is an emotional issue, because it’s about killing. But stop for a moment and consider this: if you have a problem with robots killing, maybe your problem isn’t with the robots. Maybe it’s that we are killing in the first place.

And if you believe that sometimes war is necessary and just, then banning robotic military A.I. isn’t the way to make sure the violence of war doesn’t get out of hand. We simply need to treat our A.I. the same way that we treat humans: educate them. Train A.I. systems in how to make decisions, what is important, and how to avoid mistakes. Artificial intelligence researchers are already familiar with what it means to “train” artificially intelligent systems, and it means basically the same thing it means with people: teach them, let them practice, expose them to real-world scenarios, and let their knowledge grow over time.
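
At its simplest, that is exactly what machine-learning training loops do: show the system labeled examples, let it adjust, and then check its answers on cases it has not seen. The toy sketch below (a classic perceptron on invented data, not any military system) illustrates the idea.

```python
# Minimal sketch of what "training" means in machine learning: show the
# system labeled examples, let it adjust its internal parameters, then see
# how it does on a case it has not seen. The two features and their labels
# below are invented toy data; this is a classic perceptron, not any real
# military system.

import random

def train_perceptron(examples, epochs=50, lr=0.1):
    """examples: list of (feature_vector, label) pairs, label in {0, 1}."""
    n_features = len(examples[0][0])
    weights = [0.0] * n_features
    bias = 0.0
    for _ in range(epochs):
        random.shuffle(examples)           # "practice" on the examples in varied order
        for features, label in examples:
            activation = sum(w * x for w, x in zip(weights, features)) + bias
            prediction = 1 if activation > 0 else 0
            error = label - prediction     # 0 if correct, +1 or -1 if wrong
            weights = [w + lr * error * x for w, x in zip(weights, features)]
            bias += lr * error
    return weights, bias

def predict(weights, bias, features):
    return 1 if sum(w * x for w, x in zip(weights, features)) + bias > 0 else 0

# Toy "lessons": label 1 means an acceptable decision, 0 means unacceptable.
training_data = [([0.9, 0.1], 1), ([0.8, 0.2], 1), ([0.2, 0.9], 0), ([0.1, 0.8], 0)]
w, b = train_perceptron(training_data)
print(predict(w, b, [0.85, 0.15]))  # a new case it has not seen; expected output: 1
```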

As futurist author B.J. Murphy argues at Serious Wonder: “We need to begin looking at technologies like A.I. as if they’re infants. They’ll require a gradual, step-by-step process of education – including the development of a moral compass – in order for them to truly grow [in a way] that is both favorable to our existence and their very own.”

In other words, treat them as we treat our human soldiers – our children – and we will certainly be no worse off than we are today, and perhaps a whole lot better.