Elon Musk, of Tesla and SpaceX fame, is a man who recognizes no technological boundaries: his projects have included colonizing Mars, slowing climate change, and revolutionizing transportation. But according to him, his most significant venture will be securing international cooperation to ban a specific branch of artificial intelligence (AI), autonomous weapons, before it’s too late.
Open Letter to UN
Musk and Mustafa Suleyman, of Google’s parent company Alphabet, led a group of 116 AI and robotics experts from 26 nations in an open letter (the “2017 letter”) calling for a ban on lethal autonomous weapons, often called killer robots. This is the first time leaders in the AI and robotics industries have joined forces on this issue with researchers such as Stephen Hawking, Apple co-founder Steve Wozniak, and Noam Chomsky.
At the International Joint Conference on Artificial Intelligence (IJCAI) in Melbourne, the writers presented the letter ahead of a review of the UN’s Convention on Certain Conventional Weapons (CCW), which is considering adding robotic weapons to its list of restricted weapons. The CCW and other corresponding treaties currently ban chemical weapons, landmines, and other forms of weaponry that cause “unnecessary or unjustifiable suffering to combatants or to affect civilians indiscriminately.”
Pointing to these criteria, the letter explains how autonomous weapons could meet the convention’s definition: “These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways.”
The experts’ letter also warns of a potential arms race in autonomous weapons, calling it a “third revolution in warfare,” following the development of gunpowder and nuclear weapons.
Moreover, the writers state these so-called killer robots have the potential to escalate conflict beyond imagination. “Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend.” The letter adds, “We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.”
As those most knowledgeable about the weapons’ capabilities, the writers of the letter felt obligated to act. “As companies building the technologies in Artificial Intelligence and Robotics that may be repurposed to develop autonomous weapons, we feel especially responsible in raising this alarm,” they wrote.
The review of the UN’s CCW was originally scheduled for August but was postponed until November for unrelated reasons.
Previous Warning Cries from Musk
The 2017 letter was prompted, in part, by a similar open letter presented at the IJCAI conference in 2015. Elon Musk was one of the more than 1,000 tech experts and scientists who signed that earlier letter (the “2015 letter”), warning the conference about the dangers of autonomous weapons.
There’s an even deeper history of warning bells for autonomous weaponry. In 2012, the Pentagon issued a directive requiring that a human always remain involved in the final decision to use lethal force. The following year, the United Nations Human Rights Council recommended that nations impose moratoria on Lethal Autonomous Robots (LARs), which can select and engage targets without human intervention.
Defining autonomous weaponry, the 2015 letter stated: “Autonomous weapons select and engage targets without human intervention.” Further, the writers provided examples: “They might include, for example, armed quadcopters that can search for and eliminate people meeting certain pre-defined criteria, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions.”
While some have dismissed autonomous weapons under this definition as years away, Musk continues to stress the need for proactive regulation of AI, calling it “a fundamental risk to the existence of human civilization.” To this end, he backs a non-profit known as OpenAI, which seeks to advance safe and ethical AI research.
On August 11, 2017, as the United States and North Korea traded threats of nuclear war, he tweeted: “If you’re not concerned about AI safety, you should be. Vastly more risk than North Korea.”