Elon Musk And Tech Leaders Urge U.N. To Ban “Killer Robots”

Tech entrepreneur Elon Musk has once again signed an open letter to the U.N. seeking a ban on autonomous weapons. He and others in his field have spent years trying to draw international attention to the risk of an autonomous weapons race, and it appears we may already be too late.

Prior to that, at the U.S. National Governors Association meeting in July, Musk expressed his concerns more bluntly. “I have exposure to the very cutting-edge AI, and I think people should be really concerned about it,” he said. “I keep sounding the alarm bell, but until people see robots going down the street killing people, they don’t know how to react, because it seems so ethereal.” 

The earlier 2015 letter had already laid out the urgency of addressing the development of killer robots in any form. “The endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow.” Musk and the other writers added, “The key question for humanity today is whether to start a global AI arms race or to prevent it from starting.”

Should Autonomous Weapons Be Banned?

Researchers and tech experts outlined the primary arguments for and against a ban on autonomous weapons in the 2015 letter. First, they state “that replacing human soldiers by machines is good by reducing casualties for the owner but bad by thereby lowering the threshold for going to battle.” Additionally, the materials needed to produce such weapons are inexpensive and readily available, which makes them easy not only for militaries but also for terrorists and dictators to obtain. The writers added, “Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group.”

Based on these arguments, the writers “believe that a military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.” They conclude that “[s]tarting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.”

Toby Walsh, professor of artificial intelligence at the University of New South Wales in Sydney and a key organizer of the 2017 letter, views the risks of autonomous weaponry through the same lens as other dual-use technologies.

“Nearly every technology can be used for good and bad, and artificial intelligence is no different. It can help tackle many of the pressing problems facing society today: inequality and poverty, the challenges posed by climate change and the ongoing global financial crisis,” Walsh said. “However, the same technology can also be used in autonomous weapons to industrialize war. We need to make decisions today choosing which of these futures we want.”

Stuart Russell, founder and Vice-President of Bayesian Logic, agrees the ban is imperative. “Unless people want to see new weapons of mass destruction – in the form of vast swarms of lethal microdrones – spreading around the world, it’s imperative to step up and support the United Nations’ efforts to create a treaty banning lethal autonomous weapons. This is vital for national and international security.”

Despite the many experts pressing government and political officials to regulate autonomous weapons, some continue to deny the urgency of the problem.

Ryan Gariepy, founder of Clearpath Robotics, pushed back on that view in a press statement: “This is not a hypothetical scenario, but a very real, very pressing concern which needs immediate action.”

Gariepy added, “We should not lose sight of the fact that, unlike other potential manifestations of AI which still remain in the realm of science fiction, autonomous weapons systems are on the cusp of development right now and have a very real potential to cause significant harm to innocent people along with global instability.”

Are We Too Late Already?

Ongoing delays in establishing a workable ban on autonomous weapons raise the concern that the technology may already have advanced past the point where regulation can catch up. The U.S., Russia, China, Israel and others are already developing lethal autonomous weaponry.

In fact, autonomous and semi-autonomous weapons already exist, although it is disputed whether any of them are yet being used fully autonomously. The Samsung SGR-A1 sentry gun is one example: it is technically capable of firing autonomously and is deployed on the South Korean side of the 2.5-mile-wide Korean Demilitarized Zone. It also has the option to fire non-lethal rounds autonomously, which is reportedly how it is currently used.

The UK’s BAE Taranis drone, an unmanned combat aerial vehicle, is designed to replace certain human-piloted warplanes sometime after 2030. The U.S., Russia and others are developing robotic tanks that operate either autonomously or by remote control, and the U.S. Navy is creating autonomous warships and submarine systems.

Even though it may be a while before these killer robots fully mature through testing and refinement, they already populate the sea, land and sky with no regulation governing their production or use. Without such international laws, nations currently have little incentive to slow development of autonomous weaponry while rival countries keep producing it.

A U.S. Department of Defense report strongly encourages increased investment in autonomous weapon technology so that the U.S. can “remain ahead of adversaries who also will exploit its operational benefits.”

It’s starting to sound like we’re already off to the races.
