Musk, Woz: Let's Hold Off on the Killer Robots

Some of the industry's top players have signed on to support a ban on autonomous weapons systems.

Tesla CEO Elon Musk, Apple co-founder Steve Wozniak, and professor Stephen Hawking are among the 1,000-plus artificial intelligence and robotics researchers who endorsed an open letter warning against the technology.

Penned by the Future of Life Institute (FLI) and presented today at a conference in Buenos Aires, the letter warns that "deployment of such systems is—practically if not legally—feasible within years, not decades, and the stakes are high."

While autonomous weapons could make the front line safer for soldiers, they may also lower the threshold for going to battle and would likely result in more human casualties, according to the FLI.

"The key question for humanity today is whether to start a global AI arms race or to prevent it from starting," the letter said.

The authors argue that if any major military power were to forge ahead with AI weapon development, it would trigger a global arms race—much like the one that erupted over the atom bomb.

Though still in its infancy, artificial intelligence is already in use at tech giants like Google, as well as at the Defense Advanced Research Projects Agency (DARPA). The latter, which boasts a strong focus on "rethinking military systems," has already put AI to work in its Ground X-Vehicle Technologies program. The semi-autonomous armored tank was built last year with the intention of creating a more mobile, less expensive combat platform.

Still, not everyone is convinced of these systems' benefits.

"[T]he endpoint of this technological trajectory is obvious," the FLI said in its letter. "Autonomous weapons will become the Kalashnikovs of tomorrow."

Cheaper and easier to produce than nuclear weapons, autonomous weapons would also be easier to sell on the black market. And, really, no one wants armed quadcopters falling into the hands of terrorists, dictators, or warlords.

"We therefore believe that a military AI arms race would not be beneficial for humanity," the FLI said. "There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people."

Last fall, Tesla and SpaceX chief Elon Musk declared that "with artificial intelligence, we are summoning the demon." Bill Gates expressed similar fears, saying during a Reddit Ask Me Anything in January that he is concerned with "super intelligence."

Former Apple exec Steve Wozniak, meanwhile, had a change of heart about the technology: Initially terrified of AI, he recently backtracked, saying it could eventually benefit the human race. Still, Woz signed the FLI letter, alongside Skype co-founder Jaan Tallinn, professor Noam Chomsky, and Google DeepMind CEO Demis Hassabis.

A founding member of the Future of Life Institute, Tallinn is joined in the group by Scientific Advisory Board members Hawking and Musk (as well as actors Alan Alda and Morgan Freeman).

"All technologies can be used for good and bad," Toby Walsh, professor of AI at the University of New South Wales in Australia, said in a statement. "Artificial intelligence is a technology that can be used to help tackle many of the pressing problems facing society today—inequality and poverty, the rising cost of health care, the impact of global warming… But it can also be used to inflict unnecessary harm."

"We need to make a decision today that will shape our future and determine whether we follow a path of good," he added.

In April, a report from Human Rights Watch and Harvard Law School expressed similar concerns.