Experts warn of killer robots arms race

Tech leaders, scientists call for ban on autonomous weapons

Tech leaders and scientists including Elon Musk, Stephen Hawking and Steve Wozniak warn that the deployment of robots capable of killing while untethered to human operators is “feasible within years, not decades.”

Elon Musk and Stephen Hawking, along with hundreds of artificial intelligence researchers and experts, are calling for a worldwide ban on so-called autonomous weapons, warning that they could set off a revolution in weaponry comparable to gunpowder and nuclear arms.

In a letter unveiled as researchers gathered at the International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina, on Monday, the signatories argued that the deployment of robots capable of killing while untethered to human operators is “feasible within years, not decades.” If development is not cut off, it is only a matter of time before the weapons end up in the hands of terrorists and warlords, they said.

Unlike drones, which require a person to remotely pilot the craft and make targeting decisions, the autonomous weapons would search for and engage targets on their own. Unlike nuclear weapons, they could be made with raw materials that all significant military powers could afford and obtain, making them easier to mass-produce, the authors argued.

The weapons could reduce military casualties by keeping human soldiers off battlefields, but they could also lower the threshold for going to battle, the letter said. “If any major military power pushes ahead with A.I. weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow,” it said.

Musk, the head of SpaceX, has raised warnings about artificial intelligence before, calling it probably humanity’s “biggest existential threat.” Hawking, the physicist, has written that while development of artificial intelligence could be the biggest event in human history, “Unfortunately, it might also be the last.”

The letter said artificial intelligence “has great potential to benefit humanity in many ways.” Proponents have predicted applications in fighting disease, mitigating poverty and carrying out rescues. An association with weaponry, though, could set off a backlash that curtails its advancement, the authors said.

Other notable signatories to the letter included Steve Wozniak, the co-founder of Apple; Noam Chomsky, the linguist and political philosopher; and Demis Hassabis, the chief executive of the artificial intelligence company Google DeepMind.

- The New York Times News Service