
Hawking, Musk: "Starting a military AI arms race is a bad idea"

Stephen Hawking. (Flickr/NASA HQ PHOTO)

The latest warning from renowned physicist Stephen Hawking and tech entrepreneur Elon Musk about the dangers of artificial intelligence targets its potential on the battlefield.

In an open letter from the Future of Life Institute (FLI), a research institute focused on mitigating possible threats "from the development of human-level artificial intelligence," Musk and Hawking, among many others, paint the gruesome image of a world in which we allow artificial intelligence systems to kill without human intervention.

Musk and Hawking are scientific advisors to FLI, and Musk donated $10 million to the organization in January.

"If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow," the letter, released Monday, reads.

AI, the letter argues, could make fully autonomous weapons, such as armed drones that can search out and attack targets based on a defined set of criteria, "feasible within years, not decades." They will be cheap and easy to mass-produce, it says, and should be banned before they become a reality.

The letter, sponsored and cosigned by dozens of professors and scientists, including Apple cofounder Steve Wozniak and MIT's Noam Chomsky, states simply: "Starting a military AI arms race is a bad idea."

Musk has previously likened AI to "summoning the demon," and said that it could be "more dangerous than nukes."

Hawking has issued similarly dire concerns, positing that artificial intelligence could one day "spell the end of the human race."

Hawking has opened his position up for discussion in an "Ask Me Anything" (AMA) session on Reddit. Beginning Monday, users could pose questions about the promise and peril of AI.

"I signed an open letter earlier this year imploring researchers to balance the benefits of AI with the risks," Hawking wrote in the introduction to his AMA, referring to an earlier letter from FLI which approached the need for careful and robust AI research more broadly.

"The letter acknowledges that AI might one day help eradicate disease and poverty, but it also puts the onus on scientists at the forefront of this technology to keep the human factor front and center of their innovations."

Unlike most AMAs, which take the form of rapid Q&A-style discussions, Hawking will answer questions at his own pace, gathering requests ahead of time and working with Reddit moderators to post responses over the coming weeks.

  • Amanda Schupak

    Amanda Schupak is the science and technology editor at CBSNews.com