Elon Musk invests in $1B effort to thwart the dangers of AI

Elon Musk, CEO of Tesla and SpaceX, will co-chair OpenAI, a new non-profit research company dedicated to artificial intelligence.

CBS News

Elon Musk, the CEO of Tesla and SpaceX -- and an outspoken critic of the potential dangers of artificial intelligence, or AI -- has joined with other technology experts to form OpenAI, a not-for-profit artificial intelligence research company.

The goal of the group "is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return," according to a statement on its website. "The founders hope to grow AI into a leading, open-sourced research institution that deploys new technologies in a way that is safe for humanity."

Musk -- who once called AI "potentially more dangerous than nukes" -- will be co-chairing the company with Sam Altman, the CEO of Y Combinator. Ilya Sutskever, a former research scientist on the Google Brain research team, is OpenAI's research director.

The project is also being funded by a number of influential tech leaders, including PayPal co-founder Peter Thiel, LinkedIn founder Reid Hoffman, and Jessica Livingston, author and founding partner of Y Combinator, who have committed a total of $1 billion to the project.

While advances in artificial intelligence have enabled popular technologies like the digital assistants Siri and Microsoft's Cortana, the development of self-driving cars, and machine learning that can speed medical research, many remain uneasy with the concept.

Leading scientists like Stephen Hawking have gone on the record raising concerns about the potential dangers of artificial intelligence and the need to set limits on how it is developed and used.

Last October, while speaking at a conference at MIT, Musk called artificial intelligence "our biggest existential threat." "With artificial intelligence, we are summoning the demon," he said. He has also spoken of the need for some sort of regulatory oversight.

OpenAI appears to be designed to address these concerns.

"It's hard to fathom how much human-level AI could benefit society," OpenAI's website states, "and it's equally hard to imagine how much it could damage society if built or used incorrectly."