Elon Musk gave millions to protect us from artificial intelligence

It's not just the sci-fi community envisioning a world where machines take over. It's also a concern among prominent technologists and scientists, and one group has now shelled out nearly $7 million for research into the potential ill effects of artificial intelligence.

The Future of Life Institute has awarded the money to 37 research teams that will study a range of topics related to coming advances in artificial intelligence, or AI, the organization announced Wednesday. The funds come partly from the $10 million that famed tech entrepreneur Elon Musk gave the group in January to research the risks associated with AI.

AI refers to the ability of a machine, computer, or system to exhibit human-like intelligence. The term was coined by one of the field's founders, computer scientist John McCarthy, who is credited with first using it in 1955.

Since then, there has been a push to see whether artificial intelligence can eventually exceed human intelligence. Some companies have made notable strides: IBM's Watson, for instance, has become a prime example of what AI can deliver, learning history, facts, and other information, and even beating human champions on the popular quiz show "Jeopardy."

Meanwhile, some industry watchers have grown increasingly concerned with how far AI can go and its potential dangers. They caution that controlling AI before it becomes too smart is critical.

"One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand," famed physicist Stephen Hawking said in an article he co-wrote last year for The Independent. "Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all."

Microsoft co-founder Bill Gates has also sounded off on AI, saying that he doesn't "understand why some people are not concerned" about the possibility of super-intelligent machines.

Musk said last August that AI could be "potentially more dangerous than nukes" and followed that in October by saying that AI may require "regulatory oversight" so the world doesn't "do something very foolish."

"Like people including Stephen Hawking and Elon Musk have predicted, I agree that the future is scary and very bad for people," Apple co-founder Steve Wozniak said in March, adding to the list of prominent critics who are concerned about the future of AI. "If we build these devices to take care of everything for us, eventually they'll think faster than us and they'll get rid of the slow humans to run companies more efficiently."

By providing the Future of Life Institute with $10 million, Musk signaled that he's willing to play a major role in monitoring the impact of increasingly intelligent machines. The institute shares the concerns of the critics above and launched its grant program to find research teams that would "answer a host of questions in computer science, law, policy, economics, and other fields relevant to coming advances in AI." It has stopped short of calling on companies to halt work on AI but has argued that all such efforts should be aimed at "safeguarding life."

The grant winners, however, are not necessarily expected to beat up on AI. Many, in fact, are focused on better understanding AI and how it could affect humanity. A Duke University research project that netted $200,000 will study ethics and AI, while another from Rice University will spend its $69,000 examining how AI will affect the future of work.

The largest award went to the University of Oxford, which will receive $1.5 million to establish a strategic research center for artificial intelligence. Many other projects scored six-figure grants for their own efforts in the space.

The $7 million in grants will be disbursed over the next three years as the projects progress. Most of the projects will begin in September. The Future of Life Institute has not said when final reports are expected.

This article was originally published on CNET as "Elon Musk-backed group gives $7M to explore artificial intelligence risks."
