AI could be smarter than "experts" in 10 years, OpenAI CEO says

Artificial intelligence could surpass the "expert skill level" in most fields within a decade — and trying to stop the emergence of "superintelligence" is impossible, wrote OpenAI CEO Sam Altman in a Monday blog post.

"In terms of both potential upsides and downsides, superintelligence will be more powerful than other technologies humanity has had to contend with in the past," Altman wrote in the post, which was co-authored with two other OpenAI executives, Greg Brockman and Ilya Sutskever.

Altman's prediction — and his warning — come just days after he warned a Senate committee that artificial intelligence could "go quite wrong." The rapid emergence of AI tools like OpenAI's ChatGPT and Google's Bard has sparked debate and concern about their impact on everything from employment, with some experts suggesting AI could eliminate almost 1 in 5 jobs, to education, with students turning to AI to write papers.

The growing power of AI could help humanity, the OpenAI executives wrote in the blog post. But, they added, the technology will likely need to be regulated to ensure it doesn't create harm as AI develops into "superintelligence." 

"Given the possibility of existential risk, we can't just be reactive," Altman and his co-authors wrote. "Nuclear energy is a commonly used historical example of a technology with this property; synthetic biology is another example."

They added, "We must mitigate the risks of today's AI technology too, but superintelligence will require special treatment and coordination."

To that end, they noted, superintelligence may need to be regulated by an international body akin to the International Atomic Energy Agency, which oversees nuclear technology. Some lawmakers have also proposed a commission to oversee AI.

"Any effort above a certain capability (or resources like compute) threshold will need to be subject to an international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, etc.," they wrote.

Trying to stop the emergence of superintelligence won't work, they added. 

Superintelligence is "inherently part of the technological path we are on, stopping it would require something like a global surveillance regime, and even that isn't guaranteed to work," they wrote. "So we have to get it right."
