MINNEAPOLIS – An ominous warning about the future of artificial intelligence, and the human race, has some people wondering what they should believe.
Some of the key minds who helped create AI are now worried that, at the pace it's developing, it could lead to humanity's extinction.
Concerns can drift away on an afternoon at the beach at Bde Maka Ska in Minneapolis.
"I can kind of breathe," said Karin Coughlin as she approached the sand.
"I get actually super present with myself," said Rumay Ali as he walked by with a friend.
In such a place of relaxation, we naturally decided to ask them if they were worried about AI.
"I don't think it's like a flat yes or no for me," Coughlin said. She recently used ChatGPT, an AI chatbot, to help her develop a diet plan.
"I think to an extent there are some concerns because America is not the only country that's developing AI," Ali said.
Perhaps the greatest concern involves humanity's future. The Center for AI Safety recently released the following statement: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
Hundreds of AI developers and researchers cosigned the statement, including David Krueger, an assistant professor at the University of Cambridge in England. He's a member of the university's Computational and Biological Learning Lab, and his research group focuses on deep learning, AI alignment, and AI safety.
"My focus is pretty squarely on preventing catastrophic outcomes and human extinction," Krueger said.
What is the risk of extinction with AI?
"We are very rapidly approaching a point at which it might actually be too late to take effective action or prevent the development and proliferation of extremely dangerous AI capability," he said.
To put it bluntly, Krueger feels it's possible that humanity as we know it might not exist by the end of the century, if not sooner.
The Center for AI Safety lists several risks associated with AI on its website. Weaponization is one of them.
"AIs could be used by malicious actors to design novel bioweapons more lethal than natural pandemics," Dan Hendrycks, the director of the Center for AI Safety, told CBS News. "Alternatively, malicious actors could intentionally release rogue AIs that actively attempt to harm humanity. If such an AI was intelligent or capable enough, it may pose significant risk to society as a whole."
Another risk is enfeeblement, in which tasks and jobs are increasingly handed off to machines. That could make humans less useful or necessary for a functional society, even as they become heavily dependent on machines.
The thought of extinction can weigh heavily on everyone, from the average person to AI developers.
"It's a very emotionally difficult thing to confront, as actually both Geoffrey Hinton and Yoshua Bengio have remarked recently," Krueger said.
Hinton and Bengio are considered two "godfathers" of AI. Both of them cosigned the warning statement. Hinton is a computer scientist who recently retired from Google. He said he spent 50 years trying to make models on computers that can learn in a way that a brain learns.
"My epiphany was a couple of months ago. I suddenly realized that maybe the computer models we have now are actually better than the brain. And if that's the case, then maybe quite soon they'll be better than us. So that the idea of superintelligence, instead of being something in the distant future might come much sooner than I expected," Hinton said.
Fictional books and movies have long predicted a version of this outcome, playing on a fear of AI becoming smarter than humans and taking over the world.
"I'm nervous and I'm also aware that human beings are just afraid of the unknown. So I don't want fear to be how I engage with AI," Coughlin said.
"[AI is] a very, very powerful tool. And I think when a powerful tool is used in a wrong way, it can be very dangerous," said Ali, who thinks that if extinction occurs, it won't be a direct result of AI, but rather of those who control it. "It's going to be because of greed, selfishness."
University of St. Thomas professor Manjeet Rege specializes in software engineering and data science. He feels the extinction warning is over the top, but he agrees more oversight is necessary.
"I believe there should not be a complete pause on AI research. There needs to be freedom in terms of what you can develop with AI, but there needs to be regulation on when you deploy it," Rege said.
Krueger feels the priority should be better understanding the AI that's operating right now, and how to control it.
"We don't need to rush to be making smarter, more powerful systems as people are still doing," Krueger said. "There's a lot of ways that we can use the systems we have for socially beneficial applications, and I'd like to see a lot more focus on that."
What can the average person do to help prevent possible extinction from AI if they are concerned? It starts with understanding the risk and doing research. The next step is raising awareness, Krueger said, specifically among those who hold power, like government officials.
"There's a large political aspect to this problem, because we're talking about regulation, we're talking about international cooperation. And we need to make sure that politicians and leaders understand that this is a serious concern and something that is a priority for us," he said. "We've seen what happens with things like climate change. I think we don't want this to be another climate change. We want to be taking the steps now to make the world safe from advanced AI systems."