In the future, will artificial intelligence be so sophisticated that it will be able to tell when someone is trying to deceive it? A Carnegie Mellon University professor and his team are working on technology that could move this idea from the realm of science fiction to reality. Their work -- rooted in game theory and machine learning -- is part of a larger push for more advanced AI.
As AI becomes more commonplace in the technology we use every day, detractors and supporters are becoming more vocal about its potential risks and benefits. For some, smarter AI sets up a dangerous precedent for a future too reliant on machines to make decisions about everything from medical diagnoses to the operation of self-driving cars. On the flip side, this kind of technology's proponents see AI as the ultimate utility player for humanity -- assisting with a range of needs from the military to medicine.
Tuomas W. Sandholm, a professor of computer science at Carnegie Mellon who is spearheading research on AI and deception, obviously falls in the latter category.
"I do think AI is going to be improving our lives, people's lives, and improving efficiency of the economy in so many different ways, it's going to be amazing," Sandholm told CBS News. "You see, 20 years ago, if you talked about artificial intelligence, people rolled their eyes. 'Where are the logical applications?' they'd ask. And today, it is totally different. There are tons of real-world applications for AI."
Sandholm's work with AI is centered on game theory, or the study of mathematical models for strategies to maximize gains and minimize losses. At Carnegie Mellon, Sandholm and his team developed Claudico, a poker-playing supercomputer program that showed it could do quite well against professional poker players during a two-week tournament in 2015.
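To make the game-theory idea concrete, here is a toy sketch (not the actual techniques behind Claudico, which are far more sophisticated): fictitious play on matching pennies, a simple zero-sum game. Each player repeatedly best-responds to the opponent's observed history of moves, and the empirical frequencies converge toward the game's equilibrium mix of 50/50 -- the strategy that minimizes worst-case losses and cannot be exploited by a deceptive opponent. All function and variable names here are illustrative, not from Sandholm's work.

```python
# Toy game-theory sketch: fictitious play on matching pennies.
# The row player wins (+1) on a match; the column player wins otherwise.
PAYOFF = [[1, -1],
          [-1, 1]]

def best_response(opponent_counts, payoff_rows):
    # Choose the action with the highest expected payoff against
    # the opponent's empirical action frequencies so far.
    total = sum(opponent_counts) or 1
    expected = [
        sum(p * c for p, c in zip(row, opponent_counts)) / total
        for row in payoff_rows
    ]
    return max(range(len(expected)), key=expected.__getitem__)

def fictitious_play(rounds=10000):
    # Arbitrary asymmetric starting beliefs about each player's history.
    row_counts, col_counts = [1, 0], [0, 1]
    # The column player's payoffs are the negation (zero-sum game).
    col_payoffs = [[-PAYOFF[r][c] for r in range(2)] for c in range(2)]
    for _ in range(rounds):
        r = best_response(col_counts, PAYOFF)
        c = best_response(row_counts, col_payoffs)
        row_counts[r] += 1
        col_counts[c] += 1
    total = sum(row_counts)
    return [n / total for n in row_counts]

freqs = fictitious_play()
print(freqs)  # both frequencies approach 0.5, the equilibrium mix
```

The point of the toy is the same one driving the poker research: in adversarial settings, the robust strategy is often a randomized one that a deceptive opponent cannot predict or exploit.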
While it sounds like a fun novelty, this isn't just an excuse to create robots that are good at playing games. The larger objective is to effectively usher in a new machine-centric era.
Sandholm's lab is working on developing problem-solving algorithms to allow robots to learn what actions they can take to counteract "opponents" that are trying to deceive them.
Sandholm stressed that this shouldn't be thought of solely in terms of combat. There is a wide range of real-life applications for these kinds of algorithms that can outsmart sneaky adversaries. For example, they could be harnessed to fight hackers or cyber spies.
"It's already been used in poker. In cybersecurity, we've been doing some work in that field. It will be prevalent in cybersecurity probably in another five or 10 years. In terms of medical treatment, these kinds of machines could be used to develop treatment strategies," he added. "If a patient has cancer or diabetes, how long would it take to build a treatment, a solution? Maybe you'll start to see things like that in the next five or 10 years, too. But that would require a long FDA approval."
As a researcher, Sandholm said it has been gratifying over the years to see acceptance of this kind of work grow. Initially, the idea that game theory could be applied to make machines smarter was met with frequent raised eyebrows.
"First, it started out that people thought it was crazy. You know, it would be, 'How can you model a real-world situation on a game? These games are so big that you can't possibly solve them -- unrealistic!' Now, it has totally changed, where there are all of these applications of game theory," Sandholm asserted.
One thing that hasn't necessarily changed with time is skepticism -- even fear -- of artificial intelligence. It's difficult for many people, scientists and laypeople alike, to forget the manipulative and nefarious robots of science fiction, like HAL 9000 in "2001: A Space Odyssey."
One of the most prominent voices in this field is Swedish philosopher Nick Bostrom at Oxford University. His 2014 book, "Superintelligence: Paths, Dangers, Strategies," argues that if machines move beyond humans in their overall intelligence, then we could be looking at a future ruled by robots.
Bostrom founded the Future of Humanity Institute at Oxford to delve into these very issues. For Stuart Armstrong, the Alexander Tamas Fellow at the institute, not all AI research spells doomsday for mankind. In fact, work like Sandholm's could be quite the opposite.
"Some of the results in there might turn out to have some application for making AI safer," Armstrong wrote in an email to CBS News regarding Sandholm's research. "If an AI becomes extremely powerful, we can't rely on being able to 'trick them.' However, if we learn about how AIs model and interpret human instructions, this could help us make sure that they actually do follow human intentions."
Armstrong added that the future of AI, and whether making machines smarter will prove more negative or positive, is almost impossible to predict.
"There are a great many immensely positive things that could come out of machine learning. Indeed, I feel that many people are underselling the advantages of AI -- powerful learning systems, if their values are positively aligned, could potentially solve most of the world's problems today," Armstrong wrote. "If their values are positively aligned, however. Getting this done is an important and difficult task. And if the values are not properly aligned, and the AI are still extremely powerful, the result could be catastrophic, threatening almost all we hold dear. The potential negative is of much higher magnitude than the positive, which is why it's so important to get it right."
Sandholm stressed that oftentimes the "hype about the dangers of AI" is overblown.
"We are the masters, if you will," he said. "AI is a servant and it's going to make our lives a lot better."