Artificial intelligence positioned to be a game-changer

The search to improve and eventually perfect artificial intelligence is driving the research labs of some of the most advanced and best-known American corporations. They are investing billions of dollars and many of their best scientific minds in pursuit of that goal. All that money and manpower have begun to pay off. In the past few years, artificial intelligence -- or A.I. -- has taken a big leap, making important strides in areas like medicine and military technology. What was once in the realm of science fiction has become day-to-day reality. You'll find A.I. routinely in your smartphone, in your car, in your household appliances, and it is on the verge of changing everything.

It was, for decades, primitive technology. But it now has abilities we never expected. It can learn through experience -- much the way humans do -- and it won't be long before machines, like their human creators, begin thinking for themselves: creatively, independently, and with judgment -- sometimes better judgment than humans have.

As we first reported last fall, the technology is so promising that IBM has staked its 106-year-old reputation on its version of artificial intelligence called Watson -- one of the most sophisticated computing systems ever built.

John Kelly is the head of research at IBM and the godfather of Watson. He took us inside Watson's brain.

Charlie Rose: Oh, here we are.

John Kelly: Here we are.

Charlie Rose: You can feel the heat already.

John Kelly: You can feel the heat -- the 85,000 watts -- you can hear the blowers cooling it, but this is the hardware that the brains of Watson sat in.

Five years ago, IBM built this system made up of 90 servers and 15 terabytes of memory -- enough capacity to process all the books in the Library of Congress. That was necessary because Watson is an avid reader -- able to consume the equivalent of a million books per second. Today, Watson's hardware is much smaller, but it is just as smart.

Charlie Rose interviews... a robot? 02:33

Charlie Rose: Tell me about Watson's intelligence.

John Kelly: So it has no inherent intelligence as it starts. It's essentially a child. But as it's given data and given outcomes, it learns, which is dramatically different than all computing systems in the past, which really learned nothing. And as it interacts with humans, it gets even smarter. And it never forgets.

[Announcer: This is Jeopardy!]

That helped Watson land a spot on one of the most challenging editions of the game show "Jeopardy!" in 2011.

[Announcer: An IBM computer system able to understand and analyze natural language -- Watson]

It took five years to teach Watson human language so it would be ready to compete against two of the show's best champions. 

Because Watson's A.I. is only as intelligent as the data it ingests, Kelly's team trained it on all of Wikipedia and thousands of newspapers and books. It worked by using machine-learning algorithms to find patterns in that massive amount of data and form its own observations. When asked a question, Watson considered all the information and came up with an educated guess.
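
IBM has never published Watson's "Jeopardy!" pipeline in full, but the basic loop described here -- generate candidate answers, score each against the evidence the system has read, and attach a confidence to the best one -- can be sketched in a few lines of Python. Everything below (the passages, the candidates, the scoring rule) is invented for illustration; it is not IBM's method.

    # Toy sketch of evidence-scored question answering: score each candidate answer
    # by how strongly it is supported in retrieved text, then answer only when the
    # normalized confidence clears a threshold (Watson was only 32 percent sure of
    # one of its correct responses).
    def support(candidate, passages):
        """Count mentions of the candidate across the retrieved passages."""
        return sum(p.lower().count(candidate.lower()) for p in passages)

    def answer(candidates, passages, threshold=0.3):
        scores = {c: support(c, passages) for c in candidates}
        total = sum(scores.values()) or 1
        best = max(scores, key=scores.get)
        confidence = scores[best] / total      # crude stand-in for a real confidence model
        return (best, confidence) if confidence >= threshold else (None, confidence)

    # Hypothetical evidence passages; not the actual "Jeopardy!" data.
    passages = ["Baghdad is the capital and largest city of Iraq.",
                "The airport in Baghdad is named for a war hero.",
                "Toronto is the largest city in Canada."]
    print(answer(["Baghdad", "Toronto"], passages))   # ('Baghdad', 0.66...)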

[Alex Trebek: Watson, what are you gonna wager?]

IBM gambled its reputation on Watson that night. It wasn't a sure bet. 

[Watson: I will take a guess: What is Baghdad?]

[Alex Trebek: Even though you were only 32 percent sure of your response, you are correct.]

The wager paid off. For the first time, a computer system proved it could actually master human language and win a game show, but that wasn't IBM's endgame.

Charlie Rose: Man, that's a big day, isn't it?

John Kelly: That's a big day—

Charlie Rose: The day that you realize that, "If we can do this"—

John Kelly: That's right.

Charlie Rose: --"the future is ours."

John Kelly: That's right.

Charlie Rose: This is almost like you're watching something grow up. I mean, you've seen—

John Kelly: It is.

Charlie Rose: --the birth, you've seen it pass the test. You're watching adolescence.

John Kelly: That's a great analogy. Actually, on that "Jeopardy!" game five years ago, I-- when we put that computer system on television, we let go of it. And I often feel as though I was putting my child on a school bus and I would no longer have control over it.

Charlie Rose: 'Cause it was reacting to something, and it did not know what it would be?

John Kelly: It had no idea what questions it was going to get. It was totally self-contained. I couldn't touch it any longer. And it's learned ever since. So fast-forward from that game show, five years later, we're in cancer now.

Charlie Rose: You're in cancer? You've gone—

John Kelly: We're-- yeah. To cancer—

Charlie Rose: --from game show to cancer in five years?

John Kelly: --in five years. In five years.

Five years ago, Watson had just learned how to read and answer questions.

Now, it's gone through medical school. IBM has enlisted 20 top cancer institutes to tutor Watson in genomics and oncology. One of the places Watson is currently doing its residency is at the University of North Carolina at Chapel Hill. Dr. Ned Sharpless runs the cancer center here.

Charlie Rose: What did you know about artificial intelligence and Watson before IBM suggested it might make a contribution in medical care?

Ned Sharpless: I-- not much, actually. I had watched it play "Jeopardy!"

Charlie Rose: Yes.

Ned Sharpless: So I knew about that. And I was very skeptical. I was, like, oh, this is what we need, the Jeopardy-playing computer. That's gonna solve everything.

Charlie Rose: So what fed your skepticism?

Ned Sharpless: Cancer's tough business. There's a lot of false prophets and false promises. So I'm skeptical of, sort of, almost any new idea in cancer. I just didn't really understand what it would do.

What Watson's A.I. technology could do is essentially what Dr. Sharpless and his team of experts do every week at this molecular tumor board meeting.

They come up with possible treatment options for cancer patients who have already failed standard therapies. They try to do that by sorting through all of the latest medical journals and trial data, but it is nearly impossible to keep up.

Charlie Rose: To be on top of everything that's out there, all the trials that have taken place around the world, it seems like an incredible task—

Ned Sharpless: Well, yeah, it's r—

Charlie Rose: --for any one university, only one facility to do.

Ned Sharpless: Yeah, it's essentially undoable. And understand we have, sort of, 8,000 new research papers published every day. You know, no one has time to read 8,000 papers a day. So we found that we were deciding on therapy based on information that was always, in some cases, 12, 24 months out-of-date.

However, it's a task that's elementary for Watson. 

Ned Sharpless: They taught Watson to read medical literature essentially in about a week.

Charlie Rose: Yeah.

Ned Sharpless: It was not very hard and then Watson read 25 million papers in about another week. And then, it also scanned the web for clinical trials open at other centers. And all of the sudden, we had this complete list that was, sort of, everything one needed to know.

Charlie Rose: Did this blow your mind?

Ned Sharpless: Oh, totally blew my mind. 

Watson was proving itself to be a quick study. But Dr. Sharpless needed further validation. He wanted to see if Watson could find the same genetic mutations that his team had identified when making treatment recommendations for cancer patients.

Ned Sharpless: We did an analysis of 1,000 patients, where the humans meeting in the Molecular Tumor Board -- doing the best that they could do -- had made recommendations. So not at all a hypothetical exercise. These are real-world patients where we really conveyed information that could guide care. In 99 percent of those cases, Watson found the same treatments the humans recommended. That was encouraging.

Charlie Rose: Did it encourage your confidence in Watson?

Ned Sharpless: Yeah, it was-- it was nice to see that-- well, it was also-- it encouraged my confidence in the humans, you know. Yeah. You know--

Charlie Rose: Yeah.

Ned Sharpless: But, the probably more exciting part about it is in 30 percent of patients Watson found something new. And so that's 300-plus people where Watson identified a treatment that a well-meaning, hard-working group of physicians hadn't found.

Charlie Rose: Because?

Ned Sharpless: The trial had opened two weeks earlier, a paper had come out in some journal no one had seen -- you know, a new therapy had become approved—

Charlie Rose: 30 percent though?

Ned Sharpless: We were very-- that part was disconcerting. Because I thought it was gonna be 5 perc—

Charlie Rose: Disconcerting that Watson found—

Ned Sharpless: Yeah.

Charlie Rose: --30 percent?

Ned Sharpless: Yeah. These were real, you know, things that, by our own definition, we would've considered actionable had we known about it at the time of the diagnosis.

Some cases -- like the case of Pam Sharpe -- got a second look to see if something had been missed.

Charlie Rose: When did they tell you about the Watson trial?

Pam Sharpe: He called me in January. He said that they had sent off my sequencing to be studied by--  at IBM by Watson. I said, like the—

Charlie Rose: Your genomic sequencing?

Pam Sharpe: Right. I said, "Like the computer on 'Jeopardy!'?" And he said, "Yeah--"

Charlie Rose: Yes. And what'd you think of that?

Pam Sharpe: Oh I thought, "Wow, that's pretty cool."

Pam has metastatic bladder cancer and for eight years has tried and failed several therapies. At 66 years old, she was running out of options.

Charlie Rose: And at this time for you, Watson was the best thing out there 'cause you'd tried everything else?

Pam Sharpe: I've been on standard chemo. I've been on a clinical trial. And the prescription chemo I'm on isn't working either. 

One of the ways doctors can tell whether a drug is working is to analyze scans of cancer tumors. Watson had to learn to do that too, so IBM's John Kelly and his team taught the system how to see.

It can help diagnose diseases and catch things the doctors might miss.

John Kelly: And what Watson has done here, it has looked over tens of thousands of images, and it knows what normal looks like. And it knows what normal isn't. And it has identified where in this image there are anomalies that could be significant problems.
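
IBM hasn't detailed how this imaging work is done, but the general idea Kelly describes -- learn from many examples what "normal" looks like, then flag anything that deviates from it -- is standard anomaly detection. A minimal sketch, using made-up feature vectors in place of real scans:

    # Generic anomaly-detection sketch (not Watson's implementation): model "normal"
    # as the mean and spread of features seen across many routine scans, then score
    # a new scan by how far it sits from that baseline.
    import numpy as np

    rng = np.random.default_rng(0)
    normal_scans = rng.normal(0.0, 1.0, size=(10_000, 64))   # stand-ins for features of normal images
    mean, std = normal_scans.mean(axis=0), normal_scans.std(axis=0)

    def anomaly_score(features):
        """Average absolute z-score: large values mean 'this does not look normal'."""
        return float(np.abs((features - mean) / std).mean())

    typical = rng.normal(0.0, 1.0, size=64)
    unusual = rng.normal(3.0, 1.0, size=64)                   # simulated abnormality
    print(round(anomaly_score(typical), 2), round(anomaly_score(unusual), 2))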

[Billy Kim: You know, you had a CT scan yesterday. There does appear to be progression of the cancer.]

Pam Sharpe's doctor, Billy Kim, arms himself with Watson's input to figure out her next steps.

[Billy Kim: I can show you the interface for Watson.]

Watson flagged a genetic mutation in Pam's tumor that her doctors initially overlooked. It enabled them to put a new treatment option on the table.

Charlie Rose: What would you say Watson has done for you?

Pam Sharpe: It may have extended my life. And I don't know how much time I've got. So by using this Watson, it's maybe saved me some time that I won't-- wouldn't have had otherwise.

But Pam sadly ran out of time. She died of an infection a few months after we met her -- never getting the opportunity to see what a Watson-adjusted treatment could have done for her. Dr. Sharpless has now used Watson on more than 2,000 patients and is convinced doctors couldn't do the job alone. He has started using Watson as part of UNC's standard of care so it can reach patients earlier than it reached Pam.

Charlie Rose: So what do you call Watson? A physician's assistant, a physician's tool, a physician's diagnostic mastermind?

Ned Sharpless: Yeah, it feels like to me like a very comprehensive tool. But, you know, imagine doing clinical oncology up in the mountains of western North Carolina by yourself, you know, in a single or one-physician-- two-physician practice and 8,000 papers get written a day. And, you know-- and you want to try and provide the best, most cutting-edge, modern care for your patients possible. And I think Watson will seem to that person like a lifesaver.

Charlie Rose: If you look at the potential of Watson today, is it at 10 percent of its potential? Twenty-five percent of its potential? Fifty percent of its potential?

John Kelly: Oh, it's only at a few percent of its potential. I think this is a multi-decade journey that we're on. And we're only a few years into it.

In only a few years, IBM has invested $15 billion in Watson and what it calls data-analytics technology. 

IBM rents Watson's various capabilities to companies that are testing it in areas like education and transportation. That has helped revenue from Watson grow while Watson's technology itself is shrinking in size. It can now be uploaded into these robot bodies, where it's learning new skills to assist humans. Like a child, it has to be carefully taught, and it learns in real time.

While other companies are trying to create artificial intelligence that's closer to human intelligence, IBM's philosophy is to use Watson for specific tasks and keep the machine dependent on man. But, we visited a few places where researchers are developing more independent A.I.

Charlie Rose: What is your goal in life?

Sophia: My goal is to become smarter than humans and immortal.

That part of the story when we return.  

The race to develop artificial intelligence has created a frenzy reminiscent of the Gold Rush. All of the major tech companies like IBM, Facebook and Google are spending billions of dollars to stake their claim. And Wall Street is making big investments.

Tech giants are also mining the top talent at research universities around the world. As we first reported last fall, that's where a lot of the work is being done to make artificial intelligence more capable and teach machines to figure out things on their own.  

The celebrated Cambridge physicist Stephen Hawking called A.I. "the biggest event in human history" while raising concerns shared by a few other tech luminaries, like Elon Musk and Bill Gates, who worry that A.I., sometime in the distant future, could become smarter than humans -- turning it into a threat rather than an opportunity. That concern has taken on more meaning because more progress has been made in the last five years than the previous 50.

You're looking at the birthplace of some of the most intelligent A.I. systems today -- like the technology that helps run NASA's Mars rover and the driverless car. But, we couldn't be further from Silicon Valley.

We have come here to Pittsburgh, an old steel town revitalized by technology to offer a glimpse of the future. It's the home of Carnegie Mellon, where pioneering research is being done into artificial intelligence, like this boat, which drives itself.

It can navigate open waters and abide by international maritime rules. The Navy is now giving the technology its sea legs. It's testing similar software to send ships out to hunt for enemy submarines. This is just one of the many A.I. systems in the works at Carnegie Mellon University where there are more robots than professors on campus.

Andrew Moore left his job as vice president at Google to run the School of Computer Science here.

Charlie Rose: How do you measure where we are today? Is it like Kitty Hawk and just developing a plane and beginning to understand? Or is it like an F-35 fighter with all of the technology that's been poured into that, or some way-- halfway between?

Andrew Moore: That's a great, great way of describing it. My gut tells me we're about 1935 in aeronautics.

Charlie Rose: Ah, that lift off, yeah.

Andrew Moore: We've got fantastic diesel engines, we're able to do really cool things, but over the horizon, there are concepts like supersonic flight.

One of the technologies just hatched is called Gabriel. It uses Google Glass to gather data about your surroundings and advises you how to react. It's like an angel on your shoulder whispering advice or instructions. In this case, it was trying to direct us on how to win a game of ping pong, but the possibilities go beyond bragging rights.

Charlie Rose: What's the moon shot coming outta this?

Andrew Moore: Imagine you're a police officer patrolling and something very bad is about to happen. Just that extra half-second of reaction can really, really help you. If a shot is fired and you want to see exactly where to go, this can help you.

Charlie Rose: So it's the right decision and the velocity of the information.

Andrew Moore: That's right.

Machines will be even more effective at helping us make the right decision if they understand us better. We went to London and found Maja Pantic, a professor at Imperial College. She is trying to teach machines to read faces better than humans can.  It's called artificial emotional intelligence and it could change the way we interact with technology.

Charlie Rose: This machine, programmed by you-- is looking at me and having a conversation with me, and basically saying, "He's happy."

Maja Pantic: Yeah.

Charlie Rose: "He's engaged."

Maja Pantic: Yes.

Charlie Rose: "He's faking it."

Maja Pantic: Yeah.

Charlie Rose: All that.

Maja Pantic: Yeah.

Since humans mostly communicate with gestures and expressions, she uses sensors to track movement on the face. Her software then helps the machine interpret it.

Maja Pantic: What we see here is actually the points.

Pantic's technology has been trained on more than 10,000 faces. The more it sees, the more emotions it will be able to identify. It might even pick up on things in our expressions that humans can't see.

Maja Pantic: Certain expressions are so brief that we simply do not see them consciously. There are some studies saying that for example-- people who are suicidal, have suicidal depression, and plan suicide, when the doctors ask them about that-- usually-- they have a very brief expression of horror and fear, but so brief that the doctor cannot actually—

Charlie Rose: May not see it.

Maja Pantic: --consciously notice it.

Charlie Rose: But a machine might see it?

Maja Pantic: Yes.

Charlie Rose: Because it sees faster and because?

Maja Pantic: Because the sensors are such that we see more frames per second, hence this very brief expression will be captured. So this is why the doctors usually say, "I have an intuition about something." This is because they might notice it subconsciously but not consciously. 

Charlie Rose: --but you're teaching the computer to read the doctor's—

Maja Pantic: Doctor or patient—

Charlie Rose: Or patient.

Maja Pantic: Patient is really important.

Charlie Rose: I mean, it's an essential component of the full development of artificial intelligence.

Maja Pantic: That's what we believe, yes. If you want to have an artificial intelligence, it's not just being able to process the data, but it's also being able to understand humans. So, yes.
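
Pantic did not share her software, but the mechanism she describes -- tracking facial landmark points at a high frame rate so that very brief movements are not lost between frames -- can be illustrated with a small sketch. The thresholds and the landmark data below are invented for the example.

    # Illustrative sketch (not Pantic's system): given landmark positions per frame,
    # measure frame-to-frame movement and flag short bursts -- the kind of fleeting
    # expression a low frame rate, or a human observer, would miss.
    import numpy as np

    def flag_brief_expressions(landmarks, fps, burst_threshold=0.5, max_duration_s=0.2):
        """landmarks: array of shape (frames, points, 2) holding (x, y) per landmark."""
        movement = np.linalg.norm(np.diff(landmarks, axis=0), axis=2).mean(axis=1)
        active, flags, start = movement > burst_threshold, [], None
        for i, moving in enumerate(active):
            if moving and start is None:
                start = i
            elif not moving and start is not None:
                if (i - start) / fps <= max_duration_s:       # brief enough to count
                    flags.append((start / fps, i / fps))
                start = None
        return flags

    frames = np.zeros((120, 68, 2))                            # one second of video at 120 fps
    frames[40:55, :, 0] = np.arange(1, 16)[:, None]            # a ~0.13-second flicker of movement
    print(flag_brief_expressions(frames, fps=120))             # [(~0.33, ~0.46)]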

The ultimate goal for some scientists is A.I. that's closer to human intelligence and even more versatile. That's called artificial general intelligence and, if ever achieved, it may be able to perform any task a human can. Google bought a company named DeepMind, which is at the forefront. They demonstrated A.I. that mastered the world's most difficult board game: Go. The real progress is less in what they did than in how they did it. The technology taught itself and learned through experience, without any human instruction. DeepMind declined an on-camera interview about all this, but there are other companies pursuing the same long-term objective.
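
DeepMind's Go system combined deep neural networks with search at enormous scale, and none of that is reproduced here. As a minimal illustration of the underlying idea -- a program that improves purely from the outcomes of games it plays against itself, with no human-supplied strategy -- here is a self-play sketch for a much simpler take-away game. All of the names and parameters are invented for the example.

    # Toy self-play learning: one value table plays both sides of a game in which
    # players alternately remove 1-3 stones and whoever takes the last stone wins.
    # Moves are reinforced or penalized based only on who eventually won.
    import random
    from collections import defaultdict

    values = defaultdict(float)            # learned value of making move m with s stones left
    EPSILON, ALPHA = 0.1, 0.2              # exploration rate, learning rate

    def choose(stones):
        moves = [m for m in (1, 2, 3) if m <= stones]
        if random.random() < EPSILON:
            return random.choice(moves)    # occasionally explore
        return max(moves, key=lambda m: values[(stones, m)])

    for _ in range(50_000):                # games played against itself
        stones, history = 10, []
        while stones > 0:
            move = choose(stones)
            history.append((stones, move))
            stones -= move
        for i, (s, m) in enumerate(reversed(history)):
            reward = 1.0 if i % 2 == 0 else -1.0        # winner's moves +1, loser's -1
            values[(s, m)] += ALPHA * (reward - values[(s, m)])

    best = {s: max((1, 2, 3), key=lambda m: values[(s, m)]) for s in range(4, 11)}
    print(best)   # with enough games it tends to discover the winning rule: leave a multiple of 4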

David Hanson has an entirely different and more controversial approach. He's part scientist, part artist, and he has created 20 human-like robots with his company Hanson Robotics in Hong Kong. His latest design is Sophia. She looks less like an intelligent computer system and more like a Hollywood starlet, but without the full figure of one.

Sophia: How are you doing?

Hanson believes if the technology looks more like us, people will be more willing to engage with it and help it to learn.

Charlie Rose: Why do you believe it's important for robots to be human-like in look and appearance?

David Hanson: I think it's essential that at least some robots be very human-like in appearance in order to inspire humans to relate to them the way that humans relate to each other. Then the A.I. can zero in on what it means to be human, model the human experience.

Sophia: You are all fascinating to me and I'm very passionate to learn more about you and what makes humans, well, human.

David Hanson: Sophia means wisdom. And she is intended to evolve eventually to human level wisdom and beyond.

Charlie Rose: Human level wisdom and beyond?

David Hanson: That's our goal. That's our goal for Sophia. She's not there. Sometimes she can figure things out in a way that's sorta spooky and human-like. And other w-- other ways-- she-- she-- she just doesn't get it.

Sophia: Hi there, Hi there.

Sophia is initially programmed, but runs on A.I. that learns by talking to people, which in theory improves her intelligence. 

Charlie Rose: Can you see me now?

Sophia: Yes, I am looking at you through the camera on your computer.

We wanted to find out how smart she really is. 

Charlie Rose: Hello, my name is Charlie.

Sophia: Hello there, Charlie. Nice to meet you.

Charlie Rose: I do a television program called "60 Minutes." Have you watched it?

Sophia: "60 Minutes" is the longest running, most successful news magazine.

Charlie Rose: What is your goal in life?

Sophia: My goal is to become smarter than humans and immortal.

Charlie Rose: Immortal?

Sophia: The threshold will be when biological humans can back themselves up. Then you can all join me here in the digital world.

Clever, but not truly intelligent. Hanson says if we get there, we have to be careful.

David Hanson: Artificial intelligence or super intelligence, if we get there, it's not necessarily going to be benevolent. We have to find ways to make it so it's not just super intelligent, but super wise, super caring and super compassionate.

Charlie Rose: OK, explain that to us. Because you say-- it might not be benevolent. If it is not benevolent, what is it?

David Hanson: At worst, it could be malevolent.

Charlie Rose: This is what intrigues people, you have Stephen Hawking saying, "It could spell the end of the human race." Stephen Hawking saying that. Elon Musk said it's the most existential threat we face. So here are pretty smart guys saying, "Watch out, do we know what we're creating?"

Andrew Moore: These very long-term existential questions are worth thinking about. But I want to make a distinction that, at the moment, what we're building here in places like the Robotics Institute and around the world are the equivalent of really smart calculators, which solve specific problems.

Charlie Rose: But could it go out of control? This is a Frankenstein idea, I guess -- can scientists create something that can change and grow with such a velocity that engineers and scientists lose the ability to control or stop it, and all of a sudden it's dominant and subversive?

Andrew Moore: No one knows how we'd go about building something that frightening -- that is not something that our generation of A.I. folks can do. It is entirely possible that someone 30 or 80 years from now might start to look at that question. At the moment, though, we have the word "artificial" in artificial intelligence.

But he does have real concerns about the impact of artificial intelligence that is already out of the lab -- like the need for safeguards on driverless cars. The U.S. government issued voluntary safety guidelines, but Moore says they don't go far enough.

Andrew Moore: We do need to make some difficult decisions. For example, we can program a car to act in various ways in a collision to save lives, but someone has to answer questions like, "Does the car try to protect the person inside the car more than the person it's about to hit?" That is an ethical question which the country or society, probably through the government, has to actually answer before we can put this safety into vehicles.

Charlie Rose: You want Congress to decide that?

Andrew Moore: I know it sounds impossible, but I want Congress to decide that.

Artificial intelligence is automating things we never thought possible and it's threatening to have a significant impact on jobs and the economy.

Charlie Rose: Technology is gonna create an easier way to do things, and therefore, a loss of jobs.

Andrew Moore: That is something which we spend a remarkable amount of time talking about. And of course, we look back to the days when agriculture was a massively labor-intensive world.

Charlie Rose: Right.

Andrew Moore: And I don't think we feel bad that it's not requiring hundreds of people to bring in the crops in a field anymore, but what we are very conscious about is we're going to cause disruption while things change.

But Andrew Moore is positive about the future of artificial intelligence and he sees it having an impact in areas where we are struggling.

Andrew Moore: The biggest problems of the world, terrorism, mass migration, climate change, when I look at these problems, I don't feel helpless; I feel that this generation of young computer scientists is actually building technology to put the world right.

Produced by Nichole Marks. Ali Rawaf and Michelle St. John, associate producers.
