
Artificial intelligence could end disease, lead to "radical abundance," Google DeepMind CEO Demis Hassabis says


When Demis Hassabis won the Nobel Prize last year, he celebrated by playing poker with a world champion of chess. Hassabis loves a game, which is how he became a pioneer of artificial intelligence. The 48-year-old British scientist is co-founder and CEO of Google's AI powerhouse, called DeepMind. We met two years ago when chatbots announced a new age. Now, Hassabis and others are chasing what's called artificial general intelligence—a silicon intellect as versatile as a human but with superhuman speed and knowledge. After his Nobel and a knighthood from King Charles, we hurried back to London to see what's next from a genius who may hold the cards of our future. 

Demis Hassabis: What's always guided me and-- the passion I've always had is understanding the world around us. I've always been-- since I was a kid, fascinated by the biggest questions. You know, the-- meaning of-- of life, the-- nature of consciousness, the nature of reality itself. I've loved reading about all the great scientists who worked on these problems and the philosophers, and I wanted to see if we could advance human knowledge. And for me, my expression of doing that was to build what I think is the ultimate tool for advancing human knowledge, which is-- which is AI.

Scott Pelley: We sat down in this room two years ago. And I wonder if AI is moving faster today than you imagined. 

Demis Hassabis: It's moving incredibly fast. I think we are on some kind of exponential curve of improvement. Of course, the success of the field in the last few years has attracted even more attention, more resources, more talent. So that's adding to this exponential progress. 

Scott Pelley: Exponential curve, in other words straight up?

Demis Hassabis: Yep, straight up and increasing speed of progress.

We saw the progress...

Astra: Hello, Scott. It's nice to see you again.

...in an artificial companion that can see and hear and chat about anything. Early chatbots learned only from the internet; an app called Astra also takes in the world. 

Scott Pelley: Do we call her she?

Bibo Xu: Um, that's a good question. I'm not sure, I'm not sure we all know the answer yet…

Bibo Xu is product manager for Project Astra, an app in a new generation of chatbots that interpret the world with their own eyes. We challenged Astra with virtual paintings, chosen by us and shown to Astra for the first time. 


Astra: This is "July Day" by Thomas Hart Benton from 1942.

Scott Pelley: How about this one?

Astra: That is "The Virgin of Charity" by El Greco.

Scott Pelley: That painting is "Automat" by Edward Hopper. 

Scott Pelley: The person in the painting, what emotion is she exhibiting?

Astra: The subject in the painting appears pensive and contemplative, her expression suggesting a sense of solitude. 

Scott Pelley: Can you create a story around this picture? 

Astra: It's a chilly evening in the city. A Tuesday, perhaps. The woman, perhaps named Eleanor, sits alone in the diner. She is feeling melancholy due to the uncertainty of her future, and the weight of unfulfilled dreams. The scene freezes in time. Only the flow of ideas moving onward.

That line, "only the flow of ideas moving onward." I wish I'd written that. And did we hear a limit to Astra's patience?

Scott Pelley: What was that fictional conversation about?

Astra: Ahhh, in the story I imagined, Eleanor and her friend talked about their shared dreams and the challenges of pursuing them. 

Scott Pelley: Something about the way you said "Ahh" made me wonder if you were getting bored.

Astra: I apologize if my tone came across that way. My aim is always to engage thoughtfully.

Scott Pelley: And she said, "Well, I'm sorry if you don't like my tone."

Demis Hassabis: Wow. Well, you see--

Scott Pelley: What's, what's happening there?


Demis Hassabis: Well, that's interesting. That's, again, a challenge with these systems, as they act in the moment with the context that's around them. And that may have never been tested before. 

He's often surprised because AI programs are sent out on the internet to learn for themselves. They can return later with unexpected skills. 

Demis Hassabis: So we have theories about what kinds of capabilities these systems will have. That's obviously what we try to build into the architectures. But at the end of the day, how it learns what it picks up from the data is part of the training of these systems. We don't program that in. It learns like a human being would learn. So new capabilities or properties can emerge from that training situation.

Scott Pelley: You understand how that would worry people?

Demis Hassabis: Of course. It's the duality of these types of systems, that they're able to do incredible things, go beyond the things that we're able to design ourselves or understand ourselves. But, of course, the challenge is, is making sure that the knowledge databases they create we understand what's in them.

Now DeepMind is training its AI model, called Gemini, to not just reveal the world but to act in it, like booking tickets and shopping online. It's a step toward AGI, artificial general intelligence, with the versatility of a human mind.

Scott Pelley: On track for AGI in--

Demis Hassabis: In the next five to ten years, I think. 

Scott Pelley: And in 2030 you will have what?

Demis Hassabis: Well, we'll have a system that-- really understands everything around you in very-- nuanced and deep ways-- and are kind of embedded in your everyday life.

'Embedded' like Astra in eyeglasses. 

Scott Pelley: What can you tell me about this building I'm looking at?

Astra: This is the Coal Drops Yard, a shopping and dining district.

She sees what I see. There's a speaker in the earpiece only I can hear.

Scott Pelley: What was it originally before it became shops?

Astra: The Coal Drops Yard was originally a set of Victorian coal warehouses used to receive and distribute coal across London.

Scott Pelley: Was coal ever a problem for the environment in London?

Astra: Yes. Coal was a significant source of air pollution in London, particularly during the Industrial Revolution. 

It occurred to us that the only thing we contributed to this relationship was legs, which will also soon be engineered. 

Demis Hassabis: I also think another big area will be robotics. I think it will have a breakthrough moment in the next couple of years where we'll have demonstrations of maybe humanoid robots or other types of robots that can start really doing useful things.

For example, researchers Alex Lee and Giulia Vezzani showed us a robot that understands what it sees...


Giulia Vezzani: It's a tricky one.

And reasons its way through vague instructions. 

Alex Lee: Put the blocks whose color is the combination of yellow and blue into the matching color bowl.

Scott Pelley: The combination of yellow and blue 

Scott Pelley: …is green. And it figured that out. It's reasoning.

Giulia Vezzani: Yes, definitely, yes.

The toys of Demis Hassabis's childhood weren't blocks but chess pieces. At 12, he was the number two champion in the world for his age. This passion led to computer chess, video games and, finally, thinking machines. He was born to a Greek Cypriot father and Singaporean mother. Cambridge, MIT, Harvard-- he's a computer scientist with a PhD in neuroscience because, he reasoned, he had to understand the human brain first. 

Scott Pelley: Are you working on a system today that would be self-aware?

Demis Hassabis: I don't think any of today's systems to me feel self-aware or, you know, conscious in any way. Obviously, everyone needs to make their own decisions by interacting with these chatbots. I think theoretically it's possible. 

Scott Pelley: But is self-awareness a goal of yours?

Demis Hassabis: Not explicitly. But it may happen implicitly. These systems might acquire some feeling of self-awareness. That is possible. I think it's important for these systems to understand you, self and other. And that's probably the beginning of something like self-awareness.

But, he says, if a machine becomes self-aware, we may not recognize it.

Demis Hassabis: I think there's two reasons we regard each other as conscious. One is that you're exhibiting the behavior of a conscious being very similar to my behavior. But the second thing is you're running on the same substrate. We're made of the same carbon matter with our squishy brains. Now obviously with machines, they're running on silicon. So even if they exhibit the same behaviors, and even if they say the same things, it doesn't necessarily mean that this sensation of consciousness that we have is the same thing they will have.


Scott Pelley: Has an AI engine ever asked a question that was unanticipated?

Demis Hassabis: Not so far that I've experienced, and I think that's getting at the idea of what's still missing from these systems. They still can't really yet go beyond asking a new novel question or a new novel conjecture or coming up with a new hypothesis that has not been thought of before.

Scott Pelley: They don't have curiosity?

Demis Hassabis: No, they don't have curiosity, and they're probably lacking a little bit in what we would call imagination and intuition.

But they will have greater imagination, he says, and soon. 

Demis Hassabis: I think actually in the next maybe five to ten years. I think we'll have systems that are capable of not only solving an important problem or conjecture in science but coming up with it in the first place.

'Solving an important problem' won Hassabis a Nobel Prize last year. He and colleague John Jumper created an AI model that deciphered the structure of proteins.

Demis Hassabis: Proteins are the basic building blocks of life. So, everything in biology, everything in your body depends on proteins. You know, your neurons firing, your muscle fibers twitching, it's all mediated by proteins.

But 3D protein structures are so complex that less than 1% were known. Mapping each one used to take years. DeepMind's AI model did 200 million in one year. Now Hassabis has AI blazing through solutions to drug development. 

Demis Hassabis: So on average, it takes, you know, ten years and billions of dollars to design just one drug. We can maybe reduce that down from years to maybe months or maybe even weeks. Which sounds incredible today but that's also what people used to think about protein structures. And it would revolutionize human health, and I think one day maybe we can cure all disease with the help of AI.

Scott Pelley: The end of disease?

Demis Hassabis: I think that's within reach. Maybe within the next decade or so, I don't see why not. 

Demis Hassabis told us AI could lead to what he calls "radical abundance"—the elimination of scarcity. But he also worries about risk.

Demis Hassabis: There's two worries that I worry about. One is that bad actors, humans you know, users of these systems repurpose these systems for harmful ends. And then the second thing is the AI systems themselves as they become more autonomous and more powerful. Can we make sure that-- we can keep control of the systems? That they're aligned with our values, they-- they're doing what we want that benefits society. And they stay on guardrails. 

"Guardrails" are safety limits built into the system. 

Scott Pelley: And I wonder if the race for AI dominance is a race to the bottom for safety.

Demis Hassabis: So that's one of my big worries actually, is the-- of course all of this energy, and racing, and resources is great for progress. But it might incentivize certain actors in-- in that to cut corners. And one of the corners that could be shortcut would be safety and responsibility. So the question is, is how can we-- coordinate more, you know, as leading players, but also nation-states, even. I think this is an international thing. AI's gonna affect every country-- everybody in the world. So I think it's really important that the world and the international community has a say in this.

Scott Pelley: Can you teach an AI agent morality?

Demis Hassabis: I think you can. They learn by demonstration. They learn by teaching. And I think that's one of the things we have to do with these systems, is to give them a value system and-- a guidance, and some guardrails around that, much in the way that you would teach a child.

Google DeepMind is in a race with dozens of others striving for artificial general intelligence so human that you can't tell the difference. Which made us think about Demis Hassabis signing the Nobel Book of Laureates—when does a machine sign for the first time? And after that, will humans ever sign it again?

Demis Hassabis: I think the next steps are going to be these amazing tools that enhance almost every endeavor we do as humans. And then beyond that, when AGI arrives, you know, I think it's gonna change pretty much everything about the way we do things. And it's almost, you know, I think we need new great philosophers to come about, hopefully in the next five, ten years, to understand the implications of this. 

Produced by Denise Schrier Cetta. Associate producer, Katie Brennan. Broadcast associate, Michelle Karim. Edited by Sean Kelly.
