"I've always thought of AI [artificial intelligence] as the most profound technology humanity is working on. More profound than fire or electricity or anything that we've done in the past," said Sundar Pichai, the CEO of Google and its parent company Alphabet.
The 50-year-old Pichai gave 60 Minutes correspondent Scott Pelley rare access to the inner workings of Google's AI development, which includes robots that have acquired skills through machine learning and Project Starline, an AI video conferencing experience Google is developing to allow people to feel as though they are together, despite being in different locations.
Perhaps Google's most anticipated and noteworthy foray into AI is its chatbot, Bard. The company presently calls it an experiment, in part so it can do more internal testing. Bard notably made a mistake when Google debuted the program in February. Unlike Google search, Google says, Bard does not look for answers on the internet; instead, it relies on a self-contained and mostly self-taught program.
"[AI] gets at the essence of what intelligence is, what humanity is," Pichai told Pelley.
In the video below, Pelley asks Pichai how Bard will affect Google search, which handles 90% of internet queries and is the company's most profitable division.
When Google filed for its initial public offering in 2004, its founders wrote that the company's guiding principle, "Don't be evil," was meant to help ensure it did good things for the world, even if it had to forgo some short-term gains. The phrase remains in Google's code of conduct.
Pichai told 60 Minutes he is being responsible by not releasing advanced models of Bard, in part, so society can get acclimated to the technology, and the company can develop further safety layers.
Pichai told 60 Minutes that one of the things that keeps him up at night is Google's AI technology being deployed in harmful ways.
Google's chatbot, Bard, has built-in safety filters to help combat the threat of malevolent users. Pichai said the company will need to constantly update the system's algorithms to combat disinformation campaigns and detect computer-generated images that appear to be real.
As Pichai noted in his 60 Minutes interview, consumer AI technology is in its infancy. He believes now is the right time for governments to get involved.
"There has to be regulation. You're going to need laws…there have to be consequences for creating deepfake videos which cause harm to society," Pichai said. "Anybody who has worked with AI for a while…realize[s] this is something so different and so deep that we would need societal regulations to think about how to adapt."
That adaptation is already happening around us, driven by technology that Pichai believes "will be more capable than anything we've ever seen before."
Soon it will be up to society to decide how it's used, and whether to abide by Alphabet's code of conduct and "Do the right thing."
You can watch Scott Pelley's two-part report on Google below.
The video at the top was produced by Keith Zubrow and edited by Sarah Shafer Prediger.