
Transcript: Brad Smith, Microsoft president and vice chair, on "Face the Nation," May 28, 2023


The following is a transcript of an interview with Microsoft president and vice chair Brad Smith that aired on "Face the Nation" on May 28, 2023.


MARGARET BRENNAN: We now want to welcome the President and Vice Chair of Microsoft, Brad Smith. It's good to have you here in person.

PRESIDENT AND VICE CHAIR OF MICROSOFT, BRAD SMITH: Thank you. It's great to be here in person.

MARGARET BRENNAN: I have a lot I want to get to with you, but I want to start on this news regarding China. Microsoft revealed that you discovered this state-sponsored espionage attempt. This group is called Volt Typhoon, and they wanted to disrupt communications between the U.S. and Asia during a future potential conflict. Beijing says all this is misinformation. What did you find?

SMITH: Well, what we found was what we think of as network intrusions, the prepositioning of code. It's something that we've seen in terms of activity before. You know, we do work hard to track this kind of activity by nation-state operations from China, from Russia, from Iran, from North Korea; those tend to be the principal four. This does represent a focus on critical infrastructure in particular, and that's obviously of real concern.

MARGARET BRENNAN: A focus on critical infrastructure at a potential point of conflict. So did Microsoft find this first, and then you told the spy agencies or how did that work?

SMITH: I don't want to go too deep into that. We certainly have found a good deal of this ourselves. I don't think we're the only ones that have been looking. We do share information, as you would expect. I don't know that we're the only ones who have found it either. I think the good news is we have a pretty broad-based ability, not just as a company, but as an industry and a country, to detect this kind of activity.

MARGARET BRENNAN: And then to make it public, which is a statement as well.

SMITH: Yes. Increasingly, yes, you know, increasingly, we do feel it's important to make this kind of information public. First and foremost, people can't act if they're not aware that there's a concern they need to address. So oftentimes, especially when something is broad-based, the best way to address it is to make it public. Second, I do think we live in a world where, frankly, there needs to be some level of accountability for anyone that is engaged in activity that poses this kind of threat or danger. And so, there is a need for public transparency in that vein as well.

MARGARET BRENNAN: I want to ask you about artificial intelligence, because this went from sort of the back pages of the tech section to front page news really fast. And I feel like we understand a fraction of it. You said 'AI offers perhaps even more potential for the good of humanity than any invention that has preceded it.' I mean, that's an incredible statement. More than electricity, more than advanced medicine? How is that possible?

SMITH: Well, in a sense, it's almost like the invention of the printing press that takes you all the way back to the 1400s. It's fundamentally an invention that can help us all do research, learn more, communicate more, sift through data better and its uses are almost ubiquitous. In medicine and drug discovery and diagnosing diseases, in scrambling the resources of, say, the Red Cross or others in a disaster to find those who are most vulnerable where buildings have collapsed.

MARGARET BRENNAN: Data crunching, essentially?

SMITH: That's one part of it. It identifies patterns in data that may be difficult for humans to access. But in a sense, it's going to impact all of our lives in a multitude of different ways. I demonstrated this morning that you can use it to create a PowerPoint presentation for you in 29 seconds if you've written a memo and now you want slides. So, think about it as the next step in our ability to learn, communicate, express ourselves.

MARGARET BRENNAN: But what should American consumers know about artificial intelligence? It's hard to people- for people to get their hands around. How do you define it?

SMITH: Well, artificial intelligence is really defined in a lot of different ways, but for somebody like any of us, look, this is the ability to use machines to make predictions, probabilities. And with those probabilities, given a large enough supply of data, they can actually help us predict what should be done next. It's a copilot, if you will, to help us do things. I think one good thing for everyone to know is, it's already part of our lives. If you have a Roomba at home, it finds its way around your kitchen using artificial intelligence to learn what to bump into and how to get around it. So it isn't necessarily as mysterious as we sometimes think. And yet at the same time, it is getting more powerful. It can do much more to help us, and I think the other thing that all of us should think about as Americans is, like any powerful technology, we need to keep it under human control. We need to keep it safe. And that will require the work of the companies that create it and that use it. It will require, I think, a level of law and regulation as well.

MARGARET BRENNAN: You just made a big jump from a Roomba–

SMITH: Yes, I did.

MARGARET BRENNAN: –to, you know, the machine takeover here. I mean, when you say that you have to make sure humanity is in control here. Is there really a risk that it won't be?

SMITH: Well, what I would really say is just think about any technology in the world today that would look dangerous to the people who lived before it was invented. An elevator that literally lifts you into the sky, the school bus on which we put our children in the morning, the high speed trains that we take for granted. Think about electricity. It was a lightning bolt before it was tamed and now we have circuit breakers in all of our homes. So whenever you have something that fundamentally can do good, but could also go and do harm, you put a braking mechanism in place. You put a safety brake, an emergency brake. We should think about AI the same way, not because it's on the verge of going and doing something that we're going to be concerned about but because we should do this before it gets to that point. I think that's one of the fundamental beliefs that we've come to.

MARGARET BRENNAN: So, let's get to the risks in a moment. I want to go through some of the positives because we've seen headlines. AI could help discover a new antibiotic to kill a superbug because it can go through chemicals and match them up really quickly, detection of early stages of cancer before doctors could even see it, help with language learning. These all sound like the betterment of humanity. What's the most promising concept you've seen?

SMITH: Well, I do love these examples where AI can detect a disease, a form of cancer, before the human eye or other human doctors might. You take something like pancreatic cancer. You know, it is so small when it begins that typically it's undetectable to the human eyes of doctors. And yet AI is very good at sifting through patterns and detecting things and flagging them so doctors can look at them. The survival rate for someone with pancreatic cancer today is very low because it's typically caught so late. And yet, so many times–actually, there's tens of millions of people a year that get a CT scan of their abdomen, maybe because of back pain or maybe because they were in an accident. This is something that can be overlaid on a regular medical procedure. It can save lives. And from that one example, there's so many others like it.

MARGARET BRENNAN: So what's the next interface that people should expect? Because we've heard a lot about ChatGPT, for example. Is that where we are or where will we be a year from now? 

SMITH: Well, in a sense, take ChatGPT or our service Bing or what we're bringing to, say, PowerPoint and Office and the like. We all use software. We may use it on our phones, we may use it on the laptop. Now think of the ability to, in effect, tell a computer what you want it to do. You don't need to learn to code. You can simply say, 'can you go find information about whether this restaurant is open on Monday nights, and if so, does it take reservations, and how do I make one?' You can write that in one sentence and get all of the information back. You no longer have to, you know, spend your time clicking on links and finding answers. Take that example and generalize further. You want help. You have writer's block. You've got to write a memo. You need to sift through your email. You want to create a PowerPoint slide. You can tell a computer what you want it to do. As I say, we create what we call a copilot. You don't have to know how to do everything. You just have to know what you want done and how to ask for it.

MARGARET BRENNAN: How do you know the- the information is accurate? 

SMITH: I do think that it's in part based on using your brain. You know, we are still in charge as human beings. The goal I think of any service that uses this kind of technology is to provide accurate answers. And so, you know, where there are concerns today that information is sometimes inaccurate, it will get better. But I still think at the end of the day, you may ask your friend for directions to a store that you've never been to. You're still going to ask yourself, 'do those directions make sense?' I've often found in the world of technology if something doesn't sound right, you should double check. That will still be true.

MARGARET BRENNAN: There may be some generational divide on that though, too. Right? In terms of–

(CROSSTALK)

SMITH: Maybe. Yeah. It's- we'll find out. Yeah.

MARGARET BRENNAN: –comfort level. But on the- on the concerning side of the ledger, I mean, rapid automation has replaced human jobs, replaced American workers with machines on so many fronts, right? Goldman Sachs predicted AI's ascendance will disrupt 300 million jobs here in the U.S. and in Europe. How fast is this going to happen?

SMITH: I think we'll see it unfold over years, not months. But it will be years, not decades, although things will progress over decades as well. There will be some new jobs that will be created. There are jobs that exist today that didn't exist a year ago in this field. And there will be some jobs that are displaced. There always are. But I think for most of us, the way we work will change. This will be a new skill set we'll need to, frankly, develop and acquire. When I was 27 years old, I got an offer from a law firm here in Washington, D.C., and I said I would only accept it if they would give me a personal computer. And they said, 'we have secretaries who use computers.' I said, 'let me use a computer. I'll write faster. I'll write better.' This is very similar. Anybody who wants to do something better, do it faster, should learn how AI can impact their work and master that skill.

MARGARET BRENNAN: But in the immediate term, it's concerning people to hear some of this. Stability AI's CEO said this is going to be a bigger disruption than- than the pandemic and predicted there won't be computer programmers five years from now. The head of one of the largest teachers unions in the country asked you about the future and what it means for education. And you suggested math exams could be graded by computers using AI instead of human teachers. This is going to cost jobs.

SMITH: Well, actually think about the shortage of teachers we have, and the shortage of time for the teachers we have. What would be better? To have a teacher sitting and grading a math exam, comparing the numbers with the table of the right answers or freeing that teacher up so they can spend more time with kids? So they can think about what they want to teach the next day. So they can use this technology to prepare more quickly and effectively for that class the next day.

MARGARET BRENNAN: Is it going to be affordable enough that it's in school systems to allow for that?

SMITH: Absolutely. In a sense, it will be ubiquitously available over the next year, even from a company like Microsoft. I mean for people who use things like Microsoft Word or PowerPoint or our email, you know, this will be woven into it. It will make those tools more powerful, faster, easier for people to use. You'll see teachers, I suspect, in the next year using that to help grade and help prepare for class the next day. 

MARGARET BRENNAN: What about creative industries? Some of the foundational models that you're using can create images and other sorts of creative work. That builds on past work from artists and writers, and what compensation do they receive for their intellectual property?

SMITH: Well, I think there's two different things to think about. I mean, first is, will we live in a world where people who create things of value continue to get compensated for it? And I believe the answer is and should be 'yes.' And we'll have copyright and other intellectual property laws that continue to apply and make that a reality. But I think there's a broader aspect to your question, and it was captured in a conversation I had with the head of a government agency, a person who has lots of analysts. He looked at this and he said, 'this is going to make my good analysts better. My weaker analysts, they're going to be challenged.' In a sense, I think that is often the case with this kind of technology. What should excite us is the opportunity to use it to get better, frankly, to eliminate things that are sort of drudgery. And yes, it will raise the bar. Life happens in that way. So let's all seize the moment, let's make the skilling opportunities broadly available. Let's make it easy. Let's even make it fun for people to learn.

MARGARET BRENNAN: So that's the big question about will my job be replaced by a computer, right? That is on people's minds. But you said you have a very deep concern here about deepfakes. Now, this is content that looks realistic, but is completely computer-generated. On Monday, there was a photo that actually moved the markets. It was a fake photo, it looked real, of an explosion near the Pentagon. And it was potentially partially created by AI. The market sold off quickly. It was fact-checked. But that image was put out there from an account that looked legitimate as well. So how do you stop something like this from happening?

SMITH: I think it will take two things that need to come together. One is, we'll need a system, which we and so many others have been working to develop, that protects content, that puts a watermark on it, so that if somebody alters it, if somebody removes the watermark, if they do that to try to deceive or defraud someone, first of all, they're doing something that the law makes unlawful. We may need some new law to do that. But second, we can then use the power of AI to detect when that happens.

MARGARET BRENNAN: So that means a news organization like CBS would have video that somehow could be identified besides our little, you know, eye icon, something embedded in there that your computers would see– 

SMITH: –Yes, absolutely.  

MARGARET BRENNAN: –to say this is real.

SMITH: Yes, that is exactly where this should go. And I would guess and hope that CBS will be absolutely at the forefront of this. You embed what we call metadata; it's part of the file, and if it's removed, we're able to detect it. If there's an altered version, we in effect create a hash. Think of it like the fingerprint of something, and then we can look for that fingerprint across the internet. This happens today, in another situation that I think rightly brought the industry together. It was mass shootings, and specifically the Christchurch massacre in 2019. That was livestreamed over the internet, and Jacinda Ardern, the New Zealand Prime Minister, basically vowed never again. And so she brought the industry and other countries together. And we now work together so that whenever something like that happens, an immediate alert goes to everyone and those videos are disrupted before they can be distributed.
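What Smith describes here, matching a content "fingerprint" across the internet, can be sketched in a few lines of code. The sketch below is illustrative only, not Microsoft's system: it uses a plain SHA-256 digest, which matches only exact copies of a file, whereas production hash-sharing systems rely on perceptual hashes that survive re-encoding and cropping. The file name and the flagged-hash list are hypothetical.

```python
import hashlib
from pathlib import Path

def fingerprint(path: Path) -> str:
    """Return a hex digest that serves as the file's 'fingerprint'."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        # Read in chunks so large media files don't have to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical shared database of fingerprints flagged for removal,
# analogous to the industry hash-sharing Smith describes.
KNOWN_FLAGGED = {
    "0f3a..."  # placeholder digest; real entries would come from the shared list
}

video = Path("upload.mp4")  # hypothetical file being checked at upload time
if video.exists():
    if fingerprint(video) in KNOWN_FLAGGED:
        print("Match: block distribution and alert partner platforms.")
    else:
        print("No match against the shared list.")
```

Detecting stripped provenance metadata works the same way in spirit: a verifier checks that the expected signed metadata block is present and intact, and flags files where it is not.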

MARGARET BRENNAN: But they're out there and then often taken down?

SMITH:  But we're able to do it so quickly now, that there really hasn't been a repetition of the broad distribution of video, the way there was in Christchurch, New Zealand.

MARGARET BRENNAN: The criticism of Twitter, of course, for example, is that they took some of these brakes off the system, and that there is violent content and things like that there. So, it puts a lot of responsibility on the shoulders of the people who control the pipes. And not all of them are regulated, as we know, with social media companies. I want to ask about another topic here that's related, the RNC – politics. They put out an attack ad using AI, and I know we have video of it. It was meant to mimic a news report from the future, from 2024. It said Joe Biden won the election, and then it shows this dystopia. And in teeny, tiny script in the upper left-hand corner, it says 'generated by AI.' Is that sufficient?

SMITH:  I think there's two things we're gonna have to balance. One is, I do think that there is some real virtue in telling the public when they are seeing content that has been generated by AI instead of a human being, especially if it is designed to look like a human being, a human face or voice so that people know, no, that's not the real person. We- I think we'll need some new standards in that space. 

MARGARET BRENNAN: Who sets that? 

SMITH: This, I think, is one of the issues that we're going to need to discuss together and find a path through. Now we do need to balance that we live in a country that I think quite rightly prides itself on free expression. And the truth is, since cartoons were invented, there were depictions–

MARGARET BRENNAN: Sure.

SMITH: –and then since Photoshop was invented, and since we've had computer generated graphics, you know, we have had things that look more appealing or even real, but this is a question that we need to work through.

MARGARET BRENNAN: Because it looks very real and it's cheap.

SMITH:  Yes.

MARGARET BRENNAN: If you wanted to make a video like that, you would need editors, you would need creatives, you would need hours of investment there. This is a computer just spitting it out.

SMITH: And I think in addition to that, the problem that should probably concern us most fundamentally, is when is someone doing something that looks like it's intended to or has the effect of deceiving– 

MARGARET BRENNAN: Yeah.

SMITH: –the public. Then you're in a special category, I think, and in that category, I think it is more reasonable to say we are committed to free expression, but part of what you should express is something that avoids that risk of misleading people about what they're seeing.

MARGARET BRENNAN: So we're in a Washington that's divided. Legislation is slow. We're on the cusp of a presidential election year. How much of a factor is this going to be? These deepfakes and misleading ads?

SMITH: Well, I think there is an opportunity to take real steps in 2023, so that we have guardrails in place for 2024. So that we are identifying in my view, especially when we're seeing foreign cyber influence operations from a Russia, a China or Iran, that is pumping out information that they know is false and is designed to deceive, including using artificial intelligence. And that will require the tech sector coming together with government and it really will require more than one government. This needs to be an international initiative. But we've done that in recent years in other spaces. We can do it again and I think we should.

MARGARET BRENNAN: What's Microsoft doing to prepare for 2024?

SMITH: Well, we- right now are focused on several things. First, we're focusing on protecting content in exactly the way I described, so CBS and others can, in effect, put a watermark on and protect their content. Second, we're developing approaches to better protect campaigns. We want candidates and campaigns and our political parties, frankly, to be able to protect the cybersecurity of their accounts, their communications, to know when there is a foreign operation that may be undertaken to try to mislead the public about what they're saying. Third, we are very interested in exploring, what can we do? What can we do this year? What can we do together in the United States and globally? Because I do think it will take that kind of broad public-private sector collaboration.

MARGARET BRENNAN: President Biden stopped by this meeting with a number of CEOs, including the CEO of Microsoft, that Vice President Harris had convened at the White House to talk about AI. And he was quoted as saying 'what you're doing has enormous potential and enormous danger.' What is the White House concerned about?

SMITH: I think the White House is concerned, frankly, about a lot of the questions you're asking me about and that are of concern to the American public. First and foremost, how do we ensure that this is safe? Which is one of the initiatives we're putting forward: that we have the safety mechanisms in place so that we can slow down or even turn off an AI system if it is behaving in a manner that would create safety risks. I do think, second, they are concerned, as you were mentioning, about the 2024 election. Nobody wants to compete with a deepfake, and I don't think anybody should have to. And so, I think, third, they, and really people across Washington, D.C., fundamentally in both political parties, are asking the same questions you were: what does this mean for the future of my job? What does it mean for the future of school for my kids? Fundamentally, we're all asking ourselves, 'how do we get the good out of this and put in place the kinds of guardrails to protect against the risks that it may be creating?'

MARGARET BRENNAN: The scale of what you're talking about is so huge. Sam Altman, the CEO of OpenAI, which made ChatGPT, testified recently before Congress and recommended an entirely new federal agency be set up to oversee AI. You like this idea, but it's also implying that the federal government is not currently up to the task.

SMITH: It suggests that. In fact, there are a number of federal initiatives that I actually think have a lot in favor of them, including a new AI risk management framework that the federal government created and launched in January. But we do need more than we have. We need our existing laws to apply. They need to be enforced. But especially when it comes to these most powerful models, when it comes to the protection of the nation's security, I do think we would benefit from a new agency, a new licensing system, something that would ensure not only that these models are developed safely, but that they're deployed in, say, large data centers where they can be protected from cybersecurity, physical security and national security threats.

MARGARET BRENNAN: This is like setting up a new energy department, or an entirely new top-to-bottom agency, just dealing with technology or just dealing with AI?

SMITH: I think that's a question to be discussed. I think we need to move most quickly, frankly, with respect to AI. But when you consider this: if you're buying a food product, it's gone through the Department of Agriculture or the FDA. If you walk next door to the pharmacy and buy a product, it's gone through the FDA. If you get in your car to drive home, your car has gone through safety inspections. If you then drive to the airport and get on a plane, you're getting on an aircraft that has been certified for use by the FAA. This is, in a sense, what we live with in a modern world. We can figure out a way to do this. We've done it for all of these other things. We need to do it thoughtfully and well, and we should move quickly. But this is not beyond the realm of what we can accomplish.

MARGARET BRENNAN: And yet, the last two- the last two presidents didn't really act to regulate social media, for example. We're still debating that. Technology seems a little bit harder for lawmakers to get their head around. So you're here and you're explaining it, which I appreciate, but how do you make- or how do you convince people that this isn't the big bad tech giant of Microsoft setting the rules of the road and running the little guys off of it?

SMITH: Well, first and foremost, we're not suggesting that any single company or the entire industry together should be the one to set the rules. We should have the United States government, elected by the American people, setting the rules of the road, and we should all be obliged to follow them. But second, I really do think that your point about social media is a valid one. We made some mistakes, not just tech companies or social media companies. Maybe we all made some mistakes. A decade ago, we looked at the Arab Spring, and we said social media was going to be the best thing for democracy in the history of democracy. And then four years later, we found that the Russians were using it as a weapon aimed at our democracy itself. Let's be clear-eyed, let's be optimistic, but clear-eyed optimists. Second, in a sense, what Congress did in the 1990s, through what's called Section 230, was sort of create this bubble around what became social media that said, 'there's no law here you have to worry about.' They sort of created what became a little bit of the Wild West. I don't think that's the case for this technology. I don't think that should be the case for this technology. Let's embrace early on, look, we need rules, we need laws, we need responsibility, and we need it quickly.

MARGARET BRENNAN: And how do you convince lawmakers that you're not hurting the next generation and the startups because you are telling them- I mean, you're having to explain this from scratch, I imagine when you talk to people, what artificial intelligence is. That's where we started this conversation today. So how can you trust that they know what they're doing?

SMITH: You know, I talk to people here in Washington, D.C., but I've actually met with government officials in a dozen countries around the world just since the first of the year. And AI has been the issue of the year. So I do find that people are curious, they want to learn, and you know what, they're pretty smart. If we make it easy for people to digest this, and I think that is our responsibility, they are fast learners. That's good. So I'm optimistic about the ability of people in government to learn what they need to learn. And then there's the other part of your question. Look, don't listen just to us. You know, listen to academics, listen to people who are creating startups, listen to Republicans and Democrats alike, and then the people who are elected should do what we've elected them to do. They should make up their mind. We want to be a voice, we want to be a source of information. I never expect, here or anywhere in the world, that anybody is going to do something just because I happen to think it was a good idea.

MARGARET BRENNAN: You have like 350 people at Microsoft working on this right now.

SMITH: We have 350 people working on what we call responsible AI, the AI safety and ethical systems that we–

(CROSSTALK)

MARGARET BRENNAN: I'm going to guess that's more than- work in the U.S. government on this issue?

SMITH: It's possible. We've been at it for six years. 

MARGARET BRENNAN: Fair. 

SMITH: So yeah, it's possible.

MARGARET BRENNAN: There were a number of tech leaders, including Elon Musk, and one of the cofounders of Apple, Steve Wozniak, who called publicly for a six-month pause on AI systems that are more powerful than GPT-4, or to have governments institute a kind of moratorium until there are safety protocols in place. Is there something to that? Do we need to tap the brakes a bit here?

SMITH: I'm not inclined to think that that's the answer. First of all, it'll take 12 months to get the government to debate and decide whether to have a pause that will last for six months. And in the meantime, others-

MARGARET BRENNAN: You could declare one.

SMITH: Yes. And we've been clear, OpenAI has explained they- they don't really anticipate a new model in the next six months. But I think the more important question is, look, what's going to happen in six months that's different from today? How would we use the six months to put in place the guardrails that would protect safety and the like? Well, let's go do that, rather than slow down the pace of technology, which I think is extraordinarily difficult. I don't think China's going to jump on that bandwagon. Let's use six months to go faster. Let's adopt an executive order here for the federal government, where the government itself says it's only going to buy AI services in certain categories, say, from companies that are implementing AI safety protocols and the like. You know, let's start to get some legislation moving. Let's figure out how we can implement voluntary safety standards. There's so many things we can do to make this better. Let's put our energy there rather than spending our time debating something that I'm pretty skeptical we'll ever see happen.

MARGARET BRENNAN: And you think this will happen, some, some regulation or some legislation in the year ahead?

SMITH: I do. That- first of all, we always need to remember it's a big world. The Europeans have been working on a law for a couple of years-

MARGARET BRENNAN: The Chinese?

SMITH: I was in Japan just three weeks ago and they have a national AI strategy. The government has adopted it. And it's about participating in the development and use but also regulating this. The world is moving forward. Let's make sure that the United States at least keeps pace with the rest of the world.

MARGARET BRENNAN: And lastly, the surgeon general put out a report recently about social media and the negative effects, particularly on- on young people. There are some talking about age restrictions for social media now. Should there be age restrictions for accessing AI? Is there a mental health impact that we can even begin to get our arms around?

SMITH: I think those are the right questions to ask. And the best thing to do is to be curious. Let's not rush to judgment before we have the opportunity to learn. Let's think about different scenarios. We put in a whole safety architecture for a- a search service like Bing and the chat piece of it, so that, you know, no one, regardless of age, can go ask how to commit suicide or create a bomb. And there are certain categories that we would, I think, all agree, you know, we don't want kids doing that. So I do think we need to lean in to protecting children. That should be one of the lessons that comes out of the social media experience. On the other hand, anybody who's ever had a 12-year-old child trying to do algebra and asking the parent for help–I don't think my kids are going to believe I ever made it through algebra. An AI tutor isn't a bad thing. And when I was in South Korea, I met with the Education Minister. They're designing a digital textbook for something like math and coding that has AI built in to help students learn, to answer their questions. Let's figure out what can be good for kids and make it happen, and what we don't want kids to be exposed to and protect against it.

MARGARET BRENNAN: Brad Smith, thank you for your time today. I could continue asking questions because I still have them. But it's great to have you here to explain. 

SMITH: Thank you. 

MARGARET BRENNAN: We'll be back in a moment.
