
Maybe artificial intelligence won't destroy us after all

Gray Scott -- philosopher, speaker, artist, and self-described "techno optimist" -- spends his time imagining what the future might look like. One of his passions these days is exploring artificial intelligence and how it will transform society.

Scott, who is based in New York City, is the founder of SeriousWonder.com, a futuristic technology website, and co-executive producer of the upcoming documentary "The Future of Work and Death," which among other things contemplates a world where robots do most of our jobs and humans have more time to enjoy life. At the same time, he explores what he calls the age of age reversal, a time when medical breakthroughs "free us from the chain of natural death."

His positive take on artificial intelligence comes as a counterpoint to those who fear that AI will replace good-paying jobs with robots and maybe even pose a mortal threat to humanity. Among those who have warned about the potential dangers of artificial intelligence are Tesla's Elon Musk -- who suggested "we are summoning the demon" with AI -- and Stephen Hawking, who has warned that artificial intelligence could one day "spell the end of the human race."

CBS News talked with Scott about the future of artificial intelligence, why we fear the rise of these machines, and whether they should eventually be given equal rights with humans. This interview has been condensed and edited.

Where is artificial intelligence today and where is it headed?

There are different levels and different stages that we are going to go through as we reach a true artificially intelligent machine age. We are already in the beginning of that today. For example, Amazon has a new product called Echo. It's a speaker that sits in your house; you can talk to Echo and Echo can schedule things for you. It's an artificially intelligent assistant. What makes this interesting is that it is a "soft" artificially intelligent system, meaning the system has been coded to perform certain tasks but it's not autonomous. It's not self-replicating, and it won't update its own system or make its own choices. We don't have to worry about these systems taking over.

What I'm interested in is the secondary stage that we are probably going to reach in the next five to 10 years. We are going to start seeing artificially intelligent systems that self-replicate and update their own systems. The final stage is a true deep artificially intelligent system that can learn on its own, have autonomous feelings and view the world in an aware and conscious way. How fast that becomes reality has been predicted by many people; the time scale is anywhere from 20 to 50 years from now.

What are the ethical concerns that have to be considered when developing artificially intelligent machines?

When coding ethics into these artificially intelligent machines, there are the ethics of the software and the ethics of the hardware. For software, the ethics of AI is about who owns the information that the artificially intelligent assistant is using. Does Amazon own that information now that you have used that artificially intelligent machine, or do we have some sort of right to privacy within it?

The other end of this is when we get into robotics. Do we want that robot to ask questions? Do we want that robot to be self-aware and have empathy for us or do we want it to just follow our directions?

Do you envision a one-size-fits-all model for the design of these machines?

We are going to have to have different ethics for different artificially intelligent machines. You obviously want a different set of ethics for a military artificially intelligent machine or robot than you have for a care-taking robot.

In the military right now, when you're on the front line making decisions as an officer, you're making those decisions based on the circumstances you find yourself in, in real time. But how do we code ethics into a machine so it can make those kinds of decisions in real time? You can only program so many scenarios into a machine. At a certain point, that machine has to make its own decision based on the terrain and environment. If you are going after a target in the middle of a crowded market, does that machine follow specific directions if the directions are to kill this target, or does the machine say, let me wait until the target gets out of this crowd before I start firing?
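To make the dilemma concrete, here is a minimal sketch of the kind of hard-coded "wait until the crowd clears" rule Scott is describing. Every name, type, and threshold below is hypothetical and invented purely for illustration; no real weapons system works this simply, which is exactly his point about the limits of pre-programmed scenarios.

```python
# Hypothetical sketch of a pre-programmed engagement rule.
# All names and thresholds are invented for illustration only.

from dataclasses import dataclass

@dataclass
class Scene:
    target_confirmed: bool      # has the target been positively identified?
    bystanders_nearby: int      # count of non-targets close to the target

def may_engage(scene: Scene, max_bystanders: int = 0) -> bool:
    """Return True only when the target is confirmed and the
    collateral-risk rule is satisfied; otherwise the machine waits."""
    if not scene.target_confirmed:
        return False
    return scene.bystanders_nearby <= max_bystanders

# In a crowded market, the rule says "wait":
crowded = Scene(target_confirmed=True, bystanders_nearby=40)
assert may_engage(crowded) is False

# Once the target is clear of the crowd, the same rule permits action:
clear = Scene(target_confirmed=True, bystanders_nearby=0)
assert may_engage(clear) is True
```

The limitation is visible in the code itself: the rule covers only the one scenario its author anticipated, so any situation outside it, a second target, a moving crowd, degraded sensors, forces the machine back onto its own judgment, which is where the ethics must somehow be encoded.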

Should we be afraid?

People say these machines will be benevolent. They will be perfect. They will truly be these beautiful, benevolent creatures. I don't think that is accurate, and I don't think they are going to be terminating, baby-eating machines either. These machines are going to reflect our species and our evolutionary process. Everything we are will end up in these artificially intelligent machines no matter what we do.

It's such a complex and such a new technology in a lot of ways. I think we are afraid of ourselves. That is what it is. We are afraid of ourselves and our own unconscious minds. When we are building something that reflects us, it's the one thing we're all afraid to face. We're afraid to face ourselves. Building machines that mirror our consciousness is a very frightening proposition because we have seen how evil people can be.

Elon Musk has warned that artificial intelligence is more dangerous than nuclear weapons. Would you agree?

Elon Musk has said he wants to make sure that these artificially intelligent machines don't take over and kill us. I think that concern is valid, although I don't think that is what is going to happen. First, it's not a good economic model for these artificially intelligent machines to kill us, and second, I don't know anybody who is setting out to code a machine to kill its maker.

You have suggested many jobs will be taken over by machines.

Whoever the next president of the United States is, automation is going to be one of the biggest problems they are going to face. They are going to have to have this conversation. They are going to have to deal with automation. By 2018, automation is going to be in full swing in the United States and around the world. There are estimates that it could replace 50 percent of our jobs. That is an enormous shift. But even if we go through a phase where we have an unemployment valley from automation, there will be new jobs and new things for us to do. If a machine frees you from working in a factory, you can go get an education and learn how to code, or you can do maintenance for some of these robots. I do think there are still going to be jobs, but the type of jobs will have to change, and we need to start creating an educational system right now that focuses on the new automation era that is right in front of us.

The "Star Trek" future, to me, is where we are headed. Everything is automated and we are free to pursue our dreams. We are free to pursue lives that aren't about working and toiling away in dangerous jobs. For example, how many of us would love to be poets or how many of us would love to be artists? If we could automate the necessities of our lives, we don't have to have jobs that we need to provide for our families.

What will be the biggest challenges to expanding the use of artificial intelligence?

The biggest challenge is going to be how the public relates to these machines. Do we really want the public protesting and burning down a hotel staffed only by robots? Do we want people to start protesting because all their jobs have been taken? Are we prepared to create new jobs before the old jobs disappear?

Looking further out, should artificially intelligent robots have the same rights as humans?

You are going to have the first stage of deep artificially intelligent machines that look out into the world and say to themselves: that is the world, this is me, and I have personhood. When machines reach that point, when they can say to our species, I want to be free, I have feelings, I can make my own decisions, and we treat them as second-class citizens, there are going to be problems. Not only will we look at them, to a certain point, as invaders; they will look at us as oppressors. How they react to that oppression depends on what kinds of morals and ethics we encode into those machines. If we code a high sense of pride and ego into these machines and they feel oppressed, there will be a backlash.

What do you see as the ultimate role for intelligent machines?

What we are really talking about here is an artificially intelligent machine that is going to be able to surpass our intelligence level. At a certain point, you can call it the singularity. There will be a point where these machines have superhuman intelligence. The best-case scenario is that they will be our caretakers and our teachers. They will teach us to be a better species. That is what I am hoping for as a techno optimist.
