Artificial intelligence: The advent of virtual humans

Justine Cassell has taken her virtual assistant Sara on a road trip.

They're in Tianjin, China, where Carnegie Mellon University's associate dean of technology strategy and impact traveled to offer a glimpse of tomorrow at this week's Annual Meeting of New Champions.

Sara, short for "socially aware robot assistant," has spent the past several days greeting hundreds of people coming to the event, hosted by the World Economic Forum, at a station showcasing the office of the future.

A life-size face and torso on a big-screen TV, Sara served as the front end to the event app. That presentation might make you think of Max Headroom, the stuttering AI character from the 1980s show. But Sara is as professional, service-oriented and fully charged with artificial intelligence as Max was wacky and fictional.

People can sit down and chat with Sara, who asks what they want to get out of the conference before suggesting people to meet and sessions to attend. It's all conversational. No keyboards required. If a guest seems nervous around Sara, the autonomous virtual personal assistant kick-starts the conversation by introducing herself.

"It's a great chance to show people what the future might hold," says Cassell, who's been studying human-machine interaction for much of her career.

Sara, the virtual assistant from Carnegie Mellon. (Carnegie Mellon University)

That future will likely include Sara and other forms of AI that think and behave like humans. And they'll become an everyday part of our lives sooner than you might imagine.

AI is one of the hottest tickets in tech right now, fueled by powerful chips, fast networks and the massive trail of data we all leave behind us as we go about our digital days.

"It's the most exciting thing going on," Microsoft co-founder Bill Gates said at the Recode tech conference earlier this month. "It's the big dream that anybody who's ever been in computer science has been thinking about."

AI's burst of activity may seem sudden -- if all you know about AI is that Google's AlphaGo program beat a human champ in the ancient strategy game of Go a few months back or that Microsoft's Tay chatbot went haywire around the same time and spewed racist tweets.

Yet people have been studying artificial intelligence for 60 years.

In the summer of 1956, a handful of mathematicians and computer scientists gathered at Dartmouth College in New Hampshire for the first-ever research project on AI. Their guiding premise: "every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it."

That's been the underlying theme of AI research ever since.

But recent advances -- in chips, in networks and in software -- have sparked a frenzy of activity focused on AI. Google has dozens of projects as it seeks to tap into the big data gathered by its popular search engine. Microsoft, IBM and Amazon are tinkering like mad. Facebook CEO Mark Zuckerberg even dreams of an AI-powered butler, like Jarvis in the "Iron Man" movies. (Meanwhile, some like renowned physicist Stephen Hawking warn that artificial intelligence could someday turn against humanity.)

Funding of AI-focused startups nearly tripled to $394 million in 2014 from the year before, according to research firm CB Insights. And while the dollar value dropped some last year, the number of funding deals hit new highs of 24 in the fourth quarter of 2015 and then 27 in the first quarter of 2016.

Social intelligence

Enter the virtual humans. Not the Hollywood kind, but software agents that mimic and engage us. Apple has Siri, Microsoft features Cortana, Amazon offers Alexa and Google is rolling out its Assistant. Those are separate from the specialized AI programs that provide leadership training, help adults in therapy and assist children with autism.

Justine Cassell of Carnegie Mellon University's school of computer science has been studying human-computer interaction for much of her career. (Carnegie Mellon)

But those activities aren't really moving the needle, say Cassell and other researchers. If we want machines to understand how people interact, they need to develop a rapport with us.

"In general, AI is moving into more artificial social intelligence," says Jonathan Gratch, director of virtual human research at the University of Southern California's Institute for Creative Technologies. He defines that as the ability "to understand people, how they think, how to communicate with them, what their emotional state is."

Consider Siri and Alexa. Even though we've been interacting with Siri for years, its behavior hasn't changed much, at least not in the way we'd expect a person to change. And even with its built-in quips, Siri is not much of a conversationalist. Alexa is still getting the hang of our speech patterns and personal preferences.

"We're still a long way from being able to do things the way humans do things," says an Amazon spokesman, "but we're solving complex problems every day."

Apple didn't respond to a request for comment for this story.

Smarter, more autonomous systems will be able to interpret your mood from where you're looking, how you've tilted your head or whether you're frowning, and then respond to your needs.

USC's SimSensei program has been developing AI to do just that. While chatting with people, SimSensei records, quantifies and analyzes our behavior and gets to know us better. One application displays an onscreen virtual therapist named Ellie who gets people to tell her about their problems. She adjusts her speech and gestures to show she's paying attention and understands what's bothering you.

The program has been adapted to coach people in public speaking and handling themselves in job interviews. The US Army has used it for leadership training.

Room for robots?

Some of today's virtual humans are just faces on a screen or a voice over a speaker. For Kerstin Dautenhahn, a professor of AI at the University of Hertfordshire in England, they're humanoid robots. One is named Kaspar, a vaguely boyish robot with simplified humanlike features.

Kaspar works with children on the autism spectrum to help them cope with person-to-person interactions that they find overwhelming. With touch sensors and the ability to detect gestures and eye gaze, Kaspar acts as a mediator, teaching social skills like making eye contact, taking turns and knowing when it's appropriate to touch others. In this environment, predictability outweighs autonomous AI behavior.

Some parents report that their children with autism are now interacting with others or looking people in the eye for the first time.

"Robots can really elicit the behaviors that you wouldn't necessarily expect some of the children to show, and that could be a really, really good starting point," says Dautenhahn.

Learning from mistakes

The point of AI is for the machines to keep learning. Some lessons can be painful.

That was the case with Microsoft's Tay, a research project set loose on Twitter to learn from interactions with real people. Within hours of the chatbot's debut, pranksters had taught Tay to spew some pretty vile tweets, reflecting the worst of humanity. The project is offline while Microsoft makes adjustments.

Microsoft declined to comment, pointing instead to a March blog post about lessons learned from Tay's missteps.

"AI systems feed off of both positive and negative interactions with people," Peter Lee, vice president for Microsoft Research, wrote at the time. "In that sense, the challenges are just as much social as they are technical."

Here's the thing: Getting from clever algorithms to a system as complex as your average human will involve a lot of trial and error. There will be failures when the machine doesn't really understand us, and for the foreseeable future there will be limits to what a given AI can do.

Today's technology still lacks what Cassell calls a "calculus of human behavior. It doesn't have what we have, which is a model of how relationships are built."

That's not to say there's no room for obnoxious behavior. Cassell thinks AI could learn a little something from rude teens. She and her colleagues have spent years watching teens tutor each other in algebra, and they found that the teens learn more when they insult each other. Calling out your friends, it seems, helps keep them on task.

Getting there will take both a deep understanding of human nature and a deft hand at fine-tuning truly sophisticated machine learning.

"Both kinds of work are very hard," Cassell says. "Not everyone is cut out to do this kind of work in AI because you have to have a lot of patience."

And who knows? The wise-cracking Max Headroom might be closer to reality than we realized.

This article originally appeared on CNET.
