Robot knows when to pour you a beer

A robot pouring beer in a demonstration by Cornell University's Personal Robotics Lab.
YouTube/Hema Koppula

Thirsty for a cold brew? Now there's a robot that is learning how to keep your beer glass full.

Researchers at Cornell University's Personal Robotics Lab have programmed a robot to anticipate human actions and assist with tasks like opening a refrigerator door or pouring a drink.

The robot works with a Microsoft Kinect, a 3D-scanning device, and a database of 3D videos. It uses the Kinect to scan a room and identify what action is taking place, then predicts different scenarios by considering how the various objects in the room can be used.

Once the data is gathered, the robot determines its moves based on what it anticipates a person will do next. Researchers say it does this by generating a "set of possible continuations into the future" and "chooses the most probable."
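The "generate continuations, choose the most probable" step can be sketched in a few lines. This is only an illustration of the idea as the researchers describe it; the candidate actions and probabilities below are invented, and the lab's actual model is far more sophisticated than a simple argmax over a hand-written list.

```python
# Minimal sketch of "generate a set of possible continuations into the
# future and choose the most probable." The futures and probabilities
# here are hypothetical, made up purely for illustration.

def most_probable_continuation(candidates):
    """Return the candidate future with the highest estimated probability."""
    return max(candidates, key=lambda c: c["probability"])

# Hypothetical continuations after observing a person reach toward a
# table holding a glass and a pot.
futures = [
    {"action": "drink from glass", "probability": 0.62},
    {"action": "move pot to stove", "probability": 0.27},
    {"action": "open refrigerator", "probability": 0.11},
]

best = most_probable_continuation(futures)
print(best["action"])  # → drink from glass
```

In the real system, each candidate's probability would come from the robot's learned model of how people use objects, updated continuously as new Kinect observations arrive, rather than from a fixed list.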

In one demonstration, the robot is seen observing a human reach for a pot and move toward a refrigerator. In that scenario, the robot senses the person's direction and moves to open the refrigerator door. In another, the robot sees that a subject has poured cereal and milk, and responds by putting the milk back in the refrigerator.

"We extract the general principles of how people behave," Ashutosh Saxena, Cornell professor of computer science, said in a press release.

An algorithm helps the robot anticipate what move a human will make next. The researchers show an example of a robot pouring a beer without anticipating a man's movement. In that case, the robot does not see that the person goes in for a sip of his drink. Once it is programmed to anticipate future movement, it sees that the man reaches for a drink and waits to pour the beer.

Researchers say the robot has 82 percent accuracy in making predictions one second into the future, 71 percent for three seconds and 57 percent for 10 seconds.

Saxena says the goal is for the robot to learn how to predict human actions on its own, rather than following a set of actions determined by code.

"The future would be to figure out how the robot plans its action," Saxena said. "Right now we are almost hard-coding the responses, but there should be a way for the robot to learn how to respond."

The research is supported by the U.S. Army Research Office, the Alfred P. Sloan Foundation and Microsoft. Findings will be presented at the International Conference on Machine Learning in Atlanta from June 18 to 21.

Watch a demonstration of Cornell's beer-pouring robot below.