A new system being developed by computer scientists at Cornell University can both "learn" new information from the Internet and serve as a resource for increasingly intelligent robots.
The computational "Robo Brain" system absorbs data from public Internet sites and computer simulations so that robots can apply that knowledge in their future interactions. The Robo Brain is now "studying" about 1 billion photographs, 120,000 YouTube videos and 100 million how-to documents and appliance manuals. All this information is then translated and stored in a format that robots can later access.
According to the project's website, the system has potential uses in robotics research, household robots and self-driving cars.
To become effective helpers for people in homes, offices and factories, robots need to understand how our world works and how people behave. Researchers have been trying to teach robots how to perform basic actions such as finding a person's keys or pouring a drink, and they say the new system could help.
For instance, if a robot sees a coffee mug, Robo Brain will quickly recognize from its base of knowledge that liquids can be poured into or out of it, and that the robot can grasp it by the handle. It will also understand that while a full mug must be carried upright, it is fine to turn an empty one sideways when carrying it from the dishwasher to the cupboard.
And just like a human learner, Robo Brain will have human teachers. The learning process will be facilitated by crowdsourcing. The Robo Brain website will display what the robot's "brain" has learned, and visitors to the site will be able to contribute to the existing data and correct it if needed.
"Our laptops and cell phones have access to all the information we want," Ashutosh Saxena, an assistant professor of computer science at Cornell University and lead author on the project, said in a statement. "If a robot encounters a situation it hasn't seen before, it can query Robo Brain in the cloud."
The researchers say that Robo Brain will be able to process images to select and recognize the objects in them. It will also be able to connect images and video with text, learning to recognize objects and understand how they are used, along with human language and behavior.
The researchers presented the project at the 2014 Robotics: Science and Systems Conference in Berkeley in July. Here is a video of the presentation: