Researchers creating assistant that whispers directions in your ear
7 December 2015
Researchers at Carnegie Mellon University are working on artificial intelligence software that could one day act like a personal assistant, whispering directions to get to a restaurant, put together a bookshelf or repair a manufacturing machine.
The software is named Gabriel, after the angel that serves as God’s messenger, and is designed to be used in a wearable vision system – something similar to Google Glass or another head-mounted system. Tapping into information held in the cloud, the system is set up to feed or ‘whisper’ information to the user as needed.
At this point, the project is focused on the software and is not connected to a particular hardware device.
“Ten years ago, people thought of this as science fiction,” said Mahadev Satyanarayanan, professor of computer science and the principal investigator for the Gabriel project at Carnegie Mellon. “But now it’s on the verge of reality.”
The project, which has been funded by a $2.8 million grant from the US National Science Foundation, has been in the works for the past five years.
“This will enable us to approach, with much higher confidence, tasks such as putting a kit together,” said Satyanarayanan. “For example, assembling a furniture kit from IKEA can be complex and you may make mistakes. Our research makes it possible to create an app that is specific to this task and which guides you step-by-step and detects mistakes immediately.”
He called Gabriel a “huge leap in technology” that uses mobile computing, wireless networking, computer vision, human-computer interaction and artificial intelligence.
Satyanarayanan said he and his team are not yet in talks with device makers about putting the software into use, but he hopes commercialisation is just a few years away.
“The experience is much like a driver using a GPS navigation system,” Satyanarayanan said. “It gives you instructions when you need them, corrects you when you make a mistake and, most of the time, shuts up so it doesn’t bug you.”
One of the key technologies being used with the Gabriel project is called a cloudlet. Developed by Satyanarayanan, a cloudlet is a cloud-supported data center that serves multiple local mobile users.
Cloudlets can be set up close to users, located, for instance, on a cell tower or in an office building or manufacturing plant. Their proximity to users makes them just one wireless hop away.
By “bringing the cloud closer,” cloudlets reduce the roundtrip time of communications from the 70 milliseconds typical of cloud computing to just a few tens of milliseconds, or less, according to Carnegie Mellon.
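That latency arithmetic can be sketched in a few lines of Python. The millisecond figures below are illustrative assumptions built around the numbers Carnegie Mellon cites, not measurements, and the frame-processing time is a hypothetical placeholder:

```python
# Rough round-trip budget for one offloaded frame in a wearable assistant:
# the device sends a camera frame over the network, a server processes it,
# and the guidance must come back fast enough to feel instantaneous.

def perceived_delay_ms(network_rtt_ms: float, processing_ms: float) -> float:
    """Total delay the user perceives: network round trip plus compute time."""
    return network_rtt_ms + processing_ms

PROCESSING_MS = 30.0  # assumed server-side compute time per frame (hypothetical)

# Distant cloud: ~70 ms network round trip, the figure Carnegie Mellon cites.
cloud_total = perceived_delay_ms(70.0, PROCESSING_MS)

# Nearby cloudlet, one wireless hop away: ~10 ms network round trip (assumed).
cloudlet_total = perceived_delay_ms(10.0, PROCESSING_MS)

print(f"cloud:    {cloud_total:.0f} ms per frame")
print(f"cloudlet: {cloudlet_total:.0f} ms per frame")
```

Under these assumptions the cloudlet path cuts the user-perceived delay from 100 ms to 40 ms per frame, which is why proximity, not raw compute power, is the point of the design.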
Right now, researchers are working to improve the computer vision and location sensing needed for the project.
The first applications are expected to focus on specialised tasks, such as repairing an industrial machine, but ultimately the cognitive assistant could be applied to tasks such as cooking, giving navigation directions and performing CPR.
Sharon Gaudin, IDG News Service