Editor's Note: The following story is one of the features in AUVSI's new magazine, Mission Critical: Sensors.
Talking to Robots
Researchers Look for Novel, New Ways to Communicate With Unmanned Systems
By Brett Davis
Photo: The LS3 goes through its paces at Virginia’s Fort Pickett. Photo courtesy DARPA.
Researchers and end users are constantly seeking new ways to communicate with robots and unmanned systems.
One goal is to make such interactions as easy and intuitive as interacting with other humans, but that poses tough challenges for engineers and programmers. Research continues, however, into new ways to talk to robots.
Five in Five
For the past seven years, IBM has been releasing a list of five technologies its researchers think have the potential to change the way people live and work. While not specific to robotics, most of the 2013 technologies singled out could lead to a revolution in the way people interact with unmanned systems of all kinds.
The first is touch: In the next five years, you’ll be able to touch through a phone.
“You’ll be able to share the texture of a basket woven by a woman in a remote village halfway across the globe,” says IBM Retail Industry Expert Robyn Schwartz in a company video. “The device becomes just as intuitive as we understand touch in any other form today.”
The second is sight. In five years, IBM posits, computers won’t just be able to look at images; they will be able to understand them. A computer could, for example, scan photos of skin melanomas taken from patients over time, possibly diagnosing cancer before physical problems result. This could be a boon for the emerging market of medical robotics.
Dmitri Kanevsky, an IBM master inventor who lost his hearing at age three, says in another video that in five years computers will be able to hear “what matters,” such as monitoring mountainsides in Brazil for audible signs that a mudslide is imminent.
“It can hear that a flood is coming,” Kanevsky says. “This is an example of how hearing sensors can help to prevent catastrophes.”
Another sense coming to computers is smell, according to the IBM researchers. This could lead to sensors in the home that literally can smell disease and then communicate that to a doctor.
“Smelling diseases remotely, and then communicating with a doctor, will be one of the techniques which will promise to reduce costs in the healthcare sector,” says Hendrik Hamann, a research manager of physical analytics, who adds that “your phone might know that you have a cold before you do.”
IBM further predicts that computers will be able to detect how food tastes, helping create healthier diets and even developing unusual pairings of food to help humans eat smarter.
“These five predictions show how cognitive technologies can improve our lives, and they’re windows into a much bigger landscape — the coming era of cognitive systems,” says Bernard Meyerson, IBM’s chief innovation officer.
As an example, he cites a track-inspecting robot doing its work inside a train tunnel. A current robot could evaluate the track but wouldn’t understand that a train was barreling down that same track.
“But what if you enabled it to sense things more like humans do — not just vision from the video camera but the ability to detect the rumble of the train and the whoosh of air?” he asks on the IBM website. “And what if you enabled it to draw inferences from the evidence that it observes, hears and feels? That would be one smart computer — a machine that would be able to get out of the way before the train smashed into it.”
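To make that scenario concrete, here is a minimal, hypothetical Python sketch of how such a robot might fuse weak cues from vision, vibration and air-pressure sensors into a single “get off the track” decision. The sensor names, weights and threshold are illustrative assumptions, not details from IBM.

```python
# Hypothetical sketch: fusing simple sensor cues so a track-inspection
# robot can infer that a train is approaching. The cues, weights and
# threshold are illustrative assumptions, not IBM's design.

from dataclasses import dataclass

@dataclass
class SensorReading:
    camera_sees_headlight: bool   # vision cue from the onboard camera
    rumble_level: float           # ground vibration, arbitrary 0-1 scale
    air_pressure_spike: float     # the "whoosh" of pushed air, 0-1 scale

def train_approaching(reading: SensorReading) -> bool:
    """Combine independent cues into a single decision.

    Each cue alone is weak evidence; together they support the
    inference that the robot should leave the track.
    """
    score = 0.0
    if reading.camera_sees_headlight:
        score += 0.5
    score += 0.3 * reading.rumble_level
    score += 0.2 * reading.air_pressure_spike
    return score >= 0.6  # illustrative decision threshold

if __name__ == "__main__":
    now = SensorReading(camera_sees_headlight=True,
                        rumble_level=0.8,
                        air_pressure_spike=0.4)
    if train_approaching(now):
        print("Clear the track: combined evidence suggests a train is coming.")
```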
In the era of cognitive systems, he says, “humans and machines will collaborate to produce better results — each bringing their own superior skills to the partnership. The machines will be more rational and analytic. We’ll provide the judgment, empathy, moral compass and creativity.”
Image: An IBM chart showing how computers could understand photographs in the next five years.
Verbal Commands
DARPA has been working for years with the Legged Squad Support System, or LS3, the follow-on to the legendary Big Dog robotic mule. In a new video, the defense research agency demonstrated how a ground robot could obey verbal commands, giving it roughly the same capability to follow a soldier as an animal and handler would do.
In December, the LS3 was put through its paces, literally, at Virginia’s Fort Pickett, where it followed a human soldier and obeyed voice commands.
“This was the first time DARPA and MCWL [the Marine Corps Warfighting Lab] were able to get LS3 out on the testing grounds together to simulate military-relevant training conditions,” Lt. Col. Joseph Hitt, DARPA program manager, says in a DARPA press release. “The robot’s performance in the field expanded on our expectations, demonstrating, for example, how voice commands and follow-the-leader capability would enhance the robot’s ability to interact with warfighters. We were able to put the robot through difficult natural terrain and test its ability to right itself with minimal interaction from humans.”
In a DARPA video, the LS3 turns itself on after a voice command, and then begins following the human leader.
“The LS3 program seeks to demonstrate that a highly mobile, semi-autonomous legged robot can carry 400 pounds of a squad’s equipment, follow squad members through rugged terrain and interact with troops in a natural way similar to a trained animal with its handler,” DARPA says.
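As a rough illustration of that interaction style, the following short Python sketch maps transcribed voice commands to robot behaviors such as powering on and following a leader. The command names and handler functions are invented for illustration; they are not DARPA’s or Boston Dynamics’ actual interface.

```python
# Minimal, hypothetical sketch of mapping recognized voice commands to
# robot behaviors, in the spirit of the LS3 demonstration described above.
# The command set and handlers are assumptions made for illustration.

def power_on():
    print("Robot: powering up systems.")

def follow_leader():
    print("Robot: locking onto the designated leader and following.")

def stop():
    print("Robot: holding position.")

# Map each spoken phrase (already transcribed by a speech recognizer)
# to the behavior it should trigger.
COMMANDS = {
    "power on": power_on,
    "follow": follow_leader,
    "stop": stop,
}

def handle_utterance(transcript: str) -> None:
    """Dispatch a transcribed utterance to the matching behavior."""
    action = COMMANDS.get(transcript.strip().lower())
    if action:
        action()
    else:
        print(f"Robot: unrecognized command '{transcript}', ignoring.")

if __name__ == "__main__":
    for phrase in ["Power on", "Follow", "sit"]:
        handle_utterance(phrase)
```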
LS3 is being developed by Boston Dynamics, leading a team that includes Bell Helicopter, AAI Corp., Carnegie Mellon, the Jet Propulsion Laboratory and Woodward HRT.
The December testing was the first in a series of demonstrations planned to continue through the first half of 2014, according to DARPA.
Social Interactions
Interacting with robots in a social manner could become more important in the future, as service robots take on a greater role in everyday life.
Researchers at Carnegie Mellon University have been working on what seems like a simple problem: how to let a robot tell where people are looking.
“It’s a common question in social settings, because the answer identifies something of interest or helps delineate social groupings,” the university’s Robotics Institute says.
The institute developed a method for detecting where people’s gazes intersect, by using head-mounted cameras.
“By noting where their gazes converged in three-dimensional space, the researchers could determine if they were listening to a single speaker, interacting as a group or even following the bouncing ball in a ping-pong game,” the institute says.
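The geometry behind that step can be sketched in a few lines of Python: model each person’s gaze as a 3-D ray from a head-mounted camera and estimate the convergence point as the least-squares point closest to all of the rays. This is a standard geometric formulation offered for illustration, not necessarily the exact algorithm the Carnegie Mellon researchers used.

```python
# Illustrative sketch of estimating where several people's gazes converge.
# Each gaze is modeled as a 3-D ray (head position, view direction); the
# convergence point is the point minimizing squared distance to all rays.
# This is a generic geometric method, not necessarily CMU's algorithm.

import numpy as np

def gaze_convergence_point(origins: np.ndarray, directions: np.ndarray) -> np.ndarray:
    """Return the 3-D point closest (in least squares) to all gaze rays.

    origins:    (N, 3) array of head/camera positions.
    directions: (N, 3) array of gaze directions (need not be unit length).
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)           # normalize the gaze direction
        P = np.eye(3) - np.outer(d, d)      # projector onto the plane normal to the ray
        A += P
        b += P @ o
    return np.linalg.solve(A, b)

if __name__ == "__main__":
    # Three viewers roughly looking toward the point (0, 0, 2).
    heads = np.array([[-1.0, 0.0, 0.0],
                      [ 1.0, 0.0, 0.0],
                      [ 0.0, 1.5, 0.0]])
    gazes = np.array([[ 1.0, 0.0, 2.0],
                      [-1.0, 0.0, 2.0],
                      [ 0.0, -1.5, 2.0]])
    print("Estimated convergence point:", gaze_convergence_point(heads, gazes))
```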
The algorithm used for determining “social saliency” could also be applied to other kinds of social cues, including people’s facial expressions or body movements.
“This really is just a first step toward analyzing the social signals of people,” says Hyun Soo Park, a Ph.D. student in mechanical engineering, who worked on the project with Yaser Sheikh, assistant research professor of robotics, and Eakta Jain of Texas Instruments, who was awarded a Ph.D. in robotics last spring. “In the future, robots will need to interact organically with people and to do so they must understand their social environment, not just their physical environment,” Park said in a university press release.
Head-mounted cameras, as worn by soldiers, police officers and search-and-rescue officials, are becoming more common. Even if they don’t become ubiquitous, they could still be worn in the future by people who work in cooperative teams with robots.
Tapping the Phones
Ground robots have sometimes been plagued by issues of bandwidth and range. These issues are especially acute in urban areas, particularly in modern, multistory buildings, where communications can drop off fast.
A research team from the U.S. Army, University of Washington and Duke University has demonstrated one way to help expand the communications bandwidth of ground robots inside buildings, by using the existing electrical systems to create a “super antenna” to achieve wireless, non-line-of-sight communications.
The concept is based on the idea of power line networking, or using the bandwidth in electrical connections to send information as well. Such applications are already in use for streaming high-definition television and music and even providing high-speed Internet service using existing wall plugs.
“The power line’s ability to receive wireless signals is a well-known phenomenon, but only recently has it been exploited for in-building communication,” says a paper presented by the Army’s David Knichel at AUVSI’s Unmanned Systems North America 2012.
The downside for current power line systems is that users on both ends of such a connection have to be plugged into a wall, not a viable concept for a moving, stair-climbing robot. A team led by Shwetak Patel of the University of Washington, which included researchers from the U.S. Army and Duke University, has developed a concept that takes the power line idea and makes it mobile.
According to the paper presented at AUVSI’s Unmanned Systems North America 2012, the concept is called Sensor Nodes Utilizing Power line Infrastructure, or SNUPI. SNUPI uses tiny, lightweight sensor nodes whose antennas couple wirelessly to a building’s power line infrastructure, dramatically boosting their transmission range.
A soldier could be on the bottom floor of a building, or even outside it, and use a single base station connected to the system to control and communicate with a robot exploring the upper floors.
SNUPI features a low-power microcontroller that can provide coverage for an entire building while consuming less than one milliwatt of power. The initial prototype of the system is just 3.8-by-3.8-by-1.4 centimeters and weighs only 17 grams, including the battery and antenna.
Brett Davis is editor of Mission Critical.

