The idea here is that the wearer may want to share an experience (travel, a party, or whatever) with someone who can't be present at the time and place. With the MH-2, that person can stand in front of a 3-D immersive display (or, more likely, a television screen) outfitted with a motion capture device (like a Kinect) and remotely embody the robot on the user's shoulder.
What the robot sees, the remote user sees on his or her screen. Likewise, his or her speech and gestures are translated back to the robot, which uses its remarkably plentiful degrees of freedom (seven for the arms, three for the head, and two for the body, plus one for realistic breathing, yes, breathing) to recreate the remote user's persona, albeit on a slightly smaller scale. The idea, according to the researchers, is something like the vision below.