News

Scientists have developed code that can help machines understand your body language

To encourage more research and applications, the researchers have released their code.

Researchers have developed computer code that could help robots understand human body language.

Scientists at Carnegie Mellon University’s Robotics Institute in the US say they have enabled a computer to interpret human poses and movements, allowing it to perceive not only what the people around it are doing but also what kind of mood they are in.

“We communicate almost as much with the movement of our bodies as we do with our voice,” said Yaser Sheikh, associate professor of robotics at the university and study author.

“But computers are more or less blind to it.”


Sheikh adds that their work could open up new ways for people and machines to interact with each other.

The code was developed with the help of the university’s Panoptic Studio – a two-storey dome with 500 video cameras.

The system “sees” human movement, using a 2D model of the human form.

According to the researchers, it localises all the body parts in a scene (such as legs, arms or face) and then associates those parts with particular individuals.
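For readers curious how that two-stage, “bottom-up” idea works, here is a minimal Python sketch. It is not the researchers’ released code: a real system scores candidate limb connections with learned affinity measures, whereas this toy uses plain distance so the grouping step can be run as-is, and all the positions are invented for illustration.

    # Toy sketch of the two-stage idea described above:
    # 1) localise candidate body parts anywhere in the scene,
    # 2) associate those parts with particular individuals.
    # NOT the researchers' code: inverse distance stands in for
    # a learned affinity score, and the detections are made up.
    import math
    from itertools import product

    necks  = [(100, 80), (300, 85)]                 # one neck per person
    wrists = [(120, 200), (280, 190), (500, 400)]   # third is a stray detection

    def affinity(neck, wrist):
        """Higher is better; inverse distance stands in for a learned score."""
        return 1.0 / (1.0 + math.dist(neck, wrist))

    # Score every possible neck-wrist pairing, then greedily keep the
    # best pairs, never reusing a part: each part joins at most one person.
    candidates = sorted(
        ((affinity(n, w), n, w) for n, w in product(necks, wrists)),
        reverse=True,
    )

    used_necks, used_wrists, people = set(), set(), []
    for score, neck, wrist in candidates:
        if neck in used_necks or wrist in used_wrists:
            continue
        used_necks.add(neck)
        used_wrists.add(wrist)
        people.append({"neck": neck, "wrist": wrist})

    for i, person in enumerate(people):
        print(f"person {i}: {person}")

The point of the bottom-up order is that parts are detected first and people emerge from the grouping, which is what lets the stray wrist above be left unassigned rather than forced onto someone.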

Body language tracking. (Carnegie Mellon University)

The motion is tracked in real time, capturing everything from hand gestures to the movement of the mouth. It can even track multiple people at once.
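Tracking several people over time also means each detection has to keep its identity from one frame to the next. The sketch below shows one naive way to do that, nearest-neighbour matching between frames; it is an illustrative assumption rather than the researchers’ method, and the function name and distance threshold are invented for the example.

    # Minimal sketch of keeping per-person identity across frames,
    # assuming a detector yields one (x, y) position per person per
    # frame. Nearest-neighbour matching is enough to illustrate
    # following several people at once; real trackers are more robust.
    import math

    def track(prev_people, detections, max_jump=50.0):
        """Assign each new detection to the closest previous identity,
        or mint a new identity if nobody is near enough."""
        tracked, next_id = {}, max(prev_people, default=-1) + 1
        unclaimed = dict(prev_people)
        for det in detections:
            best = min(unclaimed, default=None,
                       key=lambda pid: math.dist(unclaimed[pid], det))
            if best is not None and math.dist(unclaimed[best], det) <= max_jump:
                tracked[best] = det       # same person, new position
                del unclaimed[best]
            else:
                tracked[next_id] = det    # a new person has appeared
                next_id += 1
        return tracked

    frame1 = track({}, [(100, 80), (300, 85)])      # two people appear
    frame2 = track(frame1, [(104, 82), (297, 88)])  # they move slightly
    print(frame1, frame2, sep="\n")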

“A single shot gives you 500 views of a person’s hand, plus it automatically annotates the hand position,” PhD student Hanbyul Joo said.

“Hands are too small to be annotated by most of our cameras, however, so for this study we used just 31 high-definition cameras, but still were able to build a massive data set.”

To encourage more research and applications, the researchers have released their code.

They add that this approach could be used in numerous applications, such as helping improve self-driving cars’ ability to predict pedestrians’ and cyclists’ movements.

It could also be used in sports analytics, tracking players’ movements, how they use their arms and legs, and where they are looking.

The researchers will present their methods at the Computer Vision and Pattern Recognition (CVPR) conference, to be held in Honolulu, Hawaii, from July 21-26.