Computer created to detect body language
Researchers have developed a computer system that can understand the body poses and movements of multiple people at once, tracking details of each body down to individual fingers.
Researchers at Carnegie Mellon University’s Robotics Institute built the Panoptic Studio, a two-storey dome embedded with 500 video cameras, to develop the system. The methods that grew out of it now make it possible to detect everything from hand gestures to mouth movements of multiple people using a single camera and a laptop computer.
Yaser Sheikh, associate professor of robotics, said that these methods for tracking 2-D human form and motion open new ways for machines and people to interact with one another, and for people to use machines to understand the world around them.
Sheikh said, “We communicate almost as much with the movement of our bodies as we do with our voice. But computers are more or less blind to it.”
To overcome the challenges of tracking multiple people at once, the researchers used a bottom-up approach: they first localize every body part in a scene, and only then associate the parts with particular individuals, according to Daily Mail.
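The article does not publish any code, but the bottom-up idea can be illustrated with a toy sketch: detect all part candidates first, then link each one to a person. In this simplified version the people are represented by hypothetical anchor keypoints (e.g. necks) and the pairing score is plain Euclidean distance; the real system described here scores candidate links with learned cues rather than raw distance.

```python
import math

def group_parts_bottom_up(anchors, parts):
    """Toy bottom-up grouping. 'anchors' are per-person reference
    keypoints (e.g. necks), 'parts' are limb keypoints detected
    anywhere in the scene. Parts are found first, then associated
    with individuals -- the order that defines a bottom-up method."""
    people = {i: [a] for i, a in enumerate(anchors)}
    for p in parts:
        # Pairing score is simple Euclidean distance here; a real
        # pose system would use learned part-association scores.
        nearest = min(people, key=lambda i: math.dist(anchors[i], p))
        people[nearest].append(p)
    return people

# Two people with necks far apart, and two detected wrists.
anchors = [(0.0, 0.0), (10.0, 0.0)]
parts = [(1.0, 1.0), (9.0, -1.0)]
print(group_parts_bottom_up(anchors, parts))
```

The contrast with a top-down method is that no per-person detector runs first: every part in the frame is localized in one pass, which is what lets the approach scale to many people in view.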
Hanbyul Joo, a student in robotics, said, “A single shot gives you 500 views of a person’s hand, plus it automatically annotates the hand position. Hands are too small to be annotated by most of our cameras, however, so for this study we used just 31 high-definition cameras, but still were able to build a massive data set.”
The method could ultimately be used in applications such as sports analytics, behavioral diagnosis and improving self-driving cars’ ability to detect pedestrian movements. The team is also working to move from 2-D to 3-D models, using the Panoptic Studio to improve its body, hand and face detectors.
According to Science Daily, reading nonverbal communication between individuals will allow computers to operate in social settings, identifying what the people around them are doing, what mood they are in and whether they can be interrupted.
The researchers are expected to present their work at CVPR 2017, the Computer Vision and Pattern Recognition Conference, on July 21-26 in Honolulu.