NeckFace: Smart Necklace To Track Facial Expressions


A new kind of wearable sensing necklace has been developed that can track facial expressions as accurately as a smartphone camera.

Researchers at Cornell University have developed a necklace-type wearable sensing device named NeckFace that can track facial expressions. Facial movements convey emotions, support nonverbal communication, and accompany many physical activities, and tracking them to aid communication is one of the proposed applications for NeckFace. The device continuously tracks full facial expressions by using infrared cameras to capture images of the chin and face from beneath the neck.

This development builds on earlier work by Cheng Zhang, an assistant professor of information science at the Cornell Ann S. Bowers College of Computing and Information Science, who previously developed a similar device in a headset format. Zhang says that NeckFace offers significant improvements in performance and privacy, and gives the wearer the option of a less obtrusive neck-mounted device.

“The ultimate goal is having the user be able to track their own behaviors, through continuous tracking of facial movements,” said Zhang, principal investigator of the SciFi Lab. “And this hopefully can tell us a lot of information about your physical activity and mental activities.”

To test the device's accuracy, the researchers conducted a user study with 13 participants, who were asked to perform eight facial expressions while sitting and eight more while walking. In the sitting scenarios, the participants were also asked to rotate their heads while performing the facial expressions, and to remove and remount the device within a session. The researchers found that NeckFace detected facial movement with nearly the same accuracy as direct measurement with a phone camera.

The researchers see many applications for the device, such as virtual conferencing when a front-facing camera is not an option, facial expression detection in virtual reality, and silent speech recognition.

The research appeared in the Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies.
