Paper
A comparative analysis of different facial action tracking models and techniques
Published Mar 11, 2016 · Prem Chand Yadav, Hari Singh Dhillon, Ankit Patel
2016 International Conference on Emerging Trends in Electrical Electronics & Sustainable Energy Systems (ICETEESES)
2 Citations · 0 Influential Citations
Abstract
The tracking of facial activities from video is an important and challenging problem. Nowadays, many computer vision techniques have been proposed to characterize facial activities at three levels, from local to global. The first (bottom) level is facial feature tracking, which focuses on detecting and tracking the prominent local landmarks surrounding facial components (e.g. mouth, eyebrows). At the second level, facial action units (AUs) characterize the specific behaviors of these local facial components (e.g. mouth open, eyebrow raiser). The third level is the facial expression level, which represents the subject's emotion (e.g. surprise, happiness, anger) and controls the global muscular movement of the whole face. Most existing methods focus on one or two of these levels and track (or recognize) them separately. In this paper, various facial action tracking models and techniques are compared under different conditions, including Active Facial Tracking for Fatigue Detection, Real-Time 3D Face Pose Tracking from an Uncalibrated Camera, Simultaneous Facial Action Tracking and Expression Recognition Using a Particle Filter, and Simultaneous Tracking and Facial Expression Recognition Using Multiperson and Multiclass Autoregressive Models.