Mass 3D Facial Animation Parameter Extraction (MirrorMocap)

(C) I-Chen Lin, CAIG Lab, NCTU / CMLab, NTU

In this project, we propose a robust, accurate, and inexpensive approach to automatically estimating mass 3D facial motion trajectories. The system extends our previous tracking procedure "RFAP" [2], in which a single video camera and two plane mirrors capture frontal and side views simultaneously. UV "black light blue" (BLB) lamps are also applied to enhance the distinctness of the markers in the video clips for more reliable feature extraction. To make the tracking automatic, the temporal and spatial coherence of the facial markers' motion is used to detect and rectify false tracking and tracking conflicts. Such a large quantity of facial motion parameters provides more faithful shape variations for facial animation, and it can also be applied to further analysis of facial motion. Currently, the system automatically tracks 188 markers at 12.75 fps and 300 markers at 9.2 fps from video clips on a Pentium 4 3.0 GHz PC, and we plan to extend it to live motion tracking in the near future.
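The key geometric idea behind the mirror setup is that a marker's reflection in a plane mirror is what a "virtual camera" (the real camera reflected across the mirror plane) would see directly, so one physical camera plus a mirror yields a stereo pair. The sketch below illustrates this with a point reflection across a plane and a standard closest-point (midpoint) triangulation of two viewing rays; it is a minimal illustration of the principle, not the calibration or estimation code from the papers, and all values are hypothetical.

```python
import numpy as np

def reflect_across_plane(p, n, d):
    """Reflect point p across the plane n.x + d = 0 (n need not be unit)."""
    n = n / np.linalg.norm(n)
    return p - 2.0 * (np.dot(n, p) + d) * n

def triangulate_midpoint(o1, r1, o2, r2):
    """Midpoint triangulation: closest point between rays o1+t1*r1 and o2+t2*r2."""
    r1 = r1 / np.linalg.norm(r1)
    r2 = r2 / np.linalg.norm(r2)
    # Minimize |(o1 + t1*r1) - (o2 + t2*r2)|^2 over t1, t2.
    A = np.array([[r1 @ r1, -(r1 @ r2)],
                  [r1 @ r2, -(r2 @ r2)]])
    b = np.array([(o2 - o1) @ r1, (o2 - o1) @ r2])
    t1, t2 = np.linalg.solve(A, b)
    return 0.5 * ((o1 + t1 * r1) + (o2 + t2 * r2))

# Hypothetical setup: camera at the origin, mirror plane x = 2.
o_cam = np.zeros(3)
n, d = np.array([1.0, 0.0, 0.0]), -2.0
o_virtual = reflect_across_plane(o_cam, n, d)   # virtual camera at (4, 0, 0)

marker = np.array([1.0, 0.0, 5.0])
# Ray from the real camera toward the marker, and the virtual camera's
# ray (equivalent to the real camera seeing the marker's mirror image).
recovered = triangulate_midpoint(o_cam, marker - o_cam,
                                 o_virtual, marker - o_virtual)
```

With noise-free rays the two rays intersect exactly at the marker; in practice the midpoint formulation absorbs small calibration and detection errors.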

Keywords: facial animation, motion capture, 3D motion tracking, Kalman filter, mirrored image, 3D position estimation, stereo computer vision

The executable file and complete documentation are released and can be downloaded for non-commercial use.

Please refer to the CG&A'02 and TVC'05 papers for details.

Demo video

The source video clips (MPEG1 video, about 3MB)

Fluorescent markers are illuminated by UV "black light blue" lamps. (539 frames and 300 markers)

The synthetic facial animation (MPEG1 video, about 3.3MB)

Retargeting and generating facial animation from tracked motion data


Tracking with only Kalman filtering (MPEG1 video, about 2.1 MB)

Considering only each marker's previous trajectory separately; some trajectories are "derailed" because markers go missing in the video clips.

(The dots are the estimated marker positions at time t; the lines are motion vectors with respect to time 1.)
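The per-marker baseline above tracks each trajectory independently with a Kalman filter. A minimal constant-velocity filter for a single 2D marker might look like the following sketch; the state layout and noise values are illustrative assumptions, not parameters from the papers.

```python
import numpy as np

class ConstantVelocityKF:
    """Per-marker 2D tracker with state [x, y, vx, vy]; noise values are illustrative."""
    def __init__(self, x0, y0, q=1e-2, r=1.0):
        self.x = np.array([x0, y0, 0.0, 0.0])
        self.P = np.eye(4) * 10.0                    # large initial uncertainty
        self.F = np.array([[1, 0, 1, 0],             # constant-velocity dynamics
                           [0, 1, 0, 1],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],             # we observe position only
                           [0, 1, 0, 0]], dtype=float)
        self.Q = np.eye(4) * q                       # process noise
        self.R = np.eye(2) * r                       # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]                            # predicted marker position

    def update(self, z):
        y = np.asarray(z, dtype=float) - self.H @ self.x    # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)            # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
```

When a marker disappears for a few frames, such a filter keeps extrapolating along the last estimated velocity, which is exactly how the "derailed" trajectories in the clip above arise: the prediction drifts with no measurement to correct it.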

Tracking with the proposed method for detection and rectification of false tracking (MPEG1 video about 2.0 MB)

Using the temporal and spatial coherence of neighboring markers (automatically tracking 295 valid markers).
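One way to read the spatial-coherence idea, in a simplified hypothetical form (not the authors' exact algorithm): facial markers move similarly to their neighbors, so a marker whose frame-to-frame motion deviates strongly from its neighbors' consensus motion is likely a false track, and its position can be re-estimated from that consensus. The threshold and neighbor lists below are assumptions for illustration.

```python
import numpy as np

def rectify_by_neighbors(prev, curr, neighbors, thresh=5.0):
    """Flag markers whose motion deviates from their neighbors' median motion,
    and re-estimate them by propagating the neighbors' consensus motion.

    prev, curr : (N, 2) arrays of marker positions at frames t-1 and t
    neighbors  : list where neighbors[i] is the list of marker i's neighbor indices
    """
    motion = curr - prev
    fixed = curr.copy()
    flagged = []
    for i, nbrs in enumerate(neighbors):
        ref = np.median(motion[nbrs], axis=0)     # neighbors' consensus motion
        if np.linalg.norm(motion[i] - ref) > thresh:
            flagged.append(i)                     # likely false tracking
            fixed[i] = prev[i] + ref              # re-estimate from neighbors
    return fixed, flagged
```

For example, if four markers all move by (1, 1) but one is mistracked to a far-away spot, the outlier is flagged and snapped back onto the neighborhood's motion, which is the kind of rectification the clip above demonstrates.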

Introduction by I-Chen Lin's synthetic face (MPEG1 video, about 15.3MB)

(The source motion was performed by Sheng-Yao Cho)



System requirements

Detailed documentation and instructions

Executable software tools and examples


  1. I-Chen Lin, Ming Ouhyoung, "Mirror MoCap: Automatic and Efficient Capture of Dense 3D Facial Motion Parameters from Video," The Visual Computer, vol. 21, no. 6, pp. 355-372, July 2005.

  2. I-Chen Lin, "Reliable Extraction of Realistic 3D Facial Animation Parameters from Mirror-reflected Multi-view Video Clips," Ph.D. dissertation, National Taiwan University, Taiwan, 2003.

  3. I-Chen Lin, Jeng-Sheng Yeh, Ming Ouhyoung, "Extracting 3D facial animation parameters from multiview video clips," IEEE Computer Graphics and Applications, vol. 22, no. 6, pp. 72-80, Nov.-Dec. 2002.

Go back to "I-Chen's project webpage (English)"