Abstract | We present a skin tracking and reconstruction method that uses a monocular camera and a depth sensor to recover skin sliding motions on the surface of a deforming object. Such RGB-D sensors are widely available. Our key idea is to use a reduced coordinate framework that implicitly constrains skin to conform to the shape of the underlying object as it slides. The skin configuration in 3D can then be efficiently reconstructed by tracking two-dimensional skin features in video. This representation is well suited for tracking subtle skin movements in the upper face and on the hand. The reconstructed skin motions have many uses, including synthesizing and retargeting animations, recognizing facial expressions, and learning data-driven models of skin movement. In our face tracking examples, we recover subtle but important details of skin movement around the eyes. We validated the algorithm on a hand gesture sequence with known skin motion, recovering skin sliding motion with low reconstruction error. |
Paper | PDF (13.3M) |
Video | YouTube Video |
Funding | This work was supported in part by grants from the Institute for Computing, Information and Cognitive Systems (ICICS) at UBC, NSERC, Canada Foundation for Innovation, MITACS, and the Canada Research Chairs Program. We thank Vital Mechanics Research for providing the hand model and software. |
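The reduced-coordinate idea from the abstract can be illustrated with a minimal sketch (not the authors' implementation; the surface, function names, and the toy feature track are all illustrative): a skin point is stored only as 2D surface coordinates (u, v) and lifted to 3D through a parametric model of the underlying object, so sliding skin stays on the surface by construction.

```python
import numpy as np

def sphere_surface(u, v, radius=1.0):
    """Toy parametric surface standing in for the tracked object
    (the paper tracks real deforming geometry, not a sphere)."""
    return radius * np.array([
        np.sin(v) * np.cos(u),
        np.sin(v) * np.sin(u),
        np.cos(v),
    ])

def reconstruct_track(uv_track, surface=sphere_surface):
    """Lift a 2D (u, v) feature track to 3D points on the surface."""
    return np.array([surface(u, v) for u, v in uv_track])

# A short simulated 2D track, as if produced by video feature tracking.
uv_track = [(0.0, 0.5), (0.1, 0.55), (0.2, 0.6)]
points_3d = reconstruct_track(uv_track)

# Because points are parameterized on the surface, every reconstructed
# position lies exactly on it -- the constraint is implicit, not enforced.
print(np.allclose(np.linalg.norm(points_3d, axis=1), 1.0))  # True
```

This shows why the representation is "reduced": tracking happens in two coordinates per point, and the conform-to-surface constraint never needs to be solved explicitly.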