Special Issue Paper
Weighted local shape blending for facial motion retargetting
Version of Record online: 29 MAY 2010
Copyright © 2010 John Wiley & Sons, Ltd.
Computer Animation and Virtual Worlds
Special Issue: CASA' 2010 Special Issue
Volume 21, Issue 3-4, pages 255–265, May 2010
How to Cite
Na, K.-G. and Jung, M.-R. (2010), Weighted local shape blending for facial motion retargetting. Comp. Anim. Virtual Worlds, 21: 255–265. doi: 10.1002/cav.346
- Issue online: 28 JUN 2010
Funding: Korea Science and Engineering Foundation (KOSEF)
Keywords:
- facial motion retargetting
- shape blending
Abstract
We present a system that maps a sparse configuration of facial markers captured from an actor onto target meshes by blending highly detailed target key meshes. One way of determining the blending weights is local shape blending, which segments the facial mesh into disjoint regions and represents each region as a weighted sum of the corresponding regions on the key meshes. This has the side effect of decoupling the natural correlation between different parts of the face. A recent method mitigates this problem by treating the entire face as a sum of overlapping soft regions centered at each control point, with the influence of each control point decreasing with distance. But it goes too far in the opposite direction: by treating control points independently, and thereby ignoring the spatial coherence among nearby control points on a face, it can cause unwanted interference between nearby control points (i.e. between their associated soft regions). To avoid both problems, we retain the basic framework of local shape blending, but consider nearby control points together by treating them as a weighted region with a weighting function defined over it.
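The core computation the abstract describes, expressing captured marker positions as a weighted sum of key shapes, reduces to a least-squares solve for the blending weights. The sketch below illustrates that idea only; the function names, array layout, and the plain (unregularized, region-free) least-squares formulation are assumptions for illustration, not the paper's actual weighted-region method.

```python
import numpy as np

def blend_weights(key_markers, captured):
    """Solve for blending weights w minimizing
    || sum_i w[i] * key_markers[i] - captured ||.

    key_markers: (k, m, 3) array, marker positions on each of k key shapes.
    captured:    (m, 3) array, marker positions captured from the actor.
    Returns a (k,) weight vector.

    NOTE: illustrative only -- a plain global least-squares fit, without
    the per-region weighting functions the paper introduces.
    """
    k, m, d = key_markers.shape
    A = key_markers.reshape(k, m * d).T   # each column: one key shape, flattened
    b = captured.reshape(m * d)
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w

def blend_mesh(key_meshes, w):
    """Apply the recovered weights to the full-resolution key meshes:
    the retargeted mesh is the weighted sum of the key meshes."""
    return np.tensordot(w, key_meshes, axes=1)
```

In the paper's local formulation this solve would be carried out per weighted region, with each marker's residual scaled by the region's weighting function, rather than once over the whole face as shown here.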