• facial motion retargeting;
  • shape blending


We present a system that maps a sparse configuration of facial markers captured from an actor onto target meshes by blending highly detailed target key meshes. One way of determining the blending weights is local shape blending, which segments a facial mesh into disjoint regions and represents each region of the facial mesh as a weighted sum of the corresponding regions on the key meshes. This has the side effect of decoupling the natural correlation between different parts of a face. A recent method mitigates this problem by considering the entire face as the sum of overlapping soft regions centered at each control point, with the influence of a control point diminishing with distance. But it goes too far in the opposite direction: by treating control points independently, and thereby ignoring the spatial coherence among nearby control points on a face, it can cause unwanted interference between nearby control points (i.e., between the associated soft regions). To avoid both problems, we retain the basic framework of local shape blending but consider nearby control points together, treating them as a weighted region with a weighting function defined over it. Copyright © 2010 John Wiley & Sons, Ltd.
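The local shape blending step described above can be illustrated with a minimal least-squares sketch: a region of the target face is expressed as a weighted sum of the corresponding regions on the key meshes. This is an assumed, simplified illustration (the function and variable names `solve_blend_weights`, `key_regions`, `target_region` are hypothetical, not from the paper, and constraints such as weight non-negativity or normalization are omitted).

```python
import numpy as np

def solve_blend_weights(key_regions, target_region):
    """Illustrative sketch of local shape blending (not the paper's method).

    key_regions:   (k, n, 3) vertex positions of one region on k key meshes.
    target_region: (n, 3) vertex positions of the same region on the target.
    Returns k blend weights minimizing ||sum_i w_i * K_i - T|| in the
    least-squares sense.
    """
    k = key_regions.shape[0]
    A = key_regions.reshape(k, -1).T      # (3n, k) basis matrix, one column per key mesh
    b = target_region.reshape(-1)         # (3n,) flattened target region
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w

# Example: if the target region is an exact mix of two key regions,
# the solver recovers the mixing weights.
rng = np.random.default_rng(0)
K = rng.standard_normal((2, 5, 3))        # two key meshes, 5 vertices each
T = 0.3 * K[0] + 0.7 * K[1]               # synthetic target region
w = solve_blend_weights(K, T)
```

Solving each region independently in this way is exactly what decouples the natural correlation between different parts of the face, which motivates the weighted-region formulation in the abstract.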