Facial motion retargeting has mainly been developed to achieve high fidelity between a source and a target model. We present a novel facial motion retargeting method that properly preserves the significant characteristics of the target face model. We focus on stylistic facial shapes and timings that clearly reveal the individuality of the target model after the retargeting process. The method works with a set of expression pairs between the source and target facial expressions and emotional sequence pairs of the source and target facial motions. We first construct a prediction model to place semantically corresponding facial shapes. Our hybrid retargeting model, which combines radial basis function (RBF) and kernel canonical correlation analysis (kCCA)-based regression methods, copes well with new input source motions without visual artifacts. A 1D Laplacian motion warping step follows the shape retargeting, replacing stylistically important emotional sequences and thus representing the characteristics of the target face. Copyright © 2011 John Wiley & Sons, Ltd.
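The abstract does not specify the RBF formulation used in the hybrid model; the following is a minimal sketch of scattered-data RBF regression as it is commonly applied to shape retargeting, mapping source expression vectors to corresponding target shapes. All names (`fit_rbf`, `apply_rbf`), the Gaussian kernel, and the ridge regularizer are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fit_rbf(src, tgt, sigma=1.0):
    """Fit RBF weights mapping source expressions to target shapes.

    src: (n, d_s) array of training source expression vectors.
    tgt: (n, d_t) array of semantically corresponding target shapes.
    """
    # Pairwise squared distances between training source expressions.
    d2 = ((src[:, None, :] - src[None, :, :]) ** 2).sum(-1)
    # Gaussian kernel matrix (an assumed kernel choice).
    K = np.exp(-d2 / (2.0 * sigma ** 2))
    # Solve K @ W = tgt; a tiny ridge term keeps the solve stable.
    W = np.linalg.solve(K + 1e-8 * np.eye(len(src)), tgt)
    return W

def apply_rbf(src, W, query, sigma=1.0):
    """Retarget new source expressions (query) to target shapes."""
    d2 = ((query[:, None, :] - src[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2.0 * sigma ** 2))
    return K @ W

# Toy usage: three corresponding (source expression, target shape) pairs.
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
tgt = np.array([[0.0], [2.0], [3.0]])
W = fit_rbf(src, tgt)
# Interpolation: training inputs reproduce their paired target shapes.
out = apply_rbf(src, W, src)
```

In practice the paper augments such a regression with kCCA to generalize to unseen source motions; this sketch covers only the interpolating half.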