
Facial performance illumination transfer from a single video using interpolation in non-skin region

Authors

  • Hongyu Wu
  • Xiaowu Chen (corresponding author)
  • Mengxia Yang
  • Zhihong Fang

Affiliation: State Key Laboratory of Virtual Reality Technology and Systems, School of Computer Science and Engineering, Beihang University, Beijing, China
Correspondence: Xiaowu Chen, State Key Laboratory of Virtual Reality Technology and Systems, School of Computer Science and Engineering, Beihang University, Beijing, China.

E-mail: chen@buaa.edu.cn

ABSTRACT

This paper proposes a novel video-based method to transfer the illumination from a single reference facial performance video to a target one taken under nearly uniform illumination. We first filter the key frames of the reference and target face videos with an edge-preserving filter. The illumination component of each reference key frame is then extracted by dividing the filtered reference key frame by the corresponding filtered target key frame in the skin region. Differences in the non-skin region caused by differing expressions between the reference and target faces may introduce artifacts, so we interpolate the illumination component of the non-skin region from that of the surrounding skin region to ensure spatial smoothness and consistency. Finally, the illumination components of the key frames are propagated to the non-key frames to ensure temporal consistency between adjacent frames. We obtain convincing results by transferring the illumination effects of a single reference facial performance video to a target one while preserving spatial and temporal consistency. Copyright © 2013 John Wiley & Sons, Ltd.
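To make the pipeline in the abstract concrete, the sketch below illustrates the per-key-frame illumination extraction it describes. This is a minimal illustration, not the authors' implementation: the skin mask, the choice of bilateral filtering as the edge-preserving filter, the quotient scaling, and the use of diffusion-based inpainting as the non-skin interpolation scheme are all assumptions standing in for the paper's own components.

```python
import cv2
import numpy as np

def extract_key_frame_illumination(ref_key, tgt_key, skin_mask):
    """Sketch of the per-key-frame illumination extraction.

    ref_key, tgt_key: aligned BGR key frames (uint8) from the reference
    and target videos. skin_mask: hypothetical uint8 mask (255 = skin)
    assumed to come from a separate face-segmentation step.
    """
    # Step 1: edge-preserving filtering of both key frames. A bilateral
    # filter is one common choice; the abstract does not name the filter.
    ref_f = cv2.bilateralFilter(ref_key, 9, 75, 75).astype(np.float32)
    tgt_f = cv2.bilateralFilter(tgt_key, 9, 75, 75).astype(np.float32)

    # Step 2: quotient image. In the skin region, the illumination
    # component is the filtered reference divided by the filtered target.
    illum = ref_f / (tgt_f + 1e-6)

    # Step 3: fill the non-skin region from the surrounding skin region.
    # Diffusion-based inpainting is used here as a stand-in for the
    # paper's interpolation scheme.
    non_skin = cv2.bitwise_not(skin_mask)
    illum_u8 = np.clip(illum * 128.0, 0, 255).astype(np.uint8)
    illum_u8 = cv2.inpaint(illum_u8, non_skin, 5, cv2.INPAINT_TELEA)
    return illum_u8.astype(np.float32) / 128.0

# A relit target frame would then be obtained by multiplying the target
# frame by the (propagated) illumination component, e.g.:
# relit = np.clip(frame.astype(np.float32) * illum, 0, 255).astype(np.uint8)
```

Note that the quotient image is stored at a fixed scale here only so that OpenCV's 8-bit inpainting routine can be applied; a production implementation would likely interpolate the floating-point quotient directly.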
