Posed and spontaneous communication of emotion via facial and vocal cues

    This study was funded in part by Biomedical Research Support Grants, Division of Research Resources, National Institutes of Health, to the Johns Hopkins University and to Harvard University, and by the Milton Fund of Harvard University. We thank Richard H. Brown and Janice A. Krinsky for their help in conducting the experiment. Judith A. Hall was formerly Judith Hall Koivumaki.

Abstract

Subjects' facial expressions were videotaped without their knowledge while they watched two pleasant and two unpleasant videotaped scenes (spontaneous facial encoding). Later, subjects' voices were audiotaped while they described their reactions to the scenes (vocal encoding). Finally, subjects were videotaped with their knowledge while they posed facial expressions appropriate to the scenes (posed facial encoding). The videotaped expressions were then presented for decoding to the same subjects. The vocal material, in both its original and an electronically filtered version, was rated by judges other than the original senders. Results were as follows: (a) accuracy of vocal encoding (measured by ratings of both the filtered and unfiltered versions) was positively related to accuracy of facial encoding; (b) posing increased the accuracy of facial communication, particularly for more pleasant affects and for less intense affects; (c) encoding of posed cues was correlated with encoding of spontaneous cues, and decoding of posed cues was correlated with decoding of spontaneous cues; (d) within both encoding and decoding, correlations between similar scenes were positive, while those between dissimilar scenes were low or negative; (e) although correlations between total encoding and total decoding were positive but low, correlations between encoding and decoding of the same scene were negative; (f) there were sex differences in decoding ability and in the relationships of personality variables with encoding and decoding of facial cues.
