Autism Spectrum Conditions (ASC) are a set of neurodevelopmental conditions marked by atypical social behavior and repetitive behavior/restricted range of interests. Autistic traits are distributed continuously in the general population, and individuals with ASC represent the high end of this distribution [Baron-Cohen, Wheelwright, Skinner, Martin, & Clubley, 2001; Robinson et al., 2011]. Individuals with ASC show reduced interest in social stimuli, which is thought to reflect reduced social motivation [Chevallier, Kohls, Troiani, Brodkin, & Schultz, 2012; Dawson et al., 2002]. At a neural level, individuals with autism show reduced activity in reward processing regions of the brain such as the ventral striatum when processing social rewards (e.g., happy faces) relative to controls from the general population [Scott-Van Zeeland, Dapretto, Ghahremani, Poldrack, & Bookheimer, 2010]. This diminished interest in socially rewarding stimuli could potentially cause a cascade of other developmental deficits, because social rewards (e.g., smiles) do not act as effective reinforcers for shaping socially appropriate behavior. A series of behavioral, electrophysiological, and neuroimaging studies provide further support for atypical reward system functioning in ASC [Dawson et al., 2005; Kohls et al., 2011; Schmitz et al., 2008; Scott-Van Zeeland et al., 2010].
Mimicry is a crucial part of our social behavioral repertoire. We use the term “mimicry” in this paper to refer to motor acts in which an individual copies a motor act of another individual, regardless of the extent of cognitive/neural processing involved. “Automatic mimicry” refers to mimicry that operates without any intention to produce such behavior [Heyes, 2011]. In some paradigms investigating this behavior, the tendency to mimic operates with or counter to an explicitly instructed movement [Bird, Leighton, Press, & Heyes, 2007; Press, Richardson, & Bird, 2010]. In such paradigms, participants are typically asked to execute a hand movement congruent or incongruent with an observed movement, and it is found that they are faster on congruent trials. The automaticity of mimicry is consistent both with theories that the supporting mechanisms are innate [Ferrari & Gallese, 2007; Ferrari et al., 2006] and with theories that they develop through domain-general processes of learning [Heyes, 2001]. If mimicry occurs in the absence of any specific task instruction to produce behavior of any type, it is referred to as “spontaneous mimicry.” Spontaneous mimicry is commonly studied in experiments where the participants are required to merely observe movements (commonly facial expressions) [Beall, Moody, McIntosh, Hepburn, & Reed, 2008; Sims, Van Reekum, Johnstone, & Chakrabarti, 2012].
Mimicry is associated with empathy, such that positive correlations have been found between spontaneous mimicry and levels of trait empathy [Hess, Philippot, & Blairy, 1999; Maringer, Krumhuber, Fischer, & Niedenthal, 2011; Sonnby–Borgström, 2002]. Furthermore, individuals with ASC have been found to exhibit lower trait empathy as well as reduced spontaneous mimicry of social stimuli [Chakrabarti & Baron-Cohen, 2006; Hermans, Putman, & Van Honk, 2006]. Interestingly, while individuals with ASC generally display comparable levels of automatic mimicry of face and body parts [e.g., hands: Bird et al., 2007; eyes/mouth: Press et al., 2010], they often show reduced spontaneous mimicry of social stimuli in more natural settings [Beall et al., 2008; Rogers, Hepburn, Stackhouse, & Wehner, 2003; Stel, van den Heuvel, & Smeets, 2008]. This raises the possibility that while the underlying machinery for mimicry may be intact in autism, it is not brought online spontaneously in social situations. This may be because of the low reward value ascribed to social stimuli.
To test this possibility, we investigated the hypothesis that the response to rewarding social stimuli (i.e., stimuli that can reasonably be recognized as belonging to a conspecific) and their spontaneous mimicry are directly related. According to this hypothesis, the valence (positive/negative) of social stimuli would not modulate mimicry in individuals with ASC in the way that it influences neurotypical individuals [Likowski, Muehlberger, Seibt, Pauli, & Weyers, 2008; Sims et al., 2012; Stel et al., 2010].
This study investigated individual differences in how reward modulates mimicry of social vs. nonsocial stimuli in the general population. Individual differences were measured using measures of autistic traits [Autism Quotient; Baron-Cohen et al., 2001] and empathy [Empathy Quotient; Baron-Cohen & Wheelwright, 2004]. Behavioral genetic studies suggest a similar etiology for autistic traits in the general population and at the extreme end of the distribution [Robinson et al., 2011]. This suggests that observed associations with the AQ in the general population are potentially generalizable to groups characterized by high AQ scores (such as individuals with ASC). To determine the impact of reward on social vs. nonsocial stimuli, human and robot hands were conditioned with high and low rewards, and participants were then tested on an automatic mimicry reaction time task as described in Bird et al. [2007]. This task has been associated with greater automatic mimicry of human than robot hands in neurotypical individuals [Press, Bird, Flach, & Heyes, 2005] and comparable levels of automatic mimicry of human and robot hands in ASC compared to controls [Bird et al., 2007].
We predicted that trait empathy would be positively associated with the extent of automatic mimicry for rewarding human hands compared to non-rewarding human hands. In parallel, we predicted that autistic traits would be negatively correlated with the extent of automatic mimicry for rewarding human hands compared to non-rewarding human hands. No such association was expected for automatic mimicry of rewarding vs. non-rewarding robot hands. The robot hands were included as a control condition since there is no evidence to suggest a direct relationship between empathy/autistic traits and the reward value of nonsocial objects.
Forty-seven participants aged between 18 and 40 were recruited, 11 of whom were excluded, leaving 36 participants (18 males, 18 females; mean age = years months, s.d. = years and months). Participants were excluded if they did not complete the experiment (two participants), if there were technical issues with the equipment (two participants), or if they had fewer than 75 valid trials in the motion task (see the section on EMG Measurement for criteria; seven participants). All participants had normal or corrected-to-normal vision and were right-handed. Participants completed the Autism Quotient (AQ) and Empathy Quotient (EQ) online (see Table 1). The study was approved by the University of Reading Research Ethics Committee.
The human and robotic hands used in the conditioning and testing phases were silhouettes derived from previous research [Press et al., 2005]. All silhouettes were matched for image dimensions and luminance. The human and robotic hands were each presented in two colors on a black background (one associated with high reward, and the other with low reward). During the conditioning phase, the hands were presented in the neutral position (see Fig. 1). In the test phase, motion was simulated by presenting the hand in the neutral position followed by an image of the hand either open or closed (see Fig. 1). The original photos of the human and robot hands were used as the stimuli for the practice trials.
Eight emotionally neutral objects were taken from the International Affective Pictures System [IAPS; Lang, Bradley, & Cuthbert, 1999] to present during both phases of the task (see below), serving as stimuli for a memory task that participants were ostensibly performing and ensuring that participants were paying attention to the stimuli on the screen. These distractor images were rated as producing little arousal and consisted of unrelated objects. All stimuli were displayed using E-Prime 2.0 (Psychology Software Tools, PA, USA).
After giving informed consent, participants were briefed on the conditioning task and sat 50 cm from a Viewsonic VE510s monitor (ViewSonic Corporation, California, USA). After completing the conditioning task, the electrodes were attached to the participants (see EMG Measurement for more detail). Participants were then briefed on how to complete the motion task. After completing the motion task, the electrodes were removed and the participants were debriefed.
Phase 1: Conditioning: During each trial a hand (in a neutral position, see Fig. 1) was presented on the right-hand side of the screen while participants completed a card-guessing game on the left-hand side. A card between 2 and 7 was presented face up, next to a face-down card. Participants were required to guess whether the face-down card was higher or lower than the face-up card [Sims et al., 2012]. Each hand was presented 30 times. Four distractor images, which participants were instructed to remember for a later task, were also included; each was presented twice, bringing the total to 128 trials (4 hands × 30 presentations + 8 distractor presentations). Participants won 90% of the trials against one human and one robot hand (“positive” hands), and lost 90% of the trials against the other human and robot hand (“negative” hands). The colors of the winning and losing hands were counterbalanced across participants. To ensure that participants attended to the hands, they were instructed to look at the images at the side of the screen as part of a subsequent memory task.
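For concreteness, the conditioning schedule described above can be sketched in code. This is a minimal illustration only; the function and stimulus labels are hypothetical and not taken from the original study.

```python
import random

def build_conditioning_trials(seed=0):
    """Sketch of the conditioning schedule: 30 trials per hand (4 hands)
    plus 2 presentations of each of 4 distractor images = 128 trials.
    "Positive" hands win 90% of card guesses; "negative" hands win 10%."""
    rng = random.Random(seed)
    hands = {
        "human_positive": 0.9, "human_negative": 0.1,
        "robot_positive": 0.9, "robot_negative": 0.1,
    }
    trials = []
    for hand, p_win in hands.items():
        # Fixed outcome ratio over 30 trials: 90% -> 27 wins, 10% -> 3 wins
        n_wins = round(p_win * 30)
        outcomes = ["win"] * n_wins + ["loss"] * (30 - n_wins)
        rng.shuffle(outcomes)
        trials += [{"stimulus": hand, "outcome": o} for o in outcomes]
    for d in range(4):
        # Distractor images carry no card-game outcome
        trials += [{"stimulus": f"distractor_{d}", "outcome": None}
                   for _ in range(2)]
    rng.shuffle(trials)
    return trials
```

The fixed 27/3 win–loss split per hand is one way to realize the reported 90%/10% contingency; the paper does not state whether outcomes were fixed-ratio or sampled probabilistically.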
Phase 2: Motion task: Participants were instructed to either open or close their right hand when presented with simulated motion of one of the robot or human hands opening or closing (see Fig. 1). The instruction (“open” or “close”) was presented for 1,000 ms before the hand stimuli. The hand was then presented in a neutral posture for 1,000 ms, followed by an open or closed posture for 1,000 ms. Upon the presentation of this posture, the participant was required to complete the instructed motion. The instructed motion would either match the presented motion (“compatible” trials) or conflict with it (“incompatible” trials). The response time (RT) from the onset of the image of the closed or open hand was recorded. Contrasts between compatible and incompatible motion RTs were used to calculate the extent of mimicry (see Data Analysis for more detail). There were 16 compatible and 16 incompatible trials for each of the four hand types, resulting in 32 trials per hand and 128 trials in total. The order of trials was random for every participant. After the electrodes were attached, participants were instructed to rest their right arm on the table at an angle of 45 degrees away from them and the screen. This posture prevented the participants' hand position from matching that of the hand on the screen, eliminating spatial compatibility between the participant's hand and the human or robot hand. They were instructed how to open and close their hands, and 10 practice trials were conducted to ensure the participants' open and close movements produced a clear electromyography (EMG) signal. At the end of the task participants were shown all eight distractor images and asked to identify which images had already been presented in the conditioning task. All participants were 75% or more accurate in identifying the presence of novel distractor stimuli, suggesting they attended to the stimuli during the conditioning phase.
EMG activity was measured during the test phase using sensors placed over the first dorsal interosseous muscle. A ground electrode was placed on the central forehead. The raw signal was recorded by an ML-870 Power Lab (AD Instruments, Australia) and passed through an ML-138 Octal Bio amplifier (AD Instruments, Australia) with a gain of 10,000. The raw signal was sampled at 1 kHz and then digitally filtered with a 20–500 Hz band-pass filter. Baselines were calculated by recording the standard deviation (SD) of the EMG activity for the 100 ms before the onset of each motion image; a median baseline was calculated for every 32 consecutive trials. Response onset was defined as the point at which (a) the SD of the signal was at least 1.2 times the SD of a sliding 30-s window, individually selected to most accurately detect response periods by eye, and (b) the SD of a 20-ms epoch was greater than 2.75 times the median baseline calculated above [Press et al., 2005]. Trials were rejected if participants failed to produce activity greater than 2.75 times the baseline SD. Trials in which the participant responded earlier than 100 ms or later than 1,000 ms after the onset of the motion image were manually removed.
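The amplitude-threshold part of the onset criterion can be sketched as follows. This is a simplified illustration implementing only criterion (b); the function name and the synthetic test signal are our own assumptions, not part of the original analysis pipeline.

```python
import numpy as np

FS = 1000      # sampling rate in Hz (signal sampled at 1 kHz)
EPOCH_MS = 20  # onset epoch length, ms
THRESH = 2.75  # multiple of the median baseline SD

def response_onset_ms(trial, median_baseline_sd):
    """Scan a single-trial EMG array (one sample per ms at 1 kHz) and
    return the latency (ms from motion-image onset) of the first 20-ms
    epoch whose SD exceeds 2.75x the median baseline SD, or None if no
    epoch crosses threshold (trial would then be rejected)."""
    samples_per_ms = FS // 1000
    win = EPOCH_MS * samples_per_ms
    for start in range(0, len(trial) - win + 1):
        if np.std(trial[start:start + win]) > THRESH * median_baseline_sd:
            return start / samples_per_ms
    return None
```

A full reimplementation would also apply criterion (a) (the 1.2× sliding-window check, which the paper says was tuned by eye) and discard onsets outside the 100–1,000 ms window.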
Three sets of dependent variables were generated for testing correlations with the personality trait measures. First, to calculate the participants' tendency to mimic each hand, the average response time on compatible trials was subtracted from the average response time on incompatible trials, producing a measure of hand mimicry. This was done separately for human and robot hands. To test the impact of reward conditioning on this measure, the hand mimicry of the negatively conditioned stimulus (e.g., negative human hand mimicry) was subtracted from the hand mimicry of the positively conditioned stimulus (e.g., positive human hand mimicry) to produce conditioned mimicry. This too was done separately for human and robot hands. Higher conditioned mimicry indicates greater mimicry of positively than negatively conditioned hands.
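As a toy example, the two contrasts above can be computed as follows. The RT values are invented purely for illustration and do not come from the study.

```python
def conditioned_mimicry(rts):
    """rts maps (hand, conditioning, compatibility) -> mean RT in ms.
    Hand mimicry       = incompatible RT - compatible RT.
    Conditioned mimicry = positive-hand mimicry - negative-hand mimicry,
    computed separately for human and robot hands."""
    def hand_mimicry(hand, cond):
        return (rts[(hand, cond, "incompatible")]
                - rts[(hand, cond, "compatible")])
    return {hand: hand_mimicry(hand, "positive") - hand_mimicry(hand, "negative")
            for hand in ("human", "robot")}

# Hypothetical mean RTs (ms): the positive human hand shows the largest
# compatible-vs-incompatible difference, i.e. the most mimicry.
example_rts = {
    ("human", "positive", "compatible"): 400,
    ("human", "positive", "incompatible"): 450,
    ("human", "negative", "compatible"): 410,
    ("human", "negative", "incompatible"): 430,
    ("robot", "positive", "compatible"): 420,
    ("robot", "positive", "incompatible"): 440,
    ("robot", "negative", "compatible"): 415,
    ("robot", "negative", "incompatible"): 435,
}
```

With these invented values, human hand mimicry is 50 ms (positive) vs. 20 ms (negative), giving a conditioned mimicry of 30 ms for human hands and 0 ms for robot hands.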
All test statistics presented in the following section include two-tailed P-values.