The effect of interbrain synchronization in gesture observation: A fNIRS study

Abstract

Introduction: Gestures characterize individuals' nonverbal communicative exchanges and take on different functions. Several lines of neuroscientific research have investigated the neural correlates underlying the observation and execution of different gesture categories. In particular, studies focusing on gesture observation have emphasized the presence of mirroring mechanisms in specific brain areas, which appear to be involved in gesture observation and planning.

Materials and methods: The present study used functional Near-Infrared Spectroscopy (fNIRS) to investigate the neural mechanisms underlying the observation of affective, social, and informative gestures with positive and negative valence in dyads composed of an encoder and a decoder. Variations in oxygenated (O2Hb) and deoxygenated (HHb) hemoglobin concentrations were collected simultaneously from both individuals using a hyperscanning paradigm, allowing the recording of brain responsiveness and interbrain connectivity.

Results: The results showed different brain activation and an increase in interbrain connectivity according to the type of gesture observed, with a significant increase in O2Hb responsiveness and interbrain connectivity and a decrease in HHb responsiveness for affective gestures in the dorsolateral prefrontal cortex (DLPFC) and for social gestures in the superior frontal gyrus (SFG). Furthermore, concerning the valence of the observed gestures, an increase in O2Hb activity and interbrain connectivity was observed in the left DLPFC for positive affective gestures compared to negative ones.

Conclusion: The present study showed different brain responses underlying the observation of different types of positive and negative gestures.
Moreover, the computation of interbrain connectivity allowed us to highlight mirroring mechanisms in gesture-specific frontal regions during gesture observation and action planning.


| INTRODUCTION
Gestures are configured as a communicative vehicle that characterizes verbal and nonverbal communication (McNeill, 1992). Studies in the psychological, social, and linguistic fields have investigated gestures through different perspectives, analyzing the relationship between thought, language, and action (Kong, Law, Kwan, Lai, & Lam, 2015). Studies in the neuroscientific field, by contrast, have observed the neural correlates underlying the perception and execution of gestures with different functions (Bates & Dick, 2002; Green et al., 2009; Willems & Hagoort, 2007). Action observation, recognition, and interpretation, indeed, appear to be fundamental abilities for communication and social perception processes (Chong, Williams, Cunnington, & Mattingley, 2008).
In light of this evidence, in order to investigate the brain correlates underlying the observation of different positive and negative types of gestures (affective, social, and informative), the present study recorded the neural responses of encoders and decoders using fNIRS in hyperscanning. This is an effective neuroimaging technique for recording individuals' neural activity underlying emotional or social processes (Balconi & Cortesi, 2016; Balconi, Vanutelli, & Grippa, 2017; Crivelli et al., 2018) under natural or maximally ecological conditions (Balconi & Molteni, 2016), providing information on interbrain tuning, "resonance," and implicit coupling mechanisms (Balconi, Gatti, & Vanutelli, 2018; Vanutelli et al., 2016).
Specifically, the present study aimed to observe possible differences in individuals' neural responses underlying the observation of different types of gestures (affective, social, and informative) of different valence (positive and negative).
In particular, affective gestures allow individuals to express their moods and share their emotional experiences with the interlocutor (Tomasello, Carpenter, Call, Behne, & Moll, 2005).
Social gestures, by contrast, are aimed at managing interpersonal relationships and are useful for starting, maintaining, or interrupting an interaction with another individual (Kendon, 2017), supporting the implementation of inclusion, cooperation, and exclusion behaviors that can elicit positive and negative emotions in the interlocutor (Bavelas, Chovil, Lawrie, & Wade, 1992; Bressem & Müller, 2017; Calbris, 2011).
Finally, informative gestures are aimed at communicating a physical state to the interlocutor, with the purpose of directing the decoder's attention toward a specific element, thereby satisfying different communication functions that can provide positive and negative emotional experiences (Enfield, 2009; Enfield, Kita, & de Ruiter, 2007).
Although subcortical structures contribute to emotional processes, our interest focuses mainly on the cortical regions involved in cognitive and emotional processes, since fNIRS measures cortical neuronal activity through the hemodynamic changes due to neurovascular coupling (Curtin et al., 2019; Fuster et al., 2005; Heeger & Ress, 2002).
In this regard, as demonstrated by previous studies (Blair, Morris, Frith, Perrett, & Dolan, 1999; Wildgruber et al., 2004), affective and social gestures preferentially activate frontal regions, such as the medial part of the ventral prefrontal cortex and the DLPFC, which are implicated in processing the emotional valence of more expressive and emotional gestures.
For informative gestures, instead, we expected to observe an increase in O2Hb activity in parietal areas, which are more involved in visual and sensorimotor integration processes and in the imagination of the body in time and space (Janowski, Kurpas, Kusz, Mroczek, & Jedynak, 2013; Nicolle et al., 2012; Ruby & Decety, 2001).
Moreover, considering gesture valence (positive, negative), we expected to observe a different cerebral asymmetry in the DLPFC, which is more involved in interpersonal and emotional processes (Bavelas et al., 1992; Bressem & Müller, 2017; Calbris, 2011; Kendon, 2017; Müller, 2004, 2016), according to the observation of positive and negative affective gestures, which are those most involved in the communication of emotional and affective processes (Tomasello, Carpenter, & Liszkowski, 2007). In particular, this prediction was based on the model of neural signatures of affective experience (Balconi, Grippa, & Vanutelli, 2015; Davidson, 1992), which postulates that positive stimuli elicit approach behaviors associated with greater left frontal activation, whereas negative stimuli elicit avoidance behaviors associated with greater right frontal activation.

Finally, thanks to the use of fNIRS in hyperscanning, which allows the simultaneous recording of the activity of the two interacting individuals, we expected to observe an increase in interbrain connectivity and resonance mechanisms in frontal areas during the observation of affective and social gestures. Specifically, we expected to observe an increase in interbrain connectivity between encoder and decoder in frontal areas during the observation of affective and social gestures, and in parietal areas for informative ones, due to the presence of mirroring mechanisms activated during action observation, imagination, and planning, in line with the specificity of these brain areas in response to gesture types. Indeed, frontal areas are more involved in relational, prosocial, and empathic processes (Balconi & Bortolotti, 2012; Balconi, Falbo, & Conte, 2012; Rameson & Lieberman, 2009), while parietal areas are more implicated in processes concerning gesture observation and execution (Caplan, 2003; Ekstrom et al., 2005; Jones & Wilson, 2005; Sirota et al., 2008).
Starting from this evidence, we expected to observe a similar neural activation in the encoder, who observed the gesture and mentally planned the action to be subsequently reproduced, and in the decoder, who only observed the gesture without any action reproduction.

Indeed, as demonstrated by previous studies, mirroring processes create a direct link between gesture observation and execution, in both the actor who is required to subsequently reproduce the action and the observer who simply watches it (Holle et al., 2008; Huxham et al., 2009), because action observation activates the same brain areas involved in action execution.

| Participants
Seventeen same-gender dyads of participants (M age = 26.98; SD age = 0.03) were recruited, for a total of 34 subjects. In particular, 14 dyads were composed of female participants and three dyads of male participants. Recruited participants were university students.
The participants in each dyad did not know each other. Within each dyad, one participant was randomly assigned the role of encoder and the other the role of decoder, and the two roles involved different functions. Participants were recruited according to predefined inclusion criteria.

| Procedure
For the experiment, participants were invited to sit in a room at a distance of 60 cm from a centrally placed computer, on which they observed videos reproducing different gesture types, presented through E-Prime 2.0 software (Psychology Software Tools, Inc.). Specifically, 60 videos reproducing a nonverbal interaction between two actors, characterized by different gesture types (affective, social, and informative, with positive and negative valence), were shown to participants. The 60 videos were presented in three randomized blocks, each consisting of 20 stimuli, with an interval of a few minutes between blocks to prevent participants' fatigue. The 60 videos consisted of: 10 affective gestures with positive valence, aimed at communicating a state of well-being to the interlocutor; 10 affective gestures with negative valence, aimed at transmitting a state of malaise; 10 social gestures with positive valence, aimed at starting or maintaining a relationship with the interlocutor; 10 social gestures with negative valence, aimed at interrupting the relationship with the interlocutor; and 10 informative gestures with positive valence and 10 with negative valence, aimed at directing the attention of the interlocutor toward a specific object in the environment. The valence of informative gestures was defined by the context introduced before each gesture video.
Specifically, the experiment first required both dyad participants to observe the videos, which appeared on the screen for a duration of 3 s. Subsequently, the participant randomly identified as the encoder was asked to reproduce the observed gesture toward his or her companion, the decoder. The experiment was carried out as follows: an initial phase of task familiarization, followed by the execution of the three task blocks (in randomized order). Each trial consisted of the presentation of a 2 s black screen; the presentation of a slide containing a context sentence, lasting 4 s, to help individuals understand the meaning of the gesture presented; the appearance of the video reproducing the gesture to be observed, for 3 s; the presentation of a 4 s black screen; and the presentation of a slide with the "go" signal indicating that participants should reproduce the gesture (Figure 1).
Fourteen judges (seven males and seven females; M age = 28.34, SD age = 0.04) were recruited for stimulus validation using a seven-point Likert scale. In particular, the evaluation concerned several gesture features, such as commonality, frequency of use, complexity, social meaning, familiarity, and emotional impact, for the three types of gesture (affective, social, and informative). Statistical analysis verified that all gestures were homogeneous for the previously mentioned characteristics, differing only in the emotional degree and social content that differently characterize affective, social, and informative gestures.

| fNIRS recording and signal processing
A NIRScout system (NIRx Medical Technologies, LLC) with a 16-optode matrix was used to record hemodynamic responses, consisting of variations in O2Hb and HHb concentrations. Specifically, using an ElectroCap, eight sources and eight detectors were placed on each participant's scalp following the 10/5 international system (Oostenveld & Praamstra, 2001).
The distance between contiguous sources and detectors was kept at 30 mm, and near-infrared light of two wavelengths was used. Raw time series were visually inspected to detect noisy channels (e.g., due to large motion artifacts, sudden amplitude changes, or poor coupling), and channels with poor optical coupling, indicated for example by the absence of the ~1 Hz heartbeat oscillation in the raw signal, were excluded (Pinti et al., 2015).
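The heartbeat-based channel check described above can also be automated rather than performed purely by eye. The following sketch is an illustrative implementation, not the authors' pipeline; the sampling rate, cardiac frequency band, and power-ratio threshold are assumptions. It flags a channel as well coupled when a sufficient fraction of its spectral power falls in the band around 1 Hz:

```python
import numpy as np
from scipy.signal import welch

def has_heartbeat(raw, fs, band=(0.7, 1.5), ratio_thresh=0.3):
    """Return True if a raw fNIRS intensity series shows the ~1 Hz
    cardiac oscillation expected of a well-coupled optode.
    Band and threshold are illustrative, not values from the paper."""
    freqs, psd = welch(raw, fs=fs, nperseg=min(len(raw), 512))
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    # fraction of total spectral power inside the heartbeat band
    return psd[in_band].sum() / psd.sum() > ratio_thresh

# Synthetic demo: a well-coupled channel carries a 1.1 Hz pulse,
# a poorly coupled one only slow drift and weak instrument noise.
fs = 10.0
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(0)
good = np.sin(2 * np.pi * 1.1 * t) + 0.3 * rng.standard_normal(t.size)
bad = 0.5 * np.sin(2 * np.pi * 0.05 * t) + 0.05 * rng.standard_normal(t.size)
```

In practice such an automatic check would complement, not replace, the visual inspection reported here.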
Mean O2Hb and HHb concentrations for each channel were calculated for each gesture category (affective, social, and informative) and valence (positive and negative). The mean concentration of each channel was computed by averaging data across trials, starting from the appearance of the video reproducing the gesture to be observed, and the resulting values were expressed as effect sizes. The average of this normalized index can be calculated despite the arbitrary unit, since the effect size parameter is not influenced by the differential pathlength factor (DPF); this overcomes the fact that fNIRS raw data are initially relative values that cannot be directly compared across participants or channels (Matsuda & Hiraki, 2006).
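One common way to obtain the unit-free index described above is to express each channel's trial-averaged concentration change as a Cohen's-d-style effect size against a pre-stimulus baseline. The exact formula is not reported in the text, so the sketch below is a plausible reconstruction; the array shapes and the baseline definition are assumptions:

```python
import numpy as np

def channel_effect_size(task, baseline):
    """Cohen's-d-like effect size for one channel.

    task:     (n_trials, n_task_samples) O2Hb segments during
              gesture observation.
    baseline: (n_trials, n_base_samples) pre-stimulus segments.

    Because the index is standardized and unitless, it is unaffected
    by the unknown differential pathlength factor and can be averaged
    across channels and participants."""
    per_trial_task = task.mean(axis=1)   # mean concentration per trial
    per_trial_base = baseline.mean(axis=1)
    pooled_sd = baseline.std(ddof=1)     # baseline variability
    return (per_trial_task.mean() - per_trial_base.mean()) / pooled_sd

# Synthetic demo: task segments shifted up by one baseline SD,
# so the effect size should come out close to 1.
rng = np.random.default_rng(1)
base = rng.standard_normal((20, 40))
task = base + 1.0
d = channel_effect_size(task, base)
```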

| Data analysis
Three types of analyses were completed on the O2Hb- and HHb-dependent measures. First, an ANOVA was applied to single-brain data to test the effect of the independent measures on O2Hb and HHb concentrations for each participant (single-brain analysis). Second, a Pearson correlational analysis was calculated for each encoder/decoder couple and each dependent measure, in order to compute the synchronization values within each couple for each measure.
Third, these indices were entered as dependent variables into different ANOVA tests in order to evaluate differences in synchrony strength across the experimental conditions (interbrain connectivity analysis).
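The synchronization step described above, a Pearson correlation computed per dyad for each dependent measure, can be sketched as follows. The channel-to-channel pairing (homologous channels in encoder and decoder) and the array layout are assumptions made for illustration:

```python
import numpy as np
from scipy.stats import pearsonr

def interbrain_synchrony(encoder, decoder):
    """Per-channel Pearson correlation between the simultaneously
    recorded O2Hb (or HHb) series of the two dyad members.

    encoder, decoder: (n_channels, n_samples) arrays, recorded in
    hyperscanning so that samples are time-aligned across brains."""
    return np.array([pearsonr(e, d)[0] for e, d in zip(encoder, decoder)])

# Synthetic demo: decoder mirrors channel 0 and inverts channel 1,
# so the expected correlations are +1 and -1 respectively.
rng = np.random.default_rng(2)
enc = rng.standard_normal((2, 200))
dec = np.vstack([enc[0], -enc[1]])
r = interbrain_synchrony(enc, dec)
```

Before entering such r values into an ANOVA, a Fisher z-transform (np.arctanh) is commonly applied to make them approximately normal; whether that step was used here is not stated in this section.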
The degrees of freedom were corrected for all the ANOVAs using Greenhouse-Geisser epsilon with a 0.05 significance level.

Moreover, contrast analyses and multiple comparisons with the Bonferroni test were applied. Finally, the normality of the data distribution was tested with kurtosis and asymmetry tests. Given the multiple comparisons, type I and type II errors were considered, and a power analysis supported an adequate limitation of these errors.

| DISCUSSION
The present study aimed to investigate the brain responsiveness and interbrain correlates associated with the observation of different gesture types during a nonverbal interaction between encoder and decoder. In particular, it aimed to investigate the neural correlates underlying the observation of affective, social, and informative gestures with positive and negative valence. Specifically, in order to observe the interacting individuals' brain responsiveness and brain tuning mechanisms, single-brain and interbrain analyses were conducted.
First, consistent with our hypothesis, the single-brain analysis showed an increase in O2Hb and a decrease in HHb activity during the observation of affective gestures in the DLPFC and of social gestures in the SFG. This result highlights the activation of specific brain areas according to the category of gesture observed.
Specifically, the greater O2Hb activity in the DLPFC during affective gesture observation may be due to the functional significance of these gestures, which are aimed at transmitting emotionally charged meanings and sharing emotional experiences (Tomasello et al., 2005). Considering, therefore, the functional meaning of affective gestures, the increase in O2Hb activity in the DLPFC can be related to a higher involvement of this cerebral area in the emotional, prosocial, and empathic processes (Baeken et al., 2011; Balconi, Pezard, Nandrino, & Vanutelli, 2017; Kalbe et al., 2010) that individuals may experience during affective gesture observation.
Moreover, the DLPFC appears to be involved in several processes that can be activated by affective gestures, such as theory-of-mind mechanisms, the management of interpersonal relationships, and the understanding of other people's states (Bavelas et al., 1992; Bressem & Müller, 2017; Calbris, 2011; Kendon, 2017; Müller, 2004, 2016). These results also appear to be confirmed by previous research that observed an increase in frontal activity during the observation of emotional affective gestures (Peyk, Schupp, Keil, Elbert, & Junghöfer, 2009).
Furthermore, the DLPFC, compared with the SFG, FEF, and dorsal premotor cortex, appears to be more involved in the ability to respond motivationally to innate or learned nonverbal social cues, such as facial expressions and emotional tone in speech or gestures. Moreover, the DLPFC appears to be involved in understanding and reinterpreting the meaning of a stimulus in order to downregulate the emotional response (Gökçay & Yildirim, 2010).
Similarly, the increase in O2Hb in the SFG during social gesture observation can be interpreted in light of the functional meaning of social gestures, which are finalized to initiate, establish, or interrupt a relationship with another individual (Bavelas et al., 1992; Kendon, 2017). Given this functional significance, the greater O2Hb activation in the SFG may reflect the involvement of this cerebral area in mechanisms of behavior control and in the implementation of others' intentions (Nakamura et al., 1998; Shima & Tanji, 2017).

Concerning interbrain connectivity, the results are consistent with the model of neural signatures of affective experience (Davidson, 1992), which postulates that stimuli perceived by individuals as positive induce approach behaviors and positive emotional experiences, leading to a greater activation of the left frontal side, while a greater activation of the right frontal side is associated with the presentation of negative stimuli and avoidance behaviors (Balconi & Mazza, 2009; Davidson, 1992; Harmon-Jones, 2003).

[FIGURE 6: (a) Histogram of O2Hb interbrain connectivity for the three gesture types (affective, social, and informative) in the left and right DLPFC. Bars represent ±1 SE; stars mark statistically significant (p < .05) pairwise comparisons. (b) O2Hb interbrain connectivity for affective gestures in the left and right DLPFC in encoder and decoder; red indicates greater O2Hb interbrain connectivity in the left DLPFC compared with the right for affective gestures.]

This result shows how these cerebral areas, which support emotional regulation, interaction, and social understanding mechanisms (Baker, Bloom, & Davis, 2016; Kalbe et al., 2010; Liu et al., 2015; Suzuki et al., 2011), are involved in mirroring mechanisms that allow individuals to synchronize their brain responses during gesture observation (Marsh, Blair, Jones, Soliman, & Blair, 2009).
Moreover, this result highlights that, during affective and social gesture observation, neural synchronization and implicit coupling mechanisms occur between encoders and decoders, presupposing a sharing and co-representation of actions that equally involves both individuals, as if they were preparing to implement a synchronized response to the movement.
Furthermore, in light of this result, it emerges that both encoder and decoder understand the meaning of these types of gestures during observation, which leads individuals to prepare for the development of a joint action. Indeed, as shown by previous studies, during the development of joint actions synchronic and diachronic mechanisms take place, increasing implicit neural coupling and interpersonal coupling dynamics (Balconi, Fronda, & Vanutelli, 2019; Balconi, Pezard, et al., 2017).
Concerning gesture valence, the interbrain connectivity analysis revealed an increase in O2Hb activity in the left DLPFC during the observation of positive affective gestures. This result confirms the frontal brain asymmetry, postulated by the dual-system model of neural signatures of affective experience, in response to the presentation of positive and negative stimuli (Davidson, 1992).
Finally, it is interesting to note that the present study did not reveal any significant differences in the brain activity of encoder and decoder during gesture observation, despite their different roles, which required the encoder to observe the gesture in view of its future reproduction and the decoder only to observe the gesture reproduced in the video without any further action. This direct combination of observation and planning of the gesture has been observed in several studies (Chong et al., 2008; Rizzolatti & Craighero, 2004; Rizzolatti et al., 2001), pointing out that action understanding occurs when observation activates the observer's motor regions (Chong et al., 2008; Rizzolatti et al., 2001). This similar neural activation shows the involvement of the same cerebral areas in gesture observation, imagination, and planning (Buccino, Binkofski, & Riggio, 2004; Chong et al., 2008; Coricelli et al., 2005; Gallese, 2003; Gallese et al., 1996; Rizzolatti & Craighero, 2004; Rizzolatti et al., 1996; Wilson & Knoblich, 2005). In the present study, the main frontal areas involved in the encoder/decoder response may support mirroring mechanisms in the case of affective and social action representation, able to produce a dual resonance in both the active and the passive actor.
In conclusion, the present study highlighted different patterns of brain activation and interbrain connectivity according to the type and valence of the gestures observed. Despite the potential of this study, some limitations should be highlighted and taken into consideration in future studies.
First, increasing the sample size would increase the power of the observations obtained. Second, the study could be repeated using different interaction contexts for the observation of specific categories of gestures.
Third, the use of other neuroscientific techniques (such as electroencephalography) could allow us to gather further data on the temporal evolution of the interbrain dynamics, useful to confirm or extend the present results.
Fourth, to better generalize the present results, a larger sample is suggested for future investigations.
At present, the power analysis supports the results as those of a pilot study, in the absence of a reference population for the sample size. Fifth, future analyses could consider the comparison between encoder and decoder during the step of gesture reproduction, in which the encoder reproduces the gesture toward the decoder, who passively receives it, in order to investigate the neural mechanisms underlying this phase, which are quite different from the mirroring mechanisms present during gesture observation. Finally, besides mechanisms of synchrony and symmetric interbrain connectivity in the same cerebral areas, future studies should explore asymmetric patterns of coupling in different cerebral areas, to observe the different psychological processes of the subjects during social interactions.

CONFLICT OF INTEREST
The authors declare no conflict of interest.

AUTHOR CONTRIBUTION
MB contributed to the conception and design of the study and wrote the first draft and each section of the manuscript. MB and GF contributed to the final writing and revision of the manuscript, and read and approved the submitted version.

DATA AVAILABILITY STATEMENT
The data that support the findings of this study are available from the corresponding author upon reasonable request.