Social cognition in the blind brain: A coordinate‐based meta‐analysis

Abstract Social cognition skills are typically acquired on the basis of visual information (e.g., the observation of gaze, facial expressions, gestures). In light of this, a critical issue is whether and how the lack of visual experience affects the neurocognitive mechanisms underlying social skills. This issue has been largely neglected in the literature on blindness, even though difficulties in social interactions may be particularly salient in the life of blind individuals (especially children). Here we provide a meta-analysis of neuroimaging studies reporting brain activations associated with the representation of the self and others in early blind individuals and in sighted controls. Our results indicate that early blindness does not critically affect the development of the "social brain": social tasks performed on the basis of auditory or tactile information drove consistent activations in nodes of the action observation network, which is typically active during the actual observation of others in sighted individuals. Interestingly, though, activations along this network appeared more left-lateralized in blind than in sighted participants. These results may have important implications for the development of specific training programs to improve social skills in blind children and young adults.

Much of the literature in social neuroscience has focused on the neural processing of visual social stimuli. Two brain systems, the action observation/mirror system and the theory of mind/mentalizing system, are held responsible for the perception and representation of other individuals' states, actions and intentions (Van Overwalle & Baetens, 2009). Neuroimaging studies identify fronto-parietal and occipito-temporal regions, collectively termed the action observation network (AON), as critically involved in processing others' actions and the meaning underlying them, via automatic simulation routines (Caspers, Zilles, Laird, & Eickhoff, 2010). This system allows us to internally, rapidly and intuitively simulate observed actions within our own sensorimotor system, providing an enriched "understanding" of another person's goals and intentions on the basis of low-level behavioral input (Caspers et al., 2010; Iacoboni et al., 2005). In particular, observing actions recruits the superior and middle temporal gyri (STG and MTG), the inferior parietal lobule (IPL) and the inferior frontal gyrus (IFG) (Gardner, Goulden, & Cross, 2015). In contrast, the mentalizing system allows us to make inferences about others' mental and affective states (Molenberghs, Johnson, Henry, & Mattingley, 2016), involving the medial precuneus and temporo-parietal junction (TPJ), as well as the ventro-medial and dorso-medial prefrontal cortex (Amodio & Frith, 2006). Consistent evidence points to gaze perception as one of the key factors for mentalizing ability (Calder, Young, Keane, & Dean, 2000). Although the AON and the mentalizing network are largely distinct, they likely play complementary roles during social interactions, and recent meta-analytic evidence suggests a common involvement of certain brain regions in both, such as the right pSTS bordering the TPJ (Arioli & Canessa, 2019).
In particular, both systems are engaged during social interactions: the mirror system is responsible for the preparation of our own actions and the simulation of others' actions, while the mentalizing system allows us to represent others' intentions, drawing on the capacity to understand others' thoughts and beliefs (Sperduti, Guionnet, Fossati, & Nadel, 2014).
Considering the central role of vision in supporting human social cognition, a critical issue is the impact that blindness, and in particular the lack of vision since birth, may have on the development of social cognition, at both the functional and neural level. Blindness may indeed affect the development of emotional responsiveness and social skills, possibly predisposing to features of social isolation (including autism, e.g., Brunes, Hansen, & Heir, 2019; Hobson, Lee, & Brown, 1999), as is well known to educators of the blind, who are challenged with the need to develop effective programs to promote social skills in blind children and youngsters (see Sacks, Kekelis, & Gaylord-Ross, 1992; Sacks & Wolffe, 2006). Accordingly, studies carried out in laboratory contexts suggest that blind children show impairments in several social abilities, such as representing others' mental and affective states (Brambring & Asbrock, 2010; Dyck, Farrugia, Shochet, & Holmes-Brown, 2004; Green, Pring, & Swettenham, 2004) and engaging in social interactions (Pérez-Pereira & Conti-Ramsden, 2013; Tadic, Pring, & Dale, 2010). Blind children also seem to exhibit a more limited repertoire of facial expressions than sighted children (Tröster & Brambring, 1992; see also Webb, 1977). The acquisition of verbal skills may reduce the impact of early visual deprivation on social capacities (Bedny & Saxe, 2012), with blind adults being able to understand other individuals' emotions and mental states in a way comparable to sighted individuals (e.g., Gamond, Vecchi, Ferrari, Merabet, & Cattaneo, 2017; Oleszkiewicz, Pisanski, & Sorokowska, 2017). Nonetheless, social cognition is likely mediated by at least partially different strategies and mechanisms in blind and sighted individuals.
For example, blind individuals rely on partially different strategies in forming impressions of social actors, have difficulties in posing emotional expressions (Valente, Theurel, & Gentaz, 2018), and do not seem to show valence-dependent hemispheric lateralization when processing emotions (Gamond et al., 2017). Whereas a consistent body of neuroimaging research has investigated how blindness affects perceptual and cognitive processes at the neural level, only a few studies have systematically investigated how blindness affects the way the brain mediates social processes. Two pioneering fMRI studies first showed that nonvisual modalities may still drive the development of the cortical networks underlying action recognition and Theory of Mind processes (e.g., Bedny, Pascual-Leone, & Saxe, 2009; Ricciardi et al., 2009). While these observations suggest that the large-scale functional organization of the "social brain" is maintained in congenitally blind individuals, there is also evidence that brain networks specifically devoted to face or voice processing may develop differently in the absence of visual experience (e.g., Holig, Focker, Best, Roder, & Buchel, 2014; Pietrini et al., 2004; Van Ackeren, Barbero, Mattioni, Bottini, & Collignon, 2018).
Meta-analyses are a useful approach to highlight the consistency of neural patterns across different experimental studies, by integrating data into a single statistical analysis. Whereas a recent meta-analysis by Zhang et al. (2019) clarified the neurocognitive mechanisms underlying language, spatial and object processing in early blind individuals, a comparable approach has never been implemented in the domain of social cognition. In light of the above, in this study we carried out a quantitative meta-analysis of the available neuroimaging literature to provide more solid evidence on: (a) the brain regions associated with the neural representation of others in early blind individuals and (b) the specific brain activations in early blind individuals compared with sighted control individuals. Specifically, we employed activation likelihood estimation (ALE) to assess convergence across neuroimaging experiments on others' representation in early blind individuals.

| Rationale of the meta-analytic approach
We took a quantitative meta-analytic approach to investigate the neural representation of others in early blind individuals and to unveil which brain regions are selectively recruited in early blind compared with sighted control individuals. Critically, this approach allows us to overcome the typical limitations of single neuroimaging studies, such as sensitivity to experimental and analytic procedures, the lack of replication studies, and small sample sizes (Carp, 2012).
These constraints are known to increase the likelihood of false negatives (Button et al., 2013), thus pushing researchers toward procedures which, conversely, might promote false positives (Eklund, Nichols, & Knutsson, 2016; Muller et al., 2018). We thus aimed to identify the brain regions consistently associated with the neural coding of others in early blind individuals, over and beyond this process in sighted control groups. This goal was pursued with ALE, a coordinate-based meta-analytic approach that uses the MNI coordinates of peak locations to summarize and integrate published findings (Turkeltaub, Eden, Jones, & Zeffiro, 2002). We therefore ran two separate ALE analyses: one for blind individuals and one for sighted control individuals.
Contrast analyses were then conducted between the blind and sighted control groups. In particular, we aimed to investigate brain activations related to others' representation irrespective of the input sensory modality (i.e., tactile or auditory), the stimulus type (i.e., vocalizations, sounds of actions, words, 3D models of faces to be tactilely explored, etc.), and the specific task employed.
All the inclusion criteria for each dataset were applied by the first author and then checked by the other authors. This procedure, entailing a double check by independent investigators, aimed to reduce the chance of selection bias (Muller et al., 2018).

| Literature search and study selection: The representation of others in blind and sighted control individuals
We started our survey of the relevant literature by searching for "early blind fMRI" and "congenitally blind fMRI" on Pubmed (https://www.ncbi.nlm.nih.gov/pubmed/). The preliminary pool of 1,242 studies, after removal of duplicates, was first screened by title and then by abstract. We retained only those studies fulfilling the following selection criteria (see Figure 1 for the detailed study selection process):
1. studies written in English;
2. empirical fMRI studies, excluding reviews and meta-analyses and studies employing other techniques, to ensure comparable spatial and temporal resolution;
3. studies reporting whole-brain activation coordinates, rather than results limited to regions of interest (ROIs) or small-volume-corrected (SVC) analyses. Studies based on ROI or SVC analyses should be excluded from meta-analyses (Muller et al., 2018), because a prerequisite is that convergence across experiments is tested against a null hypothesis of random spatial association across the entire brain, under the assumption that each voxel has the same a priori chance of being activated (Eickhoff, Bzdok, Laird, Kurth, & Fox, 2012);
4. studies focused on early blind rather than late blind participants. We focused only on early visual deprivation because neural plasticity phenomena critically depend on age at blindness onset, with consistent evidence showing that, even in the absence of early visual experience, several brain areas maintain a preserved functional specialization;
5. studies focused on the representation of others, excluding studies in which no social process was involved (such as studies assessing memory capacities, language, or spatial processing).
To this purpose, we selected contrasts requiring participants to attend to stimuli aimed at eliciting a representation of other individuals, contrasting this kind of representation with baseline conditions involving no human representation.
The included studies ranged from those requiring participants to represent others' mental states (Bedny et al., 2009) to studies comparing responses to human voice processing versus object sounds (Dormal et al., 2018), to studies comparing haptic recognition of basic facial expressions versus object discrimination (Kitada et al., 2013), as well as studies assessing neural activations in response to sounds produced by human motion (such as footsteps, Bedny et al., 2010) or hand-actions (such as cutting paper with scissors, Ricciardi et al., 2009) compared to non-human environmental sounds.
The articles excluded based on titles and abstracts included reviews or meta-analyses (31 studies), single-case studies (100), studies not including a blind group (12), studies assessing other (non-social) cognitive processes (such as spatial representation) in early blind subjects (137), in subjects with cortical visual impairment (34), or in sighted individuals (646), studies written in languages other than English (49), and studies not employing task-based fMRI (185; of which 11 investigated non-social processes in blind subjects, 173 focused on non-social processes in sighted subjects, and 1 investigated social processes in sighted individuals).
From the remaining 48 articles we retained only those fulfilling the above selection criteria (see Figure 1) and thus excluded reviews and meta-analyses (11 articles), studies employing non-fMRI techniques (5 articles), studies using ROI or SVC analyses (1 study), studies focused on late blind participants (1 study), and studies that did not focus on others' representation (17 studies).
We included studies fulfilling the above criteria regardless of: (a) the tested sensory modality (e.g., auditory or haptic) and (b) the experimental paradigm (e.g., memory or identification tasks). Our aim was indeed to pool across different experimental paradigms to ensure both generalizability and consistency of results, within the "others' representation > non-human representation" comparison inherent in our research question (Radua & Mataix-Cols, 2012). This selection phase resulted in 13 studies (out of 374) fulfilling our criteria. In some cases, we directly contacted the authors for clarification and further information regarding their study.
We then expanded our search for other potentially relevant studies by carefully examining both the studies quoting, and those quoted by, each of these papers, alongside a recent meta-analysis on the cognitive processes of blind individuals (Zhang et al., 2019). This second phase highlighted four further studies fitting our search criteria.
This procedure led us to include 17 previously published studies in the main ALE meta-analysis on blind participants (see Table 1). This number of contrasts is in line with recent prescriptions for ALE meta-analyses (Eickhoff et al., 2016; Muller et al., 2018), and ensures that results would not be driven by single experiments (see also Zhang et al., 2019).

FIGURE 1 Flowchart of literature search and selection process.

TABLE 1 Overview of the 17 studies included in the meta-analysis on the neural representation of other individuals in both blind and sighted groups. Note: The majority of these studies employed auditory stimuli (15/17, hence almost 90% of the studies we included), while only two studies used haptic stimuli (Kitada et al., 2013, 2014). Abbreviations: EB, early blind individuals; N, progressive study number; SC, sighted control individuals; Sub, subjects.
Importantly, the inclusion of multiple contrasts/experiments from the same set of subjects can generate dependence across experiment maps and thus decrease the validity of meta-analytic results. To prevent this issue, we adjusted for within-group effects by pooling the coordinates from all the relevant contrasts of a study into one experiment (Turkeltaub et al., 2002).
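As a minimal illustration of this pooling step, the following sketch groups all contrasts from the same study into a single "experiment." The study IDs, contrast names, and coordinates below are hypothetical; GingerALE performs this grouping through its text-based foci input format rather than through code like this.

```python
from collections import defaultdict

# Hypothetical records: (study_id, contrast_name, [(x, y, z), ...]) in MNI space.
contrasts = [
    ("StudyA", "mental states > control", [(52, -54, 22), (-50, -58, 20)]),
    ("StudyA", "voices > object sounds", [(50, -52, 24)]),
    ("StudyB", "faces > objects", [(44, -50, -18)]),
]

# Pool all contrasts from the same subject group into one "experiment",
# so that no set of subjects contributes more than one modeled activation map.
experiments = defaultdict(list)
for study_id, _contrast, foci in contrasts:
    experiments[study_id].extend(foci)

print({k: len(v) for k, v in experiments.items()})  # StudyA: 3 foci, StudyB: 1
```

This guards against the dependence problem described above: each study contributes exactly one coordinate set to the meta-analysis, however many contrasts it reported.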

| Activation likelihood estimation
We performed two distinct ALE analyses, using the GingerALE software (Eickhoff et al., 2009), to identify consistently activated regions associated with the representation of others in both blind and sighted control groups. We followed the analytic approach previously described by Arioli and Canessa (2019), based on Eickhoff et al. (2012). In both meta-analyses, activation foci were initially interpreted as the centres of three-dimensional Gaussian probability distributions, to capture the spatial uncertainty associated with each individual coordinate. All coordinates were reported or converted into MNI space, using the automatic routine implemented in GingerALE.
The three-dimensional probabilities of all activation foci in a given experiment were then combined for each voxel, resulting in a modeled activation (MA) map. The union of these maps produces ALE scores describing the convergence of results at each brain voxel (Turkeltaub et al., 2002). To distinguish "true" convergence across studies from random convergence (i.e., noise), the ALE scores are compared with an empirically defined null distribution (Eickhoff et al., 2012). The latter reflects a random spatial association between experiments, with the within-experiment distribution of foci treated as a fixed property. A random-effects inference is thus invoked, focusing on the above-chance convergence between different experiments rather than on the clustering of foci within a specific experiment. From a computational standpoint, deriving this null distribution involved sampling a voxel at random from each MA map and taking the union of the resulting values. The ALE score obtained under this assumption of spatial independence was recorded, and the permutation procedure was iterated 100 times to obtain a sufficient sample of the ALE null distribution. The "true" ALE scores were then tested against this null distribution, using a cluster-forming threshold of p < .001 and a cluster-level family-wise error corrected threshold of p < .05, to identify above-chance convergence in each analysis (Eickhoff et al., 2012).
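The procedure described above can be sketched in a few lines of code. This is not the GingerALE implementation: the toy grid, the fixed kernel width (GingerALE uses a sample-size-dependent FWHM), the choice of foci, and the number of permutations are all simplifying assumptions made purely for illustration.

```python
import numpy as np

def ma_map(foci, shape, sigma):
    """Modeled activation (MA) map for one experiment: per-voxel maximum
    of Gaussian probabilities centred on each reported focus."""
    grid = np.indices(shape).reshape(3, -1).T  # all voxel coordinates, (n_vox, 3)
    ma = np.zeros(grid.shape[0])
    for f in foci:
        d2 = ((grid - f) ** 2).sum(axis=1)
        ma = np.maximum(ma, np.exp(-d2 / (2 * sigma ** 2)))
    return ma

def ale_union(ma_maps):
    """Union of MA maps: ALE = 1 - prod(1 - MA_i) at each voxel."""
    keep = np.ones_like(ma_maps[0])
    for ma in ma_maps:
        keep *= (1 - ma)
    return 1 - keep

def null_ale(ma_maps, n_perm, rng):
    """Null distribution under spatial independence: draw one voxel at
    random from each experiment's MA map and take the union of the values."""
    n_vox = ma_maps[0].size
    scores = np.empty(n_perm)
    for i in range(n_perm):
        sampled = [ma[rng.integers(n_vox)] for ma in ma_maps]
        scores[i] = 1 - np.prod([1 - s for s in sampled])
    return scores

# Toy example: three "experiments" on a 10x10x10 grid, foci in voxel units.
rng = np.random.default_rng(0)
shape = (10, 10, 10)
foci_sets = [np.array([[5, 5, 5], [2, 3, 4]]),
             np.array([[5, 4, 5]]),
             np.array([[6, 5, 5], [1, 8, 2]])]
ma_maps = [ma_map(f, shape, sigma=1.5) for f in foci_sets]
ale = ale_union(ma_maps)
null = null_ale(ma_maps, n_perm=1000, rng=rng)

# Voxel-wise p value: fraction of null scores at least as large as the
# observed ALE score (before any cluster-level correction).
p = (null[None, :] >= ale[:, None]).mean(axis=1)
```

In this sketch the voxel near the shared focus (5, 5, 5) receives an ALE score close to 1, because all three experiments place probability mass there, while isolated foci contribute little to convergence, which is the essence of testing convergence *between* experiments rather than clustering *within* one.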
The resulting maps were then fed into direct comparisons and conjunction analyses, within GingerALE, to unveil the common and specific brain activations of early blind and sighted control individuals. A conjunction image was created using the voxel-wise minimum value of the included ALE images, to display the similarity between datasets (Eickhoff et al., 2011).

| RESULTS

| Others' representation in early blind individuals
Activations associated with the neural processing of others in early blind individuals encompassed the regions typically associated with the AON. These included the posterior portion of the right inferior frontal gyrus, as well as the inferior and middle temporal cortex, extending in the right superior temporal sulcus (STS) and the left fusiform gyrus (see Table 2 and Figure 2a).
The lack of consistent activation in the parietal cortex, a key node of the AON, is possibly due to the low number of studies specifically focusing on hand representation in our database (see Section 4 and Table 1).

| Others' representation in sighted control individuals
Activations associated with representing others in sighted control groups involved the right superior and middle temporal gyri (see Table 3 and Figure 2b).

| Others' representation in early blind and sighted control individuals
A conjunction analysis highlighted no significant common activation for the processing of other individuals in early blind and sighted control subjects (Table 4).
The lack of common neural activations between the two groups during social processing was somewhat unexpected and is probably driven by the low number of studies included. In fact, the majority of the studies included in our analysis show overlapping activations, particularly in the STS, STS/TPJ and MTG (see Table 5).
In order to shed light on a possible common neural pattern of activation in sighted and blind individuals during social tasks, we performed a third meta-analysis with both sighted and early blind individuals (33 experiments included, with 607 foci in 479 subjects). This additional analysis revealed consistent activation in the right pSTS, alongside the TPJ, and MTG during the representation of others, regardless of the group (sighted vs. early blind) (see Table 6 and Figure 2d).
FIGURE 2 Neural processing of others in early blind (EB) individuals, sighted control (SC) individuals, and differences between EB and SC participants. The figure reports the brain structures consistently associated with processing other individuals in EB (a) and SC subjects (b), and the results of direct comparisons and a conjunction analysis between the meta-analyses separately performed on the two groups (c); this analysis revealed no significant common activation for the processing of others in EB and SC, likely due to the low number of studies included in each meta-analysis (17 studies for EB and 16 for the SC group). The last panel (d) shows the results of a third meta-analysis carried out on all studies (both EB and SC, 33 experiments included) to unveil brain regions consistently engaged during social processing in both groups. All the reported activations survived a statistical threshold of p < .05 corrected for multiple comparisons.

| Others' representation in early blind versus sighted control individuals
In the early blind groups, the processing of others was associated with consistently stronger activity in the left fusiform gyrus and left middle temporal cortex compared to sighted individuals (see Table 4 and Figure 2c). The reverse comparison highlighted the right middle/superior temporal gyrus (see Table 4 and Figure 2c).

| DISCUSSION
The study of the neural bases of social cognition in the blind brain has been somewhat neglected, with only a few studies specifically investigating whether and how the lack of visual input affects the functional architecture of the "social brain." Some studies showed similar patterns of brain activity in early blind and sighted individuals during tasks tapping social cognition abilities (Bedny et al., 2009; Ricciardi et al., 2009), while others suggested that social brain networks develop differently following early visual deprivation (Gougoux et al., 2009; Holig et al., 2014). These inconsistencies in the neuroimaging literature on social processing in blind individuals may also reflect confounds associated with individual studies, for example, the influence of experimental and analytic procedures and of small sample sizes (Carp, 2012). Moreover, the effects reported by individual studies are harder to generalize to the entire target group (here, the early blind), regardless of the specific procedures used (Muller et al., 2018).
In light of this, we pursued a meta-analytic approach to isolate the most consistent results in the available literature, controlling for possible confounding effects via stringent criteria for study selection.
In particular, we aimed to investigate: (a) the neural coding of others' representation in early blind individuals, and (b) the specific brain regions recruited in early blind compared with sighted control individuals. The pSTS supports the parsing of a stream of information, whether auditory or visual, into meaningful discrete elements, whose communicative meaning for decoding others' behavior and intentions involves more in-depth analysis associated with increased activation in the TPJ node (Arioli & Canessa, 2019; Bahnemann et al., 2010; Redcay, 2008). Ethofer et al. (2006) showed that the pSTS is the entry point of the prosody processing system and provides the input to higher-level social cognitive computations, associated with activity in the action observation system (Gardner et al., 2015) as well as in the mentalizing system (Schurz, Radua, Aichhorn, Richlan, & Perner, 2014). Accordingly, work using visual stimuli has pointed to the pSTS as the input to the social interaction network, which includes key nodes of both the action observation and theory of mind networks. Thus, the STS/TPJ region may represent a domain-specific hub for the analysis of the meaning of others' actions, regardless of the stimulation modality, highly interconnected with both the action observation and the mentalizing networks.
The lack of activation of the parietal cortex in the early blind, another key node of the AON in sighted individuals, is possibly due to the low number of studies in our database specifically focused on hand representation (see Pellijeff, Bonilha, Morgan, McKenzie, & Jackson, 2006; 3/17, see Table 1). Indeed, Caspers et al. (2010) reported that only the observation of hand actions, and not of non-hand actions, was consistently associated with activations within the parietal cortex. Moreover, only a minority of the included studies, such as Ricciardi et al. (2009), employed sounds of hand actions (vs. objects moved by physical forces). These results may explain why early blind individuals are able to interact efficiently in social contexts and to learn by imitating others (e.g., Gamond et al., 2017; Oleszkiewicz et al., 2017; Ricciardi et al., 2009). Our findings suggest that regions of the social brain may work on the basis of different sensory inputs, depending on which sensory modality is available. Moreover, our findings are consistent with the results of a recent meta-analysis by Zhang et al. (2019) and with prior fMRI studies on blind individuals in the social domain (e.g., Bedny et al., 2009; Ricciardi et al., 2009), suggesting that brain regions consistently recruited for different functions in sighted individuals, such as the dorsal fronto-parietal network for spatial functions, the ventral occipito-temporal network for object functions, and, as shown here, the AON for social functions, maintain their specialization despite the lack of a normal visual experience. This observation on the "social blind brain" is in line with the current, more general perspective on the blind brain as one that undergoes a functional reorganization due to the lack of visual experience, but whose large-scale architecture appears to be significantly preserved (e.g., Ricciardi, Papale, Cecchetti, & Pietrini, 2020).
Interestingly, we did not find evidence for a consistent cross-modal recruitment of the occipital cortex during social tasks, despite the well-documented occipital responses of the blind brain to nonvisual perceptual stimulation (Collignon, 2020; Bauer et al., 2015). Alternatively, recruitment of the occipital cortex in the blind has been proposed to also subserve high-level (cognitive) processing (e.g., Amedi, Raz, Pianka, Malach, & Zohary, 2003; Bedny, Pascual-Leone, Dodell-Feder, Fedorenko, & Saxe, 2011; Lane, Kanjlia, Omaki, & Bedny, 2015), suggesting that cortical circuits thought to have evolved for visual perception may come to participate in abstract, symbolic, higher cognitive functions (see Bedny, 2017). Indeed, recent evidence has shown that during high-level cognitive tasks (i.e., memory, language and executive control tasks) there is increased connectivity between the occipital cortex and associative cortex in lateral prefrontal, superior parietal, and mid-temporal areas (Abboud & Cohen, 2019), with these regions being also possibly involved in social perception (Caspers et al., 2010). In line with this, we would have expected social tasks to drive activations in the occipital cortex; this was not the case. The only region showing a cross-modal response was the fusiform face area, in the ventral stream, probably driven by the high number of studies in our meta-analysis focusing on voice processing (cf. Holig et al., 2014; von Kriegstein, Kleinschmidt, Sterzer, & Giraud, 2005). In this regard, it is also worth noting that haptic perception of facial expressions and hand shapes by blind individuals (Kitada et al., 2013, 2014), as well as whole-body shape recognition via a visual-to-auditory sensory substitution device (SSD; Striem-Amit & Amedi, 2014), led to activations in face- and body-dedicated circuits in the fusiform gyrus, showing that these dedicated circuits develop even in the absence of a normal visual experience.
Our meta-analysis shows that processes related to the representation of others do not recruit the occipital cortex in the early blind, suggesting that, unlike other cognitive tasks, social tasks may be mediated by higher-level regions without the need to recruit additional occipital resources. Even if this may be related to the experimental heterogeneities highlighted above, the lack of consistent occipital recruitment for social tasks contributes to a better understanding of the functional role of "visual" areas in the blind brain.
In conclusion, our findings support the view that the brain of early blind individuals is functionally organized in the same way as the brain of sighted individuals, although it relies on different types of input (auditory and haptic) (see Bedny et al., 2009; Ricciardi et al., 2009). These findings may also have practical implications: audio-based virtual environments have been proposed as tools for the cognitive development of blind children (Lumbreras, 1999), and haptic virtual perception may be a valid and effective assistive technology for the education of blind children in domains like math learning (e.g., Espinosa-Castaneda & Medellin-Castillo, 2020). This approach, especially audio-based virtual environments, may thus be extended to the social domain to allow the safe and non-threatening practice of particular social skills in an educational setting. In this respect, and considering the importance for visually impaired children of studying in a mainstream school (e.g., Davis & Hopwood, 2007; Parvin, 2015),