Parallel systems in the control of speech

Authors

  • Anna J. Simmonds,

    Corresponding author
    1. Computational, Cognitive and Clinical Neuroimaging Laboratory (C3NL), Division of Brain Sciences, Imperial College London, United Kingdom
    • Correspondence to: Anna Simmonds, Computational, Cognitive and Clinical Neuroimaging Laboratory (C3NL), Centre for Restorative Neurosciences, Division of Brain Sciences, Imperial College London, 3rd Floor, Burlington Danes Building, Hammersmith Hospital, Du Cane Road, London, W12 0NN, UK. E-mail: anna.simmonds08@imperial.ac.uk

  • Richard J.S. Wise,

    1. Computational, Cognitive and Clinical Neuroimaging Laboratory (C3NL), Division of Brain Sciences, Imperial College London, United Kingdom
  • Catherine Collins,

    1. Computational, Cognitive and Clinical Neuroimaging Laboratory (C3NL), Division of Brain Sciences, Imperial College London, United Kingdom
  • Ozlem Redjep,

    1. Computational, Cognitive and Clinical Neuroimaging Laboratory (C3NL), Division of Brain Sciences, Imperial College London, United Kingdom
  • David J. Sharp,

    1. Computational, Cognitive and Clinical Neuroimaging Laboratory (C3NL), Division of Brain Sciences, Imperial College London, United Kingdom
  • Paul Iverson,

    1. Department of Speech, Hearing and Phonetic Sciences, Division of Psychology and Language Sciences, University College London, United Kingdom
  • Robert Leech

    1. Computational, Cognitive and Clinical Neuroimaging Laboratory (C3NL), Division of Brain Sciences, Imperial College London, United Kingdom

Abstract

Modern neuroimaging techniques have advanced our understanding of the distributed anatomy of speech production, beyond that inferred from clinico-pathological correlations. However, much remains unknown about functional interactions between anatomically distinct components of this speech production network. One reason for this is the need to separate spatially overlapping neural signals supporting diverse cortical functions. We took three separate human functional magnetic resonance imaging (fMRI) datasets (two speech production, one “rest”). In each we decomposed the neural activity within the left posterior perisylvian speech region into discrete components. This decomposition robustly identified two overlapping spatio-temporal components, one centered on the left posterior superior temporal gyrus (pSTG), the other on the adjacent ventral anterior parietal lobe (vAPL). The pSTG was functionally connected with bilateral superior temporal and inferior frontal regions, whereas the vAPL was connected with other parietal regions, both lateral and medial. Surprisingly, the components displayed spatial anti-correlation, in which the negative functional connectivity of each component overlapped with the other component's positive functional connectivity, suggesting that these two systems operate separately and possibly in competition. The speech tasks reliably modulated activity in both pSTG and vAPL, suggesting that both are involved in speech production, but their activity patterns dissociate in response to different speech demands. These components were also identified in subjects at “rest” and not engaged in overt speech production. These findings indicate that the neural architecture underlying speech production involves parallel distinct components that converge within posterior perisylvian cortex, explaining, in part, why this region is so important for speech production. Hum Brain Mapp 35:1930–1943, 2014. © 2013 Wiley Periodicals, Inc.