Implicit and explicit categorization of speech sounds – dissociating behavioural and neurophysiological data

Authors

  • Heidrun Bien,

    1. Psychological Institute II, University of Muenster, Fliednerstrasse 21, 48149 Münster, Germany
  • Lothar Lagemann,

    1. Institute for Biomagnetism and Biosignalanalysis, University of Muenster, Germany
  • Christian Dobel,

    1. Institute for Biomagnetism and Biosignalanalysis, University of Muenster, Germany
    2. Otto-Creutzfeldt Centre for Cognitive and Behavioural Neuroscience, University of Muenster, Germany
  • Pienie Zwitserlood

    1. Psychological Institute II, University of Muenster, Fliednerstrasse 21, 48149 Münster, Germany
    2. Otto-Creutzfeldt Centre for Cognitive and Behavioural Neuroscience, University of Muenster, Germany

Correspondence: Dr H. Bien, as above.
E-mail: heidrun.bien@uni-muenster.de

Abstract

During speech perception, sound is mapped onto abstract phonological categories. Assimilation of place or manner of articulation in connected speech challenges this categorization. Does assimilation result in categorizations that need to be corrected later on, or does the system get it right immediately? Participants were presented with isolated nasals (/m/ labial, /n/ alveolar, and /n’/ assimilated towards labial place of articulation), extracted from naturally produced German utterances. Behavioural two-alternative forced-choice tasks showed that participants could correctly categorize the /n/s and /m/s. The assimilated nasals were predominantly categorized as /m/, indicative of a perceived change in place. A pitch variation additively influenced the categorizations. Using magnetoencephalography (MEG), we analysed the N100m elicited by the same stimuli without a categorization task. In sharp contrast to the behavioural data, this early, automatic brain response ignored the assimilation in the surface form and reflected the underlying category. As shown by distributed source modelling, phonemic differences were processed exclusively left-laterally (temporally and parietally), whereas the pitch variation was processed in temporal regions bilaterally. In conclusion, explicit categorization draws attention to the surface form – to the changed place and acoustic information. The N100m reflects automatic categorization, which exploits any hint of an underlying feature.
