A comparison of top-down effects on phonological processing induced by lexical context and by situational language context: An investigation using electrocorticography and magnetoencephalography
General, Cognitive and Mathematical Psychology
Cognitive, Systems and Behavioural Neurobiology
Final Report Abstract
What neural architecture and computations underlie the human ability to perceive and understand spoken language? The complexity of this task is even higher in bilingual individuals, who can map identical sounds onto different linguistic units, depending on the language they believe they are hearing. In two distinct lines of research, I investigated the neural representation of continuous speech in monolinguals and the effects of language experience on speech sound representations. For this, I used intracranial electrophysiological recordings (electrocorticography, ECoG) from the lateral temporal lobe, in particular the superior temporal gyrus (STG), as well as magnetoencephalography (MEG). The STG is the main cortical area specialized for speech sound processing and thus a natural focus of this research.

Neural representation of speech temporal dynamics in STG: The continuous speech signal is characterized by prominent temporal modulations of its overall amplitude (the amplitude envelope), reflecting its syllabic structure. A major theory in speech neuroscience proposes that the speech envelope is represented in cortex via oscillatory entrainment. However, experimental support for this theory is mixed. I probed this theory using ECoG recordings from the STG of neurosurgical patients. We discovered that neural populations in speech cortex represent the speech amplitude envelope through encoding of rapid increases in the envelope (acoustic edges). This representation reflects the rate of amplitude change, cueing the timing and stress of syllables (a simplified signal-processing sketch of this acoustic-edge concept is appended at the end of this abstract). This result establishes acoustic edge encoding as an alternative to oscillatory entrainment to speech. To test this further, we expanded our investigation to probe neural encoding of the speech envelope with MEG (with Prof. S. Nagarajan). In line with my intracranial results, we found that evoked responses to acoustic edges explain the neural data better than oscillatory entrainment. In an ongoing collaboration with the Dyslexia Center at UCSF (with Prof. M.L. Gorno-Tempini), we are currently using fMRI to explore whether deficient speech envelope representations in developmental dyslexia can be traced back to the cortical areas identified with ECoG (data collection ongoing).

Effects of language experience on speech sound representation in STG: In the auditory domain, language specificity arises early in the processing hierarchy, in the realization of single speech sounds (e.g., /b/ and /p/ differ in their acoustic structure between English and Spanish). Although bilinguals can behaviorally differentiate between language-specific speech sound realizations, how and at what neural processing stage this occurs is less clear. At UCSF, I am collecting intracranial recordings in response to English and Spanish speech in bilinguals and monolinguals. Preliminary results point to language-experience-specific representations in STG, as well as to language-context-dependent representational shifts. While these data are preliminary, if further analyses uphold these results, this data set will support the role of the STG as a higher-level cortical speech area that represents speech sounds in their linguistic context.
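Appendix: illustrative sketch of the acoustic-edge concept. The following minimal Python sketch shows one common way to operationalize "acoustic edges" from an audio waveform: extract the slow amplitude envelope and mark moments of rapid amplitude rise via the local peaks of its positive rate of change. The specific choices here (Hilbert envelope, 10 Hz low-pass cutoff, 90th-percentile peak threshold) are illustrative assumptions for exposition only, not the analysis pipeline used in the reported studies.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt, find_peaks

def acoustic_edges(waveform, fs, env_cutoff_hz=10.0):
    """Illustrative extraction of amplitude-envelope 'acoustic edges'.

    Returns edge times (in seconds) and edge magnitudes (peak rate of
    envelope rise). Parameter values are assumptions for illustration.
    """
    # Broadband amplitude envelope via the analytic signal
    envelope = np.abs(hilbert(waveform))

    # Low-pass filter to retain slow (syllable-rate) modulations
    b, a = butter(4, env_cutoff_hz / (fs / 2), btype="low")
    envelope = filtfilt(b, a, envelope)

    # Rate of amplitude change; keep only rises (positive derivative)
    rate = np.diff(envelope) * fs
    rate[rate < 0] = 0.0

    # Local maxima of the rising rate mark acoustic edges
    peaks, props = find_peaks(rate, height=np.percentile(rate, 90))
    return peaks / fs, props["peak_heights"]

# Example usage (assuming a mono waveform sampled at 16 kHz):
# edge_times, edge_rates = acoustic_edges(waveform, fs=16000)
```

In this simplified view, each detected peak corresponds to a rapid envelope rise whose timing and magnitude could cue the onset and stress of a syllable.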