

Summary

The study introduces a training-testing paradigm to investigate old/new effects of event-related potentials under confident and doubtful prosodic scenarios. The data reveal an enhanced late positive component between 400 and 850 ms at Pz and other electrodes. This pipeline can be extended to explore factors beyond speech prosody and their influence on cue-binding target identification.

Abstract

Recognizing familiar speakers from vocal streams is a fundamental aspect of human verbal communication. However, it remains unclear how listeners can still discern the speaker's identity in expressive speech. This study develops a memorization-based individual speaker identity recognition approach and an accompanying electroencephalogram (EEG) data analysis pipeline, which monitors how listeners recognize familiar speakers and tell unfamiliar ones apart. EEG data capture online cognitive processes during new versus old speaker distinction based on voice, offering a real-time measure of brain activity that overcomes the limits of reaction time and accuracy measurements. The paradigm comprises three steps: listeners establish associations between three voices and their names (training); listeners indicate the name corresponding to a voice from three candidates (checking); listeners distinguish between three old and three new speaker voices in a two-alternative forced-choice task (testing). The speech prosody in testing was either confident or doubtful. EEG data were collected using a 64-channel EEG system, preprocessed, and then imported into RStudio for ERP and statistical analysis and into MATLAB for brain topography. Results showed that an enlarged late positive component (LPC) was elicited in the old-talker condition relative to the new-talker condition in the 400-850 ms window at Pz and across a wider range of electrodes in both prosodies. However, the old/new effect was robust over central and posterior electrodes for doubtful prosody perception, whereas it extended to anterior, central, and posterior electrodes in the confident prosody condition. This study proposes that this experimental design can serve as a reference for investigating speaker-specific cue-binding effects in various scenarios (e.g., anaphoric expression) and in clinical populations, such as patients with phonagnosia.
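The core ERP measurement described above, the mean amplitude of the trial-averaged waveform within a fixed post-stimulus window (e.g., 400-850 ms at Pz), can be sketched in a self-contained way. The study itself used RStudio and MATLAB; the NumPy version below is an illustrative analog on synthetic data, and the sampling rate, epoch span, and injected effect size are all hypothetical values, not taken from the article.

```python
import numpy as np

def mean_window_amplitude(epochs, sfreq, tmin, win_start, win_end):
    """Mean amplitude of the trial-averaged ERP inside [win_start, win_end] seconds.

    epochs: (n_trials, n_samples) array for a single electrode (e.g., Pz).
    sfreq:  sampling rate in Hz; tmin: time of the first sample vs. stimulus onset.
    """
    erp = epochs.mean(axis=0)                   # average across trials -> ERP
    times = tmin + np.arange(erp.size) / sfreq  # sample times in seconds
    mask = (times >= win_start) & (times <= win_end)
    return float(erp[mask].mean())

# Synthetic demo (hypothetical values): 60 trials, epochs from -0.2 s to 1.0 s at 500 Hz
rng = np.random.default_rng(0)
sfreq, tmin, n_samples = 500.0, -0.2, 600
times = tmin + np.arange(n_samples) / sfreq
lpc = np.where((times >= 0.4) & (times <= 0.85), 2.0, 0.0)  # injected LPC-like bump
epochs = rng.normal(0.0, 1.0, size=(60, n_samples)) + lpc

amp = mean_window_amplitude(epochs, sfreq, tmin, 0.4, 0.85)       # LPC window
baseline = mean_window_amplitude(epochs, sfreq, tmin, -0.2, 0.0)  # pre-stimulus
```

Averaging across trials first, then across the window, mirrors the standard mean-amplitude ERP measure; the same function can be applied per condition (old vs. new talker) and per electrode.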

Introduction

Human vocal streams are rich in information, such as emotion1,2, health status3,4, biological sex5, age6, and, more importantly, the individual vocal identity7,8. Studies have suggested that human listeners have a robust capacity to recognize and differentiate their peers' identities through voices, overcoming within-speaker variations around speaker identity's average-based representation in the acoustic space9. Such varia....

Protocol

The Ethics Committee of the Institute of Linguistics, Shanghai International Studies University, has approved the experiment design described below. Informed consent was obtained from all participants for this study.

1. Preparation and validation of the audio library

  1. Audio recording and editing
    1. Create a Chinese vocal database following the standard procedure of making a previous English version while making adaptations where needed to fit into the conte.......

Representative Results

The classic old/new effect is characterized by a significant increase in listeners' brain activity at the Pz electrode (between 300 and 700 ms) when the speech content of the testing session matches that of the training session, particularly in the old-talker condition compared to the new-talker condition22. The protocol unveils an updated version of this effect: Firstly, observing larger positive trends in the Pz electrode and across the entire brain region for the old condition compared to th.......
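Statistically, the old/new effect reduces to testing, across participants, whether the mean window amplitude is larger in the old-talker than in the new-talker condition. A minimal sketch of that paired comparison follows; the per-subject amplitudes, effect size, and sample size are invented for illustration, and the actual study ran its statistics in RStudio rather than Python.

```python
import numpy as np

def paired_t(old, new):
    """Paired t statistic for per-subject mean window amplitudes (old vs. new talker)."""
    d = np.asarray(old, dtype=float) - np.asarray(new, dtype=float)
    return float(d.mean() / (d.std(ddof=1) / np.sqrt(d.size)))

# Hypothetical per-subject mean Pz amplitudes (microvolts) in the LPC window
rng = np.random.default_rng(1)
n_subjects = 20
new_amp = rng.normal(1.0, 0.8, n_subjects)            # new-talker condition
old_amp = new_amp + rng.normal(1.2, 0.6, n_subjects)  # simulated old/new effect

t_value = paired_t(old_amp, new_amp)  # large positive t indicates old > new
```

A within-subject (paired) test is the appropriate choice here because each listener contributes amplitudes to both conditions, so between-subject variability cancels in the difference scores.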

Discussion

The study presents a pipeline for EEG data collection and analysis, focusing on recognizing previously learned speaker identities. This study addresses variations between learning and recognition phases, including differences in speech content22 and prosody10. The design is adaptable to a range of research fields, including psycholinguistics, such as pronoun and anaphoric processing41.

The training-testing paradigm is a c.......

Acknowledgements

This work was supported by the Natural Science Foundation of China (Grant No. 31971037); the Shuguang Program supported by the Shanghai Education Development Foundation and Shanghai Municipal Education Committee (Grant No. 20SG31); the Natural Science Foundation of Shanghai (22ZR1460200); the Supervisor Guidance Program of Shanghai International Studies University (2022113001); and the Major Program of the National Social Science Foundation of China (Grant No. 18ZDA293).

....

Materials

Name | Company | Catalog Number | Comments
64Ch Standard BrainCap for BrainAmp | Easycap GmbH | Steingrabenstrasse 14 DE-82211 | https://shop.easycap.de/products/64ch-standard-braincap
Abrasive Electrolyte-Gel | Easycap GmbH | Abralyt 2000 | https://shop.easycap.de/products/abralyt-2000
actiCHamp Plus | Brain Products GmbH | 64 channels + 8 AUX | https://www.brainproducts.com/solutions/actichamp/
Audio Interface | Native Instruments GmbH | Komplete audio 6 | https://www.native-instruments.com/en/products/komplete/audio-interfaces/komplete-audio-6/
Foam Eartips | Neuronix | ER3-14 | https://neuronix.ca/products/er3-14-foam-eartips
Gel-based passive electrode system | Brain Products GmbH | BC 01453 | https://www.brainproducts.com/solutions/braincap/
High-Viscosity Electrolyte Gel | Easycap GmbH | SuperVisc | https://shop.easycap.de/products/supervisc

References

  1. Larrouy-Maestri, P., Poeppel, D., Pell, M. D. The sound of emotional prosody: Nearly 3 decades of research and future directions. Perspect Psychol Sci., 17456916231217722 (2024).
  2. Pell, M. D., Kotz, S. A. Comment:....

Keywords

Behavior, speaker recognition, vocal expression, speech prosody, event-related potentials, voice



Copyright © 2025 MyJoVE Corporation. All rights reserved.