Summary

This study proposes a protocol for investigating adaptation to left-right reversed audition, achieved using only wearable devices and assessed with neuroimaging, which can serve as an effective tool for uncovering human adaptability to a novel environment in the auditory domain.

Abstract

An unusual sensory space is one of the most effective tools for uncovering the mechanism of human adaptability to a novel environment. Although most previous studies have used special spectacles with prisms to achieve unusual spaces in the visual domain, a methodology for studying adaptation to unusual auditory spaces has yet to be fully established. This study proposes a new protocol to set up, validate, and use a left-right reversed stereophonic system built only from wearable devices, and to study adaptation to left-right reversed audition with the help of neuroimaging. Although individual acoustic characteristics are not yet implemented and a slight spillover of unreversed sounds is relatively uncontrollable, the constructed apparatus shows high performance in 360° sound source localization and hearing characteristics with little delay. Moreover, it looks like a mobile music player and enables the participant to focus on daily life without arousing curiosity or drawing the attention of other individuals. Since the effects of adaptation were successfully detected at the perceptual, behavioral, and neural levels, it is concluded that this protocol provides a promising methodology for studying adaptation to left-right reversed audition and is an effective tool for uncovering the adaptability of humans to a novel environment in the auditory domain.

Introduction

Adaptability to a novel environment is one of the fundamental functions that allow humans to live robustly in any situation. One effective tool for uncovering the mechanism of environmental adaptability in humans is an unusual sensory space produced artificially by apparatuses. In the majority of previous studies dealing with this topic, special spectacles with prisms have been used to achieve left-right reversed vision1,2,3,4,5 or up-down reversed vision6,7. Exposure to such vision for a few days to more than a month has revealed perceptual and behavioral adaptation1,2,3,4,5,6,7 (e.g., the ability to ride a bicycle2,5,7). Moreover, periodic measurements of brain activity using neuroimaging techniques, such as electroencephalography (EEG)1, magnetoencephalography (MEG)3, and functional magnetic resonance imaging (fMRI)2,4,5,7, have detected changes in the neural activity underlying the adaptation (e.g., bilateral visual activation for unilateral visual stimulation4,5). Although the participant's appearance becomes somewhat strange and great care is needed on the observer's part to ensure the participant's safety, reversed vision with prisms provides precise three-dimensional (3D) visual information in a wearable manner without any delay. Therefore, the methodology for uncovering the mechanism of environmental adaptability is relatively well established in the visual domain.

In 1879, Thompson proposed the concept of the pseudophone, "an instrument for investigating the laws of binaural audition by means of the illusions it produces in the acoustic perception of space"8. However, in contrast to the visual cases1,2,3,4,5,6,7, few attempts have been made to study adaptation to unusual auditory spaces, and little notable knowledge has been obtained to date. Despite a long history of developing virtual auditory displays9,10, wearable apparatuses for controlling 3D audition have rarely been developed. Hence, only a few reports have examined adaptation to left-right reversed audition. One traditional apparatus consists of a pair of curved trumpets that are crossed and inserted into the participant's ear canals in a contrariwise manner11,12. In 1928, Young first reported the use of these crossed trumpets, wearing them continuously for at most 3 days, or a total of 85 h, to test adaptation to left-right reversed audition. Willey et al.12 retested the adaptation in three participants who wore the trumpets for 3, 7, and 8 days, respectively. The curved trumpets easily provided left-right reversed audition but had issues with the reliability of spatial accuracy, wearability, and strange appearance. A more advanced apparatus for reversed audition is an electronic system in which the left and right lines of head/earphones and microphones are reversely connected13,14. Ohtsubo et al.13 achieved auditory reversal using the first binaural headphone-microphones, connected to a fixed amplifier, and evaluated the system's performance. More recently, Hofman et al.14 cross-linked complete-in-canal hearing aids and tested adaptation in two participants who wore the aids for 49 h in 3 days and for 3 weeks, respectively. Although these studies reported high performance of sound source localization in the frontal auditory field, sound source localization in the rear field and a potential delay of the electronic devices were never evaluated. In particular, in Hofman et al.'s study, the spatial performance of the hearing aids was guaranteed only for the frontal 60° in the head-fixed condition and for the frontal 150° in the head-free condition, leaving the omniazimuth performance unknown. Moreover, the exposure periods may have been too short to detect phenomena related to the adaptation, compared with the longer exposures used for reversed vision2,4,5. None of these studies measured brain activity using neuroimaging techniques. Therefore, the uncertainty in spatiotemporal accuracy, the short exposure periods, and the non-utilization of neuroimaging could be reasons for the small number of reports and the limited knowledge on adaptation to left-right reversed audition.

Thanks to recent advances in wearable acoustic technology, Aoyama and Kuriki15 succeeded in constructing left-right reversed 3D audition using only recently available wearable devices, achieving an omniazimuth system with high spatiotemporal accuracy. Moreover, an approximately 1-month exposure to reversed audition using the apparatus yielded representative MEG results. Based on this report, this article describes a detailed protocol to set up, validate, and use the system, and to test adaptation to left-right reversed audition with the help of neuroimaging performed periodically without the system. This approach is effective for uncovering the adaptability of humans to a novel environment in the auditory domain.

Protocol

All methods described here have been approved by the Ethics Committee of Tokyo Denki University. For every participant, informed consent was obtained after the participant received a detailed explanation of the protocol.

1. Setup of the Left-Right Reversed Audition System

  1. Setup of the Reversed Audition System without a Participant
    1. Prepare a linear pulse-code-modulation (LPCM) recorder, binaural microphones, and binaural in-ear earphones.
    2. Connect the left and right lines of the microphones crosswise to the LPCM recorder so that left-right reversed analogue sound signals are digitized. Then, connect the left and right lines of the earphones straight through to the recorder so that the reversed digitized signals are played back immediately (a software illustration of this channel reversal follows the setup steps below).
      NOTE: When employing binaural earphone-microphones as the binaural microphones, do not use their earphone parts, in order to reduce the spillover of sounds passing through the microphone parts.
    3. Put the bodies of the microphones and the earphones together for each ear, slightly isolated from each other by soundproofing materials, and cover the microphones with dedicated windscreens to suppress wind noise.
    4. Insert rechargeable batteries and a large-capacity, high-speed memory card into the LPCM recorder and turn it on. Set the recording conditions so that the sound signals are recorded on the memory card in LPCM format at a sampling rate of 96 kHz with 24-bit depth.
    5. Place the body of the system into a pocket-sized bag.
  2. Setup of the Reversed Audition System with a Participant
    1. Instruct a participant to insert the earphones of the reversed audition system tightly into the ear canals.
    2. Disconnect the lines of the left and right microphones, and connect the microphone on the dominant-ear side straight through to the recorder.
    3. Instruct the participant to repeatedly remove and reinsert the dominant-ear side of the system while adjusting the sound volume of the recorder so that the subjective loudness of the direct (normal) and indirect (reversed) sounds becomes as close to equal as possible.
    4. Check the loudness for the non-dominant ear as well, and connect all the lines of the system back again.
    5. Place the system into the participant's pocket and fix the cords on the participant's clothes appropriately to prevent them from becoming entangled or picking up unwanted noise.
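
The left-right reversal in step 1.1.2 is performed entirely by the crossed wiring, but the equivalent digital operation is simply an exchange of the two channels of a stereo signal. The following minimal MATLAB sketch illustrates this channel swap on an ordinary stereo file; the file names are hypothetical placeholders.

```matlab
% Minimal illustration of the left-right reversal achieved by the crossed wiring
% (step 1.1.2): exchange the two channels of a stereo recording.
% The file names below are placeholders, not files produced by this protocol.
[y, fs] = audioread('stereo_example.wav');      % y: samples x 2, columns = [left right]
ySwapped = y(:, [2 1]);                         % exchange left and right channels
sound(ySwapped, fs);                            % listen to confirm the reversal
audiowrite('stereo_example_swapped.wav', ySwapped, fs);
```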

2. Validation of the Left-Right Reversed Audition System

NOTE: Perform the following steps to validate the left-right reversed audition system; this validation should be carried out independently of any experiment on adaptation to left-right reversal.

  1. Validation of the Sound Source Localization of the Reversed Audition System
    1. Place a digital angle protractor, with its initial direction defined as 0°, at the center of an anechoic room, and assume a virtual circle with a radius of 2 m centered at this point. Along the virtual circle, mark 72 possible sound-source positions at every 5° from -180° to 175° in a clockwise manner, and set up plane-wave speakers at these positions, directed toward the center of the circle.
    2. Set up a video camera near the center of the room to record the display of the digital protractor.
      NOTE: Since the display of the protractor moves with the protractor's body, the field of view of the video should be large enough to cover all positions the display may occupy. Moreover, the video camera should be placed carefully so that it does not interfere with the participant's seating position or the sound presentation.
    3. Prepare for two sessions of sound source localization: in the first session, the participant does not put on the reversed audition system. In the second session, the participant puts on the equipment, calibrates it, and checks the system (as explained in step 1.2) as quickly as possible.
    4. Guide the blindfolded participant to sit comfortably at the center of the circle, facing the 0° sound source, and to wait for the experiment to start.
    5. Conduct two sessions of sound source localization. In both sessions, have the participant use the protractor to indicate the perceived sound direction as precisely as possible without moving the head.
    6. For each session, start video-recording the angle display of the protractor, and present a 1000-Hz sound at 65-dB sound pressure level (SPL) from one of the sound sources, switching randomly to another location every 10 s so that each location is used exactly once.
      NOTE: Here we use MATLAB with the Psychophysics Toolbox16,17,18. Although this toolbox is commonly used to present sounds, any reliable stimulation software can also be used.
    7. After each session, stop the video-recording and instruct the participant to take a break for a sufficient amount of time.
    8. Read the trial-by-trial perceived angles displayed on the protractor from the recorded video, and evaluate the spatial performance of the reversed audition system by comparing the perceived angles in the normal and reversed conditions against the physical angles defined by the directions of the sound sources (an analysis sketch follows the validation steps below).
  2. Validation of the Delay of the Reversed Audition System
    1. Put the reversed audition system on a desk in a calm room with no participants.
    2. Disconnect a line to the left microphone, and place a plane-wave speaker and the left earphone as close as possible to the right microphone.
    3. Start recording direct (normal) sounds from the speaker and indirect (reversed) sounds from the left earphone simultaneously through the right microphone.
    4. Present 1-ms click sounds from the speaker with a moderate inter-stimulus interval at 65-dB SPL.
    5. After a sufficient number of trials, stop presenting and recording the sounds.
    6. In order to confirm the symmetrical configuration of the system, repeat the same steps above using the right earphone and the left microphone.
    7. Read the recorded sound data using software (e.g., MATLAB) and evaluate the difference between the onset timings of the direct (normal) and indirect (reversed) sounds, which corresponds to the potential delay introduced by the electrical path of the system (a delay-estimation sketch is given below).
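
One simple way to perform the evaluation in step 2.2.7 is to autocorrelate the recorded channel: if the indirect (reversed) click is a delayed, attenuated copy of the direct click, the autocorrelation shows a secondary peak at a lag equal to the system delay. The following minimal MATLAB sketch illustrates this approach; the file name is a hypothetical placeholder, and it is assumed that the delay lies between roughly 2 ms and 200 ms.

```matlab
% Estimate the delay between direct and indirect clicks recorded through the
% right microphone (steps 2.2.3-2.2.7). Assumes the indirect click is a delayed,
% attenuated copy of the direct click; the file name is a placeholder.
[y, fs] = audioread('delay_recording.wav');
x = y(:, 1);                                     % channel of the right microphone
[c, lags] = xcorr(x, round(0.2*fs));             % autocorrelation up to +/- 200 ms
c(lags < round(0.002*fs)) = 0;                   % discard negative lags and the zero-lag peak (< 2 ms)
[~, iMax] = max(c);                              % secondary peak = direct-to-indirect lag
delayMs = 1000 * lags(iMax) / fs;
fprintf('estimated system delay: %.2f ms\n', delayMs);
```

Alternatively, the two onsets can be read directly from the waveform for each click; the autocorrelation approach merely pools all trials into a single estimate.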
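
For the comparison in step 2.1.8, one plausible summary measure, in the spirit of the cosine similarity reported in the Results section, treats each angle as a unit vector and averages the cosine of the angular error across the 72 source positions. The following MATLAB sketch illustrates this calculation; the perceived angles are hypothetical placeholders to be replaced by the values read from the protractor video, and the exact measure used in Aoyama and Kuriki15 may differ.

```matlab
% Compare perceived against physical angles for the normal and reversed sessions
% (step 2.1.8). Physical angles follow the 72-speaker layout of step 2.1.1.
physAngle = (-180:5:175)';                            % degrees, clockwise positive
% Perceived angles read from the protractor video (placeholder values only).
percNormal   = physAngle + 5*randn(size(physAngle));  % e.g., small localization errors
percReversed = -physAngle + 5*randn(size(physAngle)); % e.g., left-right mirrored responses
% Average cosine similarity: each angle is treated as a unit vector, so the
% similarity is the mean cosine of the angular error (+1 identical, -1 opposite).
cosSim = @(a, b) mean(cosd(a - b));
fprintf('normal vs. physical:            %.2f\n', cosSim(percNormal,   physAngle));
fprintf('reversed vs. physical:          %.2f\n', cosSim(percReversed, physAngle));
fprintf('reversed vs. mirrored physical: %.2f\n', cosSim(percReversed, -physAngle));
```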

3. Studying the Adaptation to Left-Right Reversed Audition

  1. Procedure of the Exposure to Reversed Audition
    1. Remind the participants repeatedly of their right to quit the exposure at any time.
      NOTE: Stop the exposure as soon as possible if the participant reports sickness or if an observer notices any sign that the participant wants to quit the exposure for any reason.
    2. Prepare a sufficient number of spare rechargeable batteries and large-capacity, high-speed memory cards to allow the participant to replace them at any time.
    3. Instruct the participant to wear, calibrate, and check the reversed audition system by themselves during the exposure period, as explained in step 1.2. Perform the same procedure each time the participant wears the system after each interruption.
    4. Instruct the participant to perform daily-life activities while wearing the system continuously for approximately a month, except while sleeping, bathing, undergoing neuroimaging, and during emergencies. In these cases, ask the participant to remove the system and immediately insert earplugs to prevent the adaptation from being undone.
      NOTE: Although it is ideal for the participant to wear the system all day and night, it is strongly recommended that the system not be worn while sleeping and bathing in order to prevent unexpected loud noises and electrical shocks, respectively.
    5. Replace the batteries and memory cards routinely, before the batteries are exhausted and the memory cards are full. To do so, remove the system and quickly replace it with earplugs in a silent place, without producing any sound.
    6. When the participant needs to move around outside, drive the participant in a car, accompany the participant on the move, or ask the participant to use safe means of transportation when moving alone.
      NOTE: The researcher should take great care not to endanger the participant's safety during the exposure period, especially when the participant goes outside. Prohibit the participant from performing any dangerous behaviors.
    7. In order to facilitate adaptation, instruct the participant to experience situations involving high auditory input, such as walking in a shopping mall or a campus, having a conversation with more than two persons, and playing 3D video games, for as long as possible.
    8. Instruct the participant to keep a diary or provide a subjective report to an observer as frequently as possible about perceptual and behavioral changes, experienced events, and anything that the participant notices.
    9. After the target exposure period, instruct the participant to take off the reversed audition system.
      NOTE: It is also important to follow up on the perceptual and behavioral changes in order to examine the recovery process from the adaptation to left-right reversed audition.
  2. Neuroimaging During the Exposure to Reversed Audition
    1. Instruct the participant to train, as much as possible, on the task that will be used during the neuroimaging experiments.
      1. For example, train the participant to perform a selective reaction time task in two conditions, compatible and incompatible15. The compatible condition consists of responding immediately to the right-ear sound with the right index finger and to the left-ear sound with the left index finger. The incompatible condition consists of responding immediately to the right-ear sound with the left index finger and to the left-ear sound with the right index finger.
      2. Use 1000-Hz sounds at 65-dB SPL lasting 0.1 s, with an inter-stimulus interval of 2.5–3.5 s, presented pseudorandomly to either ear (an illustrative trial-sequence sketch follows these protocol steps).
    2. Before the exposure to reversed audition, conduct a neuroimaging experiment under the trained task.
      1. For example, record either MEG or EEG responses, as well as the left and right finger responses, during the selective reaction time task15. The task consists of two compatible and two incompatible blocks, arranged alternately with an inter-block interval of at least 30 s, with 80 sounds per block delivered through insert earphones with plastic ear tubes.
        NOTE: Although a 122-channel MEG system was used in Aoyama and Kuriki15, a multi-channel EEG system is also suitable for this protocol.
      2. For the MEG/EEG recording, set the sampling rate at 1 kHz and the analog recording passband at 0.03 – 200 Hz.
    3. During the approximately 1-month exposure to reversed audition, conduct a neuroimaging experiment under the trained task every week, without the reversed audition system, in exactly the same way as in the pre-exposure experiment (step 3.2.2).
      NOTE: The system is removed immediately before and put on immediately after each experiment.
    4. One week after the exposure, conduct a neuroimaging experiment under the trained task in exactly the same way as the pre-exposure experiment (step 3.2.2).
    5. Analyze the collected data before, during, and after the exposure to left-right reversed audition.
      1. For example, after rejecting epochs contaminated with eye-related artifacts, removing the offset of the pre-stimulus interval, and applying a 40-Hz low-pass filter, average the MEG/EEG data from 100 ms before to 500 ms after sound onset for the stimulus-response compatible and incompatible conditions15.
      2. Using an MNE software package19,20, estimate the sources of the brain activity with dynamic statistical parametric maps (dSPMs) overlaid on cortical surface images and quantify the intensities of brain activity with minimum-norm estimates (MNEs) for each time point of the averaged data.
      3. Calculate the auditory-motor functional connectivity from single-trial, zero-mean MEG/EEG data from 90 to 500 ms after sound onset for each condition.
        NOTE: Here we use MATLAB with the Multivariate Granger Causality Toolbox21.
      4. For the behavioral data, calculate the mean reaction times for the stimulus-response compatible and incompatible conditions (see the analysis sketch below).
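
For clarity, the trial structure of the selective reaction time task in steps 3.2.1.1–3.2.1.2 can be sketched as follows. This is only an illustration of the timing and side assignment: a real experiment would use dedicated stimulation software (e.g., the Psychophysics Toolbox) for precise timing and response logging, and the audio sampling rate and the equal number of left- and right-ear trials are assumptions.

```matlab
% Illustrative trial sequence for one block of the selective reaction time task:
% 0.1-s 1000-Hz tones at 65-dB SPL (level calibrated separately), ISI 2.5-3.5 s,
% presented pseudorandomly to the left or right ear (equal counts assumed).
fs      = 48000;                                 % audio sampling rate (assumption)
nTrials = 80;                                    % sounds per block (step 3.2.2.1)
tone    = sin(2*pi*1000*(0:1/fs:0.1));           % 1000-Hz tone, 0.1 s
ears    = [ones(1, nTrials/2), 2*ones(1, nTrials/2)];
ears    = ears(randperm(nTrials));               % 1 = left ear, 2 = right ear
isi     = 2.5 + rand(1, nTrials);                % jittered inter-stimulus interval (s)
for k = 1:nTrials
    stim             = zeros(2, numel(tone));    % stereo buffer, rows = [left; right]
    stim(ears(k), :) = tone;                     % monaural presentation on the chosen side
    sound(stim', fs);                            % play the tone
    pause(0.1 + isi(k));                         % tone duration plus interval before next trial
end
```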
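
The averaging in step 3.2.5.1 and the mean reaction times in step 3.2.5.4 can be outlined with the following minimal MATLAB sketch. It assumes that artifact-free epochs have already been cut from 100 ms before to 500 ms after sound onset for one condition; the array sizes, filter order, variable names, and placeholder data are hypothetical, and source estimation (step 3.2.5.2) and connectivity analysis (step 3.2.5.3) are carried out with the MNE and MVGC packages, respectively.

```matlab
% Baseline correction, 40-Hz low-pass filtering, and averaging of MEG/EEG epochs
% for one condition (compatible or incompatible); data here are random placeholders.
fsMEG  = 1000;                                    % sampling rate (step 3.2.2.2)
t      = -0.1:1/fsMEG:0.5;                        % epoch time axis in seconds
epochs = randn(122, numel(t), 160);               % channels x time x trials (placeholder)
% Remove the pre-stimulus offset (baseline) for every channel and trial.
base   = mean(epochs(:, t < 0, :), 2);
epochs = epochs - repmat(base, [1, numel(t), 1]);
% Zero-phase 40-Hz low-pass filter (4th-order Butterworth, an assumption), then average.
[b, a] = butter(4, 40/(fsMEG/2), 'low');
for k = 1:size(epochs, 3)
    epochs(:, :, k) = filtfilt(b, a, epochs(:, :, k)')';
end
evoked = mean(epochs, 3);                         % averaged evoked response, channels x time
% Mean reaction time per condition from trial-wise reaction times (placeholder values).
rtCompatible   = 0.25 + 0.05*randn(160, 1);
rtIncompatible = 0.30 + 0.05*randn(160, 1);
fprintf('mean RT: compatible %.0f ms, incompatible %.0f ms\n', ...
        1000*mean(rtCompatible), 1000*mean(rtIncompatible));
```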

Results

The representative results shown here are based on Aoyama and Kuriki15. The present protocol achieved left-right reversed audition with high spatiotemporal accuracy. Figure 1 shows the sound source localization over 360° in six participants before and immediately after putting on the left-right reversed audition system (Figure 1A), as quantified by the cosine similarity. As shown in

Discussion

The proposed protocol aimed to establish a methodology for studying adaptation to left-right reversed audition as an effective tool for uncovering the adaptability of humans to a novel auditory environment. As evidenced by the representative results, the constructed apparatus achieved left-right reversed audition with high spatiotemporal accuracy. Although the previous apparatuses for reversed audition11,12,13,

Disclosures

The author has nothing to disclose.

Acknowledgements

This work was partially supported by JSPS KAKENHI Grant Number JP17K00209. The author thanks Takayuki Hoshino and Kazuhiro Shigeta for technical assistance.

Materials

Name | Company | Catalog Number | Comments
Linear pulse-code-modulation recorder | Sony | PCM-M10 |
Binaural microphones | Roland | CS-10EM |
Binaural in-ear earphones | Etymotic Research | ER-4B |
Digital angle protractor | Wenzhou Sanhe Measuring Instrument | 5422-200 |
Plane-wave speaker | Alphagreen | SS-2101 |
Video camera | Sony | HDR-CX560 |
MATLAB | Mathworks | R2012a, R2015a | R2012a for stimulation and R2015a for analysis
Psychophysics Toolbox | Free | Version 3 | http://psychtoolbox.org
Insert earphones | Etymotic Research | ER-2 |
Magnetoencephalography system | Neuromag | Neuromag-122 |
Electroencephalography system | Brain Products | acti64CHamp |
MNE | Free | MNE Software Version 2.7, MNE 0.13 | https://martinos.org/mne/stable/index.html
The Multivariate Granger Causality Toolbox | Free | mvgc_v1.0 | http://www.sussex.ac.uk/sackler/mvgc/

References

  1. Sugita, Y. Visual evoked potentials of adaptation to left-right reversed vision. Perceptual and Motor Skills. 79 (2), 1047-1054 (1994).
  2. Sekiyama, K., Miyauchi, S., Imaruoka, T., Egusa, H., Tashiro, T. Body image as a visuomotor transformation device revealed in adaptation to reversed vision. Nature. 407 (6802), 374-377 (2000).
  3. Takeda, S., Endo, H., Honda, S., Weinberg, H., Takeda, T. MEG recording for spatial S-R compatibility task under adaptation to right-left reversed vision. Proceedings of the 12th International Conference on Biomagnetism. , 347-350 (2001).
  4. Miyauchi, S., Egusa, H., Amagase, M., Sekiyama, K., Imaruoka, T., Tashiro, T. Adaptation to left-right reversed vision rapidly activates ipsilateral visual cortex in humans. Journal of Physiology Paris. 98 (1-3), 207-219 (2004).
  5. Sekiyama, K., Hashimoto, K., Sugita, Y. Visuo-somatosensory reorganization in perceptual adaptation to reversed vision. Acta Psychologica. 141 (2), 231-242 (2012).
  6. Stratton, G. M. Some preliminary experiments on vision without inversion of the retinal image. Psychological Review. 3 (6), 611-617 (1896).
  7. Linden, D. E., Kallenbach, U., Heinecke, A., Singer, W., Goebel, R. The myth of upright vision. A psychophysical and functional imaging study of adaptation to inverting spectacles. Perception. 28 (4), 469-481 (1999).
  8. Thompson, S. P. The pseudophone. The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science: Series 5. 5 (50), 385-390 (1879).
  9. Wenzel, E. M. Localization in virtual acoustic displays. Presence: Teleoperators & Virtual Environments. 1 (1), 80-107 (1992).
  10. Carlile, S. Virtual Auditory Space: Generation and Applications. (2013).
  11. Young, T. P. Auditory localization with acoustical transposition of the ears. Journal of Experimental Psychology. 11 (6), 399-429 (1928).
  12. Willey, C. F., Inglis, E., Pearce, C. H. Reversal of auditory localization. Journal of Experimental Psychology. 20 (2), 114-130 (1937).
  13. Ohtsubo, H., Teshima, T., Nakamizo, S. Effects of head movements on sound localization with an electronic pseudophone. Japanese Psychological Research. 22 (3), 110-118 (1980).
  14. Hofman, P. M., Vlaming, M. S., Termeer, P. J., van Opstal, A. J. A method to induce swapped binaural hearing. Journal of Neuroscience Methods. 113 (2), 167-179 (2002).
  15. Aoyama, A., Kuriki, S. A wearable system for adaptation to left-right reversed audition tested in combination with magnetoencephalography. Biomedical Engineering Letters. 7 (3), 205-213 (2017).
  16. Brainard, D. H. The Psychophysics Toolbox. Spatial Vision. 10 (4), 433-436 (1997).
  17. Pelli, D. G. The VideoToolbox software for visual psychophysics: transforming numbers into movies. Spatial Vision. 10 (4), 437-442 (1997).
  18. Kleiner, M., Brainard, D., Pelli, D. What's new in Psychtoolbox-3? Perception. 36 (14), (2007).
  19. Gramfort, A., et al. MEG and EEG data analysis with MNE-Python. Frontiers in Neuroscience. 7, 267 (2013).
  20. Gramfort, A., et al. MNE software for processing MEG and EEG data. NeuroImage. 86, 446-460 (2014).
  21. Barnett, L., Seth, A. K. The MVGC multivariate Granger causality toolbox: a new approach to Granger-causal inference. Journal of Neuroscience Methods. 223, 50-68 (2014).
  22. Green, D. M. Temporal auditory acuity. Psychological Review. 78 (6), 540-551 (1971).
  23. He, S., Cavanagh, P., Intriligator, J. Attentional resolution and the locus of visual awareness. Nature. 383 (6598), 334-337 (1996).
  24. Anton-Erxleben, K., Carrasco, M. Attentional enhancement of spatial resolution: linking behavioural and neurophysiological evidence. Nature Reviews Neuroscience. 14 (3), 188-200 (2013).
  25. Perrott, D. R., Saberi, K. Minimum audible angle thresholds for sources varying in both elevation and azimuth. Journal of the Acoustical Society of America. 87 (4), 1728-1731 (1990).
  26. Grantham, D. W., Hornsby, B. W., Erpenbeck, E. A. Auditory spatial resolution in horizontal, vertical, and diagonal planes. Journal of the Acoustical Society of America. 114 (2), 1009-1022 (2003).
  27. Xie, B. Head-Related Transfer Function and Virtual Auditory Display. (2013).
  28. Stenfelt, S. Acoustic and physiologic aspects of bone conduction hearing. Advances in Oto-Rhino-Laryngology. 71, 10-21 (2011).
  29. Zwiers, M. P., van Opstal, A. J., Paige, G. D. Plasticity in human sound localization induced by compressed spatial vision. Nature Neuroscience. 6 (2), 175-181 (2003).
  30. Huster, R. J., Debener, S., Eichele, T., Herrmann, C. S. Methods for simultaneous EEG-fMRI: an introductory review. Journal of Neuroscience. 32 (18), 6053-6060 (2012).
  31. Veniero, D., Vossen, A., Gross, J., Thut, G. Lasting EEG/MEG aftereffects of rhythmic transcranial brain stimulation: level of control over oscillatory network activity. Frontiers in Cellular Neuroscience. 9, 477 (2015).
