
Summary

This protocol applies the event-related potential technique to explore the learning-related electrophysiological changes that occur in profoundly deaf subjects after a short period of audio-tactile sensory substitution training.

Abstract

This paper examines the application of electroencephalogram-based methods to assess the effects of audio-tactile substitution training in young, profoundly deaf (PD) participants, with the aim of analyzing the neural mechanisms associated with vibrotactile complex sound discrimination. Electrical brain activity reflects dynamic neural changes, and the temporal precision of event-related potentials (ERPs) has proven to be key in studying time-locked processes while performing behavioral tasks that involve attention and working memory.

The current protocol was designed to study electrophysiological activity in PD subjects while they performed a continuous performance task (CPT) using complex-sound stimuli, namely five different animal sounds delivered through a portable stimulator system worn on the right index finger. In a repeated-measures design, electroencephalogram (EEG) recordings were obtained under standard conditions before and after a brief training program (five 1 h sessions over 15 days), followed by offline artifact correction and epoch averaging to obtain individual and grand-mean waveforms. Behavioral results show a significant improvement in discrimination and a more robust P3-like centroparietal positive waveform for the target stimuli after training. In this protocol, ERPs contribute to a better understanding of the learning-related neural changes associated with audio-tactile discrimination of complex sounds in PD subjects.

Introduction

Early profound deafness is a sensory deficit that strongly impacts oral language acquisition and the perception of environmental sounds that play an essential role in navigating everyday life for those with normal hearing. A preserved and functional auditory pathway allows us to hear footsteps when someone approaches out of visual range, react to oncoming traffic, ambulance sirens, and security alarms, and respond to our own name when someone needs our attention. Audition is, therefore, a vital sense for speech, communication, cognitive development, and timely interaction with the environment, including the perception of potential threats in one's surroundings. For decades, the viability of audio-tactile substitution as an alternative sound-perception method with the potential to complement and facilitate language development in severely hearing-impaired individuals has been explored, with limited results1,2,3. Sensory substitution aims to provide users with environmental information through a human sensory channel different from the one normally used; it has been demonstrated to be possible across different sensory systems4,5. Specifically, audio-tactile sensory substitution is achieved when skin mechanoreceptors transduce the physical energy of the soundwaves that compose auditory information into neuronal excitation patterns that can be perceived and integrated by the somatosensory pathways and higher-order somatosensory cortical areas6.

Several studies have demonstrated that profoundly deaf individuals can distinguish musical timbre solely through vibrotactile perception7 and discriminate between same-sex speakers using spectral cues of complex vibrotactile stimuli8. More recent findings have shown that deaf individuals benefited concretely from a brief, well-structured audio-tactile perception training program, significantly improving their ability to discriminate between different pure-tone frequencies9 and between pure tones of different durations10. These experiments used event-related potentials (ERPs), graph connectivity methods, and quantitative electroencephalogram (EEG) measurements to depict and analyze functional brain mechanisms. However, the neural activity associated with the discrimination of complex environmental sounds had not been examined prior to this paper.

ERPs have proven useful for studying time-locked processes, with excellent temporal resolution on the order of milliseconds, while participants perform behavioral tasks involving attention allocation, working memory, and response selection11. As described by Luck, Woodman, and Vogel12, ERPs are intrinsically multidimensional processing measures and are therefore well suited to measuring the subcomponents of cognition separately. In an ERP experiment, the continuous waveform elicited by the presentation of a stimulus can be used to directly observe the neural activity interposed between the stimulus and the behavioral response. Other advantages of the technique, such as its cost-effectiveness and non-invasive nature, make it particularly well suited to studying the precise time course of cognitive processes in clinical populations. Furthermore, ERP tools applied in a repeated-measures design, in which patients' electrical brain activity is recorded more than once to study changes after a training program or intervention, provide further insight into neural changes over time.

The P3 component, the most extensively researched cognitive potential13, is currently recognized to be elicited by all kinds of stimuli, most markedly by stimuli of low probability, high intensity, or high significance, or by those requiring a behavioral or cognitive response14. This component has also proven extremely useful in evaluating general cognitive efficiency in clinical models15,16. A clear advantage of assessing changes in the P3 waveform is that it is easily observable because of its larger amplitude relative to other, smaller components; it has a characteristic centroparietal topographical distribution and is also relatively easy to elicit using an appropriate experimental design17,18,19.

In this context, the aim of this study is to explore the learning-related electrophysiological changes in patients with profound deafness after a short period of training in vibrotactile sound discrimination. In addition, ERP tools are applied to depict the functional brain dynamics underlying the temporary engagement of the cognitive resources demanded by the task.

Protocol

The study was reviewed and approved by the Neuroscience Institute's Ethics Committee (ET062010-88, Universidad de Guadalajara), ensuring that all procedures were conducted in accordance with the Declaration of Helsinki. All participants agreed to participate voluntarily and gave written informed consent (for underage participants, parents signed the consent forms).

1. Experimental design

  1. Stimulus preparation
    1. Search in Creative Commons licensed sound databases to select a set of animal sounds in .wav format. The stimuli in this study consisted of five different animal sounds: dog barking, cow mooing, horse neighing, donkey braying, and elephant trumpeting.
      NOTE: The sound stimuli used here were previously selected as a collection of sounds for the vibrotactile discrimination training program in our earlier studies9,10.
    2. Edit the sound files using a free, open-source audio editor to standardize the intensity of the stimuli and trim their length to 1500 ms (a minimal standardization sketch is provided after this list). For this protocol, standardize on a linear scale from 0 to 8000 Hz, at a gain of 20 dB, and over a range of 80 dB, based on the parameters established in the previous studies9,10 using the same vibrotactile stimulation system.
    3. Save the formatted audio files in a 32-bit float format with a 48,000 Hz project rate.
  2. Paradigm setup in the electrophysiology presentation software
    1. Design a continuous performance task (CPT) using an experimental design and stimulus presentation software, assigning the stimuli to one of the two conditions: (a) target (T) stimulus (dog barking in 20% of trials) and (b) non-target (NT) stimuli (the remaining four animal sounds for the other 80%).
      NOTE: Each condition was labeled with the same code to synchronize stimulus presentation marks when programming the EEG protocol in the recording software.
    2. Build a pseudo-randomized stimulus-presentation sequence in the software platform in which the five animal sounds (dog, cow, horse, donkey, and elephant) are each presented 20% of the time. Check that the target stimulus (dog barking) never occurs more than twice in succession (a minimal sequence-generation sketch is also provided after this list).
    3. Specify the desired interstimulus interval (ISI) and the total response time, and select the response keys that will be used to automatically collect behavioral data for target (T) stimulus responses. Here, a fixed 2000 ms ISI was programmed for 150 trials, with the left Ctrl key of a standard computer keyboard defined as the correct response for T stimuli. Participants were given a 3500 ms response window starting at stimulus presentation.
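
The following is a minimal sketch of the audio standardization described in step 1.1.2, for illustration only; the actual editing in this protocol was done in the free, open-source audio editor listed in the Table of Materials. It assumes the numpy and soundfile packages; the file names, mono mixdown, linear-interpolation resampling, and simple peak normalization are illustrative assumptions, not the editor's exact processing.

```python
# Minimal sketch (not the exact audio-editor workflow): standardize animal-sound .wav
# files to a 1500 ms duration, a 48,000 Hz rate, and 32-bit float format.
# Assumes the numpy and soundfile packages; file names are hypothetical.
import numpy as np
import soundfile as sf

TARGET_SR = 48000                     # project rate (Hz)
TARGET_LEN = int(1.5 * TARGET_SR)     # 1500 ms expressed in samples

def standardize(in_path, out_path, peak=0.9):
    data, sr = sf.read(in_path)
    if data.ndim > 1:                 # mix down to mono if needed (assumption)
        data = data.mean(axis=1)
    if sr != TARGET_SR:               # naive linear-interpolation resampling (assumption)
        t_old = np.linspace(0, 1, num=len(data), endpoint=False)
        t_new = np.linspace(0, 1, num=int(len(data) * TARGET_SR / sr), endpoint=False)
        data = np.interp(t_new, t_old, data)
    data = data[:TARGET_LEN]                              # trim to 1500 ms ...
    data = np.pad(data, (0, TARGET_LEN - len(data)))      # ... or zero-pad if shorter
    data = peak * data / (np.max(np.abs(data)) + 1e-12)   # equalize peak intensity
    sf.write(out_path, data.astype("float32"), TARGET_SR, subtype="FLOAT")

for name in ("dog", "cow", "horse", "donkey", "elephant"):
    standardize(f"{name}_raw.wav", f"{name}_1500ms.wav")
```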
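Likewise, the pseudo-randomization constraint in step 1.2.2 can be prototyped outside the commercial presentation software. The sketch below is a hypothetical illustration of the trial-list logic (150 trials, each sound at 20%, the target never more than twice in a row); it is not the presentation software's actual algorithm.

```python
# Hypothetical sketch of the trial-list constraint in step 1.2.2: 150 trials, five sounds
# at 20% each, and the target ("dog") never presented more than twice in succession.
import random

STIMULI = ["dog", "cow", "horse", "donkey", "elephant"]   # "dog" = target (T)
N_TRIALS = 150

def build_sequence(seed=None):
    rng = random.Random(seed)
    while True:
        seq = STIMULI * (N_TRIALS // len(STIMULI))         # 30 trials per sound (20%)
        rng.shuffle(seq)
        # keep only sequences with no run of three or more consecutive targets
        if all(not (seq[i] == seq[i + 1] == seq[i + 2] == "dog")
               for i in range(len(seq) - 2)):
            return seq

sequence = build_sequence(seed=1)
print(sequence[:10], "... target proportion:", sequence.count("dog") / N_TRIALS)
```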

2. Participant selection

  1. Recruit potential participants with profound bilateral sensorineural hearing loss diagnosis and collect demographic data, including age, sex, hand preference, and educational history.
  2. Conduct semi-structured clinical interviews to screen participants for personal or family history of psychiatric, neurological, or neurodegenerative illness and to collect information pertaining to deafness clinical history: the age of onset, etiology, and hearing-aid use history, as well as their preferred communication mode (oral, manual, or bilingual).
  3. Conduct audiological tests (pure-tone air hearing-thresholds) using an audiometer to confirm the severity of hearing loss.
    1. In a sound-attenuated room, sit directly in front of the participant and properly place headphones on them.
    2. Instruct the participants to raise their dominant hand to signal whenever they can hear the tone being presented through the headphones.
    3. Present pure tones at six octave frequencies in ascending order (250, 500, 1000, 2000, 4000, and 8000 Hz), at intensity levels ranging from 20 dB to 110 dB, starting with the left ear and then repeating the same steps for the right ear.
      1. Calculate the patient's pure-tone average (PTA) by averaging the hearing thresholds at 500, 1000, 2000, and 4000 Hz for each ear (see the sketch after this list). The hearing-loss severity inclusion criterion for the study is a bilateral PTA greater than 90 dB.
      2. Select participants based on the eligibility criteria. Additional inclusion criteria are no personal or family history of psychiatric, neurological, or neurodegenerative illness and non-syndromic, prelingual, profound bilateral deafness. Explain the experimental procedures to the participants and obtain informed consent.
        NOTE: All the forms, questionnaires, and instructions used in the study were translated into Mexican Sign Language (MSL) by a professional MSL interpreter and were presented in video format using a tablet computer. In addition, an MSL interpreter was present during all study procedures.

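The PTA computation in step 2.3.3.1 reduces to a simple average per ear; the following minimal sketch illustrates it with hypothetical threshold values.

```python
# Sketch of the pure-tone average (PTA) used as the inclusion criterion: the mean of the
# hearing thresholds at 500, 1000, 2000, and 4000 Hz for each ear, with eligibility
# requiring a bilateral PTA > 90 dB. Threshold values below are hypothetical.
PTA_FREQS = (500, 1000, 2000, 4000)

def pta(thresholds_db):
    """thresholds_db: dict mapping frequency (Hz) -> hearing threshold (dB HL)."""
    return sum(thresholds_db[f] for f in PTA_FREQS) / len(PTA_FREQS)

left = {250: 95, 500: 100, 1000: 105, 2000: 110, 4000: 110, 8000: 110}
right = {250: 90, 500: 95, 1000: 100, 2000: 105, 4000: 110, 8000: 110}

eligible = pta(left) > 90 and pta(right) > 90
print(f"Left PTA = {pta(left):.1f} dB, right PTA = {pta(right):.1f} dB, eligible: {eligible}")
```
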
3. Pre-training EEG recording session

  1. Participant preparation
    1. Verify that the participants have come to the recording session with clean and dry hair, having not used any hair gel, conditioner, or other hair products that affect electrode impedance.
    2. Ask the participants to sit in a comfortable position, approximately 60 cm away from the stimulus screen, and use the tablet device to play the MSL videoclip with the preparation procedure description.
    3. Clean the areas where the reference and electrooculogram (EOG) electrodes will be placed (earlobes, forehead, outer canthi, infraorbital ridges, etc.). First, wipe the skin with an alcohol swab, and then gently apply EEG abrasive prepping gel with a cotton swab to exfoliate dead skin cells on the surface.
    4. Fill the gold-cup electrodes with conductive electrode paste and place one on each reference site, usually the right and left earlobes or mastoids. Repeat these steps to place at least one horizontal EOG electrode at the outer canthus and one vertical EOG electrode at the infraorbital ridge to monitor oculomotor activity (blinks and saccades). Hold the single electrodes in place with a piece of 1 in micropore tape.
    5. Ask the participants to hold their arms straight horizontally and then fit the body harness tightly but comfortably around the chest under the armpits with the snaps in the middle of the chest.
    6. Place the EEG commercial electro-cap with 19 Ag/AgCl electrodes (Fp1, Fp2, F3, F4, F7, F8, C3, C4, P3, P4, O1, O2, T3, T4, T5, T6, Fz, Cz, and Pz) topographically arranged according to the International 10-20 system. Use a measuring tape to check the participant's head circumference to ensure you use the proper cap size.
    7. Align the Cz electrode with the nose, then measure the distance from the nasion to the inion and check that the Cz electrode falls precisely at the midpoint. Button the adjustable straps on the sides of the cap to the body harness so that the electro-cap is held firmly in place.
    8. Insert the gel-filled blunt-needle syringe into the electrode opening, move the needle in small circles to displace hair, and then gently abrade the scalp under the electrode before applying the conductive gel. Do not apply too much gel, to avoid electrical bridging with neighboring electrode sites.
    9. Allow the EEG conductive gel to dry at cool room temperature.
  2. Setting up the EEG recording equipment
    1. Calibrate the EEG system according to the instrument's instructions, then connect the electro-cap to the amplifier, set to a bandpass of 0.05-30 Hz (3 dB cutoff points, 6 dB/octave roll-off), a 60 Hz notch filter, and a 200 Hz sampling rate (i.e., a 5 ms sampling period).
    2. Check that the impedance is below 5 kΩ (for a low-impedance system) at all electrode sites and verify on the monitor that all channels are registering the electrical signals cleanly.
  3. Running the experimental task
    1. Position the participant in front of the computer monitor and place the keyboard at a comfortable distance.
    2. Connect the cable of the portable stimulator device (see Figure 1) to the computer's speaker output and set the speaker volume to the maximum intensity level.
    3. Fit the portable stimulator system on the participant's right index fingertip and test it.
    4. Using the tablet device, play the experiment instructions and execute a practice trial to familiarize the subject with the portable stimulator device, the audio-tactile stimuli, and the task. Repeat the MSL instructions and verify comprehension.
    5. Remind the participant to respond to the dog bark stimulus by pressing the left control key with their left index finger only upon target stimulus detection and to withhold their response when any of the other four animal sounds are perceived. The CPT experimental paradigm is represented in Figure 2.
    6. Provide clear instructions for how to minimize artifacts and demonstrate the effect of artifacts on the EEG in real-time before you begin recording (recommended as a standard recording procedure in research with clinical populations20).
    7. Before starting the CPT task, check that the event-synchronization between the cognitive stimulation computer and the EEG recording computer is working properly. To do so, start recording the EEG signal and click on the communication icon in the stimulus presentation software interface. Upon clicking, the event-synchronized pulses appear at the bottom of the EEG recording screen.
    8. Run the experimental task. Carefully observe the participant and monitor alertness, response execution, and excessive movement or blinking.
    9. Pause and allow the participant a short break in the middle of the experiment (approximately 4 min into the task) so that they can blink, relax, and move around if needed. Then finish running the experiment.

4. Audio-tactile sensory substitution training program

  1. Consult Supplementary File 1, which contains a detailed description of the five-session program, to perform the training. Automate the activities described using a spreadsheet to make the training more systematic and engaging for the participants. Use the original images and audio recordings from9 and ask the participants to respond by tapping on a laptop touch-screen monitor.
    NOTE: The content and tables in this file have been reprinted with permission from9.

5. Post-training EEG recording session

  1. Repeat the exact same steps as specified in section 3.

6. EEG analysis

NOTE: The EEG acquisition steps were performed using the EEG recording software, and the EEG processing steps were performed using separate EEG analysis software.

  1. EEG raw signal pre-processing
    1. Define and select epochs of 1100 ms in the continuous EEG data, without additional digital filters, using stimulus onset as the initial time instant (t0) and including a 100 ms pre-stimulus interval used for baseline correction. Supplementary Figure 1 illustrates how the 1100 ms epochs were selected in the commercial EEG analysis software installed on the EEG recording equipment (a minimal scripted equivalent is sketched at the end of this section).
    2. During artifact rejection, exclude an epoch on all channels whenever the voltage within that epoch exceeds 100 µV on any EEG or EOG channel. Also reject artifacts by visual inspection of the epochs; Supplementary Figure 2 provides an example of epochs that were manually rejected due to ocular artifacts.
  2. Signal averaging
    1. Select an equal number of artifact-free epochs for each stimulus condition (target and non-target) in both the pre- and post-training conditions. Select the maximum number of epochs possible to improve the signal-to-noise ratio. Do this for each EEG record.
      NOTE: In this protocol, we selected an average of 25 correct-response epochs per condition at each timepoint, since we were interested in evaluating target discrimination. Keep in mind that some ERP components do not require overt behavioral responses to be observed. Participants with fewer than 15 artifact-free epochs in each condition were excluded from the study.
    2. Click on the Operations menu and select the EEG window averaging option to average individual ERPs.
    3. First, select the Independent Average option to average target trials only. Then, select the other four non-target stimuli and click on the Average Together option to average.
    4. Repeat steps 6.2.2 and 6.2.3 for each participant's EEG recording in the pre-training condition and then for the post-training condition.
    5. Once all the individual ERPs are calculated, average them together to obtain the grand-mean waveforms per stimulus condition for pre- and post-training. Open any individual EP average, then go to the Operations menu and select Grand-mean averaging option. Select the participant's individual averages to be included in the group average.
    6. Choose all pre-training target averages from the drop-down list, then click the Average button, type the desired file name, and press the Return key to save. Then select all pre-training non-target averages from the drop-down list, click the Average button, type the desired file name, and again press the Return key to save.
    7. Repeat the previous steps for the post-training condition.
  3. ERP visualization and analyses
    1. Select the Operations menu to see the list of saved grand means. Then click on the group averages that you wish to plot. Next, click the Montage button to select the channels you want to plot.
    2. Go to the Tools menu, then click on Visualize Options to select each waveform's color and line width. Then click on the Signal menu, check the DC correction box, type in the desired baseline stimulus interval, then press the Return key.
    3. Carefully inspect the plotted grand-mean waveforms to identify the components of interest and their corresponding time windows.
      NOTE: For this experiment, given the task design and the sensory pathways under study, we expected the P3 to appear as a positive component later than 300 ms at centroparietal electrodes, with greater voltage amplitudes in the target condition.
    4. Export the individual peak amplitudes and latencies, and then import the data into a spreadsheet to build the database. Conduct a repeated-measures analysis of variance (ANOVA) using statistics software (see the peak-extraction sketch at the end of this section).
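
Steps 6.1-6.2 were performed in the commercial EEG analysis software. For readers who prefer a scripted pipeline, the following is a minimal numpy sketch of the same logic (1100 ms epochs at 200 Hz, 100 ms baseline correction, ±100 µV rejection, per-condition averaging). The array shapes, event positions, and simulated data are assumptions for illustration only.

```python
# Minimal numpy sketch of steps 6.1-6.2 (epoching, baseline correction, +/-100 uV
# artifact rejection, per-condition averaging). The actual analysis used the commercial
# EEG software; the simulated data and event positions below are assumptions.
import numpy as np

FS = 200                                   # sampling rate (Hz); 5 ms per sample
PRE = int(0.100 * FS)                      # 100 ms pre-stimulus baseline = 20 samples
EPOCH = int(1.100 * FS)                    # 1100 ms epoch = 220 samples
REJECT_UV = 100.0                          # rejection threshold (microvolts)

def extract_epochs(eeg, onsets):
    """eeg: (n_channels, n_samples) array in microvolts; onsets: stimulus-onset samples."""
    kept = []
    for t0 in onsets:
        ep = eeg[:, t0 - PRE : t0 - PRE + EPOCH].copy()
        ep -= ep[:, :PRE].mean(axis=1, keepdims=True)   # baseline correction
        if np.abs(ep).max() <= REJECT_UV:               # drop epochs exceeding +/-100 uV
            kept.append(ep)
    return np.array(kept)                               # (n_epochs, n_channels, 220)

# Hypothetical usage with simulated data (19 channels, 5 min of EEG at 200 Hz):
rng = np.random.default_rng(0)
eeg = rng.normal(0, 10, size=(19, 60000))
target_onsets = np.arange(1000, 57001, 2000)            # placeholder event positions
target_erp = extract_epochs(eeg, target_onsets).mean(axis=0)   # individual average ERP
# Grand means are then computed by averaging these individual averages across participants.
```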
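Similarly, step 6.3.4 can be scripted. The sketch below extracts the most positive peak within an assumed 300-700 ms post-stimulus window at one channel and arranges one row per participant x session x condition, ready for a repeated-measures ANOVA; the window limits and the simulated waveforms are illustrative assumptions.

```python
# Sketch of step 6.3.4: extract the P3-like peak amplitude and latency at a centroparietal
# channel within an assumed 300-700 ms post-stimulus window, and arrange one row per
# participant x session x condition for a repeated-measures ANOVA.
import numpy as np

FS, PRE, N_SAMPLES = 200, 20, 220          # 200 Hz; 100 ms baseline; 1100 ms epoch
WIN_MS = (300, 700)                        # assumed P3 search window (ms after onset)

def p3_peak(erp_channel):
    lo = PRE + int(WIN_MS[0] * FS / 1000)
    hi = PRE + int(WIN_MS[1] * FS / 1000)
    idx = lo + int(np.argmax(erp_channel[lo:hi]))            # most positive point in window
    return float(erp_channel[idx]), (idx - PRE) * 1000 / FS  # amplitude (uV), latency (ms)

rng = np.random.default_rng(1)
rows = []                                  # (participant, session, condition, amp, latency)
for subj in range(17):
    for session in ("pre", "post"):
        for condition in ("target", "non-target"):
            waveform = rng.normal(0, 2, N_SAMPLES)           # stand-in for a real average ERP
            amp, lat = p3_peak(waveform)
            rows.append((subj, session, condition, amp, lat))
# "rows" can be written to a spreadsheet/CSV and analyzed with a repeated-measures ANOVA.
```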

Results

To illustrate how the effect of audio-tactile sensory substitution discrimination training in PD individuals can be assessed by evaluating changes in the P3, we created several figures portraying the ERP waveforms of a group of 17 PD individuals (mean age = 18.5 years; SD = 7.2 years; eight females and 11 males). The ERP plots reveal a P3-like centroparietal positive waveform that is more robust for the target stimuli after training. In the pre-training condition, ERPs suggest that the T ...

Discussion

Using ERP tools, we designed a protocol to observe and evaluate the gradual development of skills for discriminating vibrotactile representations of complex sounds. Our prior work has demonstrated that vibrotactile stimulation is a viable alternative sound perception method for profoundly deaf individuals. However, because of the complexity of natural sounds compared to pure tones, the potential for language sound discrimination warrants a separate exploration.

Disclosures

We confirm that there are no known conflicts of interest associated with this publication and there has been no significant financial support for this work that could have influenced its outcome.

Acknowledgements

We thank all the participants and their families, as well as the institutions that made this work possible, in particular, Asociación de Sordos de Jalisco, Asociación Deportiva, Cultural y Recreativa de Silentes de Jalisco, Educación Incluyente, A.C., and Preparatoria No. 7. We also thank Sandra Márquez for her contribution to this project. This work was funded by GRANT SEP-CONACYT-221809, GRANT SEP-PRODEP 511-6/2020-8586-UDG-PTC-1594, and the Neuroscience Institute (Universidad de Guadalajara, Mexico).

Materials

Name | Company | Catalog Number | Comments
Audacity | Audacity team | audacityteam.org | Free, open-source, cross-platform audio editing software
Audiometer | Resonance | r17a |
EEG analysis software | Neuronic, S.A. | |
EEG recording software | Neuronic, S.A. | |
Electro-Cap | Electro-Cap International, Inc. | E1-M | Cap with 19 active electrodes, adjustable straps, and chest harness
Electro-Gel | Electro-Cap International, Inc. | |
External computer speakers | | |
Freesound | Music Technology Group | freesound.org | Database of Creative Commons licensed sounds
Hook and loop fastener | Velcro | |
IBM SPSS (Statistical Package for the Social Sciences) | IBM | |
Individual electrodes | Cadwell | | Gold cup, 60 in
MEDICID-5 | Neuronic, S.A. | | EEG recording equipment (includes amplifier and computer)
Nuprep | Weaver and Company | | ECG & EEG abrasive skin prepping gel
Portable computer with touch screen | Dell | |
SEVITAC-D | Centro Camac, Argentina. Patented by Luis Campos (2002). | http://sevitac-d.com.ar/ | Portable stimulator system worn on the index fingertip; a tiny flexible plastic membrane with a 78.5 mm2 surface area vibrates in response to sound pressure waves via analog transmission; sound frequency range of 10 Hz to 10 kHz
Stimulus presentation software (Mindtracer) | Neuronic, S.A. | |
Stimulation computer monitor and keyboard | | |
Tablet computer | Lenovo | |
Ten20 conductive neurodiagnostic electrode paste | Weaver and Company | |

References

  1. Rothenberg, M., Richard, D. M. Encoding fundamental frequency into vibrotactile frequency. The Journal of the Acoustical Society of America. 66 (4), 1029-1038 (1979).
  2. Plant, G., Arne, R. The transmission of fundamental frequency variations via a single channel vibrotactile aid. Speech Transmission Laboratories Quarterly Progress Report. 24 (2-3), 61-84 (1983).
  3. Bernstein, L. E., Tucker, P. E., Auer, E. T. Potential perceptual bases for successful use of a vibrotactile speech perception aid. Scandinavian Journal of Psychology. 39 (3), 181-186 (1998).
  4. Bach-y-Rita, P., Kercel, S. W. Sensory substitution and the human-machine interface. Trends in Cognitive Sciences. 7 (12), 541-546 (2003).
  5. Bach-y-Rita, P. Tactile sensory substitution studies. Annals of the New York Academy of Sciences. 1013 (1), 83-91 (2004).
  6. Kaczmarek, K. A., Webster, J. G., Bach-y-Rita, P., Tompkins, W. J. Electrotactile and vibrotactile displays for sensory substitution systems. IEEE Transactions on Biomedical Engineering. 38 (1), 1-16 (1991).
  7. Russo, F. A., Ammirante, P., Fels, D. I. Vibrotactile discrimination of musical timbre. Journal of Experimental Psychology: Human Perception and Performance. 38 (4), 822-826 (2012).
  8. Ammirante, P., Russo, F. A., Good, A., Fels, D. I. Feeling voices. PloS One. 8 (1), 369-377 (2013).
  9. González-Garrido, A. A., et al. Vibrotactile discrimination training affects brain connectivity in profoundly deaf individuals. Frontiers in Human Neuroscience. 11, 28 (2017).
  10. Ruiz-Stovel, V. D., Gonzalez-Garrido, A. A., Gómez-Velázquez, F. R., Alvarado-Rodríguez, F. J., Gallardo-Moreno, G. B. Quantitative EEG measures in profoundly deaf and normal hearing individuals while performing a vibrotactile temporal discrimination task. International Journal of Psychophysiology. 166, 71-82 (2021).
  11. Polich, J. Updating P300: an integrative theory of P3a and P3b. Clinical Neurophysiology. 118 (10), 2128-2148 (2007).
  12. Luck, S. J., Woodman, G. F., Vogel, E. K. Event-related potential studies of attention. Trends in Cognitive Sciences. 4 (11), 432-440 (2000).
  13. Kelly, S. P., O'Connell, R. G. The neural processes underlying perceptual decision making in humans: recent progress and future directions. Journal of Physiology-Paris. 109 (1-3), 27-37 (2015).
  14. Barry, R. J., et al. Components in the P300: Don't forget the Novelty P3. Psychophysiology. 57 (7), e13371 (2020).
  15. Polich, J. P300 clinical utility and control of variability. Journal of Clinical Neurophysiology. 15 (1), 14-33 (1998).
  16. Polich, J., Criado, J. R. Neuropsychology and neuropharmacology of P3a and P3b. International Journal of Psychophysiology. 60 (2), 172-185 (2006).
  17. Polich, J., Kok, A. Cognitive and biological determinants of P300: an integrative review. Biological Psychology. 41 (2), 103-146 (1995).
  18. Nieuwenhuis, S., Aston-Jones, G., Cohen, J. D. Decision making, the P3, and the locus coeruleus--norepinephrine system. Psychological Bulletin. 131 (4), 510 (2005).
  19. Luck, S. J. An Introduction to the Event-Related Potential Technique. 2nd ed., MIT Press (2014).
  20. Kappenman, E. S., Luck, S. J. Best practices for event-related potential research in clinical populations. Biological Psychiatry: Cognitive Neuroscience and Neuroimaging. 1 (2), 110-115 (2016).
  21. Rac-Lubashevsky, R., Kessler, Y. Revisiting the relationship between the P3b and working memory updating. Biological Psychology. 148, 107769 (2019).
  22. Twomey, D. M., Murphy, P. R., Kelly, S. P., O'Connell, R. G. The classic P300 encodes a build-to-threshold decision variable. European Journal of Neuroscience. 42 (1), 1636-1643 (2015).
  23. Boudewyn, M. A., Luck, S. J., Farrens, J. L., Kappenman, E. S. How many trials does it take to get a significant ERP effect? It depends. Psychophysiology. 55 (6), e13049 (2018).
  24. Cohen, J., Polich, J. On the number of trials needed for P300. International Journal of Psychophysiology. 25 (3), 249-255 (1997).
  25. Duncan, C. C., et al. Event-related potentials in clinical research: guidelines for eliciting, recording, and quantifying mismatch negativity, P300, and N400. Clinical Neurophysiology. 120 (11), 1883-1908 (2009).
  26. Thigpen, N. N., Kappenman, E. S., Keil, A. Assessing the internal consistency of the event-related potential: An example analysis. Psychophysiology. 54 (1), 123-138 (2017).
  27. Huffmeijer, R., Bakermans-Kranenburg, M. J., Alink, L. R., Van IJzendoorn, M. H. Reliability of event-related potentials: the influence of number of trials and electrodes. Physiology & Behavior. 130, 13-22 (2014).
  28. Rietdijk, W. J., Franken, I. H., Thurik, A. R. Internal consistency of event-related potentials associated with cognitive control: N2/P3 and ERN/Pe. PloS One. 9 (7), e102672 (2014).
  29. Alsuradi, H., Park, W., Eid, M. EEG-based neurohaptics research: A literature review. IEEE Access. 8, 49313-49328 (2020).
