Summary

The neural correlates of listening to consonant and dissonant intervals have been widely studied, but the neural mechanisms associated with production of consonant and dissonant intervals are less well known. In this article, behavioral tests and fMRI are combined with interval identification and singing tasks to describe these mechanisms.

Abstract

The neural correlates of consonance and dissonance perception have been widely studied, but not the neural correlates of consonance and dissonance production. The most straightforward manner of musical production is singing, but, from an imaging perspective, it still presents more challenges than listening because it involves motor activity. The accurate singing of musical intervals requires integration between auditory feedback processing and vocal motor control in order to correctly produce each note. This protocol presents a method that permits the monitoring of neural activations associated with the vocal production of consonant and dissonant intervals. Four musical intervals, two consonant and two dissonant, are used as stimuli, both for an auditory discrimination test and a task that involves first listening to and then reproducing given intervals. Participants, all female vocal students at the conservatory level, were studied using functional Magnetic Resonance Imaging (fMRI) during the performance of the singing task, with the listening task serving as a control condition. In this manner, the activity of both the motor and auditory systems was observed, and a measure of vocal accuracy during the singing task was also obtained. Thus, the protocol can also be used to track activations associated with singing different types of intervals or with singing the required notes more accurately. The results indicate that singing dissonant intervals requires greater participation of the neural mechanisms responsible for the integration of external feedback from the auditory and sensorimotor systems than does singing consonant intervals.

Introduction

Certain combinations of musical pitches are generally acknowledged to be consonant, and they are typically associated with a pleasant sensation. Other combinations are generally referred to as dissonant and are associated with an unpleasant or unresolved feeling1. Although it seems sensible to assume that enculturation and training play some part in the perception of consonance2, it has been recently shown that the differences in perception of consonant and dissonant intervals and chords probably depend less on musical culture than was previously thought3 and may even derive from simple biological bases4,5,6. In order to prevent an ambiguous understanding of the term consonance, Terhardt7 introduced the notion of sensory consonance, as opposed to consonance in a musical context, where harmony, for example, may well influence the response to a given chord or interval. In the present protocol, only isolated, two-note intervals were used precisely to single out activations solely related to sensory consonance, without interference from context-dependent processing8.

Attempts to characterize consonance through purely physical means began with Helmholtz9, who attributed the perceived roughness associated with dissonant chords to the beating between adjacent frequency components. More recently, however, it has been shown that sensory consonance is not only associated with the absence of roughness, but also with harmonicity, which is to say the alignment of the partials of a given tone or chord with those of an unheard tone of a lower frequency10,11. Behavioral studies confirm that subjective consonance is indeed affected by purely physical parameters, such as frequency distance12,13, but a wider range of studies have conclusively demonstrated that physical phenomena cannot solely account for the differences between perceived consonance and dissonance14,15,16,17. All of these studies, however, report these differences when listening to a variety of intervals or chords. A variety of studies using Positron Emission Tomography (PET) and functional Magnetic Resonance Imaging (fMRI) have revealed significant differences in the cortical regions that become active when listening to either consonant or dissonant intervals and chords8,18,19,20. The purpose of the present study is to explore the differences in brain activity when producing, rather than listening to, consonant and dissonant intervals.

The study of sensory-motor control during musical production typically involves the use of musical instruments, and very often it then requires the fabrication of instruments modified specifically for their use during neuroimaging21. Singing, however, would seem to provide from the start an appropriate mechanism for the analysis of sensory-motor processes during music production, as the instrument is the human voice itself, and the vocal apparatus does not require any modification in order to be suitable during imaging22. Although the neural mechanisms associated with aspects of singing, such as pitch control23, vocal imitation24, training-induced adaptive changes25, and the integration of external feedback25,26,27,28,29, have been the subject of a number of studies over the past two decades, the neural correlates of singing consonant and dissonant intervals were only recently described30. For this purpose, the current paper describes a behavioral test designed to establish the adequate recognition of consonant and dissonant intervals by participants. This is followed by an fMRI study of participants singing a variety of consonant and dissonant intervals. The fMRI protocol is relatively straightforward, but, as with all MRI research, great care must be taken to correctly set up the experiments. In this case, it is particularly important to minimize head, mouth, and lip movement during singing tasks, making the identification of effects not directly related to the physical act of singing more straightforward. This methodology may be used to investigate the neural mechanisms associated with a variety of activities involving musical production by singing.

Protocol

This protocol has been approved by the Research, Ethics, and Safety Committee of the Hospital Infantil de México "Federico Gómez".

1. Behavioral Pretest

  1. Perform a standard, pure-tone audiometric test to confirm that all prospective participants possess normal hearing (20-dB Hearing Level (HL) over octave frequencies up to 8,000 Hz). Use the Edinburgh Handedness Inventory31 to ensure that all participants are right-handed.
  2. Generation of interval sequences.
    1. Produce pure tones spanning two octaves, G4-G6, using a sound-editing program.
      NOTE: Here, the free, open-source sound editing software Audacity is described. Other packages may be used for this purpose.
      1. For each tone, open a new project in the sound-editing software.
      2. Under the "Generate" menu, select "Tone." In the window that appears, select a sine waveform, an amplitude of 0.8, and a duration of 1 s. Enter the value of the frequency that corresponds to the desired note (e.g., 440 Hz for A4). Click on the "OK" button.
      3. Under the "File" menu, select "Export Audio." In the window that opens, enter the desired name for the audio file and choose WAV as the desired file type. Click "Save."
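      NOTE: The tones can also be generated with a short script instead of through the editor's GUI. The following is a minimal Python sketch of this step, assuming the numpy and scipy packages are available; the sample rate and file name are illustrative choices and not part of the original protocol.

```python
# Sketch: generate a 1 s pure tone and save it as a 16-bit WAV file.
import numpy as np
from scipy.io import wavfile

SAMPLE_RATE = 44100   # Hz (illustrative)
DURATION = 1.0        # s, as in step 1.2.1.2
AMPLITUDE = 0.8       # matches the amplitude set in the sound editor

def save_tone(frequency_hz, filename):
    """Write a single sine tone of the given frequency to a WAV file."""
    t = np.arange(int(SAMPLE_RATE * DURATION)) / SAMPLE_RATE
    tone = AMPLITUDE * np.sin(2.0 * np.pi * frequency_hz * t)
    wavfile.write(filename, SAMPLE_RATE, (tone * 32767).astype(np.int16))

save_tone(440.0, "A4.wav")   # e.g., A4 = 440 Hz
```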
    2. Select two consonant and two dissonant intervals, according to Table 1, in such a way that each consonant interval is close to a dissonant interval.
      NOTE: As an example, consider the consonant intervals of a perfect fifth and an octave and the dissonant intervals of an augmented fourth (tritone) and a major seventh. These are the intervals chosen for the study conducted by the authors.
    3. Generate all possible combinations of notes corresponding to these four intervals in the range between G4 and G6.
      1. For each interval, open a new project in the sound-editing software and use "Import Audio" under the "File" menu to import the two WAV files to be concatenated.
      2. Place the cursor at any point over the second tone and click to select. Click on "Select All" under the "Edit" menu. Under the same menu, click on "Copy."
      3. Place the cursor at any point over the first tone and click. Under the "Edit" menu click on "Move Cursor to Track End" and then click "Paste" under the same menu. Export the audio as described in step 1.2.1.3.
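      NOTE: Steps 1.2.2-1.2.3 can likewise be scripted. The sketch below, under the same assumptions as the previous one, computes equal-tempered frequencies from MIDI note numbers and writes every ascending two-tone combination of the four chosen intervals within G4-G6; the interval names and file names are illustrative.

```python
# Sketch: build all ascending two-tone interval files in the G4-G6 range
# for the four intervals used in the study (tritone, perfect fifth,
# major seventh, octave).
import numpy as np
from scipy.io import wavfile

SAMPLE_RATE = 44100
G4, G6 = 67, 91                                                       # MIDI note numbers
INTERVALS = {"tritone": 6, "fifth": 7, "seventh": 11, "octave": 12}   # size in semitones

def midi_to_hz(n):
    return 440.0 * 2.0 ** ((n - 69) / 12.0)       # equal temperament, A4 = 440 Hz

def tone(frequency_hz, duration=1.0, amplitude=0.8):
    t = np.arange(int(SAMPLE_RATE * duration)) / SAMPLE_RATE
    return amplitude * np.sin(2.0 * np.pi * frequency_hz * t)

for name, semitones in INTERVALS.items():
    for low in range(G4, G6 - semitones + 1):      # keep both notes within G4-G6
        pair = np.concatenate([tone(midi_to_hz(low)),
                               tone(midi_to_hz(low + semitones))])
        wavfile.write(f"{name}_{low}.wav", SAMPLE_RATE,
                      (pair * 32767).astype(np.int16))
```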
    4. Use a random sequence generator to produce sequences consisting of 100 intervals generated pseudorandomly in such a way that each of the four different intervals occurs exactly 25 times30. To do this, use the random permutation function in the statistical analysis software (see the Table of Materials). Input the four intervals as arguments and create a loop that repeats this process 25 times.
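      NOTE: The pseudorandomization of step 1.2.4 amounts to chaining 25 independent permutations of the four intervals. A minimal Python sketch is given below, assuming numpy (the protocol itself uses the statistical analysis software listed in the Table of Materials); the interval labels stand in for the corresponding WAV files.

```python
# Sketch: 100-interval sequence in which each interval occurs exactly 25 times.
import numpy as np

rng = np.random.default_rng()
intervals = ["tritone", "fifth", "seventh", "octave"]

# Permute the four intervals 25 times and chain the permutations.
sequence = [iv for _ in range(25) for iv in rng.permutation(intervals)]

assert len(sequence) == 100
assert all(sequence.count(iv) == 25 for iv in intervals)
```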
    5. Use behavioral research software to generate two distinct runs. Load a sequence of 100 intervals in WAV format for each run and associate the identification of each interval with a single trial30.
      NOTE: Here, E-Prime behavioral research software is used. Other equivalent behavioral research software can be used.
  3. Explain to participants that they will listen to two sequences of 100 intervals each, where each sequence is associated with a different task and with its own set of instructions. Tell participants that, in both runs, the next interval will be played only when a valid key is pressed.
    NOTE: Once an interval recognition sequence commences, it should not be interrupted, so the course of action should be made as clear as possible to all participants beforehand.
    1. Have the participants sit down in front of a laptop computer and wear the provided headphones. Use good-quality over-the-ear headphones. Adjust the sound level to a comfortable level for each subject.
    2. If using the behavioral research software described here, open the tasks created in step 1.2.5 with E-Run. In the window that appears, enter the session and subject number and click "OK." Use the session number to distinguish between runs for each participant.
      NOTE: The instructions for the task at hand will appear on screen, followed by the beginning of the task itself.
      1. First, in a 2-alternative forced-choice task, simply have the participants identify whether the intervals they hear are consonant or dissonant. Have the participant press "C" on the computer keyboard for consonant and "D" for dissonant.
        NOTE: Since all participants are expected to have musical training at a conservatory level, they are all expected to be able to distinguish between patently consonant and patently dissonant intervals. The first task serves, in a sense, as confirmation that this is indeed the case.
      2. Second, in a 4-alternative forced-choice task, have the participants identify the intervals themselves. Have the participants press the numerals "4," "5," "7," and "8" on the computer keyboard to identify the intervals of an augmented fourth, perfect fifth, major seventh, and octave, respectively.
    3. At the end of each task, press "OK" to automatically save the results for each participant in an individual E-DataAid 2.0 file labeled with the subject and session numbers and with the extension .edat2.
    4. Use statistical analysis software (e.g., Matlab, SPSS Statistics, or an open-source alternative) to calculate the success rates for each task (i.e., the percentage of correct answers when classifying intervals as consonant or dissonant and when identifying the intervals themselves), both individually and as a group32.
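      NOTE: The success-rate calculation of step 1.3.4 reduces to the percentage of matching (presented, response) pairs per task. The following is a minimal Python sketch with toy data, assuming the responses have already been exported from the behavioral software into simple per-participant tables; the layout shown is illustrative.

```python
# Sketch: per-participant and group success rates for an identification task.
import numpy as np

def success_rate(presented, responses):
    """Percentage of trials on which the response matches the presented interval."""
    presented, responses = np.asarray(presented), np.asarray(responses)
    return 100.0 * np.mean(presented == responses)

# Toy example with two participants and four trials each (real runs have 100 trials).
per_participant = [
    (["4", "5", "7", "8"], ["4", "5", "7", "7"]),   # participant 1: 75% correct
    (["5", "8", "4", "7"], ["5", "8", "4", "7"]),   # participant 2: 100% correct
]
rates = [success_rate(p, r) for p, r in per_participant]
print(f"group: {np.mean(rates):.2f} +/- {np.std(rates, ddof=1):.2f} %")
```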

2. fMRI Experiment

  1. Preparation for the fMRI session.
    1. Generate sequences of the same intervals as in step 1.2.3, again composed of two consecutive pure tones with a duration of 1 s each.
      NOTE: The vocal range of the participants must now be taken into account, and all notes must fall comfortably within the singing range of each participant.
      1. Use a random sequence generator to create a randomized sequence of 30 intervals for the listen-only trials30. For the singing trials, create a pseudorandomized sequence of 120 intervals for the participants to listen to a specific interval and then match this target interval with their singing voices. For the pseudorandomized sequence, use the same method as described in step 1.2.4, with the 4 intervals as arguments once again, but now repeating this process 30 times.
      2. Following the same procedure as in step 1.2.5, use behavioral research software to generate three distinct runs, each consisting initially of 10 silent baseline trials, followed by 10 consecutive listen-only trials, and finally by 40 consecutive singing trials.
        NOTE: During the listen-only trials, the intervals appear in random order, while during the singing trials, the four intervals appear in pseudorandomized order, in such a manner that each interval is eventually presented exactly 10 times. The duration of each trial is 10 s, so one whole run lasts 10 min. Since each subject goes through 3 experimental runs, the total duration of the experiment is 30 min. However, allowing for the participants to enter and exit the scanner, for time to set up and test the microphone, for time to obtain the anatomical scan, and for time between functional runs, approximately 1 h of scanner time should be allotted to each participant.
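        NOTE: The trial list of one run can be sketched as follows in Python (assuming numpy; the condition and interval labels are illustrative): 10 silent baseline trials, 10 listen-only trials in random order, and 40 singing trials in which each interval occurs exactly 10 times.

```python
# Sketch: trial list for one functional run (60 trials x 10 s = 10 min).
import numpy as np

rng = np.random.default_rng()
intervals = ["tritone", "fifth", "seventh", "octave"]

baseline = [("baseline", "silence")] * 10
listen_only = [("listen", iv) for iv in rng.choice(intervals, size=10)]            # random order
singing = [("sing", iv) for _ in range(10) for iv in rng.permutation(intervals)]   # 10 x 4, pseudorandom

run = baseline + listen_only + singing
assert len(run) == 60
```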
    2. Explain to the participants the sequences of trials to be presented, as described in step 2.1.1.2, and respond to any doubts they might have. Instruct the participants to hum the notes without opening their mouths during the singing trials, keeping the lips still while producing an "m" sound.
    3. Connect a non-magnetic, MR-compatible headset to a laptop. Adjust the sound level to a comfortable level for each subject.
    4. Connect a small condenser microphone to an audio interface that is in turn connected to the laptop using a shielded twisted-triplet cable.
      NOTE: The microphone power supply, the audio interface, and the laptop should all be located outside the room housing the scanner.
    5. Check the microphone frequency response.
      NOTE: The purpose of this test is to confirm that the microphone behaves as expected inside the scanner.
      1. Start a new project in the sound-editing software and select the condenser microphone as the input device.
      2. Generate a 440 Hz test tone with a duration of 10 s, as described in section 1.2.1, with the appropriate values for frequency and duration.
      3. Using the default sound reproduction software on the laptop, press "Play" to send the test tone through the headphones at locations inside (on top of the headrest) and outside (in the control room) the scanner, with the microphone placed between the sides of the headset in each case.
      4. Press "Record" in the sound-editing software to record approximately 1 s of the test tone at each location.
      5. Select "Plot Spectrum" from the "Analyze" menu for each case and compare the response of the microphone to the test tone, both inside and outside the scanner, by checking that the fundamental frequency of the signal received at each location is 440 Hz.
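      NOTE: The spectral check of step 2.1.5.5 can also be performed with a short script instead of the editor's "Plot Spectrum" window. The following is a minimal Python sketch, assuming numpy/scipy and that the ~1 s recording has been exported as a WAV file; the file name is illustrative.

```python
# Sketch: verify that the dominant frequency of the recorded test tone is ~440 Hz.
import numpy as np
from scipy.io import wavfile

rate, data = wavfile.read("test_tone_inside_scanner.wav")
if data.ndim > 1:                      # keep one channel if the recording is stereo
    data = data[:, 0]
spectrum = np.abs(np.fft.rfft(data))
freqs = np.fft.rfftfreq(len(data), d=1.0 / rate)
peak = freqs[np.argmax(spectrum)]
print(f"dominant frequency: {peak:.1f} Hz (expected ~440 Hz)")
```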
    6. Tape the condenser microphone to the participant's neck, just below the larynx.
    7. Have the participant wear the headset. Place the participant in a magnetic resonance (MR) scanner.
  2. fMRI session.
    1. At the beginning of the session, open the magnetic resonance user interface (MRUI) software package. Use the MRUI to program the acquisition paradigm.
      NOTE: Some variation in the interface is to be expected between different models.
      1. Select the "Patient" option from the onscreen menu. Enter the participant's name, age, and weight.
      2. Click on the "Exam" button. First, choose "Head" and then "Brain" from the available options.
      3. Select "3D" and then "T1 isometric," with the following values for the relevant parameters: Repetition Time (TR) = 10.2 ms, Echo Time (TE) = 4.2 ms, Flip Angle = 90°, and Voxel Size = 1 x 1 x 1 mm3.
        NOTE: For each participant, a T1-weighted anatomical volume will be acquired using a gradient echo pulse sequence for anatomical reference.
      4. Click on "Program" and select EchoPlanaImage_diff_perf_bold (T2*), with the values of the relevant parameters as follows: TE = 40 ms, TR = 10 s, Acquisition Time (TA) = 3 s, Delay in TR = 7 s, Flip Angle = 90°, Field of View (FOV) = 256 mm2, and Matrix Dimensions = 64 x 64. Use the "Dummy" option to acquire 5 volumes while entering a value of "55" for the total number of volumes.
        NOTE: These values permit the acquisition of functional T2*-weighted whole-head scans according to the sparse sampling paradigm illustrated in Figure 1, where an echo-planar imaging (EPI) "dummy" scan is acquired and discarded to allow for T1 saturation effects. Note that in some MRUIs, the value of TR should be 3 s, as it is taken to be the total time during which the acquisition takes place.
      5. Click "Copy" to make a copy of this sequence. Place the cursor at the bottom of the list of sequences and then click "Paste" twice to set up three consecutive sparse sampling sequences.
      6. Click "Start" to begin the T1-weighted anatomical volume acquisition.
      7. Present three runs to the participant, with the runs as described in step 2.1.1.2. Synchronize the start of the runs with the acquisition by the scanner using the scanner trigger-box.
        1. Follow the same procedure as described in section 1.3.2 to begin each one of the three runs, differentiating between runs using the session number. Save the results of three complete runs using the same procedure described in step 1.3.3.
          NOTE: The timing of the trial presentations is systematically jittered by ±500 ms.

Figure 1: Sparse-sampling design. (A) Timeline of events within a trial involving only listening to a two-tone interval (2 s), without subsequent overt reproduction. (B) Timeline of events within a trial involving both listening and singing tasks.

3. Data Analysis

  1. Preprocess the functional data using software designed for the analysis of brain imaging data sequences following standard procedures33.
    NOTE: All of the data processing is done using the same software.
    1. Use the provided menu option to realign the images to the first volume, then resample and spatially normalize them (final voxel size: 2 x 2 x 2 mm3) to standard Montreal Neurological Institute (MNI) stereotactic space34.
    2. Use the provided menu option to smooth the image using an isotropic, 8 mm, Full Width at Half Maximum (FWHM) Gaussian kernel.
    3. To model the BOLD response, select a single-bin Finite Impulse Response (FIR) basis function (order 1), equivalent to a boxcar function spanning the time of volume acquisition (3 s)28.
      NOTE: Sparse-sampling protocols, such as this one, do not generally require the FIR to be convolved with the hemodynamic response function, as is commonly the case for event-related fMRI.
    4. Apply a high-pass filter to the BOLD response for each event (1,000 s for the "singing network" and 360 s elsewhere).
      NOTE: Modelling all singing tasks together will amount to a block of 400 s35.
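      NOTE: The Python sketch below illustrates, conceptually, the model described in steps 3.1.3-3.1.4: one unconvolved single-bin boxcar regressor per condition, plus a discrete-cosine set implementing the high-pass filter. It is not the SPM implementation itself; the number of scans, condition labels, and cutoff values are illustrative.

```python
# Sketch: sparse-sampling design matrix with single-bin boxcar regressors
# (no HRF convolution) and a discrete cosine high-pass basis.
import numpy as np

TR = 10.0                      # s, one volume per trial (sparse sampling)
n_scans = 60                   # 10 baseline + 10 listen-only + 40 singing trials
conditions = {"listen": range(10, 20), "sing": range(20, 60)}

# One column per condition: 1 for scans acquired during that condition, 0 elsewhere.
X = np.zeros((n_scans, len(conditions)))
for j, scans in enumerate(conditions.values()):
    X[list(scans), j] = 1.0

def dct_highpass_basis(n_scans, tr, cutoff_s):
    """Discrete cosine regressors modeling drifts slower than the cutoff period."""
    order = int(np.floor(2 * n_scans * tr / cutoff_s)) + 1
    t = np.arange(n_scans)
    return np.column_stack([np.cos(np.pi * k * (2 * t + 1) / (2 * n_scans))
                            for k in range(1, order)])

confounds = dct_highpass_basis(n_scans, TR, cutoff_s=360.0)
```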

Results

All 11 participants in our experiment were female vocal students at the conservatory level, and they performed well enough in the interval recognition tasks to be selected for scanning. The success rate for the interval identification task was 65.72 ± 21.67%, which, as expected, is lower than the success rate for classifying intervals as consonant or dissonant, which was 74.82 ± 14.15%.

In order to validate the basic desi...

Discussion

This work describes a protocol in which singing is used as a means of studying brain activity during the production of consonant and dissonant intervals. Even though singing provides what is possibly the simplest method for the production of musical intervals22, it does not allow for the production of chords. However, although most physical characterizations of the notion of consonance rely, to some degree, on the superposition of simultaneous notes, a number of studies have shown that intervals c...

Disclosures

The authors declare no conflicts of interest.

Acknowledgments

The authors acknowledge financial support for this research from Secretaría de Salud de México (HIM/2011/058 SSA. 1009), CONACYT (SALUD-2012-01-182160), and DGAPA UNAM (PAPIIT IN109214).

Materials

Name | Company | Catalog Number / Version
Achieva 1.5-T magnetic resonance scanner | Philips | Release 6.4
Audacity | Open source | 2.0.5
Audio interface | Tascam | US-144MKII
Audiometer | Brüel & Kjaer | Type 1800
E-Prime Professional | Psychology Software Tools, Inc. | 2.0.0.74
Matlab | Mathworks | R2014A
MRI-Compatible Insert Earphones | Sensimetrics | S14
Praat | Open source | 5.4.12
Pro-audio condenser microphone | Shure | SM93
SPSS Statistics | IBM | 20
Statistical Parametric Mapping | Wellcome Trust Centre for Neuroimaging | 8

References

  1. Burns, E. Intervals, scales, and tuning. The psychology of music. Deutsch, D. , Academic Press. London. 215-264 (1999).
  2. Lundin, R. W. Toward a cultural theory of consonance. J. Psychol. 23, 45-49 (1947).
  3. Fritz, T., Jentschke, S., et al. Universal recognition of three basic emotions in music. Curr. Biol. 19, 573-576 (2009).
  4. Schellenberg, E. G., Trehub, S. E. Frequency ratios and the discrimination of pure tone sequences. Percept. Psychophys. 56, 472-478 (1994).
  5. Trainor, L. J., Heinmiller, B. M. The development of evaluative responses to music. Infant Behav. Dev. 21 (1), 77-88 (1998).
  6. Zentner, M. R., Kagan, J. Infants' perception of consonance and dissonance in music. Infant Behav. Dev. 21 (1), 483-492 (1998).
  7. Terhardt, E. Pitch, consonance, and harmony. J. Acoust. Soc. Am. 55, 1061 (1974).
  8. Minati, L., et al. Functional MRI/event-related potential study of sensory consonance and dissonance in musicians and nonmusicians. Neuroreport. 20, 87-92 (2009).
  9. Helmholtz, H. L. F. On the sensations of tone. , New York: Dover. (1954).
  10. McDermott, J. H., Lehr, A. J., Oxenham, A. J. Individual differences reveal the basis of consonance. Curr. Biol. 20, 1035-1041 (2010).
  11. Cousineau, M., McDermott, J. H., Peretz, I. The basis of musical consonance as revealed by congenital amusia. Proc. Natl. Acad. Sci. USA. 109, 19858-19863 (2012).
  12. Plomp, R., Levelt, W. J. M. Tonal Consonance and Critical Bandwidth. J. Acoust. Soc. Am. 38, 548-560 (1965).
  13. Kameoka, A., Kuriyagawa, M. Consonance theory part I: Consonance of dyads. J. Acoust. Soc. Am. 45, 1451-1459 (1969).
  14. Tramo, M. J., Bharucha, J. J., Musiek, F. E. Music perception and cognition following bilateral lesions of auditory cortex. J. Cogn. Neurosci. 2, 195-212 (1990).
  15. Schellenberg, E. G., Trehub, S. E. Children's discrimination of melodic intervals. Dev. Psychol. 32 (6), 1039-1050 (1996).
  16. Peretz, I., Blood, A. J., Penhune, V., Zatorre, R. J. Cortical deafness to dissonance. Brain. 124, 928-940 (2001).
  17. Mcdermott, J. H., Schultz, A. F., Undurraga, E. A., Godoy, R. A. Indifference to dissonance in native Amazonians reveals cultural variation in music perception. Nature. 535, 547-550 (2016).
  18. Blood, A. J., Zatorre, R. J., Bermudez, P., Evans, A. C. Emotional responses to pleasant and unpleasant music correlate with activity in paralimbic brain regions. Nat. Neurosci. 2, 382-387 (1999).
  19. Pallesen, K. J., et al. Emotion processing of major, minor, and dissonant chords: A functional magnetic resonance imaging study. Ann. N. Y. Acad. Sci. 1060, 450-453 (2005).
  20. Foss, A. H., Altschuler, E. L., James, K. H. Neural correlates of the Pythagorean ratio rules. Neuroreport. 18, 1521-1525 (2007).
  21. Limb, C. J., Braun, A. R. Neural substrates of spontaneous musical performance: An fMRI study of jazz improvisation. PLoS ONE. 3, (2008).
  22. Zarate, J. M. The neural control of singing. Front. Hum. Neurosci. 7, 237 (2013).
  23. Larson, C. R., Altman, K. W., Liu, H., Hain, T. C. Interactions between auditory and somatosensory feedback for voice F0 control. Exp. Brain Res. 187, 613-621 (2008).
  24. Belyk, M., Pfordresher, P. Q., Liotti, M., Brown, S. The neural basis of vocal pitch imitation in humans. J. Cogn. Neurosci. 28, 621-635 (2016).
  25. Kleber, B., Veit, R., Birbaumer, N., Gruzelier, J., Lotze, M. The brain of opera singers: Experience-dependent changes in functional activation. Cereb. Cortex. 20, 1144-1152 (2010).
  26. Jürgens, U. Neural pathways underlying vocal control. Neurosci. Biobehav. Rev. 26, 235-258 (2002).
  27. Kleber, B., Birbaumer, N., Veit, R., Trevorrow, T., Lotze, M. Overt and imagined singing of an Italian aria. Neuroimage. 36, 889-900 (2007).
  28. Kleber, B., Zeitouni, A. G., Friberg, A., Zatorre, R. J. Experience-dependent modulation of feedback integration during singing: role of the right anterior insula. J. Neurosci. 33, 6070-6080 (2013).
  29. Zarate, J. M., Zatorre, R. J. Experience-dependent neural substrates involved in vocal pitch regulation during singing. Neuroimage. 40, 1871-1887 (2008).
  30. González-García, N., González, M. A., Rendón, P. L. Neural activity related to discrimination and vocal production of consonant and dissonant musical intervals. Brain Res. 1643, 59-69 (2016).
  31. Oldfield, R. C. The assessment and analysis of handedness: The Edinburgh inventory. Neuropsychologia. 9, 97-113 (1971).
  32. Samuels, M. L., Witmer, J. A., Schaffner, A. Statistics for the Life Sciences. , Pearson. Harlow. (2015).
  33. Eickhoff, S. B., et al. A new SPM toolbox for combining probabilistic cytoarchitectonic maps and functional imaging data. NeuroImage. 25, 1325-1335 (2005).
  34. Evans, A. C., Kamber, M., Collins, D. L., MacDonald, D. An MRI-based probabilistic atlas of neuroanatomy. Magnetic Resonance Scanning and Epilepsy. Shorvon, S. D., Fish, D. R., Andermann, F., Bydder, G. M., Stefan, H. 264, 263-274 (1994).
  35. Ashburner, J., et al. SPM8 Manual. , Wellcome Trust Centre for Neuroimaging. London. (2013).
  36. Özdemir, E., Norton, A., Schlaug, G. Shared and distinct neural correlates of singing and speaking. Neuroimage. 33, 628-635 (2006).
  37. Brown, S., Ngan, E., Liotti, M. A larynx area in the human motor cortex. Cereb. Cortex. 18, 837-845 (2008).
  38. Worsley, K. J. Statistical analysis of activation images. Functional MRI: An introduction to methods. , Oxford University Press. Oxford. 251-270 (2001).
  39. FSL Atlases. , Available from: https://fsl.fmrib.ox.ac.uk/fsl/fslwiki/Atlases (2015).
  40. Bidelman, G. M., Krishnan, A. Neural correlates of consonance, dissonance, and the hierarchy of musical pitch in the human brainstem. J. Neurosci. 29, 13165-13171 (2009).
  41. McLachlan, N., Marco, D., Light, M., Wilson, S. Consonance and pitch. J. Exp. Psychol. – Gen. 142, 1142-1158 (2013).
  42. Thompson, W. F. Intervals and scales. The psychology of music. Deutsch, D. , Academic Press. London. 107-140 (1999).
  43. Hurwitz, R., Lane, S. R., Bell, R. A., Brant-Zawadzki, M. N. Acoustic analysis of gradient-coil noise in MR imaging. Radiology. 173, 545-548 (1989).
  44. Ravicz, M. E., Melcher, J. R., Kiang, N. Y. -S. Acoustic noise during functional magnetic resonance imaging. J Acoust. Soc. Am. 108, 1683-1696 (2000).
  45. Cho, Z. H., et al. Analysis of acoustic noise in MRI. Magn. Reson. Imaging. 15, 815-822 (1997).
  46. Belin, P., Zatorre, R. J., Hoge, R., Evans, A. C., Pike, B. Event-related fMRI of the auditory cortex. Neuroimage. 429, 417-429 (1999).
  47. Hall, D. A., et al. "Sparse" temporal sampling in auditory fMRI. Hum. Brain Mapp. 7, 213-223 (1999).
  48. Ternström, S., Sundberg, J. Acoustical factors related to pitch precision in choir singing. Speech Music Hear. Q. Prog. Status Rep. 23, 76-90 (1982).
  49. Ternström, S., Sundberg, J. Intonation precision of choir singers. J. Acoust. Soc. Am. 84, 59-69 (1988).
