0:05 Introduction
1:00 Affective Modulation Experiment
3:32 EEG Data Analysis
6:45 ERP Source Analysis
8:01 ECG Data and Behavioral Assessment Analysis
8:35 Results: Effective Modulation of Late Positive Potential (LPP) by Repetitive Religious Chanting
10:30 Conclusion
Transcript
Chanting and praying are among the most popular religious practices. This protocol could help scientists examine the neurophysiological response to repetitive religious chanting using event-related potentials (ERPs). The ERP technique can differentiate between early- and late-stage neural information processing, analogous to the first and second thoughts of mind processing described in Buddhist teachings.
Following this protocol, researchers can examine the effects of religious chanting or other traditional practices to identify feasible ways to help people ameliorate their emotional suffering. To begin this study, recruit participants with at least one year of experience in chanting the name of Amitabha Buddha. During the experiment, record the EEG data using a 128-channel EEG system consisting of an amplifier, head box, EEG cap, and two desktop computers, and record the ECG data using a physiological data recording system.
To show neutral and negative pictures from the International Affective Picture System, or IAPS, use stimulus presentation software on a desktop computer. Present the pictures on a monitor positioned 75 centimeters from the participant's eyes, with visual angles of 15 degrees vertically and 21 degrees horizontally.
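As a quick check of the display geometry, the picture size on the monitor implied by these visual angles can be computed from the viewing distance. This is a small MATLAB sketch using only the distance and angles stated above; it is not part of the original protocol.

    % Approximate on-screen picture size implied by the stated viewing
    % distance (75 cm) and visual angles (15 x 21 degrees).
    distance_cm = 75;                          % eye-to-monitor distance
    height_cm = 2 * distance_cm * tand(15/2);  % vertical extent, roughly 19.7 cm
    width_cm  = 2 * distance_cm * tand(21/2);  % horizontal extent, roughly 27.8 cm
    fprintf('Picture size: %.1f x %.1f cm\n', width_cm, height_cm);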
Use a block design for the experiment, as it may more effectively elicit emotion-related components. Provide a brief practice run to allow the participants to familiarize themselves with each condition, and use video monitoring to ensure that the participants do not fall asleep. Begin the experiment with the religious chanting condition.
Ask the participants to chant the four characters of the name of Amitabha Buddha for 40 seconds while imagining Amitabha Buddha, following the scriptures of the Pure Land school. During the first 20 seconds, show the participants the image of Amitabha Buddha, and for the next 20 seconds, show them the IAPS images.
Ask the participants to observe the pictures carefully. Show each picture for approximately 1.8 to 2.2 seconds, with an inter-stimulus interval of 0.4 to 0.6 seconds. After each session, allow a rest period of 20 seconds to counter the potential residual effects of chanting or picture-viewing on the next session.
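A minimal MATLAB sketch of how the jittered picture duration and inter-stimulus interval just described could be drawn for each trial, assuming uniform jitter within the stated ranges; the variable names and the number of pictures per block are illustrative assumptions, not part of the original protocol.

    % Draw a jittered picture duration (1.8-2.2 s) and inter-stimulus
    % interval (0.4-0.6 s) for each trial; uniform jitter is an assumption.
    nTrials    = 20;                             % pictures per block (illustrative)
    pictureDur = 1.8 + 0.4 * rand(nTrials, 1);   % picture duration in seconds
    isi        = 0.4 + 0.2 * rand(nTrials, 1);   % inter-stimulus interval in seconds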
For the nonreligious chanting condition, ask the participants to chant the four characters of the name of Santa Claus for 40 seconds while imagining Santa Claus. During the first 20 seconds, show the participants the image of Santa Claus, and for the next 20 seconds, show them the IAPS images. For the control condition, ask the participants to keep silent for 40 seconds.
During the first 20 seconds, show the participants a blank image, and for the next 20 seconds, show them the IAPS images. To process and analyze the EEG data, use the open source software EEGLAB. To maintain a reasonable data file size, use the EEGLAB function pop_resample.
Click on Tools followed by Change sampling rate to resample the data from 1000 Hertz to 250 Hertz. Next, filter the data using the EEGLAB function pop_eegfiltnew. Click on Tools followed by Filter the data, then select Basic FIR filter (new, default) to filter the data with a finite impulse response filter with a 0.1 to 100 Hertz pass band.
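The same resampling and band-pass filtering steps can also be scripted with the EEGLAB functions named above; a minimal sketch, assuming the continuous data have already been loaded into the standard EEG structure:

    % Downsample from 1000 Hz to 250 Hz and apply a 0.1-100 Hz FIR band-pass
    % using the EEGLAB functions named in the protocol.
    EEG = pop_resample(EEG, 250);         % Tools > Change sampling rate
    EEG = pop_eegfiltnew(EEG, 0.1, 100);  % Tools > Filter the data > Basic FIR filter (new, default)
    EEG = eeg_checkset(EEG);              % verify the dataset's internal consistency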
To reduce the noise from the alternating current, click on Tools followed by Filter the data, and select Notch filter the data instead of pass band. Then, filter the data with a finite impulse response filter with a 47 to 53 Hertz stop band. Next, click on Plot and then Channel data (scroll) to visually inspect the data and remove strong artifacts generated by eye and muscle movements.
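The notch filter and the channel-data scroll plot have scripted equivalents as well; a sketch assuming 50 Hertz line noise, as implied by the 47 to 53 Hertz stop band:

    % Apply a 47-53 Hz FIR notch filter (revfilt = 1 turns the band-pass into a
    % band-stop), then open the scrolling channel-data plot for visual inspection.
    EEG = pop_eegfiltnew(EEG, 47, 53, [], 1);  % Tools > Filter the data > Notch filter
    pop_eegplot(EEG, 1, 1, 1);                 % Plot > Channel data (scroll)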
Then, click on Tools, Interpolate electrodes, and select from the data channels to reconstruct the bad channels using spherical interpolation. Next, click on Tools and Run ICA to run an independent component analysis with the open source algorithm runica. Then, click on Tools again, followed by Reject data using ICA and Reject components by map, to remove the independent components corresponding to eye movements, blinks, muscle movement, and line noise.
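A scripted sketch of the interpolation, ICA decomposition, and component-map review steps; the bad-channel indices and the number of component maps to display are placeholders that would come from visual inspection of the actual data:

    % Reconstruct bad channels by spherical interpolation, run ICA with runica,
    % and review component maps to mark artifactual components for rejection.
    badChans = [12 57];                            % example bad channels (hypothetical)
    EEG = pop_interp(EEG, badChans, 'spherical');  % Tools > Interpolate electrodes
    EEG = pop_runica(EEG, 'icatype', 'runica');    % Tools > Run ICA
    pop_selectcomps(EEG, 1:35);                    % Tools > Reject data using ICA > Reject components by map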
To reconstruct the data using the remaining independent components, click on Tools followed by Remove components. Next, click on Tools followed by Filter the data and select Basic FIR filter (new, default) to filter the data with a 30 Hertz low-pass filter. Then, click on Tools followed by Extract epochs to obtain ERP data by extracting and averaging time-locked epochs for each condition, with a time window of negative 200 to zero milliseconds as the baseline and zero to 800 milliseconds as the ERP.
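A sketch of the component removal, 30 Hertz low-pass filtering, epoch extraction, and baseline correction; the rejected component indices and the event code marking picture onset are placeholders:

    % Remove the marked artifact components, low-pass filter at 30 Hz, then
    % extract -200 to 800 ms epochs around picture onset and baseline-correct.
    artifactComps = [1 3 7];                          % marked eye/muscle/line-noise ICs (hypothetical)
    EEG = pop_subcomp(EEG, artifactComps, 0);         % Tools > Remove components
    EEG = pop_eegfiltnew(EEG, [], 30);                % 30 Hz low-pass FIR filter
    EEG = pop_epoch(EEG, {'pic_onset'}, [-0.2 0.8]);  % 'pic_onset' is a placeholder event code
    EEG = pop_rmbase(EEG, [-200 0]);                  % baseline: -200 to 0 ms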
Next, click on Tools followed by Re-reference to re-reference the ERP data to the average of the left and right mastoid channels. After repeating the above steps for the data sets from all participants, define time windows for N1 and the late positive potential, or LPP, based on established theories and the current data. Then, use a paired t-test to assess the neutral versus negative picture difference at the N1 component and the LPP component in each of the three conditions.
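The mastoid re-referencing and the paired t-test can be scripted as below; the mastoid channel indices and the per-participant amplitude vectors are placeholders:

    % Re-reference to the average of the left and right mastoid channels, then
    % compare negative vs. neutral mean LPP amplitudes with a paired t-test.
    mastoids = [57 100];                        % left/right mastoid channel indices (hypothetical)
    EEG = pop_reref(EEG, mastoids);             % Tools > Re-reference

    % lppNeg and lppNeu: one mean LPP amplitude per participant (hypothetical vectors)
    [h, p, ci, stats] = ttest(lppNeg, lppNeu);  % paired t-test on negative vs. neutral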
Next, perform a region-of-interest analysis on the N1 and LPP components by averaging relevant channels to represent a region. Then, compare the differences at N1 and LPP separately using repeated-measures ANOVA and post-hoc statistics in statistical analysis software. Use the open source software SPM to perform the ERP source analysis.
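Before moving on to the source analysis, here is one way the region-of-interest averaging and the repeated-measures ANOVA described above might be scripted, assuming MATLAB's Statistics and Machine Learning Toolbox; the channel indices and the per-participant difference scores are placeholders:

    % Average channels within a parietal region of interest, then run a
    % repeated-measures ANOVA on the negative-minus-neutral LPP difference
    % across the three conditions (religious, nonreligious, silent viewing).
    roiChans = [62 67 72 77];                      % parietal ROI channel indices (hypothetical)
    roiLPP   = mean(lppAllChans(roiChans, :), 1);  % lppAllChans: channels x participants (hypothetical)

    % diffRel, diffNonrel, diffSilent: per-participant difference scores (hypothetical column vectors)
    t  = table(diffRel, diffNonrel, diffSilent, 'VariableNames', {'rel', 'nonrel', 'silent'});
    rm = fitrm(t, 'rel-silent ~ 1', 'WithinDesign', ...
               table((1:3)', 'VariableNames', {'Condition'}));
    ranova(rm)                                     % repeated-measures ANOVA table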
Link the coordinate system of the EEG cap sensors to that of a standard structural MRI image by landmark-based co-registration. In SPM, click on Batch, then SPM, M/EEG, Source reconstruction, and Head model specification. Next, perform the forward computation to calculate the effect of each dipole on the cortical mesh as measured at the EEG sensors.
Under the same Batch Editor, click on SPM, then M/EEG, Source reconstruction, and Source inversion. To perform the inverse reconstruction, use the greedy-search-based multiple sparse priors algorithm in the third step. Choose MSP/GS for the Inversion type in the Source Inversion window.
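The head model specification, forward computation, and MSP/GS inversion are typically configured once in the Batch Editor and saved; a saved batch can then be re-run from a script. This is a minimal sketch, and the batch file name is a placeholder:

    % Run a saved SPM M/EEG source-reconstruction batch (head model specification,
    % forward computation, and MSP/GS source inversion set up in the Batch Editor).
    spm('defaults', 'eeg');
    load('source_reconstruction_batch.mat', 'matlabbatch');  % placeholder file name
    spm_jobman('run', matlabbatch);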
Determine the difference between conditions using general linear modeling in SPM. After setting the significance level to P < 0.05, under Batch Editor, click on SPM, then Stats and Factorial design specification. To process and analyze the ECG data, use physiological data processing software.
To calculate the mean heart rate for each condition in EEGLAB, click on Tools followed by FMRIB Tools and Detect QRS events. For the behavioral assessment analysis, ask the participants to rate their belief in the efficacy of chanting each name on a one to nine scale, where one is considered the weakest and nine the strongest. Results for participants' belief in chanting revealed an average score of 8.16 for Amitabha Buddha, 3.26 for Santa Claus, and 1.95 for the blank control condition.
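Returning to the QRS-detection step above, the FMRIB plug-in can also be called from a script, and the mean heart rate can then be derived from the detected QRS event latencies; the ECG channel index is a placeholder. Restricting the latencies to each 40-second session would give the per-condition heart rate.

    % Detect QRS events on the ECG channel with the FMRIB plug-in, then estimate
    % the mean heart rate from the inter-beat intervals.
    ecgChan = 129;                                         % ECG channel index (hypothetical)
    EEG = pop_fmrib_qrsdetect(EEG, ecgChan, 'QRS', 'no');  % Tools > FMRIB Tools > Detect QRS events
    qrsLat = [EEG.event(strcmp({EEG.event.type}, 'QRS')).latency];  % QRS latencies in samples
    ibi_s  = diff(qrsLat) / EEG.srate;                     % inter-beat intervals in seconds
    meanHR = 60 / mean(ibi_s);                             % mean heart rate in beats per minute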
The representative channel over the parietal lobe demonstrated that the chanting conditions had different effects on the early and late processing of neutral and negative pictures, shown in the time windows of N1 and LPP, respectively. The ERP results showed an increased N1 while viewing the fearful pictures in all three conditions. The negative images induced stronger central brain activities than the neutral images, and the increases were comparable across the three conditions.
The ERP also demonstrated an increased LPP in the nonreligious chanting and no chanting conditions. However, the LPP induced by fearful pictures was barely visible when the participants chanted Amitabha Buddha's name. A region-of-interest analysis revealed that the differences in the N1 component were similar across the three conditions.
However, the difference in the LPP component was much smaller in the religious chanting condition than in the non-religious chanting condition and the silent viewing condition. Source analysis revealed that, compared with neutral pictures, negative pictures induced more parietal activation in the non-religious chanting and no chanting conditions. In contrast, this negative picture-induced activation largely disappeared in the religious chanting condition.
A significant difference in heart rate was detected between the fearful and neutral pictures in the non-religious chanting and no chanting conditions. However, no such difference was found in the religious chanting condition. This same protocol can also be used in functional neuroimaging studies to reveal more specifically the brain regions involved in religious chanting.
This study demonstrates a method for examining how repetitive religious chanting and other similar practices can influence neurophysiological responses and reduce the suffering induced by negative stimuli.
The present event-related potential (ERP) study provides a unique protocol for investigating how religious chanting can modulate negative emotions. The results demonstrate that the late positive potential (LPP) is a robust neurophysiological response to negative emotional stimuli and can be effectively modulated by repetitive religious chanting.