Method Article
The current study aims to provide a step-by-step tutorial for calculating the magnitude of multisensory integration effects, in an effort to facilitate translational research across diverse clinical populations.
Multisensory integration research investigates how the brain processes simultaneous sensory information. Research on animals (mainly cats and primates) and humans reveals that intact multisensory integration is crucial for functioning in the real world, including both cognitive and physical activities. Much of the research conducted over the past several decades documents multisensory integration effects using diverse psychophysical, electrophysiological, and neuroimaging techniques. While the presence of such effects has been widely reported, the methods used to determine their magnitude vary and typically face much criticism. In what follows, limitations of previous behavioral studies are outlined, and a step-by-step tutorial for calculating the magnitude of multisensory integration effects using robust probability models is provided.
Interactions across sensory systems are essential for everyday function. While multisensory integration effects are measured across a wide array of populations using assorted sensory combinations and different neuroscience approaches (including but not limited to psychophysical, electrophysiological, and neuroimaging methodologies)1,2,3,4,5,6,7,8,9, a gold standard for quantifying multisensory integration is currently lacking. Given that multisensory experiments typically contain a behavioral component, reaction time (RT) data are often examined to determine the existence of a well-known phenomenon called the redundant signals effect10. As its name suggests, simultaneous sensory signals provide redundant information, which typically yields quicker RTs. Race and co-activation models are used to explain the above-mentioned redundant signals effect11. Under race models, the unisensory signal that is processed fastest wins the race and is responsible for producing the behavioral response. Evidence for co-activation, by contrast, occurs when responses to multisensory stimuli are quicker than race models predict.
Earlier versions of the race model are inherently controversial12,13: they are regarded by some as overly conservative14,15 and purportedly rest on a questionable assumption of independence between the constituent unisensory detection times within the multisensory condition16. In an effort to address some of these limitations, Colonius & Diederich16 developed a more conventional race model test:
$$F_{AB}(t) \le \min\left[1,\; F_A(t) + F_B(t)\right] \quad \text{for all } t,$$
where the cumulative distribution functions (CDFs) of the unisensory conditions (e.g., A & B; with an upper limit of one) are compared to the CDF of the simultaneous multisensory condition (e.g., AB) for any given latency (t)11,16,17. In general, a CDF represents the proportion of trials in which an RT falls at or below a given latency, i.e., the number of such RTs divided by the total number of stimulus presentations (trials). If the CDF of the actual multisensory condition is less than or equal to the predicted CDF derived from the unisensory conditions,
$$F_{AB}(t) \le \min\left[1,\; F_A(t) + F_B(t)\right],$$
then the race model is accepted and there is no evidence for sensory integration. However, when the multisensory CDF is greater than the predicted CDF derived from the unisensory conditions, the race model is rejected. Rejection of the race model indicates that multisensory interactions from redundant sensory sources combine in a non-linear manner, resulting in a speeding up of RTs (i.e., RT facilitation) to multisensory stimuli.
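To make these quantities concrete, the following minimal sketch (Python with NumPy; the RT arrays and the latency grid are hypothetical illustrations, not protocol data) computes empirical CDFs for two unisensory conditions and one multisensory condition, forms the race-model-predicted CDF min[1, F_A(t) + F_B(t)], and flags latencies at which the inequality is violated:

```python
import numpy as np

def empirical_cdf(rts, t_grid):
    """Proportion of trials with RT <= t, evaluated at each latency t in t_grid."""
    rts = np.asarray(rts, dtype=float)
    t_grid = np.asarray(t_grid, dtype=float)
    return np.mean(rts[:, None] <= t_grid[None, :], axis=0)

# Hypothetical RTs (in ms) for unisensory (A, B) and multisensory (AB) trials.
rt_a  = np.array([310, 295, 330, 350, 305])
rt_b  = np.array([340, 320, 360, 315, 345])
rt_ab = np.array([280, 270, 300, 290, 285])

t_grid = np.arange(250, 401, 10)             # latencies at which CDFs are evaluated
f_a, f_b, f_ab = (empirical_cdf(rt, t_grid) for rt in (rt_a, rt_b, rt_ab))

predicted = np.minimum(1.0, f_a + f_b)       # race model bound: min[1, F_A(t) + F_B(t)]
violation = f_ab - predicted                 # positive values violate the race model

print(t_grid[violation > 0])                 # latencies showing evidence of co-activation
```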
One main hurdle that multisensory researchers face is how best to quantify integration effects. For instance, in the most basic behavioral multisensory paradigm, where participants perform a simple reaction time task, information regarding accuracy and speed is collected. Such multisensory data can be used at face value or manipulated using various mathematical applications, including but not limited to Maximum Likelihood Estimation18,19, CDFs11, and various other statistical approaches. The majority of our previous multisensory studies employed both quantitative and probabilistic approaches, where multisensory integrative effects were calculated by 1) subtracting the mean reaction time (RT) to a multisensory event from the mean RT to the shortest unisensory event, and 2) employing CDFs to determine whether RT facilitation resulted from synergistic interactions facilitated by redundant sensory information8,20,21,22,23. However, the former methodology is likely not sensitive to individual differences in integrative processes, and researchers have since posited that the latter methodology (i.e., CDFs) may provide a better proxy for quantifying multisensory integrative effects24.
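As a point of comparison, the first (mean-based) approach reduces to a one-line computation; a minimal sketch with hypothetical RT arrays:

```python
import numpy as np

# Hypothetical RTs (in ms); the arrays are illustrative, not protocol data.
rt_a  = np.array([310, 295, 330, 350, 305])   # unisensory condition A
rt_b  = np.array([340, 320, 360, 315, 345])   # unisensory condition B
rt_ab = np.array([280, 270, 300, 290, 285])   # multisensory condition AB

# Facilitation: mean RT of the fastest unisensory condition minus the
# mean RT of the multisensory condition (positive = multisensory speed-up).
facilitation = min(rt_a.mean(), rt_b.mean()) - rt_ab.mean()
print(f"Multisensory RT facilitation: {facilitation:.1f} ms")
```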
Gondan and Minakata recently published a tutorial on how to accurately test the Race Model Inequality (RMI), since researchers all too often make numerous errors during the acquisition and pre-processing stages of RT data collection and preparation25. First, the authors posit that it is unfavorable to apply data-trimming procedures in which certain a priori minimum and maximum RT limits are set. They recommend that slow and omitted responses be set to infinity rather than excluded. Second, given that the RMI may be violated at any latency, multiple t-tests are often used to test the RMI at different time points (i.e., quantiles); unfortunately, this practice leads to increased Type I error and substantially reduced statistical power. To avoid these issues, it is recommended that the RMI be tested over one specific time range. Some researchers have suggested testing the fastest quartile of responses (0-25%)26 or pre-identified windows (e.g., 10-25%)24,27, as multisensory integration effects are typically observed during those intervals; however, we argue that the percentile range to be tested must be dictated by the actual dataset (see Protocol Section 5). The problem with relying on published data from young adults or computer simulations is that older adults manifest very different RT distributions, likely due to age-related declines in sensory systems. Significance testing of the race model should therefore be restricted to the violated portions (positive values) of the group-averaged difference wave between the actual and predicted CDFs from the study cohort.
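A minimal sketch of these recommendations for a single participant, assuming omitted responses are stored as `None` and that the CDFs are evaluated at fixed percentiles of the pooled RT distribution (all variable names are hypothetical; group-level testing would average the resulting difference waves across participants):

```python
import numpy as np

def prepare_rts(raw):
    """Set slow/omitted responses (None) to infinity rather than excluding them."""
    return np.array([np.inf if r is None else float(r) for r in raw])

rt_a  = prepare_rts([310, 295, None, 350, 305])   # None marks an omitted response
rt_b  = prepare_rts([340, 320, 360, 315, None])
rt_ab = prepare_rts([280, 270, 300, 290, 285])

# Evaluate the CDFs at fixed percentiles of the pooled finite RT distribution,
# rather than running a separate t-test at every possible latency.
pooled = np.concatenate([rt_a, rt_b, rt_ab])
bins = np.percentile(pooled[np.isfinite(pooled)], np.arange(5, 101, 5))

def cdf(rts, t):
    return np.mean(rts[:, None] <= t[None, :], axis=0)

violation = cdf(rt_ab, bins) - np.minimum(1.0, cdf(rt_a, bins) + cdf(rt_b, bins))

# Restrict subsequent significance testing to the violated (positive) portion
# of the difference wave, as argued above.
violated = violation > 0
print(np.arange(5, 101, 5)[violated], np.round(violation[violated], 3))
```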
To this end, a protective effect of multisensory integration has been demonstrated in healthy older adults using the conventional test of the race model16 and the principles set forth by Gondan and colleagues25. In fact, a greater magnitude of visual-somatosensory RMI violation (a proxy for multisensory integration) was found to be linked to better balance performance, a lower probability of incident falls, and better spatial gait performance28,29.
The objective of the current experiment is to provide researchers with a step-by-step tutorial for calculating the magnitude of multisensory integration effects using the RMI, in order to facilitate the production of translational research studies across many different clinical populations. Note that the data presented in the current study are from recently published visual-somatosensory experiments conducted on healthy older adults28,29, but this methodology can be applied to various cohorts across many different experimental designs, utilizing a wide array of multisensory combinations.
All participants provided written informed consent to the experimental procedures, which were approved by the institutional review board of the Albert Einstein College of Medicine.
1. Participant Recruitment, Inclusion Criteria, and Consent
2. Experimental Design
3. Apparatus & Task
4. Race Model Inequality Data Preparation (Individual Level)
5. Quantification of the Multisensory Effect (Group Level)
The purpose of this study was to provide a step-by-step tutorial of a methodical approach for quantifying the magnitude of visual-somatosensory (VS) integration effects, in order to foster the publication of new multisensory studies using similar experimental designs and setups (see Figure 1). Screenshots of each step and calculation needed to derive the magnitude of multisensory integration effects, as measured by RMI area under the curve (AUC), are delineated above and illustrated in Figures 2-8.
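The RMI AUC named above can be computed, for example, by trapezoidal integration of the positive (violated) portion of the actual-minus-predicted CDF difference wave over the tested percentile range; the sketch below is a minimal illustration under that assumption, using hypothetical values:

```python
import numpy as np

def rmi_auc(violation, percentiles):
    """Area under the positive (violated) portion of the CDF difference wave."""
    pos = np.clip(violation, 0.0, None)            # zero out non-violated values
    # Trapezoidal rule over the percentile axis.
    return np.sum((pos[1:] + pos[:-1]) / 2.0 * np.diff(percentiles))

# Hypothetical group-averaged difference wave at the 10th-90th percentiles.
percentiles = np.arange(10, 91, 10, dtype=float)
violation = np.array([0.02, 0.06, 0.09, 0.07, 0.03, 0.00, -0.02, -0.03, -0.01])

print(f"RMI AUC (proxy for multisensory integration): {rmi_auc(violation, percentiles):.3f}")
```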
The goal of the current study was to detail the process behind the establishment of a robust multisensory integration phenotype. Here, we provide the necessary and critical steps required to derive multisensory integration effects that can be utilized to predict important cognitive and motor outcomes relying on similar neural circuitry. Our overall objective was to provide a step-by-step tutorial for calculating the magnitude of multisensory integration in an effort to facilitate innovative and novel translational multisensory research across diverse clinical populations.
There are no conflicts of interest to report and the authors have nothing to disclose.
The current body of work is supported by the National Institute on Aging at the National Institutes of Health (K01AG049813 to JRM). Supplementary funding was provided by the Resnick Gerontology Center of the Albert Einstein College of Medicine. Special thanks to all the volunteers and research staff for exceptional support with this project.
| Name | Company | Catalog Number | Comments |
| --- | --- | --- | --- |
| stimulus generator | Zenometrics, LLC; Peekskill, NY, USA | n/a | custom-built |
| Excel | Microsoft Corporation | | spreadsheet program |
| Eprime | Psychology Software Tools (PST) | | stimulus presentation software |