The lexical decision task is used to measure word recognition speed. This task can reveal characteristics of the mental lexicon and how the lexicon changes with aging and with neurodegenerative disorders. Many language tasks require the coordination of language and other aspects of cognition.
The lexical decision task does not rely heavily on other cognitive abilities, which may be compromised in some populations, such as patients with dementia. Helping to demonstrate the procedure will be Dalia Garcia, our lab manager. Place the participant in front of a computer monitor at a viewing distance of about 80 centimeters in a normally lit room.
For example, the participant might be told, "Press the left button if the word is a real word."
Instruct the participant to decide as quickly and accurately as possible whether the letter string on the screen is a real word or not by pressing one of two corresponding buttons. Start the experiment with a practice session that includes a small number of trials with one word presented horizontally per trial subtending a visual angle of about five degrees. Divide the experiment into blocks and give short breaks after the practice session and between the blocks to allow participants to rest their eyes and reduce fatigue.
Next, start each new block with a few filler items of common nouns, such as dog, sister, or year, that will not be included in the analysis. Present the items in a random order. Begin each experiment trial with a fixation mark appearing in the center of the screen for 500 milliseconds followed by a blank screen for another 500 milliseconds.
Immediately after the blank screen, present a letter string for 1,500 milliseconds or until the participant responds. Finally, after a response is made, present a blank screen again until 3,000 milliseconds have passed from the beginning of the trial. To begin the analysis, open the output file of the presentation program and obtain the reaction time, measured in milliseconds, for each trial.
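The trial timeline described above can be sketched in base R. The constants follow the protocol; the variable names and the example response time are illustrative, not part of any particular presentation package:

```r
# Trial timeline for one lexical decision trial, in milliseconds.
# Actual timing is handled by the presentation software; this only
# illustrates how the phases add up to a fixed 3,000 ms trial.
fixation_ms     <- 500    # fixation mark in the center of the screen
blank_ms        <- 500    # blank screen before the stimulus
stimulus_max_ms <- 1500   # letter string, or until the participant responds
trial_total_ms  <- 3000   # fixed trial length from fixation onset

# Post-response blank if the participant responds, say, 700 ms
# after stimulus onset:
rt <- 700
post_blank_ms <- trial_total_ms - (fixation_ms + blank_ms + rt)
post_blank_ms  # 1300
```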
Import the data into R using, for example, the read.table function, and install the packages lme4 and lmerTest.
Attach the packages with the function library or require. Because the distribution of reaction time data is typically highly skewed, check the need for a transformation using the boxcox function from the MASS package. Transform the reaction time values using inverse-transformed reaction times or binary logarithms of reaction times, since these transformations tend to produce more normal-like distributions for reaction times in lexical decision experiments than raw reaction time values do.
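A minimal sketch of this transformation check, assuming a data frame `d` with a reaction-time column `rt` in milliseconds (simulated here so the example is self-contained; the column name is illustrative):

```r
library(MASS)  # for boxcox()

# Simulated right-skewed reaction times standing in for real data.
set.seed(1)
d <- data.frame(rt = exp(rnorm(500, mean = 6.5, sd = 0.3)))

# Box-Cox profile for an intercept-only model; the lambda with the highest
# log-likelihood suggests a transformation (lambda near -1 suggests an
# inverse transformation, lambda near 0 a logarithm).
bc <- boxcox(rt ~ 1, data = d, plotit = FALSE)
lambda <- bc$x[which.max(bc$y)]

# Two transformations commonly used for lexical decision data:
d$rt_inv  <- -1000 / d$rt   # inverse-transformed RT (negated to preserve direction)
d$rt_log2 <- log2(d$rt)     # binary logarithm of RT
```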
Next, exclude pseudoword and filler trials, as well as incorrect responses and omissions. Also exclude trials with reaction times faster than 300 milliseconds, because these typically indicate that the participant was responding too late to the previous stimulus. Then build a basic linear mixed-effects model with reaction time as the outcome measure and subject, item, and trial as random effects.
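The exclusion step might look like the following in base R; the trial-level data frame and its column names are hypothetical:

```r
# Hypothetical trial-level data; in practice this comes from the
# presentation program's output file.
set.seed(2)
trials <- data.frame(
  type    = sample(c("word", "pseudoword", "filler"), 200, replace = TRUE),
  correct = sample(c(TRUE, FALSE), 200, replace = TRUE, prob = c(0.9, 0.1)),
  rt      = c(runif(195, 350, 1400), runif(5, 100, 290))  # a few too-fast trials
)

# Keep only correct responses to real-word targets with RTs of at least 300 ms.
analysed <- subset(trials, type == "word" & correct & rt >= 300)
```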
Add the random effects so that the model estimates a random intercept for each of them. Then add explanatory variables in a theoretically motivated order; for instance, add word base frequency as a fixed effect.
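In lme4 formula syntax, the baseline random-intercepts model and a model with the added fixed effect might look like this; the variable names (rt_inv, log_base_freq, subject, item, trial) are illustrative:

```r
# Baseline model: random intercepts for subject, item, and trial.
m0_formula <- rt_inv ~ 1 + (1 | subject) + (1 | item) + (1 | trial)

# Add a theoretically motivated fixed effect, e.g. log base frequency:
m1_formula <- update(m0_formula, . ~ . + log_base_freq)

# With lme4 and lmerTest attached, the model would then be fitted as, e.g.:
# m1 <- lmerTest::lmer(m1_formula, data = analysed)
```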
Enter variables, such as base or surface frequency, into the model with a transformation that yields a more Gaussian distribution shape. Use the anova function to check whether adding each predictor significantly improves the predictive power of the model compared to a model without that predictor. If the new model does not fit significantly better than the simpler model, choose the simpler model with fewer predictors.
Then, check the Akaike information criterion (AIC) of each model using the AIC function; lower values indicate a better fit to the data. Next, check for theoretically motivated interactions between predictors.
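The model-comparison step can be sketched as follows. The example uses lm on simulated data so that it is self-contained; the same anova and AIC calls apply to lmer fits (for lmer, compare models fitted with REML = FALSE):

```r
# Simulated data in which the predictor genuinely matters.
set.seed(3)
log_freq <- rnorm(300)
rt_inv   <- -1 + 0.2 * log_freq + rnorm(300, sd = 0.3)

m_simple <- lm(rt_inv ~ 1)         # model without the predictor
m_freq   <- lm(rt_inv ~ log_freq)  # model with the predictor

anova(m_simple, m_freq)  # does adding log_freq significantly improve the fit?
AIC(m_simple, m_freq)    # lower AIC indicates a better fit
```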
For instance, add an interaction term such as log base frequency by age. Then add by-participant random slopes for predictors by including one, a plus sign, and the variable name, then a vertical bar, then subject, because participants' reaction times might be affected by their individual characteristics, or by words' lexical characteristics, in different ways. Run the analysis for each participant group separately, or run one analysis on all the data with group as a fixed-effect predictor and then test for interactions of group with the significant predictors.
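In lme4 syntax, the interaction, the by-participant random slope, and the group analysis might be written as follows; all variable names are illustrative:

```r
# Interaction of log base frequency by age, plus a by-participant random
# slope for frequency (the "1 + log_base_freq | subject" part).
m_slope_formula <- rt_inv ~ log_base_freq * age +
  (1 + log_base_freq | subject) +
  (1 | item) + (1 | trial)

# Alternatively, analyse all groups together with group as a fixed effect
# and test its interaction with the significant predictors:
m_group_formula <- rt_inv ~ log_base_freq * group +
  (1 + log_base_freq | subject) +
  (1 | item) + (1 | trial)
```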
In order to remove the influence of possible outliers, exclude data points with absolute standardized residuals exceeding 2.5 standard deviations and refit the model to the trimmed data. Finally, in the case of an exploratory, data-driven analysis, use backwards stepwise regression: include all variables in the initial model and then remove non-significant variables in a step-by-step fashion.
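Both steps can be sketched with lm on simulated data so that the example is self-contained; for an lmer fit the same trimming idea applies using scaled residuals, e.g. abs(resid(m, scaled = TRUE)) <= 2.5:

```r
# Simulated data for illustration.
set.seed(4)
dat <- data.frame(x = rnorm(200))
dat$y <- 0.5 * dat$x + rnorm(200)

# Fit, trim observations with |standardized residual| > 2.5, and refit.
m      <- lm(y ~ x, data = dat)
keep   <- abs(rstandard(m)) <= 2.5
m_trim <- lm(y ~ x, data = dat[keep, ])

# Exploratory backwards stepwise regression from a full model
# (here with a superfluous quadratic term as the removal candidate):
m_full <- lm(y ~ x + I(x^2), data = dat)
m_step <- step(m_full, direction = "backward", trace = 0)
```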
These results indicate how word recognition speed may differ between younger and older adults. Only two principal components, PC1 and PC4, were significant in the young adults, whereas three components were significant predictors in the models for elderly controls, individuals with mild cognitive impairment, and individuals with Alzheimer's disease.
This third component, PC2, is interpreted as reflecting the influence of form-based aspects of a word on word recognition speed. Further, one interesting difference between the three elderly groups emerged. Education significantly predicted speed of word recognition for elderly controls and individuals with mild cognitive impairment, but not for individuals with Alzheimer's disease.
This methodology can be applied to other types of questions about the mental lexicon and to other populations, such as multilinguals and people with aphasia.