This method can help answer key questions in the online processing of spoken language, including the processing of semantically complex sentences such as disjunctions and conditionals. The main advantage of this technique is that it can be used with a wide range of populations to study most topics in language processing. Demonstrating the procedure will be Xu Jing, a postgraduate student.
Cao Yifan, Wang Meng, Yang Xiaotong, Zhan Min, and Zhou Xuehan are students from my research group. To prepare the visual stimuli, download 60 copyright-free clip-art images of animals from the Internet and open the images in an appropriate image editor. Click Tools and use the Quick Selection tool to select and delete the background from each image.
Click Image and Image Size to resize the images to 120 by 120 pixels, and draw one big open 320 x 240 pixel box, one small closed 160 x 160 pixel box, and two small open 160 x 240 pixel boxes. Open a new file to create a 1,024 by 768 pixel template for the first test image, and drag the boxes to the indicated locations. Drag two clip-art images into the big open box and one each of the same two images into the two small open boxes.
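Although this step is carried out in an image editor, the resizing and layout can also be scripted when many images are needed. The following minimal sketch uses Python's Pillow library; all filenames and pixel coordinates are hypothetical, and the open/closed distinction between boxes is simplified to plain rectangles.

from PIL import Image, ImageDraw

# Hypothetical layout: (left, top, right, bottom) for each box on a 1,024 x 768 canvas.
BIG_OPEN = (352, 50, 672, 290)       # 320 x 240 big open box
SMALL_CLOSED = (432, 320, 592, 480)  # 160 x 160 small closed box
SMALL_OPEN_L = (150, 500, 310, 740)  # 160 x 240 small open box (left)
SMALL_OPEN_R = (714, 500, 874, 740)  # 160 x 240 small open box (right)

canvas = Image.new("RGB", (1024, 768), "white")
draw = ImageDraw.Draw(canvas)
for box in (BIG_OPEN, SMALL_CLOSED, SMALL_OPEN_L, SMALL_OPEN_R):
    draw.rectangle(box, outline="black", width=3)

# Resize two hypothetical clip-art images to 120 x 120 pixels and place them:
# both animals in the big open box, one animal in each small open box.
cat = Image.open("cat.png").convert("RGBA").resize((120, 120))
dog = Image.open("dog.png").convert("RGBA").resize((120, 120))
canvas.paste(cat, (BIG_OPEN[0] + 30, BIG_OPEN[1] + 60), cat)
canvas.paste(dog, (BIG_OPEN[0] + 170, BIG_OPEN[1] + 60), dog)
canvas.paste(cat, (SMALL_OPEN_L[0] + 20, SMALL_OPEN_L[1] + 60), cat)
canvas.paste(dog, (SMALL_OPEN_R[0] + 20, SMALL_OPEN_R[1] + 60), dog)
canvas.save("test_image_01.png")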
Then create 59 more test images as just demonstrated, with each animal image used twice, counterbalancing the spatial locations of the four boxes across the images (one scripted approach to the counterbalancing is sketched below). To prepare the spoken language stimuli, design four test sentences corresponding to each test image in the native language of the participants, for a total of 240 test sentences to be recorded, with each test image paired with one sentence of each of the four forms as illustrated. Recruit a female native speaker to record the test statements, as well as audio for all of the animals used in the experiment.
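One way to implement the counterbalancing mentioned above is to pair the animals and rotate the four box types through the four screen quadrants. The sketch below shows one possible scheme in Python; the image names and quadrant labels are hypothetical, and other valid assignments exist.

import itertools, random

animals = [f"animal_{i:02d}" for i in range(60)]  # hypothetical clip-art names
random.seed(1)
random.shuffle(animals)

# Pair adjacent animals so each clip-art image appears in exactly two test images.
pairs = [(animals[i], animals[(i + 1) % 60]) for i in range(60)]

# Rotate the four box types through the four screen quadrants across trials.
quadrants = ["top-left", "top-right", "bottom-left", "bottom-right"]
layouts = list(itertools.permutations(
    ["big_open", "small_closed", "small_open_1", "small_open_2"]))
for trial, (a1, a2) in enumerate(pairs, start=1):
    layout = dict(zip(quadrants, layouts[(trial - 1) % len(layouts)]))
    print(trial, a1, a2, layout)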
Divide the 240 test sentences into four groups, with each group containing 15 conjunctive statements, 15 disjunctive statements, 15 but-statements, and 15 filler sentences. Then save all of the important information regarding the test stimuli into a tab-delimited text file, with each row corresponding to one of the 240 trials (see the sketch below). To build the experiment sequence, open the experiment builder and create a new project.
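A trial file of this kind can be written with Python's built-in csv module; the column names below are hypothetical and should match whatever variables the experiment builder expects.

import csv

# Hypothetical trial records: one row per trial, 240 in total.
trials = [
    {"trial": 1, "group": 1, "image": "test_image_01.png",
     "audio": "sentence_001.wav", "sentence_type": "conjunctive",
     "correct_box": "big_open"},
    # ... the remaining 239 trials ...
]

with open("trial_list.txt", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=trials[0].keys(), delimiter="\t")
    writer.writeheader()
    writer.writerows(trials)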
Drag a Display Screen object into the workspace and rename it Instruction. Click Insert Multi-Line Text Resource to add the instruction text, then double-click to open and build the block sequence. Drag an EL Camera Setup node into the block sequence to bring up a camera setup screen on the EyeLink host PC to facilitate camera setup, calibration, and validation.
Click the Calibration Type field in the properties panel and select HV5 from the drop-down list. To build the trial sequence, drag a Display Screen node into the trial sequence and rename it Animal_1_Image. Drag a Play Sound node into the trial sequence and rename it Animal_1_Audio.
Drag a Timer node into the trial sequence and rename it Animal_1_Audio_Length. Continue dragging and renaming additional Display Screen, Play Sound, and Timer nodes into the trial sequence until three nodes have been created for each clip-art image. Then drag a Drift Correct node into the trial sequence to introduce the drift correction.
To build the recording sequence, drag a new Display Screen node into the record sequence and rename the node Test_Image. Double-click the Display Screen node to open the screen builder, click Insert Rectangle Interest Area Region, and draw four rectangular interest areas.
Drag a Timer node into the workspace, rename the node Pause, and change the Duration property to 500 milliseconds. Drag a Play Sound node into the workspace and rename the node Test_Audio, and drag a Timer node into the workspace and rename it Test_Audio_Length. Then add a Keyboard node into the workspace, rename the node Behavioral_Responses, and change the Acceptable Keys property to Up, Down, Right, Left.
To conduct an experiment, boot the host PC to start the host camera application, and click the executable version of the experimental project on the display PC. Input the participant's name and click Select Condition Value to Run to select a group from the prompt window. Place a small target sticker on the participant's forehead to track the head position even when the pupil image is lost, such as during blinks or sudden movements. Seat the participant approximately 60 centimeters from a 21-inch 4:3 color monitor with a 1,024 by 768 pixel resolution, at which distance 27 pixels equals approximately one degree of visual angle.
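The quoted pixels-per-degree figure follows from the viewing geometry: at distance d, one degree of visual angle spans 2d·tan(0.5°) on the screen. A quick check in Python, assuming a display width of roughly 42.6 centimeters for a 21-inch 4:3 monitor:

import math

distance_cm = 60.0        # viewing distance
screen_width_cm = 42.6    # assumed width of a 21-inch 4:3 monitor
h_resolution_px = 1024

cm_per_degree = 2 * distance_cm * math.tan(math.radians(0.5))
px_per_degree = cm_per_degree * (h_resolution_px / screen_width_cm)
print(round(px_per_degree))  # ~25 px; close to the quoted 27, depending on exact dimensions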
Adjust the height of the display PC monitor to ensure that when the participant is looking straight ahead, their gaze falls vertically between the middle and the top 75% of the monitor, then rotate the focusing arm on the desk mount to bring the eye image into focus. Next, click Calibrate on the host PC and ask the participant to fixate on a grid of five fixation targets in random succession, with no overt behavioral responses, to map the participant's eye movements onto the point of regard in the visual world. Click Validate and ask the participant to fixate on the same grid of fixation targets to validate the calibration results.
Perform a drift check by asking the participant to press the space bar on the keyboard while fixating on the black dot in the center of the screen. Then present the visual stimuli via the display PC monitor while playing the auditory stimuli via a pair of external speakers situated to the left and right of the monitor. For data coding and analysis, open the data viewer and click File, Import File, and Multiple EyeLink Data Files to import all of the recorded eye-tracker files.
Save the files into a single .evs file, then open the .evs file and click Analysis, Reports, and Sample Report to export the raw sample data with no aggregation. The correct response to a conjunctive statement is the big open box, while the correct response to a but-statement is the small open box containing the first-mentioned animal within the statement. Critically, which box is chosen in response to a disjunctive statement depends on how the disjunction is processed.
For example, the small closed box is chosen only when the scalar implicature and the ignorance inferences are both computed in the comprehension of a disjunctive statement. As illustrated in the panel, eye fixation on the small closed box does not increase unless the sentential connective is the disjunctive connective "or", with the increase beginning no later than the offset of the disjunctive connective. The Visual World Paradigm is a versatile eye-tracking technique for inferring participants' online comprehension of spoken language from their eye movements in the visual world.
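Time-course results like these are typically derived from the exported sample report by binning samples and computing the proportion of fixations in each interest area per condition and time bin. A minimal pandas sketch follows; the column names are hypothetical and depend on the actual Sample Report settings.

import pandas as pd

df = pd.read_csv("sample_report.txt", sep="\t")  # hypothetical export filename

# Align samples to the onset of the test sentence and bin them into 20 ms windows.
df["time"] = df["TIMESTAMP"] - df["AUDIO_ONSET"]  # hypothetical column names
df["bin"] = (df["time"] // 20) * 20

# Proportion of samples in each interest area, per sentence type and time bin.
props = (df.groupby(["sentence_type", "bin"])["INTEREST_AREA"]
           .value_counts(normalize=True)
           .rename("proportion")
           .reset_index())
print(props.head())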
When recruiting participants, remember that they should have normal or corrected-to-normal vision and normal hearing. To design a valid visual world study, other factors that might affect participants' eye movements should be controlled for or ruled out. When statistically analyzing the results, problems that may arise from bounded responses, autocorrelation, and multiple comparisons should be considered.
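For instance, because fixation proportions are bounded between zero and one, one common remedy in the eye-tracking literature is to analyze them on an empirical-logit scale before fitting a linear model. A minimal sketch, assuming hypothetical per-bin fixation counts:

import numpy as np

def empirical_logit(y, n):
    # Empirical logit for bounded proportion data: log((y + 0.5) / (n - y + 0.5)).
    return np.log((y + 0.5) / (n - y + 0.5))

# Hypothetical counts: y fixations on the small closed box out of n samples in a bin.
y, n = 34, 50
print(empirical_logit(y, n))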