This method can help researchers understand what the world looks like from a young child's perspective, as well as how children allocate their visual attention within that view. Compared with screen-based eye tracking, which is widely used in behavioral science, head-mounted eye tracking allows us to monitor where children look during everyday activities like toy play and picture book reading. Demonstrating the procedure will be graduate students Catalina Suarez-Rivera and Yayun Zhang, and lab manager Daniel Pearcy.
Before beginning an experiment, modify the eye-tracking system to work with a custom-made infant cap. Select a scene camera that is adjustable in its positioning and has a wide enough angle to capture a field of view appropriate for addressing the research questions of interest. Select an eye camera that is adjustable in its positioning and has an infrared LED positioned so that the cornea of the child's eye will reflect this light.
The eye-tracking system should be as unobtrusive and lightweight as possible, to provide the greatest chance that young children will tolerate wearing the equipment. Then, to embed the system into the cap, attach the scene and eye cameras to a hook-and-loop strap that mates with a matching strap sewn onto a child-sized cap, and position the cameras so that they stay out of the center of the child's view. For eye-tracking data collection, have two researchers present: one to interact with and occupy the child, and one to place and position the eye-tracking system.
Fully engage the child in an activity that occupies the child's hands, so that the child does not reach up to move or grab the eye-tracking system, and place the eye-tracking system onto the child's head. Position the scene camera low on the forehead to best approximate the child's field of view, and center the scene camera view on what the child will be looking at during the study. To obtain high-quality gaze data, position the eye camera to detect both the pupil and corneal reflection with no cheek or eyelash occlusion throughout the eye's full range of motion.
The trickiest part of this protocol is placing the equipment on the child's head and adjusting the cameras without upsetting the child. Speed, confidence, and practice are essential. Once the scene and eye images are as high-quality as can be obtained, draw the child's attention to different locations in their field of view to collect calibration data.
Take care that the child's body positioning during the calibration matches the position that will be used during the study. When all the calibration points have been obtained, begin collecting the eye-tracking data. Take note of any points at which the eye-tracking system gets bumped or misaligned, so that the system can be recalibrated as necessary and the data before and after the misalignment can be coded separately.
To calibrate the eye-tracking data at the end of the study, open an appropriate calibration software program and adjust the thresholds of the various detection parameters to obtain a good eye image. During the first round of calibration, identify calibration points at moments when the child is clearly looking at a distinct point in the scene image. These can be points intentionally created by the researcher during data collection, or moments from within the study at which the point of gaze is easily identifiable, as long as the pupil is accurately detected for those frames. Create a series of calibration points to establish the mapping between the scene and the eye.
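The mapping between the eye image and the scene image can be illustrated with a minimal sketch. This is not the protocol's actual calibration software; it assumes a simple affine (linear) model fit by least squares to the collected calibration points, whereas real gaze-calibration tools typically use higher-order polynomial or homography models. All function names and coordinate values here are hypothetical.

```python
import numpy as np

def fit_calibration(pupil_xy, scene_xy):
    """Fit an affine mapping from pupil centers (eye-camera pixels) to
    gaze points (scene-camera pixels) via least squares.
    pupil_xy, scene_xy: lists of (x, y) calibration pairs."""
    pupil_xy = np.asarray(pupil_xy, dtype=float)
    scene_xy = np.asarray(scene_xy, dtype=float)
    # Design matrix with an intercept column: [x, y, 1]
    A = np.column_stack([pupil_xy, np.ones(len(pupil_xy))])
    coeffs, *_ = np.linalg.lstsq(A, scene_xy, rcond=None)
    return coeffs  # 3x2 matrix of affine coefficients

def map_gaze(coeffs, pupil_xy):
    """Apply the fitted mapping to new pupil positions."""
    pupil_xy = np.asarray(pupil_xy, dtype=float)
    A = np.column_stack([pupil_xy, np.ones(len(pupil_xy))])
    return A @ coeffs

# Hypothetical calibration points collected while the child looked
# at known targets in the scene image.
pupil = [(10, 12), (40, 15), (22, 30), (35, 28), (15, 25)]
scene = [(100, 120), (400, 150), (220, 300), (350, 280), (150, 250)]
coeffs = fit_calibration(pupil, scene)
gaze = map_gaze(coeffs, [(20, 20)])  # estimated scene-image gaze point
```

In practice more calibration points, spread across the child's full range of eye motion, yield a more stable fit; this is why the protocol asks for targets at different locations in the field of view.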
If the eye-tracking system changed position at any time during the study, create separate calibrations for the portions before and after the change in position. To code the regions of interest, compile a list of all the regions of interest that should be coded based on the research questions, and use the child's eye image, scene image, and point-of-gaze track to determine which region of interest is being visually attended. Scroll through frames one by one, watching for movements of the pupil within the eye image as the primary cue that the region of interest may have changed.
When a visible movement of the eye occurs, check whether the child was shifting the point of gaze to a new region of interest, or to no defined region of interest. Although the regions of interest are coded separately for each frame, use frames before and after the frame being analyzed to gain contextual information that may aid in determining the correct region of interest. Here, sample region of interest streams for two 18-month-old children are shown.
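The frame-by-frame coding described above can be sketched as a simple assignment of each gaze point to a region of interest, here approximated by rectangular bounding boxes in scene-camera pixels. Human coders use richer context than this, and the region names and coordinates below are purely illustrative assumptions.

```python
def code_roi(gaze_xy, regions):
    """Assign one gaze point to a region of interest.
    regions: dict mapping ROI name -> (xmin, ymin, xmax, ymax) bounding
    box in scene-camera pixels. Returns the ROI name, or None when the
    gaze falls on no defined region."""
    x, y = gaze_xy
    for name, (xmin, ymin, xmax, ymax) in regions.items():
        if xmin <= x <= xmax and ymin <= y <= ymax:
            return name
    return None

# Hypothetical regions and per-frame gaze points
regions = {
    "toy_truck": (0, 0, 100, 100),
    "parent_face": (200, 50, 300, 150),
}
frames = [(30, 40), (250, 90), (400, 400)]
stream = [code_roi(g, regions) for g in frames]
```

The resulting per-frame stream of ROI labels is what the colored-block plots described below are built from.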
Each colored block represents continuous frames during which the child looked at a particular region of interest. Children showed individual differences in their selectivity for different subsets of toys, as evidenced by the differences in proportion of the interactions that each child spent looking at each of the toy regions of interest. Although the total proportion of time both children spent looking at all of the toys was somewhat similar, the proportions of time spent on individual toys varied greatly, both within and between subjects.
Moreover, how these proportions of looking time were achieved also differed, with Child Two's mean look duration almost double that of Child One. Another property demonstrated by these data is that both children rarely looked to the faces of their parents during the session, and that when they did, each look typically lasted less than one second. Researchers can place head-mounted eye trackers on children and their social partner simultaneously, as well as integrate this procedure with techniques such as motion tracking and heart rate monitoring, to provide high-density, multi-modal datasets for answering a variety of questions.
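The summary measures mentioned above, proportion of time on each region of interest and mean look duration, can be computed from a frame-by-frame ROI stream as follows. This is a hypothetical sketch: the frame rate, region names, and stream contents are assumptions, and a look is taken to be a run of consecutive frames coded with the same region.

```python
from itertools import groupby

def look_stats(stream, fps=30):
    """Summarize a per-frame ROI stream.
    Returns, for each ROI label, the proportion of all frames spent on
    that ROI and the mean look duration in seconds, where a 'look' is a
    maximal run of consecutive identical labels."""
    # Collapse the stream into (label, run_length) looks
    looks = [(roi, sum(1 for _ in run)) for roi, run in groupby(stream)]
    total = len(stream)
    stats = {}
    for roi in set(stream):
        runs = [n for r, n in looks if r == roi]
        stats[roi] = {
            "proportion": sum(runs) / total,
            "mean_look_s": (sum(runs) / len(runs)) / fps,
        }
    return stats

# Hypothetical 4-second stream at 30 fps: toy, face, toy again, off-ROI
stream = ["toy"] * 60 + ["face"] * 15 + ["toy"] * 30 + [None] * 15
stats = look_stats(stream)
```

For this example stream, the "toy" region accounts for 0.75 of the frames across two looks with a mean duration of 1.5 seconds, which is the kind of within- and between-subject contrast described for the two children above.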
The use of these techniques has transformed our understanding of many topics in the developmental literature, including joint and sustained attention, changing visual experiences with age and motor development, and the role of visual experiences in word learning. This protocol has been successfully employed with clinical populations, including children with cochlear implants and children diagnosed with autism spectrum disorders.