The overall goal of the following experiment is to measure path navigation in three dimensions while controlling the visual and vestibular sensory input to the participant. This is accomplished using a modified robotic chair with six degrees of freedom, which stimulates the vestibular system of the participant seated in the chair. The participant views a virtual star field. By moving the robotic chair and simultaneously altering the star field, the system provides visual and vestibular cues to the participant.
The participants provide path navigation feedback, measured by the accuracy and the speed with which they can point back to the remembered starting position. The results show that inaccurate estimation of self-motion depends on the movement plane and the angle through which participants are moved. The main advantage of this technique over existing methods like motion platforms is that the MPI CyberMotion Simulator has a large workspace capable of moving observers in different dimensions, particularly downward.
This method can answer key questions in the field of neuroscience, such as whether the brain represents self-motion equally in different dimensions. We got the idea for these experiments from a study by our colleague Manuel Vidal, who moved people through virtual mazes with visual presentation only.
Here he found that maze navigation is impaired when the mazes include a vertical component. The implications of this technique extend towards the diagnosis of spatial disorientation, because it provides a benchmark for path navigation in the normal brain. The MPI CyberMotion Simulator consists of a six-joint serial robot in a 3-2-1 configuration.
It is based on an industrial robot with a 500 kilogram payload. To make the robot safe for experimentation, modifications are made both to the hardware and the software. Complex motion profiles that combine lateral movements with rotations are possible with the MPI CyberMotion Simulator.
Axes one, four, and six can rotate continuously. Four pairs of hardware end stops limit axes two, three, and five in both directions. The maximum range of linear movements is strongly dependent on the position from which the movement begins. Before performing any experiments, each experimental motion trajectory must undergo a test phase.
Trajectories are programmed using a KUKA office PC to configure the MPI CyberMotion Simulator. In this open-loop configuration, trajectories are specified in Cartesian coordinates and are converted to joint-space angles through inverse kinematics every 12 milliseconds. The current joint angle positions are transmitted from the control system to the MPI CyberMotion Simulator via an Ethernet connection, where they are incrementally read and recorded to disk on the robot.
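The open-loop update cycle can be sketched as follows. This is a minimal illustration only: it uses a planar 2-link arm with closed-form inverse kinematics as a stand-in for the 6-joint serial robot (whose actual kinematics are proprietary to the control system), and only the 12 millisecond cycle time is taken from the protocol.

```python
import math

DT = 0.012  # control cycle: Cartesian targets become joint angles every 12 ms

def ik_two_link(x, y, l1=1.0, l2=1.0):
    """Closed-form inverse kinematics for a planar 2-link arm (one solution)."""
    d2 = x * x + y * y
    # Law of cosines gives the elbow angle from the target distance.
    c2 = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    c2 = max(-1.0, min(1.0, c2))  # clamp against floating-point noise
    q2 = math.acos(c2)
    q1 = math.atan2(y, x) - math.atan2(l2 * math.sin(q2), l1 + l2 * math.cos(q2))
    return q1, q2

def cartesian_trajectory_to_joint_space(waypoints):
    """Convert Cartesian waypoints, sampled at the control rate, to joint angles."""
    return [ik_two_link(x, y) for (x, y) in waypoints]

# One second of a straight-line Cartesian move, sampled every 12 ms.
n = int(1.0 / DT)
path = [(1.0 + 0.5 * i / n, 0.5) for i in range(n)]
joint_angles = cartesian_trajectory_to_joint_space(path)
```

The joint-space stream, rather than the Cartesian path, is what gets sent over the Ethernet link each cycle.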
A race car seat equipped with a five-point safety belt system is attached to a chassis, which includes a footrest. The chassis is mounted to the flange of the robot arm. Participants must wear noise-canceling headphones equipped with a microphone for two-way communication with the experimenter.
They should also be naive to the experimental setup. Continuous noise is played through the headphones, which masks the noise of the robot. Experiments are also possible by seating participants within an enclosed cabin. As the experiment is performed in darkness, infrared cameras allow visual monitoring from the control room. Multiple visualization configurations are possible, including an LCD screen, a stereo front projection, a mono front projection, or a head-mounted display. In this experiment, visual cues to self-motion are provided by an LCD display placed 50 centimeters in front of the observer.
The software presents a virtual space filled with 200,000 dots to the participant. Each dot in the space is drawn as a white circle against a black background. The screen displays dots corresponding to visual angles of 13 to 0.3 degrees.
These dots lie at distances of between 0.085 and 4 virtual units from the participant. Movement of the virtual star field is synchronized with physical motion via the motion trajectories sent from the MPI control computer, so that the optic flow provides motion parallax consistent with the physical movement.
Dots deeper in the field of view are drawn smaller. Regardless of the participant's movements, each dot is shown for two seconds before it is asynchronously and randomly reassigned to a new position. Thus, a total of one hundred thousand dots are replaced every second.
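The limited-lifetime logic above can be sketched as follows. The dot count, the two-second lifetime, and the distance range are from the protocol; the update step size and the shape of the spawn volume are assumptions for illustration. Because dot ages are uniformly staggered, 200,000 dots / 2 s ≈ 100,000 respawns per second, as stated above.

```python
import random

random.seed(0)  # reproducible sketch

N_DOTS = 200_000
LIFETIME = 2.0  # seconds each dot is shown before being repositioned

def spawn_dot():
    # Hypothetical box extents; the protocol only states that dots lie
    # between 0.085 and 4 virtual units from the participant.
    return (random.uniform(-4, 4), random.uniform(-4, 4),
            random.uniform(0.085, 4))

# Ages are staggered so dot replacements happen asynchronously.
dots = [spawn_dot() for _ in range(N_DOTS)]
ages = [random.uniform(0.0, LIFETIME) for _ in range(N_DOTS)]

def step(dt):
    """Advance the star field by dt seconds; return how many dots respawned."""
    respawned = 0
    for i in range(N_DOTS):
        ages[i] += dt
        if ages[i] >= LIFETIME:
            dots[i] = spawn_dot()  # random reassignment to a new position
            ages[i] -= LIFETIME
            respawned += 1
    return respawned

# One simulated second, in 100 ms steps: roughly 100,000 dots respawn.
total = sum(step(0.1) for _ in range(10))
```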
A custom-built joystick equipped with response buttons allows participants to transmit data via an Ethernet connection to the control system. Sensory information can be manipulated by providing only visual cues from the limited-lifetime star field, only vestibular-kinesthetic cues from passive self-motion with the participant's eyes closed, or both cues with the participant's eyes open.
In this experiment, movement trajectories consisted of two segment lengths: the first is 0.4 meters, and the second is one meter. The angle between any two movement segments is either 45 degrees or 90 degrees.
For example, movement in the horizontal plane consists of either forward-rightward movement through 90 degrees, forward-rightward movement through 45 degrees, rightward-forward movement through 90 degrees, or rightward-forward movement through 45 degrees. These types of movements are also performed in the sagittal and frontal planes. Trajectories are delivered as translations without rotation.
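The full condition set described above is small enough to enumerate explicitly. This sketch uses hypothetical labels for the planes and segment orderings; the segment lengths and angles are the ones stated in the protocol.

```python
from itertools import product

# Hypothetical encoding of the trajectory conditions: two segment lengths in
# fixed order, two possible inter-segment angles, two segment orderings per
# plane, and three movement planes.
SEGMENTS_M = (0.4, 1.0)  # first and second segment lengths, in meters
ANGLES_DEG = (45, 90)    # angle between the two segments
PLANES = ("horizontal", "sagittal", "frontal")
ORDERS = ("forward-rightward", "rightward-forward")  # horizontal-plane example

conditions = [
    {"plane": plane, "order": order, "angle": angle, "segments": SEGMENTS_M}
    for plane, order, angle in product(PLANES, ORDERS, ANGLES_DEG)
]
# 3 planes x 2 orders x 2 angles = 12 trajectory conditions,
# each delivered as pure translation without rotation.
```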
Every trajectory is followed by a repositioning sequence and a 15 second pause, to reduce any possible interference from motion prior to each trial and to ensure that the vestibular system is tested from a steady state. To provide feedback on their perceived movement, participants move an arrow with a joystick to indicate their movement relative to their origin. The origin is presented as an avatar from three points of view, and the arrow is always randomly positioned prior to adjustment. Prior to the trials, it is vital to train the participants to use the feedback system accurately. They should be able to point the arrow to objects in their surroundings, such as the joystick resting on their lap. During the trial, movement of the arrow is restricted to the trajectory plane, and participants can use any or all viewpoints during data collection. Each experimental condition is repeated three times and presented in random order.
Data from the 16 participants were analyzed; one extreme outlier's scores were omitted. Modality and angle had no significant effect on estimated movement. However, participants underestimated the movement angle in the horizontal plane by almost nine degrees and overestimated the angle in the frontal plane by about five degrees.
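The under- and overestimates reported here are signed angle errors: the indicated angle minus the actual angle, so negative means underestimation. A minimal sketch of that measure, using made-up responses rather than the study's data:

```python
def signed_angle_error(indicated_deg, actual_deg):
    """Signed error in degrees: negative = underestimate, positive = overestimate."""
    return indicated_deg - actual_deg

def mean_signed_error(pairs):
    """Mean of (indicated, actual) signed errors across trials."""
    errors = [signed_angle_error(i, a) for i, a in pairs]
    return sum(errors) / len(errors)

# Hypothetical responses (NOT data from the study): pointing back after
# 90- and 45-degree turns in the horizontal plane.
horizontal_trials = [(82, 90), (80, 90), (38, 45)]
bias = mean_signed_error(horizontal_trials)  # negative: underestimation
```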
The angle factor was found to significantly interact with the frontal plane factor, such that overestimates were larger for movements through 45 degrees than movements through 90 degrees. In addition, modality was found to significantly interact with angle such that underestimates from vestibular information alone for movements through 90 degrees were significantly larger compared to the visual and combined conditions. Such discrepancies were absent for movements through 45 degrees.
Response time was found to be significantly slower when feedback was based on vestibular-kinesthetic cues alone compared to the visual and combined conditions. Participants were also significantly slower when moved in the horizontal plane compared to the other planes. These results are really surprising, as they suggest that the brain's representation of space is not symmetric across dimensions.
We've known for a while that people tend to underestimate their movement in the horizontal plane. Here, for the first time, we're showing that this is not the case in the vertical dimension. In the future, we'll be able to use these methods to construct paths in all three dimensions, including those which are curved. This will enable us to answer additional questions, such as how the brain is able to integrate movement across planes, as well as how it navigates turns.