My research in neurorehabilitation centers on improving upper limb function in individuals with neurological injuries or disorders using a combination of EEG, motor imagery, and virtual reality technologies. The primary goal is to understand how these technologies can be integrated to enhance the effectiveness of motor skill rehabilitation. Virtual reality environments have become more sophisticated, enabling realistic simulations tailored to individual rehabilitation needs, which help refine motor skills in a nuanced manner.
Hybrid systems that combine VR with other technologies, such as robotic arms and haptic feedback devices, are also on the rise. One of the current experimental challenges with brain-computer interfaces is intersubject variability and the need for extensive training. Each individual's brain signals can differ vastly, which means that BCIs often need to be extensively customized or calibrated for each user.
Current treatments often lack the immersion and interactivity necessary for maximum efficacy. To fill this gap, our protocol utilizes motor imagery with an innovative twist, integrating digital twins represented by personalized 3D avatars in a virtual reality setting. This integration enhances immersion, making the rehabilitation process not just a mental exercise, but also an engaging experience.
Our research protocol offers significant advantages over other techniques, particularly in terms of ease of setup and cost-effectiveness. A standout feature is the creation and use of 3D avatars that closely resemble the subjects. This is achieved using simple, readily available tools and software that can generate personalized avatars from basic input data such as photographs.
Begin by assembling the 16-channel EEG data acquisition system. Attach the DAISY module, which provides eight EEG channels, to the base board, which provides the other eight. Using a Y-splitter cable, connect the reference electrode to the bottom reference pins on both the DAISY module and the board, each labeled SRB.
Connect the ground electrode to the bias pin on the bottom board. Next, connect the 16 EEG electrodes to the bottom board pins and the DAISY module pins labeled N1P to N8P. Insert the electrodes into the gel-free cap at the labeled locations per the international 10-20 system.
Soak the 18 sponges provided for the EEG electrodes in saline solution for 15 minutes. Insert the soaked sponges into the underside of each electrode to establish contact between the scalp and the electrode. Then have the participant sit comfortably in a quiet room.
Place the gel-free EEG cap on the participant's scalp and ensure the cap is aligned to fit over the participant's ears. Connect the USB dongle to the laptop. Open the EEG GUI, click on EEG System.
Under the data source option, select serial from dongle, 16 channels, and auto connect. In the data acquisition screen, select the signal widget to check the signal quality of the connected electrodes. At each electrode site, verify an optimal impedance below 10 kΩ.
If the impedance is above 10 kΩ, add a few drops of saline solution to the sponge under the electrode, then close the GUI. Next, open the acquisition server software and select the appropriate EEG board under driver.
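The impedance acceptance rule above is simple enough to sketch in code. This is an illustrative helper only; the function name, the dictionary format, and the electrode labels are assumptions, and in practice the impedances are read off the signal widget in the GUI rather than from a script.

```python
# Hypothetical helper mirroring the protocol's impedance check:
# an electrode passes only when its impedance is below 10 kOhm.
THRESHOLD_KOHM = 10.0

def electrodes_needing_saline(impedances_kohm):
    """Return labels of electrodes at or above the 10 kOhm threshold.

    impedances_kohm: dict mapping electrode label (10-20 system) -> kOhm.
    """
    return sorted(label for label, z in impedances_kohm.items()
                  if z >= THRESHOLD_KOHM)

readings = {"C3": 4.2, "C4": 12.8, "Cz": 9.9, "F3": 15.1}
print(electrodes_needing_saline(readings))  # ['C4', 'F3']
```

Electrodes returned by the helper would get a few extra drops of saline before rechecking.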
Click connect and then play to establish a connection with the EEG system. For game design, open the game engine software and select motor imagery training project. Enable VR support by clicking on edit, then project settings, followed by XR plugin management.
Check the box for the VR headset listed under virtual reality SDKs. Delete the default camera and drag the VR camera from the VR integration package into the scene. Also, place the imported animation file in the scene and adjust the scale and orientation as needed.
For motor imagery training, set up the OSC listener game object with pre-written scripts to trigger model animations for left and right hand movements based on OSC messages. Next, in the game engine software, open the file menu and click on build settings. Select PC, Mac and Linux standalone with Windows as the target, then click build and run.
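The OSC messages that drive the animations are small binary packets. As a minimal sketch of what travels over the wire, the following encodes an OSC 1.0 message by hand using only the standard library; the `/cue` address and the 0/1 left/right encoding are assumptions for illustration, since the actual addresses and argument types are set in the OSC boxes and the listener scripts.

```python
import struct

def _osc_string(s):
    # OSC strings are null-terminated and padded to a multiple of 4 bytes.
    b = s.encode("ascii") + b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_cue_message(hand):
    """Encode a minimal OSC 1.0 message carrying the cued hand.

    The '/cue' address and int argument (0 = left, 1 = right) are
    hypothetical; ',i' is the OSC type tag for a single 32-bit int,
    packed big-endian per the OSC specification.
    """
    arg = {"left": 0, "right": 1}[hand]
    return _osc_string("/cue") + _osc_string(",i") + struct.pack(">i", arg)

payload = osc_cue_message("right")
# The payload would then be sent via UDP to the listener's IP and port.
```

In practice a library such as python-osc (or the OSC boxes built into the BCI software) handles this encoding, but the byte layout above is what the game engine's listener ultimately parses.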
For the motor imagery testing project, use the OSC listener game object configured with scripts to receive OSC signals indicative of the participant's imagined hand movements and make the avatar perform the imagined movement. To begin, open the software tool to design and run motor imagery scenarios. Navigate to file and load the six motor imagery BCI scenarios labeled signal verification, acquisition, CSP training, classifier training, testing, and confusion matrix.
Navigate to the signal verification scenario and apply a band-pass filter between 1 and 40 hertz with a filter order of four to the raw signals using the designer boxes. Guide the participants through the motor imagery tasks, imagining hand movements in response to visual cues. Open the motor imagery training file and display the prepared 3D avatar standing over a set of bongos through the VR headset.
Navigate to the acquisition scenario and double-click the Graz motor imagery stimulator to configure the box. Configure 50 trials of five seconds each for both left and right hand movements. Incorporate a 20-second baseline period followed by a 10-second rest interval after every 10 trials to avoid mental fatigue.
Configure the left and right hand trials to be randomized and add a cue before the trial, indicating the hand to be imagined. Connect an OSC box with the IP address and port to transmit the cue for the hand to be imagined to the motor imagery training game engine program. Then sanitize the VR headset with wipes and place it on the participant's head to facilitate an immersive interaction while capturing EEG data.
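The trial structure configured in the stimulator box (50 randomized trials of five seconds per hand, a 20-second baseline, a 10-second rest after every 10 trials) can be expressed as a small schedule generator. The function and event names below are illustrative only; the actual timing is handled by the Graz stimulator box.

```python
import random

def build_trial_schedule(n_per_hand=50, trial_s=5, baseline_s=20,
                         rest_s=10, rest_every=10, seed=0):
    """Randomized left/right motor imagery schedule as a list of
    (event_name, duration_seconds) pairs, mirroring the stimulator
    configuration in the protocol. Names are hypothetical."""
    rng = random.Random(seed)
    trials = ["left"] * n_per_hand + ["right"] * n_per_hand
    rng.shuffle(trials)  # randomize left/right order, as in the protocol
    schedule = [("baseline", baseline_s)]
    for i, hand in enumerate(trials, start=1):
        schedule.append(("cue_" + hand, trial_s))
        if i % rest_every == 0 and i < len(trials):
            schedule.append(("rest", rest_s))  # rest after every 10 trials
    return schedule
```

Summing the durations gives the minimum session length (20 s baseline + 100 x 5 s trials + 9 x 10 s rests = 610 s), before any inter-trial cue time the stimulator adds.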
Direct the participants to imagine executing the hand movement along with the 3D avatar, matching its pace as it hits the bongo with the corresponding hand, while a text cue displays which hand is to be imagined. Following the acquisition, run the CSP training scenario to analyze the EEG data from the acquisition stage and compute common spatial pattern (CSP) filters that distinguish between left and right hand imagery.
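The core of CSP is a generalized eigen-decomposition of the per-class covariance matrices. The sketch below shows that computation under stated assumptions; it is not the BCI software's CSP box, which additionally handles epoching, regularization, and filter persistence.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_left, trials_right, n_pairs=2):
    """Common spatial patterns via a generalized eigen-decomposition.

    trials_*: arrays of shape (n_trials, n_channels, n_samples) of
    band-passed EEG. Returns (2*n_pairs, n_channels) spatial filters:
    the first n_pairs maximize variance for the right class, the last
    n_pairs for the left class. Illustrative sketch only.
    """
    def mean_cov(trials):
        c = np.mean([np.cov(t) for t in trials], axis=0)
        return c / np.trace(c)  # normalize per-class covariance

    c_l, c_r = mean_cov(trials_left), mean_cov(trials_right)
    # Eigenvalues are the fraction of variance explained for the left class.
    vals, vecs = eigh(c_l, c_l + c_r)
    order = np.argsort(vals)
    pick = np.r_[order[:n_pairs], order[-n_pairs:]]
    return vecs[:, pick].T
```

The log-variance of each filtered trial is the usual feature vector passed on to the classifier in the next stage.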
After the CSP training, navigate to the classifier training scenario and run it to prepare the system for real-time avatar control. Then navigate to the testing scenario and allow the participants to control their 3D avatars in real time using the brain-computer interface. To interpret the imagined actions in real time, load the classifiers trained on the EEG data into the appropriate boxes.
Brief participants on the testing procedure, emphasizing the need to clearly imagine hand movements as prompted by the text cues. Conduct 20 randomized trials for each participant, divided equally between imagined left and right hand movements. Connect and configure an OSC box to transmit the cue information, which is displayed as text in the game engine program to indicate the hand to be imagined.
Connect another OSC box to transmit the predicted left or right hand movement to the game engine program. Run the testing scenario and the motor imagery testing game engine program, and observe that the program plays the corresponding animation based on the predicted hand movement.
Five healthy adults, aged 21 to 38, participated in the study under both motor imagery training and testing conditions. An average confusion matrix across all subjects was used to evaluate the classifier's accuracy in distinguishing left from right motor imagery signals in both sessions. Topographical patterns of the CSP weights from motor imagery training were visualized for both imagery directions.
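For readers reproducing the evaluation, overall accuracy falls directly out of the confusion matrix. The matrix values below are made-up illustrative numbers, not the study's results.

```python
def accuracy_from_confusion(cm):
    """Overall accuracy from a square confusion matrix.

    cm: rows = true class (left, right), cols = predicted class.
    Accuracy is the trace (correct predictions) over the total count.
    """
    total = sum(sum(row) for row in cm)
    correct = sum(cm[i][i] for i in range(len(cm)))
    return correct / total

# Hypothetical 20-trial session: 8 + 7 correct out of 20.
cm = [[8, 2],
      [3, 7]]
print(accuracy_from_confusion(cm))  # 0.75
```

An average confusion matrix across subjects, as reported above, is obtained by summing (or averaging) the per-subject matrices entry-wise before applying the same formula.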
A time-frequency analysis was conducted on EEG data from contralateral sensorimotor areas to identify event-related spectral perturbations during the motor imagery tasks.