Assessing Multiple Dimensions of Engagement to Characterize Learning: A Neurophysiological Perspective
13:57 min • July 1st, 2015
The overall goal of this procedure is to gather and synchronize data on behavioral, emotional, and cognitive engagement during learning tasks. This is accomplished by first connecting all of the data collection systems through a synchronization device that sends time markers into the collected data. The second step is to collect eye-tracking and software-interaction data for behavioral engagement.
Next, EEG data are gathered to characterize cognitive engagement. The final step is to collect emotional engagement data from electrodermal activity (EDA) for arousal measurement and from automatic facial emotion recognition (AFER) to generate valence data. Ultimately, these indexes characterizing the three dimensions of engagement are resynchronized in synchronization software according to the time markers initially sent by the syncing device.
The main advantage of this technique is that it quantifies three dimensions of learners' engagement during a learning task, as opposed to classical research methods that rely solely on self-report instruments administered after the task. This method can help answer key questions in the field of education, such as: what are the dynamics of affect and cognition in various complex learning settings, and what is the impact of various pedagogical settings on affect and cognition? There are also implications for this technique in other fields.
For example, in information technology, we are using this technique to measure the engagement of a user while browsing a website or using a software application. This technique is inspired by recent work in neuroergonomics, in which vigilance data from long-distance drivers were gathered using EEG devices. Demonstrating this procedure will be Gabrielle Zu, a research assistant at the Tech3Lab of HEC Montréal, along with Igo and Yig, research assistants from the NeuroLab. Begin by turning on the eye tracker, the EEG amplifier, the recording computers, and the speakers.
Prepare the EEG electrolyte solution with the required material according to the manufacturer's recommended procedures. Then prepare the EEG and eye-tracking software for the upcoming participant. Start the video recording software and the cameras, then start the synchronization software with the specific sub-routine created for the project, which allows the synchronization markers to be sent every 60 seconds.
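The device-specific call for emitting a marker depends on the syncing hardware, so the sketch below only illustrates the 60-second timing logic of such a sub-routine in MATLAB; the local log file is a hypothetical stand-in for the actual hardware trigger.

```matlab
% Hypothetical sketch of the 60-second synchronization sub-routine.
% The real routine would call the syncing device's API; here a local
% log file stands in for the hardware marker.
markerLog = fopen('sync_markers.txt', 'w');
t = timer('Period', 60, ...                   % fire every 60 seconds
          'ExecutionMode', 'fixedRate', ...
          'TasksToExecute', Inf, ...          % keep firing until stopped
          'TimerFcn', @(~,~) fprintf(markerLog, '%s\n', ...
                      datestr(now, 'yyyy-mm-dd HH:MM:SS.FFF')));
start(t);                                     % begin emitting markers
% ... experiment runs ...
% stop(t); delete(t); fclose(markerLog);      % clean up afterwards
```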
Then start the physiological measurement software and open the specific layout created for the project. Next, bring the participant into the data collection room and find the Cz electrode location on the participant's head according to the 10-20 reference system. Place the EEG net on the participant's head and ensure each electrode is positioned perpendicular to the scalp surface.
Plug in the net connector and perform an impedance check with a threshold of 40 kΩ. Finally, place two physiological sensors on the top of the participant's left hand. Begin data collection.
Once all recording software is ready to be started in synchrony, perform a nine-point calibration procedure and observe the participant while they follow the red dots on the screen. Repeat this procedure until sufficient accuracy is achieved according to the manufacturer's standards. Then instruct the participant to solve 10 Newtonian physics problems on the computer. Stop data acquisition on all computers once the task is complete.
Remove all sensors from the participant. To begin EEG pre-processing, first import the EEG data into the EEG data analysis software. Next, click File, then New Project, and choose the raw data location by clicking Browse.
Select the newly created raw data folder. Choose the locations of two more folders, named History and Export, in the same way. Then click OK.
Next, click on Transformations and IIR Filters to pre-process the brain signal by applying a band-pass filter and a notch in the software window. Set the cutoff limits at 1.5 and 50 Hz with a slope of 12 dB per octave for each, and enable a notch at a frequency of 60 Hz. Then DC-detrend the signal by clicking Transformations and DC Detrend.
Select the option Based on Time, at 100 milliseconds before the marker and 100 milliseconds before the DC correction. Perform a raw data inspection by clicking Transformations and Raw Data Inspection. Select semi-automatic inspection and choose the parameters for voltage, maximum, minimum, and amplitude seen here.
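For readers who want to reproduce the filtering step outside the analysis software, a minimal MATLAB sketch with the same settings might look as follows; the sampling rate and the placeholder signal are assumptions, and the second-order Butterworth design approximates the 12 dB per octave slope.

```matlab
% Assumptions: fs (sampling rate) and eeg (raw single-channel signal).
fs  = 500;                                    % Hz, assumed
eeg = randn(fs * 60, 1);                      % placeholder one-minute recording
% Second-order Butterworth band-pass, 1.5-50 Hz (~12 dB/octave per edge)
[b, a] = butter(2, [1.5 50] / (fs/2), 'bandpass');
filtered = filtfilt(b, a, eeg);               % zero-phase filtering
% 60 Hz notch for power-line noise
notch = designfilt('bandstopiir', 'FilterOrder', 2, ...
                   'HalfPowerFrequency1', 59, 'HalfPowerFrequency2', 61, ...
                   'DesignMethod', 'butter', 'SampleRate', fs);
cleaned = filtfilt(notch, filtered);          % apply the notch, zero-phase
```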
Perform an automatic independent component analysis (ICA) with classic sphering for eye-blink removal by clicking Transformations and ICA. Once completed, process the inverse ICA by clicking Transformations, ICA, then Inverse ICA.
Finally, export the signal and markers by clicking Export and Generic Data Export. Select the boxes labeled Write Header File and Write Marker File. In addition, select text format for the subsequent construction of the engagement index in MATLAB.
Next, import the EEG signal into MATLAB. To do this, start the EEGLAB script so that the EEGLAB GUI appears in MATLAB.
Import the data for one participant at a time by selecting File, Import Data, Using EEGLAB Functions and Plugins, and From Brain Vis. Rec. .vhdr File. Then paste a script in the command window that will generate an engagement index. This script will output a text file. Open the engagement index text file in Microsoft Excel.
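The transcript does not show the script itself. A common engagement index in the EEG literature is the ratio beta / (alpha + theta) (Pope et al., 1995); a sketch computing it over one-second windows could look like the following, where the sampling rate, band limits, window length, and placeholder signal are all assumptions.

```matlab
% Placeholder pre-processed single-channel EEG (assumed fs and band limits)
fs = 500;                                     % sampling rate in Hz (assumed)
cleaned = randn(fs * 300, 1);                 % stand-in for the filtered signal
win = fs;                                     % one-second analysis windows (assumed)
nWin = floor(numel(cleaned) / win);
EI = zeros(nWin, 1);
for k = 1:nWin
    seg = cleaned((k-1)*win + 1 : k*win);
    theta = bandpower(seg, fs, [4 8]);        % theta band power
    alpha = bandpower(seg, fs, [8 12]);       % alpha band power
    beta  = bandpower(seg, fs, [12 22]);      % beta band power
    EI(k) = beta / (alpha + theta);           % engagement index per window
end
writematrix(EI, 'engagement_index.txt');      % text output, as in the protocol
```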
Apply a Z-score normalization to the EEG data to allow intersubject comparison. Once EEG pre-processing is complete, pre-process the physiology data by importing the electrodermal activity (EDA) data into the physiological data analysis software.
Then compute a Z-score normalization on the EDA data by clicking on Transformation and Waveform Math, to allow for intersubject comparison. This is done in two steps. First, select the EDA channel for Source 1, the minus sign in the mathematical operation window, and K for Source 2.
Then select New in the Destination menu, enter the mean value of the EDA channel, and select Transform Entire Wave. Click OK. For the second step, open Transformation, Waveform Math again, and select the EDA − K channel created in the first step for Source 1, the division sign in the mathematical operation window, and K for Source 2.
Then select New in the Destination menu, enter the standard deviation value of the EDA channel, and select Transform Entire Wave. Finally, click OK.
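These two waveform-math steps implement a standard Z-score: subtract the mean, then divide by the standard deviation. The equivalent computation in MATLAB is a one-liner; the eda vector below is an assumed stand-in for the recorded EDA channel.

```matlab
% eda is an assumed placeholder for the exported EDA samples
eda = randn(1000, 1);
z = (eda - mean(eda)) / std(eda);   % step 1: subtract mean; step 2: divide by SD
% equivalently: z = zscore(eda);    % Statistics and Machine Learning Toolbox
```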
For the final pre-processing step, import video data from the media recorder into the automatic facial emotion recognition software by clicking on File, New, and then Participant. Select a new participant in the Project menu. Then click File, New, Analysis, and Video.
Click on the magnifying glass next to Analysis 1 and choose the desired video file. Finish by calculating the Z-score of the valence data in SPSS by clicking Analyze, Descriptive Statistics, and Descriptives. Select Save standardized values as variables, and a column with Z-scores will appear.
Begin by importing the eye-tracking videos by clicking File, Import, and Video in a new observation. Name the new observation and choose the desired video file to import. Import all external data by clicking File, Import, and External Data.
Include the Z-score of the EEG signal, the Z-score of the EDA signal, and the Z-score of the valence data. If desired, add event markers for further analysis. To synchronize timing between computers, open the offset menu by pressing Control, Shift, and the equal sign. Select Numerical Offset to enter the time in seconds between each pair of data sources, then click OK.
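Conceptually, the numerical offset simply shifts each source's timestamps by its measured lag relative to the reference clock. A minimal MATLAB sketch, with hypothetical offsets and assumed timestamp vectors:

```matlab
% Hypothetical offsets (seconds) of each source relative to the
% reference computer, applied to assumed timestamp vectors.
eegTime     = (0:0.002:10)';                  % EEG timestamps, 500 Hz (assumed)
edaTime     = (0:0.05:10)';                   % EDA timestamps, 20 Hz (assumed)
valenceTime = (0:0.2:10)';                    % valence timestamps, 5 Hz (assumed)
offsets = [0.00, 1.25, -0.40];                % measured lags (placeholders)
eegTime     = eegTime     + offsets(1);       % shift each stream onto
edaTime     = edaTime     + offsets(2);       % the common reference clock
valenceTime = valenceTime + offsets(3);
```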
Next, select the variables of interest to be generated in the report by clicking Analyze, Select Data, and New Data Profile. Select the desired event to analyze and click OK.
Then add a connection line between the Start and Nest boxes, as well as between the Nest and Results boxes. Generate the report by clicking Analyze, Numerical Analysis, and New. Click Statistics and check the box next to Mean under the External Data menu. Then click OK.
Click Layout and make sure that the external data are in columns and that the result containers appear in rows. Finish by clicking OK. Finally, click the Calculate button so the software updates the calculations involved in the report being generated.
Multiple dimensions of engagement data, behavioral, cognitive, and emotional, are synchronized in this experiment through the use of multiple computers. To establish the subject's baseline, it is useful to use data from a point in the task where there is a pause. Here, at the end of the pause, the EEG cognitive engagement is rising slightly in anticipation of the next task, and arousal is still declining.
Valence data are missing because the participant's eyes are closed during the problem-solving task. As the subject reads the third line, you can see that their arousal has peaked, emotional valence is neutral, and EEG cognitive engagement is at a maximum.
Once mastered, this technique involves about 30 minutes of subject preparation before the experimentation begins. After its development, this technique will pave the way for researchers and practitioners in information technology, but also in the video game industry, to monitor in real time the behavioral, emotional, and cognitive aspects of users. A key step in any triangulation approach is synchronization, and the precision of this procedure should be tested before and after every experiment in order to have accurate timing between all devices.
After watching this video, you should have a good understanding of how to gather synchronized psychophysiological data in order to better understand the dynamic interaction of the behavioral, emotional, and cognitive dimensions of engagement during a learning task.
This paper aims to describe the techniques involved in the collection and synchronization of the multiple dimensions (behavioral, affective and cognitive) of learners’ engagement during a task.
Chapters in This Video
0:05 Title
2:20 Equipment and Participant Setup
3:51 Data Collection
4:36 Pre-processing of EEG, Physiology, and Facial Emotion Data
9:51 Data Integration and Synchronization
11:54 Results: Multiple Dimensions of Engagement Assess Learning During a Problem-solving Task
12:50 Conclusion