September 27th, 2020
Transcript
Multimodal protocol for assessing metacognition and self-regulation in adults with learning difficulties. Specific learning disabilities encompass different disorders in people who have difficulties learning and applying academic skills, showing performance below expectations in different areas, such as reading, writing, or mathematics.
Each disorder implies different deficits, but we can find some commonalities among them, such as metacognitive, self-regulation, and emotional malfunctioning. However, there are hardly any evaluation protocols for adults with learning disabilities. In response to this, we propose a multimodal assessment protocol focusing on the self-regulation, metacognition, and emotional processes occurring during learning.
The evaluation is carried out through both online and offline methodologies. We use different techniques, among others eye tracking, facial emotion recognition, galvanic skin response, log data analysis, and of course interviews, questionnaires, and self-reports. Soon we want to provide the research community with theoretically driven and empirically based guidelines for an accurate assessment of adults with learning disabilities, in order to promote efficient intervention and prevention actions.
Informed consent. Explain to the participants the ethical and confidentiality aspects of the research, and ask them to acknowledge and sign the individual informed consent.
Structured interview. Explain to the participants how the session will be performed and, following the script, collect the biographical information along with the presence of symptoms related to specific learning disabilities as referred to in the DSM-5. First decision point. Finish the assessment if the participant meets the initial exclusion criteria, that is, if they show a motor disability in the upper limbs, a sensory disability (visual or auditory), a diagnosis of intellectual disability, and/or a serious mental disorder.
Continue the assessment if it seems that the participant has a specific learning disability and does not meet the exclusion criteria. Intellectual ability. Apply the Wechsler Adult Intelligence Scale, fourth edition, to collect information about the participant's intellectual ability, following the instructions of the manual.
Second decision point. Finish the assessment if the participant does not understand the instructions of the test, cannot be evaluated, or has an intelligence quotient below 70. Attention Deficit Hyperactivity Disorder.
Ask the participant to complete the self-reported screening questionnaire for adults of the World Health Organization in order to collect information about the presence of symptoms related to ADHD. If the participant scores 12 or more on this questionnaire, apply the full test. Reading difficulties.
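The screening decision above can be sketched as a minimal rule: the full ADHD test is administered only when the screening total reaches 12. This is a simplified illustration of the cutoff described in the transcript; the actual item count and scoring procedure should follow the questionnaire manual.

```python
# Hypothetical sketch of the ADHD screening decision rule: the cutoff of 12
# comes from the protocol; the item scores and their range are assumptions.
def apply_full_adhd_test(item_scores, cutoff=12):
    """Return True if the participant's screening total reaches the cutoff,
    meaning the full ADHD test should be administered."""
    return sum(item_scores) >= cutoff
```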
Apply the revised screening test for reading difficulties following the instructions of the manual. Autism Spectrum Disorder. Ask the participant to complete the Autism Spectrum Quotient questionnaire, which provides information on the presence of symptoms related to social behavior, social skills, routines, switching, imagination, and number patterns.
Last step of session one. Analyze the results. Analyze each participant's interview, questionnaires, and test results, and decide whether they have significant learning difficulties or not.
Two members of the expert committee, the evaluator and another member of the research team, analyze each participant's learning profile and decide whether he or she is a student with a specific learning disability, ADHD, and/or Autism Spectrum Disorder. No test can replace the experts' judgment. If the participant does not show any symptoms, the protocol is finished.
If the participant shows learning difficulties, go to session two. Prepare the participant. Ask the participants to tie back their hair, clear their neck, remove their glasses, and take out chewing gum if applicable.
Clean the GSR sensors and the participant's fingers with skin-safe alcohol. Galvanic skin response preparation and calibration. Place the finger wristband sensors on the index and ring fingers, with the connectors on the fingertip side.
Ask the participant to rest their hand on the table quietly and try to relax. Open the software on the computer. Make sure that the registration graph is working, so the GSR is recording.
Select the following sequence in the software: run experiment, rate 10 samples per second, duration 10 minutes, and record. It is now ready to record for 10 minutes to establish the baseline.
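Because the later analysis compares each participant's arousal against their own baseline, the 10-minute recording at 10 samples per second can be summarized as a mean and standard deviation. The sketch below assumes a hypothetical column name `gsr_microsiemens`; the actual export format of the GSR software may differ.

```python
# Sketch: summarizing a GSR baseline and flagging arousal peaks against it.
# The column name "gsr_microsiemens" and the row-dict layout are assumptions,
# not the GSR software's actual export format.
import statistics

def baseline_stats(rows):
    """Return mean and standard deviation of a baseline GSR trace."""
    values = [float(r["gsr_microsiemens"]) for r in rows]
    return statistics.mean(values), statistics.stdev(values)

def is_arousal_peak(sample, mean, sd, threshold=2.0):
    """Flag a sample more than `threshold` SDs above the participant's baseline."""
    return sample > mean + threshold * sd
```

Keeping the baseline per participant, rather than pooling, matches the protocol's emphasis on individual analysis.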
Eye tracking and webcam preparation and calibration. Open the software on the computer, check that the two computers are connected to each other, and that the eye-tracking infrared lights are on and ready to capture the movement of the eyes. Adjust the webcam on the computer to the subject's position, and ask the participant to sit facing forward and be as natural as possible.
Click record. Write the registration data of the participant and then press OK to start the calibration process. Ask the participant to press the space bar and follow the points on the screen with their eyes.
Make sure that the participant's eyes, while looking at the screen, are centered before moving on to the next step. The participant's gaze is centered when the movement of their eyes is registered on the side laptop screen with two white circles. Multimodal tracking of the learning session.
Questionnaires and learning session in MetaTutor. Open the software and enter the registration data of the participant. Every action will be recorded during the session in a data log file.
Ask the participant about demographic and academic information. Before clicking continue, explain to the participants that they must follow the instructions that the tool will give them, and that they will only be interacting with the computer during the learning session.
Ask the participant to answer the questionnaires about personality, self-regulation, epistemological beliefs, and previous knowledge of the circulatory system. Show the participant the interface of MetaTutor and its different parts. The content area is where the learning content is displayed throughout the session in text form.
They can navigate through a table of contents at the side of the screen to go to different pages. The overall learning goal is displayed at the top of the screen during the session. The sub-goals they set are displayed at the top, in the middle of the screen, and they can manage or prioritize sub-goals there. A timer located at the top left corner of the screen displays the amount of time remaining in the session. In addition, a list of self-regulatory processes is displayed in a palette on the right-hand side of the screen, and the participant can click on them throughout the session to deploy planning, monitoring, or learning strategies.
Static images relevant to the content pages are displayed beside the text to help learners coordinate the information from different sources. Text entered on the keyboard and the student's interactions with the agents are displayed and recorded in this part of the interface. Four artificial pedagogical agents help students in their learning throughout the session. These agents are Gavin the Guide, Pam the Planner, Mary the Monitor, and Sam the Strategizer. Ask the participant to click start the learning session whenever they are ready. The participant then interacts with the tool.
Once the session is finished, ask the participant to complete the questionnaires again. Log off. At the end of the session, save the recorded data of the GSR, eye tracking, webcam, and MetaTutor with the registration number of the participant. Extract the data as CSV files for easier use.
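One way to organize this export is to write one CSV per data stream, named with the participant's registration number, so the four modalities can later be matched per participant. The file naming and row format below are assumptions for illustration, not the actual exports of the recording tools.

```python
# Sketch: writing each session data stream to a CSV file keyed by the
# participant's registration number. Naming scheme and row layout are
# assumptions, not the tools' real export format.
import csv
from pathlib import Path

def save_session_csvs(registration_number, streams, out_dir="."):
    """streams maps a modality name ("gsr", "eyetracking", "webcam",
    "metatutor") to a list of row dicts; writes one CSV per modality
    and returns the created file paths."""
    paths = []
    for modality, rows in streams.items():
        path = Path(out_dir) / f"{registration_number}_{modality}.csv"
        with open(path, "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=rows[0].keys())
            writer.writeheader()
            writer.writerows(rows)
        paths.append(path)
    return paths
```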
Analysis of learning difficulties. Analyze each participant's learning performance based on the different reports obtained, resulting in a multimodal profile. Remember, no report can replace the experts' judgment.
This section illustrates some representative examples of results that can be obtained from the measures of session two. Firstly, we obtain a measure of GSR as an indication of emotional arousal during the learning session. Arousal graphs need individual analysis by the expert committee, taking into account each participant's specific baseline.
Secondly, we obtain the predominant emotions during the learning process thanks to the facial emotion recognition software. The results indicate the degree of coincidence with the analyzed emotions, assigning values between zero and one to each of them. Thirdly, we extract eye-tracking gaze information in terms of proportion of fixation time and pattern of fixations.
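Given per-frame emotion scores in [0, 1], the predominant emotion over a session can be derived by taking the winning emotion in each frame and counting which wins most often. The emotion labels and dict format below are assumptions about the facial emotion recognition export, not its actual schema.

```python
# Sketch: deriving the session's predominant emotion from per-frame scores
# in [0, 1]. Labels ("neutral", "joy", ...) are illustrative assumptions.
from collections import Counter

def predominant_emotion(frames):
    """frames: list of dicts mapping emotion name -> score in [0, 1].
    Returns the emotion that wins the most frames over the session."""
    winners = Counter(max(frame, key=frame.get) for frame in frames)
    return winners.most_common(1)[0][0]
```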
For that purpose, we define seven areas of interest for self-regulation assessment in the MetaTutor interface. For instance, image A shows a participant spending too much time in the text content area, which could indicate a malfunction in regulation. Conversely, image B shows a participant who uses the learning tool resources in a balanced way.
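The proportion-of-fixation-time measure described above can be computed by summing fixation durations per area of interest and dividing by the total. The fixation-event format and AOI labels below are assumptions; the real eye tracker's export will have its own schema.

```python
# Sketch: proportion of fixation time per area of interest (AOI).
# Fixation events are assumed as (aoi_name, duration_ms) pairs; the AOI
# names used in tests are illustrative, not the protocol's exact labels.
def fixation_proportions(fixations):
    """fixations: list of (aoi_name, duration_ms) pairs.
    Returns a dict mapping each AOI to its share of total fixation time."""
    total = sum(duration for _, duration in fixations)
    proportions = {}
    for aoi, duration in fixations:
        proportions[aoi] = proportions.get(aoi, 0.0) + duration / total
    return proportions
```

A strongly skewed distribution, e.g. most fixation time in the content area, is the kind of pattern image A illustrates.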
Fourthly, questionnaires are scored according to the authors' instructions. In this image, for example, we can observe the contrasts between the results obtained by different participants in self-esteem and emotional regulation. Finally, all the learner's interactions with the content, the agents, and the learning environment are recorded in logs for further detailed analysis.
Previous experience allows many adults to compensate for their deficits and show individual characteristics on testing. Therefore, it is difficult to provide accurate cut-off points for some of the data sources, such as GSR or log data, in the target population. As a result, the judgment of the experts in interpreting the results as a whole is still not replaceable.
The current work proposes a multimodal evaluation protocol focused on the metacognitive, self-regulatory, and emotional processes that make up the basis of the difficulties of adults with LDs.
Chapters in this video
0:00 Introduction
1:45 Protocol
10:19 Results
12:04 Conclusion