Method Article
This paper describes how to implement a battery of behavioral tasks to examine emotion recognition of isolated facial and isolated vocal emotion expressions, as well as a novel task that uses commercial television and film clips to assess multimodal emotion recognition that includes contextual cues.
The current study presented 60 people with traumatic brain injury (TBI) and 60 controls with isolated facial emotion expressions, isolated vocal emotion expressions, and multimodal (i.e., film clips) stimuli that included contextual cues. All stimuli were presented via computer. Participants were required to indicate how the person in each stimulus was feeling using a forced-choice format. Additionally, for the film clips, participants had to indicate how they felt in response to the stimulus, and the level of intensity with which they experienced that emotion.
Traumatic brain injury (TBI) affects approximately 10 million people each year across the world1. Following TBI, the ability to recognize emotion in others using nonverbal cues such as facial and vocal expressions (i.e., tone of voice) and contextual cues is often significantly compromised2-7. Since successful interpersonal interactions and quality relationships are at least partially dependent on accurate interpretation of others' emotions7-10, it is not surprising that difficulty with emotion recognition has been reported to contribute to the poor social outcomes commonly reported following TBI3,8,11,12.
Studies investigating emotion recognition by people with TBI have tended to focus on isolated cues, particularly recognition of facial emotion expressions using static photographs2. While this work has been important in informing the development of treatment programs, it does not adequately represent one's ability to recognize and interpret nonverbal cues of emotion in everyday interactions. Not only do static images provide increased time to interpret the emotion portrayed, they also typically only portray the apex (i.e., maximum representation) of the expressions13-16. Additionally, static images lack the temporal cues that occur with movement12,13,15, and are not representative of the quickly changing facial expressions of emotion encountered in everyday situations. Further, there is evidence to indicate that static and dynamic visual expressions are processed in different areas of the brain, with more accurate responses to dynamic stimuli17-24.
Vocal emotion recognition by people with TBI has also been studied, both with and without meaningful verbal content. It has been suggested that the presence of verbal content increases perceptual demands because accurate interpretation requires simultaneous processing of the nonverbal tone of voice and the semantic content of the verbal message25. While many studies have shown improved recognition of vocal affect in the absence of meaningful verbal content, Dimoska et al.25 found no difference in performance. They argue that people with TBI have a bias toward semantic content in affective sentences, so even when the content is semantically neutral or eliminated, they continue to focus on the verbal information in the sentence. Thus, their results did not clearly show whether meaningful verbal content helps or hinders vocal emotion recognition.
In addition to the meaningful verbal content that can accompany nonverbal vocal cues, context is also provided through the social situation itself. Situational context supplies background information about the circumstances under which an emotion expression was produced and can thus greatly influence the interpretation of the expression. When evaluating how someone is feeling, we use context to determine whether the situation is consistent with that person's wants and expectations. Because this ultimately affects how someone feels, knowledge of the situational context should support more accurate inferences about others' emotions26. The ability to do this is referred to as theory of mind, a construct found to be significantly impaired following TBI6,27-31. Milders et al.8,31 report that people with TBI have significant difficulty making accurate judgments about, and understanding, the intentions and feelings of characters in brief stories.
While these studies have highlighted deficits in nonverbal and situational emotion processing, it is difficult to generalize the results to everyday social interactions, where these cues occur alongside one another and within a social context. To better understand how people with TBI interpret emotion in everyday social interactions, we need to use stimuli that are multimodal in nature. Only two previous studies have included multimodal stimuli as part of their investigation into how people with TBI process emotion cues. McDonald and Saunders15 and Williams and Wood16 extracted stimuli from the Emotion Evaluation Test (EET), which is part of The Awareness of Social Inference Test26. The EET consists of videotaped vignettes that show male and female actors portraying nonverbal emotion cues while engaging in conversation. The verbal content in the conversations is semantically neutral; cues regarding how the actors are feeling and contextual information are not provided. Thus, the need to process and integrate meaningful verbal content simultaneously with the nonverbal facial and vocal cues was eliminated. Results of these studies indicated that people with TBI were significantly less accurate than controls in their ability to identify emotion from multimodal stimuli. Since neither study considered the influence of semantic or contextual information on interpretation of emotion expressions, it remains unclear whether the addition of this type of information would facilitate processing because of increased intersensory redundancy, or negatively affect perception because of increased cognitive demands.
This article outlines a set of tasks used to compare perception of facial and vocal emotion cues in isolation with perception of these cues occurring simultaneously within a meaningful situational context. The isolated-cue tasks are part of a larger test of nonverbal emotion recognition, the Diagnostic Analysis of Nonverbal Accuracy 2 (DANVA2)32. In this protocol, we used the Adult Faces (DANVA-Faces) and Adult Paralanguage (DANVA-Voices) subtests. Each of these subtests includes 24 stimuli, depicting six representations each of happy, sad, angry, and fearful emotions. The DANVA2 requires participants to identify the emotion portrayed using a forced-choice format (4 choices); here, a fifth option (I don't know) was provided for both subtests. The amount of time each facial expression was shown was increased from two seconds to five seconds to ensure that we were assessing affect recognition rather than speed of processing. We did not alter the Adult Paralanguage subtest beyond adding the additional choice to the forced-choice response format. Participants heard each sentence one time only. Feedback was not provided in either task.
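To make the trial structure concrete, the following is a minimal Python sketch of the forced-choice logic described above. The stimulus file names, the commented-out presentation call, and the console-based response collection are all hypothetical illustrations; the DANVA2 stimuli are proprietary and were presented via computer as described in the protocol.

```python
# Minimal sketch of the DANVA-Faces forced-choice trial logic.
# File names and present_stimulus() are hypothetical placeholders.
import random

EMOTIONS = ["happy", "sad", "angry", "fearful"]
RESPONSE_OPTIONS = EMOTIONS + ["I don't know"]  # fifth option added in this protocol
DISPLAY_SECONDS = 5  # increased from the standard 2 s to 5 s

def build_stimulus_list():
    """24 stimuli: six representations of each of the four emotions, shuffled."""
    stimuli = [{"file": f"{emo}_{i}.jpg", "target": emo}  # hypothetical file names
               for emo in EMOTIONS for i in range(1, 7)]
    random.shuffle(stimuli)
    return stimuli

def run_trial(stimulus):
    """Show one face for DISPLAY_SECONDS, then collect a forced-choice response."""
    # present_stimulus(stimulus["file"], DISPLAY_SECONDS)  # display call is platform-specific
    for idx, option in enumerate(RESPONSE_OPTIONS, start=1):
        print(f"{idx}. {option}")
    choice = int(input("How is this person feeling? (enter a number) ")) - 1
    return RESPONSE_OPTIONS[choice]

def run_danva_faces():
    responses = []
    for stimulus in build_stimulus_list():
        response = run_trial(stimulus)
        # No feedback is given; the response is simply recorded.
        responses.append({"target": stimulus["target"], "response": response})
    return responses
```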
Film clips were used to assess perception of facial and vocal emotion cues occurring simultaneously within situational context. While the verbal content within these clips did not explicitly state what the characters were feeling, it was meaningful to the situational context. Fifteen clips extracted from commercial movies and television series were included in the study, each ranging from 45 to 103 seconds in length (mean = 71.87 seconds). Feedback was not provided to participants during this task. Three clips represented each of the following emotion categories: happy, sad, angry, fearful, and neutral. These clips were chosen based on the results of a study conducted with young adults (n = 70; Zupan, B. & Babbage, D. R., Emotion elicitation stimuli from film clips and narrative text, manuscript submitted to J Soc Psyc). In that study, participants were presented with six film clips for each of the target emotion categories, plus six neutral film clips, ranging from 24 to 127 seconds (mean = 77.3 seconds). While the goal was to include relatively short film clips, each clip chosen for inclusion needed to contain sufficient contextual information to be understood by viewers without additional knowledge of the film from which it was taken33.
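The film clip trial structure can be sketched in the same illustrative style. The clip file names, the playback call, and the 1-9 intensity scale below are placeholders rather than values specified in the study (the actual clips are listed in the materials table at the end of this article).

```python
# Illustrative sketch of one film clip trial: the participant reports the
# character's emotion, their own felt emotion, and the felt intensity.
FILM_EMOTIONS = ["happy", "sad", "angry", "fearful", "neutral"]

# Fifteen clips: three per emotion category (file names are hypothetical).
clips = [{"file": f"{emo}_clip_{i}.mov", "target": emo}
         for emo in FILM_EMOTIONS for i in range(1, 4)]

def run_clip_trial(clip):
    """Play one clip, then collect perceived emotion, felt emotion, and intensity."""
    # play_clip(clip["file"])  # playback is handled by platform-specific software
    perceived = input(f"How was the person in the clip feeling? {FILM_EMOTIONS}: ")
    felt = input(f"How did you feel in response to the clip? {FILM_EMOTIONS}: ")
    intensity = int(input("How intensely did you feel that emotion (1-9)? "))
    # No feedback is given; responses are simply recorded.
    return {"target": clip["target"], "perceived": perceived,
            "felt": felt, "intensity": intensity}
```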
In the Zupan and Babbage study cited above, the clips selected for the current study had been correctly identified as the target emotion by 83 - 89% of participants for happy, 86 - 93% for sad, 63 - 93% for angry, 77 - 96% for fearful, and 81 - 87% for neutral. While it would have been ideal to include the same number of exemplars per emotion category as in the DANVA2 tasks (n = 6), we opted for three because the film clip stimuli were considerably longer than the stimuli in the DANVA-Faces and DANVA-Voices tasks.
Data from these tasks were collected from 60 participants with TBI and 60 age- and gender-matched controls. Participants were seen either individually or in small groups (maximum = 3), and the order of the three tasks was randomized across testing sessions, as sketched below.
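Task order randomization of this kind takes only a few lines; the following is a minimal Python sketch (the session loop and printout are illustrative only, not part of the study's procedures).

```python
# Randomize the order of the three tasks independently for each session.
import random

TASKS = ["DANVA-Faces", "DANVA-Voices", "Film Clips"]

def task_order_for_session():
    """Return an independent random ordering of the three tasks."""
    order = TASKS[:]        # copy so the master list is not modified
    random.shuffle(order)   # uniformly random permutation
    return order

# Example: print a randomized order for each of three hypothetical sessions.
for session in range(1, 4):
    print(f"Session {session}: {task_order_for_session()}")
```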
This protocol was approved by the institutional review boards at Brock University and at Carolinas Rehabilitation.
1. Prior to Testing
2. Day of Testing
3. DANVA-Faces Task
4. DANVA-Voices Task
5. Film Clip Task
6. Moving from One Task to Another (Optional Step)
7. Scoring the Tasks
8. Data Analysis
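The full scoring and analysis procedures are detailed in steps 7 and 8 above. As an illustration only, the sketch below computes proportion correct per emotion category from response records in the hypothetical format used in the earlier sketches; the study's actual analyses were run in SPSS (see the materials table).

```python
# Illustrative scoring sketch: proportion correct per target emotion category.
from collections import defaultdict

def score_by_emotion(responses):
    """responses: list of {"target": emotion, "response": emotion} records,
    as produced by the hypothetical trial sketches above."""
    totals, correct = defaultdict(int), defaultdict(int)
    for r in responses:
        totals[r["target"]] += 1
        if r["response"] == r["target"]:
            correct[r["target"]] += 1
    return {emo: correct[emo] / totals[emo] for emo in totals}

# Example usage with two mock responses:
example = [{"target": "happy", "response": "happy"},
           {"target": "sad", "response": "I don't know"}]
print(score_by_emotion(example))  # -> {'happy': 1.0, 'sad': 0.0}
```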
This task battery was used to compare emotion recognition for isolated emotion expressions (i.e., face-only; voice-only) and combined emotion expressions (i.e., face and voice) occurring within a situational context. A total of 60 participants (37 males and 23 females) with moderate to severe TBI, aged 21 to 63 years (mean = 40.98), and 60 age-matched controls (38 males and 22 females; range = 18 to 63; mean = 40.64) completed the three tasks. To partici...
The manuscript describes three tasks used to assess the emotion recognition abilities of people with TBI. The goal of the described method was to compare response accuracy for facial and vocal emotion cues in isolation with response accuracy for these cues occurring simultaneously within a meaningful situational context. Film clips were included in the current study because their approximation of everyday situations makes them more ecologically valid than isolated representations of emotion expressions. When carrying out this protocol...
The author has nothing to disclose.
This work was supported by the Humanities Research Institute at Brock University in St. Catharines, Ontario, Canada and by the Cannon Research Center at Carolinas Rehabilitation in Charlotte, North Carolina, USA.
| Name | Company | Catalog Number | Comments |
|---|---|---|---|
| Diagnostic Analysis of Nonverbal Accuracy 2 | Department of Psychology, Emory University, Atlanta, GA | | DANVA-Faces subtest, DANVA-Voices subtest |
| Computer | Apple iMac Desktop, 27" display | | |
| Statistical Analysis Software | SPSS | | University licensed software for data analysis |
| Happy Film Clip 1 | Sweet Home Alabama, D&D Films, 2002, Director: A. Tennant | | A man surprises his girlfriend by proposing in a jewelry store |
| Happy Film Clip 2 | Wedding Crashers, New Line Cinema, 2005, Director: D. Dobkin | | A couple is sitting on the beach and flirting while playing a hand game |
| Happy Film Clip 3 | Remember the Titans, Jerry Bruckheimer Films, 2000, Director: B. Yakin | | An African American football coach and his family are accepted by their community when the school team wins the football championship |
| Sad Film Clip 1 | Grey's Anatomy, ABC, 2006, Director: P. Horton | | A father is only able to communicate with his family using his eyes |
| Sad Film Clip 2 | Armageddon, Touchstone Pictures, 1998, Director: M. Bay | | A daughter says goodbye to her father, who is in space on a dangerous mission |
| Sad Film Clip 3 | Grey's Anatomy, ABC, 2006, Director: M. Tinker | | A woman is heartbroken that her fiancé has died |
| Angry Film Clip 1 | Anne of Green Gables, Canadian Broadcasting Corporation, 1985, Director: K. Sullivan | | An older woman speaks openly about a child's physical appearance in front of her |
| Angry Film Clip 2 | Enough, Columbia Pictures, 2002, Director: M. Apted | | A wife confronts her husband about an affair when she smells another woman's perfume on his clothing |
| Angry Film Clip 3 | Pretty Woman, Touchstone Pictures, 1990, Director: G. Marshall | | A call girl attempts to purchase clothing in an expensive boutique and is turned away |
| Fearful Film Clip 1 | Blood Diamond, Warner Bros. Pictures, 2006, Director: E. Zwick | | Numerous vehicles carrying militia approach a man and his son while they are out walking |
| Fearful Film Clip 2 | The Life Before Her Eyes, 2929 Productions, 2007, Director: V. Perelman | | Two teenaged girls are talking in the school bathroom when they hear gunshots that continue to get closer |
| Fearful Film Clip 3 | The Sixth Sense, Barry Mendel Productions, 1999, Director: M. N. Shyamalan | | A young boy enters the kitchen in the middle of the night and finds his mother behaving very atypically |
| Neutral Film Clip 1 | The Other Sister, Mandeville Films, 1999, Director: G. Marshall | | A mother and daughter are discussing an art book while sitting in their living room |
| Neutral Film Clip 2 | The Game, Polygram Filmed Entertainment, 1997, Director: D. Fincher | | One gentleman is explaining the rules of a game to another |
| Neutral Film Clip 3 | The Notebook, New Line Cinema, 2004, Director: N. Cassavetes | | An older gentleman and his doctor are conversing during a routine check-up |