
Summary

This paper describes how to implement a battery of behavioral tasks to examine emotion recognition of isolated facial and vocal emotion expressions, and a novel task using commercial television and film clips to assess multimodal emotion recognition that includes contextual cues.

Abstract

The current study presented 60 people with traumatic brain injury (TBI) and 60 controls with isolated facial emotion expressions, isolated vocal emotion expressions, and multimodal (i.e., film clips) stimuli that included contextual cues. All stimuli were presented via computer. Participants were required to indicate how the person in each stimulus was feeling using a forced-choice format. Additionally, for the film clips, participants had to indicate how they felt in response to the stimulus, and the level of intensity with which they experienced that emotion.

Introduction

Traumatic brain injury (TBI) affects approximately 10 million people worldwide each year1. Following TBI, the ability to recognize emotion in others using nonverbal cues such as facial and vocal expressions (i.e., tone of voice) and contextual cues is often significantly compromised2-7. Since successful interpersonal interactions and quality relationships depend at least partially on accurate interpretation of others' emotions7-10, it is not surprising that difficulty with emotion recognition has been reported to contribute to the poor social outcomes commonly reported following TBI3,8,11,12.

Studies investigating emotion recognition by people with TBI have tended to focus on isolated cues, particularly recognition of facial emotion expressions using static photographs2. While this work has been important in informing the development of treatment programs, it does not adequately represent one's ability to recognize and interpret nonverbal cues of emotion in everyday interactions. Not only do static images provide increased time to interpret the emotion portrayed, they also typically only portray the apex (i.e., maximum representation) of the expressions13-16. Additionally, static images lack the temporal cues that occur with movement12,13,15, and are not representative of the quickly changing facial expressions of emotion encountered in everyday situations. Further, there is evidence to indicate that static and dynamic visual expressions are processed in different areas of the brain, with more accurate responses to dynamic stimuli17-24.

Vocal emotion recognition by people with TBI has also been studied, both with and without meaningful verbal content. It has been suggested that the presence of verbal content increases perceptual demands because accurate interpretation requires simultaneous processing of the nonverbal tone of voice with the semantic content included in the verbal message25. While many studies have shown improved recognition of vocal affect in the absence of meaningful verbal content, Dimoska et al.25 found no difference in performance. They argue that people with TBI have a bias toward semantic content in affective sentences; so even when the content is semantically neutral or eliminated, they continue to focus on the verbal information in the sentence. Thus, their results did not clearly show whether meaningful verbal content helps or hinders vocal emotion recognition.

In addition to the meaningful verbal content that can accompany nonverbal vocal cues, context is also provided through the social situation itself. Situational context provides background information about the circumstances under which the emotion expression was produced and can thus greatly influence the interpretation of the expression. When evaluating how someone is feeling, we use context to determine whether the situation is consistent with that person's wants and expectations. This ultimately affects how someone feels, so knowledge of the situational context should result in more accurate inferences about others' emotions26. The ability to do this is referred to as theory of mind, a construct found to be significantly impaired following TBI6,27-31. Milders et al.8,31 report that people with TBI have significant difficulty making accurate judgments about, and understanding, the intentions and feelings of characters in brief stories.

While these studies have highlighted deficits in nonverbal and situational emotion processing, it is difficult to generalize the results to everyday social interactions, where these cues occur alongside one another and within a social context. To better understand how people with TBI interpret emotion in everyday social interactions, we need to use stimuli that are multimodal in nature. Only two previous studies have included multimodal stimuli as part of their investigation into how people with TBI process emotion cues. McDonald and Saunders15 and Williams and Wood16 extracted stimuli from the Emotion Evaluation Test (EET), which is part of The Awareness of Social Inference Test26. The EET consists of videotaped vignettes that show male and female actors portraying nonverbal emotion cues while engaging in conversation. The verbal content in the conversations is semantically neutral; cues regarding how the actors are feeling and contextual information are not provided. Thus, the need to process meaningful verbal content simultaneously with the nonverbal facial and vocal cues was eliminated. Results of these studies indicated that people with TBI were significantly less accurate than controls in identifying emotion from multimodal stimuli. Since neither study considered the influence of semantic or contextual information on interpretation of emotion expressions, it remains unclear whether the addition of this type of information would facilitate processing because of increased intersensory redundancy, or negatively affect perception because of increased cognitive demands.

This article outlines a set of tasks used to compare perception of facial and vocal emotion cues in isolation with perception of these cues occurring simultaneously within meaningful situational context. The isolated-cue tasks are part of a larger test of nonverbal emotion recognition, the Diagnostic Analysis of Nonverbal Accuracy 2 (DANVA2)32. In this protocol, we used the Adult Faces (DANVA-Faces) and Adult Paralanguage (DANVA-Voices) subtests. Each of these subtests includes 24 stimuli: six representations each of happy, sad, angry, and fearful emotions. The DANVA2 requires participants to identify the emotion portrayed using a forced-choice format (4 choices). Here, a fifth option (I don't know) was provided for both subtests. The amount of time each facial expression was shown was increased from two seconds to five seconds to ensure that we were assessing affect recognition rather than speed of processing. We did not alter the Adult Paralanguage subtest beyond adding the additional choice to the forced-choice response format. Participants heard each sentence one time only. Feedback was not provided in either task.

Film clips were used to assess perception of facial and vocal emotion cues occurring simultaneously within situational context. While the verbal content within these clips did not explicitly state what the characters were feeling, it was meaningful to the situational context. Fifteen clips extracted from commercial movies and television series were included in the study, each ranging from 45 to 103 seconds (mean = 71.87 seconds). Feedback was not provided to participants during this task. There were three clips representing each of the following emotion categories: happy, sad, angry, fearful, and neutral. These clips were chosen based on the results of a study conducted with young adults (n = 70) (Zupan, B. & Babbage, D. R. Emotion elicitation stimuli from film clips and narrative text, manuscript submitted). In that study, participants were presented with six film clips for each of the target emotion categories, plus six neutral film clips, ranging from 24 to 127 seconds (mean = 77.3 seconds). While the goal was to include relatively short film clips, each clip chosen for inclusion needed to have sufficient contextual information to be understood by viewers without additional knowledge of the film from which it was taken33.

The clips selected for the current study had been correctly identified as the target emotion in the Zupan and Babbage study cited above by 83 - 89% of participants for happy clips, 86 - 93% for sad, 63 - 93% for angry, 77 - 96% for fearful, and 81 - 87% for neutral. While having the same number of exemplars of each emotion category as in the DANVA2 tasks (n = 6) would have been ideal, we opted for only three due to the increased length of the film clip stimuli compared to stimuli in the DANVA-Faces and DANVA-Voices tasks.

Data from these tasks were collected from 60 participants with TBI and 60 age- and gender-matched controls. Participants were seen either individually or in small groups (max = 3), and the order of the three tasks was randomized across testing sessions.

Protocol

This protocol was approved by the institutional review boards at Brock University and at Carolinas Rehabilitation.

1. Prior to Testing

  1. Create 3 separate lists for the 15 film clips, each listing the clips in a different restricted order.
  2. Ensure that each list begins and ends with a neutral stimulus and that no two clips that target the same emotion occur consecutively.
  3. Create a separate folder on the desktop for each list of film order presentation and label the folder with the Order name (e.g., Order 1).
  4. Save the 15 clips into each of the three folders.
  5. Re-label each film clip so it reflects the presentation number and gives no clues regarding the target emotion (e.g., 01, 02, 03).
  6. Create six randomized orders of task (DANVA-Faces, DANVA-Voices, Film Clips) presentation (see Figure 1).
  7. Assign the incoming participant to one of the three restricted orders of film clips presentation.
  8. Assign the incoming participant to one of the six randomized orders of task presentation.
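The ordering constraints in steps 1.1 - 1.2 and the six task orders in step 1.6 can be generated programmatically. Below is a minimal Python sketch using rejection sampling; the clip labels are hypothetical placeholders, not the actual stimulus file names:

```python
import random
from itertools import permutations

# Hypothetical stimulus list: 3 clips per emotion category (15 total).
CLIPS = [(f"{emotion}_{i}", emotion)
         for emotion in ("happy", "sad", "angry", "fearful", "neutral")
         for i in (1, 2, 3)]

def restricted_order(clips, rng):
    """Reshuffle until the order starts and ends with a neutral clip and
    no two consecutive clips target the same emotion (steps 1.1 - 1.2)."""
    while True:
        order = clips[:]
        rng.shuffle(order)
        if order[0][1] != "neutral" or order[-1][1] != "neutral":
            continue
        if any(a[1] == b[1] for a, b in zip(order, order[1:])):
            continue
        return order

rng = random.Random(2024)  # fixed seed so the three lists are reproducible
clip_orders = [restricted_order(CLIPS, rng) for _ in range(3)]

# Step 1.6: three tasks have exactly 3! = 6 possible presentation orders.
TASKS = ("DANVA-Faces", "DANVA-Voices", "Film Clips")
task_orders = list(permutations(TASKS))
```

Each generated list can then be written into its Order folder under the presentation numbers described in step 1.5.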

2. Day of Testing

  1. Bring participant(s) into the lab and seat them comfortably at the table.
  2. Review the consent form with the participant by reading each section with him/her.
  3. Answer any questions the participant may have about the consent form and have him/her sign it.
  4. Have participants complete the brief demographic questionnaire (date of birth; gender). Have participants with TBI additionally complete the section of the questionnaire on relevant medical history (date of injury, cause of injury, injury severity).
    NOTE: While participants are completing the questionnaire, inquire about any known visual processing difficulties so you may seat participants with TBI accordingly.
  5. Position the participant in front of the computer in a chair. Ensure that all participants (if more than one) have a clear view of the screen.

3. DANVA-Faces Task

  1. Give each participant a clip board with the DANVA-Faces response sheet attached and a pen to circle responses.
  2. Provide the participant the following instructions: "After you see each item, circle your response (happy, sad, angry, fearful, or neutral) on the line that corresponds to the item number shown on each slide."
    NOTE: If participants with TBI have indicated visual processing difficulties or fine motor difficulties are evident, provide the participant with the alternate response page. This page lists the five responses in larger text in landscape format.
  3. Open the DANVA-Faces task presentation file and play it in Slide Show view.
  4. Give participants the following instructions: "For this activity, I am going to show you some peoples' faces and I want you to tell me how they feel. I want you to tell me if they are happy, sad, angry or fearful. Fearful is the same thing as afraid or scared. If you are unsure of the emotion, choose the 'I don't know' response. There are 24 faces altogether. Each face will be on the screen for five seconds. You must answer before the face disappears for your answer to count. Indicate your answer by circling it on the sheet in front of you".
    NOTE: If the participant is unable to circle his/her own responses and is participating in the study as part of a small group, provide the following instruction: "Indicate your answer by pointing to it on the sheet in front of you". The examiner will then record the participant's response on the DANVA-Faces response sheet.
  5. Ensure the participants do not have any questions before starting.
  6. Start the task with the two practice trials. Give participants the following instructions: "We are going to complete two practice trials so that you get a sense of how long each face will appear on the screen, and how long you have to provide your answer".
  7. Hit enter on the keyboard when the face disappears to move to the next stimulus. Ensure all participants are looking up at the screen before doing so.
  8. Ensure that the participants do not have questions. If none, begin the test trials. Ensure all participants are looking at the screen. Hit enter to start the test stimuli.
  9. Continue the procedure outlined for the practice tasks until the 24 stimuli are complete.
  10. Collect the participant response sheets.

4. DANVA-Voices Task

  1. Give each participant a clip board with the DANVA-Voices response sheet attached and a pen to circle responses.
    NOTE: If participants with TBI have indicated visual processing difficulties or fine motor difficulties are evident, provide the participant with the alternate response page. This page lists the five responses in larger text in landscape format.
  2. Open the DANVA-Voices task by going to the following website:
    http://www.psychology.emory.edu/clinical/interpersonal/danva.htm
  3. Click on the link for Adult faces, voices, and postures.
  4. Fill in the login and password by entering any letter for the login and typing EMORYDANVA2 for the password.
  5. Click Continue.
  6. Click on Voices (green circle in the center of the screen).
  7. Test the volume level. Provide the following instructions: "I want to be sure that you can hear the sound on the computer and that it is at a comfortable volume for you. I am going to click 'test sound'. Please tell me if you can hear the sound, and also if you need the sound to be louder or quieter to be comfortable for you."
  8. Click test sound.
  9. Adjust the volume according to the participant's request (increase/decrease).
  10. Click 'test sound' again. Adjust the volume according to the participant's request. Do this until the participant reports it is at a comfortable volume.
  11. Review the task instructions with the participant. Give participants the following instructions: "For this activity, you are going to hear someone say the sentence 'I'm going out of the room now, but I'll be back later.' I want you to tell me if they are happy, sad, angry or fearful. Fearful is the same thing as afraid or scared. If you are unsure of the emotion, choose the 'I don't know' response. There are 24 sentences. Before each sentence is spoken, a number will be announced. You need to listen to the sentence that follows. The sentences will not be repeated so you need to listen carefully. Tell me how the person is feeling by circling your answer on the sheet in front of you. Once you have made your selection, I need to select a response on the computer screen to move to the next item. I will always select the same emotion (fearful). My selection does not indicate the correct answer in any way."
    NOTE: If the participant is unable to circle his/her own responses and is participating in the study as part of a small group, provide the following instruction: "Indicate your answer by pointing to it on the sheet in front of you". The examiner will then record the participant's response on the DANVA-Voices response sheet.
  12. Click 'continue'.
  13. Direct participants to circle (or point to) their answer on their response sheet when the sentence has played.
  14. Click 'fearful'. Click 'next'.
  15. Continue this procedure until the 24 sentences are complete.
  16. Exit the website when you reach the end of the task.
  17. Collect the participants' response sheets.

5. Film Clip Task

  1. Seat participants comfortably in front of the computer. Ensure that the screen can be fully seen by all.
  2. Give each person a clipboard with a copy of the Emotional Film Clip Response sheet.
    NOTE: If participants with TBI have indicated visual processing difficulties or fine motor difficulties are evident, provide the participant with two alternate response pages. The first page lists the five forced-choice responses and the second provides the 0-9 scale. Both response pages are in larger text and in landscape format.
  3. Open the folder on the computer for the assigned order of film clip presentation.
  4. Provide participants with the following directions: "You are going to view a total of 15 film clips. In each film clip, you are going to see a character who looks like they are feeling a certain emotion. They might look 'happy', 'sad', 'angry', 'fearful', or 'neutral'. First, I want you to tell me what emotion the character is showing. Then tell me what emotion you experienced while watching the film clip. Finally, tell me the number that best describes the intensity of the emotion you felt while watching the clip. For example, if the person's face made you feel 'mildly' happy or sad, circle a number ranging from 1 to 3. If you felt 'moderately' happy or sad, circle a number ranging from 4 to 6. If you felt 'very' happy or sad, circle a number ranging from 7 to 9. Only circle 0 if you did not feel any emotion at all. Do you have any questions?"
    NOTE: Only the first question (Tell me what emotion the character is showing) was analyzed for the current study.
  5. Double click 01 to open the first film clip.
  6. Tell the participant which character to focus on while viewing the clip.
  7. Select 'View Full Screen' in the options menu to play the clip.
  8. Direct the participants to respond to line a (What emotion is the main character experiencing?) on their response sheet when the clip has finished playing.
  9. Direct the participants to respond to line b (Choose one emotion that best describes how you felt while watching the clip) on the response sheet once line a has been answered.
  10. Direct the participant to respond to line c (How would you rate the intensity of the emotion you experienced while watching the clip?) once line b has been answered.
  11. Direct the participant to re-direct his/her attention to the computer screen once the responses are complete.
  12. Return to the folder on the computer that contains the film clips in the assigned order for the participant.
  13. Double click 02 to open the second film clip. Repeat Steps 5.5 to 5.12 until the participant has viewed all 15 film clips.
  14. Collect the response sheet from the participant(s).

6. Moving from One Task to Another (Optional Step)

  1. Provide participants the option to take a break between Tasks 2 and 3.

7. Scoring the Tasks

  1. Refer to the answer key on page 25 of the DANVA2 manual32 to score the DANVA-Faces task.
  2. Score items that match the answer key as 1 and incorrect items as 0. 'I don't know' responses are scored as 0.
  3. Calculate a raw score by adding the total number of correct responses. Divide the raw score by 24 (total items) to obtain a percentage accuracy score.
  4. Refer to the answer key on page 27 of the DANVA2 manual32 and repeat steps 7.2 and 7.3 to score the DANVA-Voices task.
  5. Refer to the answer key for assigned order of film clip presentation for the participant. Score a 1 if the participant identified the target emotion when responding to the question about what emotion the character was showing, and 0 if the participant did not select the target emotion (including if the participant said 'I don't know').
  6. Calculate a raw score by adding the number of correct responses. Divide the raw score by 15 (total items) to obtain a percentage accuracy score.
  7. Create a subset score for the Film Clip task that contains only responses to the happy, sad, angry, and fearful film clips by adding correct responses to only these 12 clips. Divide the number of correct responses by 12 to obtain a percentage accuracy score.
  8. Create a score for responses to neutral film clips by adding the number of correct responses to the neutral clips. Divide the number of correct responses by 3 to obtain a percentage accuracy score.
    NOTE: A subset score needs to be created because the DANVA tasks did not include portrayals of Neutral while the Film Clips task did.
  9. Calculate a score for responses to positively valenced items (i.e., happy) and negatively valenced items (i.e., combined angry, fearful, and sad stimuli) for all three tasks.
  10. Divide the score for positively valenced items for each task by the number of positively valenced items in that task (3 for the Film Clip task; 6 for each DANVA task) to obtain a percentage accuracy score.
  11. Divide the score for negatively valenced items for each task by the number of negatively valenced items in that task (9 for the Film Clip task; 18 for each DANVA task) to obtain percentage accuracy scores.
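The arithmetic in Section 7 can be sketched in a few lines of Python. The answer key and participant responses below are invented for illustration only (they are not actual DANVA2 or film clip answers); the example scores the Film Clip task, whose key has 3 clips per emotion:

```python
def pct(correct, total):
    """Percentage accuracy score (steps 7.3 and 7.6 - 7.8)."""
    return 100.0 * correct / total

def score(responses, key):
    """1 for a match with the answer key, 0 otherwise; 'I don't know'
    never matches the key and therefore scores 0 (steps 7.2 and 7.5)."""
    return [int(r == k) for r, k in zip(responses, key)]

# Hypothetical film clip answer key: 3 clips per emotion category.
key = (["happy"] * 3 + ["sad"] * 3 + ["angry"] * 3 +
       ["fearful"] * 3 + ["neutral"] * 3)
responses = key[:]                # a participant who answered all but
responses[4] = "I don't know"     # two items correctly
responses[14] = "sad"

hits = score(responses, key)
overall = pct(sum(hits), 15)                 # step 7.6: all 15 clips
emotional_subset = pct(sum(hits[:12]), 12)   # step 7.7: 12 emotional clips
neutral_subset = pct(sum(hits[12:]), 3)      # step 7.8: 3 neutral clips
positive = pct(sum(h for h, k in zip(hits, key) if k == "happy"), 3)
negative = pct(sum(h for h, k in zip(hits, key)
                   if k in ("sad", "angry", "fearful")), 9)
```

The same `score` and `pct` helpers apply to the DANVA-Faces and DANVA-Voices tasks, with 24 items and the divisors given in Section 7.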

8. Data Analysis

  1. Conduct a mixed-model analysis of variance (ANOVA) to examine responses to isolated facial emotion stimuli, isolated vocal emotion stimuli, and multimodal film clip stimuli by participants with TBI and Control participants.
  2. Explore main effects found in the ANOVA using follow-up univariate comparisons.
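For two groups, the follow-up univariate comparisons in step 8.2 reduce to a one-way ANOVA whose F statistic can be computed directly; a minimal stdlib Python sketch is below. The accuracy values are invented for illustration, and the omnibus group-by-task mixed-model ANOVA itself would typically be run in a statistical package such as the one listed in Materials:

```python
from statistics import mean

def oneway_f(group_a, group_b):
    """F statistic for a two-group one-way ANOVA (equivalent to the
    square of an independent-samples t statistic)."""
    ga, gb = list(group_a), list(group_b)
    grand = mean(ga + gb)
    # Between-groups sum of squares (df = 1 for two groups).
    ss_between = (len(ga) * (mean(ga) - grand) ** 2 +
                  len(gb) * (mean(gb) - grand) ** 2)
    # Within-groups sum of squares (df = n_a + n_b - 2).
    ss_within = (sum((x - mean(ga)) ** 2 for x in ga) +
                 sum((x - mean(gb)) ** 2 for x in gb))
    df_within = len(ga) + len(gb) - 2
    return (ss_between / 1) / (ss_within / df_within)

# Invented percentage accuracy scores on one task, for illustration.
tbi = [62.5, 70.8, 58.3, 66.7, 75.0]
control = [83.3, 91.7, 79.2, 87.5, 95.8]
f_stat = oneway_f(tbi, control)
```

The resulting F is compared against the F distribution with (1, df_within) degrees of freedom to obtain a p value.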

Results

This task battery was used to compare emotion recognition for isolated emotion expressions (i.e., face-only; voice-only) and combined emotion expressions (i.e., face and voice) that occur within a situational context. A total of 60 participants (37 males and 23 females) with moderate to severe TBI between the ages of 21 and 63 years (mean = 40.98) and 60 age-matched Controls (38 males and 22 females; range = 18 to 63; mean = 40.64) completed the three tasks. To partici...

Discussion

The manuscript describes three tasks used to assess the emotion recognition abilities of people with TBI. The goal of the described method was to compare response accuracy for facial and vocal emotion cues in isolation to response accuracy for these cues occurring simultaneously within meaningful situational context. Film clips were included in the current study because their approximation of everyday situations makes them more ecologically valid than isolated representations of emotion expressions. When carrying out this protocol...

Disclosures

The author has nothing to disclose.

Acknowledgements

This work was supported by the Humanities Research Institute at Brock University in St. Catharines, Ontario, Canada and by the Cannon Research Center at Carolinas Rehabilitation in Charlotte, North Carolina, USA.

Materials

Name | Company | Catalog Number | Comments
Diagnostic Analysis of Nonverbal Accuracy-2 | Department of Psychology, Emory University, Atlanta, GA | | DANVA-Faces subtest, DANVA-Voices subtest
Computer | Apple | | iMac Desktop, 27" display
Statistical Analysis Software | SPSS | | University-licensed software for data analysis
Happy Film Clip 1 | Sweet Home Alabama, D&D Films, 2002, Director: A. Tennant | | A man surprises his girlfriend by proposing in a jewelry store
Happy Film Clip 2 | Wedding Crashers, New Line Cinema, 2005, Director: D. Dobkin | | A couple is sitting on the beach and flirting while playing a hand game
Happy Film Clip 3 | Remember the Titans, Jerry Bruckheimer Films, 2000, Director: B. Yakin | | An African American football coach and his family are accepted by their community when the school team wins the football championship
Sad Film Clip 1 | Grey's Anatomy, ABC, 2006, Director: P. Horton | | A father is only able to communicate with his family using his eyes
Sad Film Clip 2 | Armageddon, Touchstone Pictures, 1998, Director: M. Bay | | A daughter is saying goodbye to her father, who is in space on a dangerous mission
Sad Film Clip 3 | Grey's Anatomy, ABC, 2006, Director: M. Tinker | | A woman is heartbroken that her fiance has died
Angry Film Clip 1 | Anne of Green Gables, Canadian Broadcast Corporation, 1985, Director: K. Sullivan | | An older woman speaks openly about a child's physical appearance in front of her
Angry Film Clip 2 | Enough, Columbia Pictures, 2002, Director: M. Apted | | A wife confronts her husband about an affair when she smells another woman's perfume on his clothing
Angry Film Clip 3 | Pretty Woman, Touchstone Pictures, 1990, Director: G. Marshall | | A call girl attempts to purchase clothing in an expensive boutique and is turned away
Fearful Film Clip 1 | Blood Diamond, Warner Bros Pictures, 2006, Director: E. Zwick | | Numerous vehicles carrying militia approach a man and his son while they are out walking
Fearful Film Clip 2 | The Life Before Her Eyes, 2929 Productions, 2007, Director: V. Perelman | | Two teenaged girls are talking in the school bathroom when they hear gunshots that continue to get closer
Fearful Film Clip 3 | The Sixth Sense, Barry Mendel Productions, 1999, Director: M. N. Shyamalan | | A young boy enters the kitchen in the middle of the night and finds his mother behaving very atypically
Neutral Film Clip 1 | The Other Sister, Mandeville Films, 1999, Director: G. Marshall | | A mother and daughter are discussing an art book while sitting in their living room
Neutral Film Clip 2 | The Game, Polygram Filmed Entertainment, 1997, Director: D. Fincher | | One gentleman is explaining the rules of a game to another
Neutral Film Clip 3 | The Notebook, New Line Cinema, 2004, Director: N. Cassavetes | | An older gentleman and his doctor are conversing during a routine check-up

References

  1. Langlois, J., Rutland-Brown, W., Wald, M. The epidemiology and impact of traumatic brain injury: A brief overview. J. Head Trauma Rehabil. 21 (5), 375-378 (2006).
  2. Babbage, D. R., Yim, J., Zupan, B., Neumann, D., Tomita, M. R., Willer, B. Meta-analysis of facial affect recognition difficulties after traumatic brain injury. Neuropsychology. 25 (3), 277-285 (2011).
  3. Neumann, D., Zupan, B., Tomita, M., Willer, B. Training emotional processing in persons with brain injury. J. Head Trauma Rehabil. 24, 313-323 (2009).
  4. Bornhofen, C., McDonald, S. Treating deficits in emotion perception following traumatic brain injury. Neuropsychol Rehabil. 18 (1), 22-44 (2008).
  5. Zupan, B., Neumann, D., Babbage, D. R., Willer, B. The importance of vocal affect to bimodal processing of emotion: implications for individuals with traumatic brain injury. J Commun Disord. 42 (1), 1-17 (2009).
  6. Bibby, H., McDonald, S. Theory of mind after traumatic brain injury. Neuropsychologia. 43 (1), 99-114 (2005).
  7. Ferstl, E. C., Rinck, M., von Cramon, D. Y. Emotional and temporal aspects of situation model processing during text comprehension: an event-related fMRI study. J. Cogn. Neurosci. 17 (5), 724-739 (2005).
  8. Milders, M., Fuchs, S., Crawford, J. R. Neuropsychological impairments and changes in emotional and social behaviour following severe traumatic brain injury. J. Clin. Exp. Neuropsychol. 25 (2), 157-172 (2003).
  9. Spikman, J., Milders, M., Visser-Keizer, A., Westerhof-Evers, H., Herben-Dekker, M., van der Naalt, J. Deficits in facial emotion recognition indicate behavioral changes and impaired self-awareness after moderate to severe traumatic brain injury. PloS One. 8 (6), e65581 (2013).
  10. Zupan, B., Neumann, D. Affect Recognition in Traumatic Brain Injury: Responses to Unimodal and Multimodal Media. J. Head Trauma Rehabil. 29 (4), E1-E12 (2013).
  11. Radice-Neumann, D., Zupan, B., Babbage, D. R., Willer, B. Overview of impaired facial affect recognition in persons with traumatic brain injury. Brain Inj. 21 (8), 807-816 (2007).
  12. McDonald, S. Are You Crying or Laughing? Emotion Recognition Deficits After Severe Traumatic Brain Injury. Brain Impair. 6 (01), 56-67 (2005).
  13. Cunningham, D. W., Wallraven, C. Dynamic information for the recognition of conversational expressions. J. Vis. 9 (13), 1-17 (2009).
  14. Elfenbein, H. A., Marsh, A. A., Ambady, N. Emotional Intelligence and the Recognition of Emotion from Facial Expressions. The Wisdom in Feeling: Psychological Processes in Emotional Intelligence. , 37-59 (2002).
  15. McDonald, S., Saunders, J. C. Differential impairment in recognition of emotion across different media in people with severe traumatic brain injury. J. Int. Neuropsychol. Soc.: JINS. 11 (4), 392-399 (2005).
  16. Williams, C., Wood, R. L. Impairment in the recognition of emotion across different media following traumatic brain injury. J. Clin. Exp. Neuropsychol. 32 (2), 113-122 (2010).
  17. Adolphs, R., Tranel, D., Damasio, A. R. Dissociable neural systems for recognizing emotions. Brain Cogn. 52 (1), 61-69 (2003).
  18. Biele, C., Grabowska, A. Sex differences in perception of emotion intensity in dynamic and static facial expressions. Exp. Brain Res. 171, 1-6 (2006).
  19. Collignon, O., Girard, S., et al. Audio-visual integration of emotion expression. Brain Res. 1242, 126-135 (2008).
  20. LaBar, K. S., Crupain, M. J., Voyvodic, J. T., McCarthy, G. Dynamic perception of facial affect and identity in the human brain. Cereb. Cortex. 13 (10), 1023-1033 (2003).
  21. Mayes, A. K., Pipingas, A., Silberstein, R. B., Johnston, P. Steady state visually evoked potential correlates of static and dynamic emotional face processing. Brain Topogr. 22 (3), 145-157 (2009).
  22. Sato, W., Yoshikawa, S. Enhanced Experience of Emotional Arousal in Response to Dynamic Facial Expressions. J. Nonverbal Behav. 31 (2), 119-135 (2007).
  23. Schulz, J., Pilz, K. S. Natural facial motion enhances cortical responses to faces. Exp. Brain Res. 194 (3), 465-475 (2009).
  24. O'Toole, A., Roark, D., Abdi, H. Recognizing moving faces: A psychological and neural synthesis. Trends Cogn. Sci. 6 (6), 261-266 (2002).
  25. Dimoska, A., McDonald, S., Pell, M. C., Tate, R. L., James, C. M. Recognizing vocal expressions of emotion in patients with social skills deficits following traumatic brain injury. J. Int. Neuropsychol. Soc.: JINS. 16 (2), 369-382 (2010).
  26. McDonald, S., Flanagan, S., Rollins, J., Kinch, J. TASIT: A new clinical tool for assessing social perception after traumatic brain injury. J. Head Trauma Rehab. 18 (3), 219-238 (2003).
  27. Channon, S., Pellijeff, A., Rule, A. Social cognition after head injury: sarcasm and theory of mind. Brain Lang. 93 (2), 123-134 (2005).
  28. Henry, J. D., Phillips, L. H., Crawford, J. R., Theodorou, G., Summers, F. Cognitive and psychosocial correlates of alexithymia following traumatic brain injury. Neuropsychologia. 44 (1), 62-72 (2006).
  29. Martín-Rodríguez, J. F., León-Carrión, J. Theory of mind deficits in patients with acquired brain injury: a quantitative review. Neuropsychologia. 48 (5), 1181-1191 (2010).
  30. McDonald, S., Flanagan, S. Social perception deficits after traumatic brain injury: interaction between emotion recognition, mentalizing ability, and social communication. Neuropsychology. 18 (3), 572-579 (2004).
  31. Milders, M., Ietswaart, M., Crawford, J. R., Currie, D. Impairments in theory of mind shortly after traumatic brain injury and at 1-year follow-up. Neuropsychology. 20 (4), 400-408 (2006).
  32. Nowicki, S. The Manual for the Receptive Tests of the Diagnostic Analysis of Nonverbal Accuracy 2 (DANVA2). (2008).
  33. Gross, J. J., Levenson, R. W. Emotion Elicitation Using Films. Cogn. Emot. 9 (1), 87-108 (1995).
  34. Spell, L. A., Frank, E. Recognition of Nonverbal Communication of Affect Following Traumatic Brain Injury. J. Nonverbal Behav. 24 (4), 285-300 (2000).
  35. Edgar, C., McRorie, M., Sneddon, I. Emotional intelligence, personality and the decoding of non-verbal expressions of emotion. Pers. Ind. Dif. 52, 295-300 (2012).
  36. Nowicki, S., Mitchell, J. Accuracy in Identifying Affect in Child and Adult Faces and Voices and Social Competence in Preschool Children. Genet. Soc. Gen. Psychol. 124 (1), (1998).
  37. Astesano, C., Besson, M., Alter, K. Brain potentials during semantic and prosodic processing in French. Cogn. Brain Res. 18, 172-184 (2004).
  38. Kotz, S. A., Paulmann, S. When emotional prosody and semantics dance cheek to cheek: ERP evidence. Brain Res. 115, 107-118 (2007).
  39. Paulmann, S., Pell, M. D., Kotz, S. How aging affects the recognition of emotional speech. Brain Lang. 104 (3), 262-269 (2008).
  40. Pell, M. D., Jaywant, A., Monetta, L., Kotz, S. A. Emotional speech processing: disentangling the effects of prosody and semantic cues. Cogn. Emot. 25 (5), 834-853 (2011).
  41. Bänziger, T., Scherer, K. R. Using Actor Portrayals to Systematically Study Multimodal Emotion Expression: The GEMEP Corpus. Affective Computing and Intelligent Interaction, LNCS 4738. , 476-487 (2007).
  42. Busselle, R., Bilandzic, H. Fictionality and Perceived Realism in Experiencing Stories: A Model of Narrative Comprehension and Engagement. Commun. Theory. 18 (2), 255-280 (2008).
  43. Zagalo, N., Barker, A., Branco, V. Story reaction structures to emotion detection. Proceedings of the 1st ACM Work. Story Represent. Mech. Context - SRMC. 04, 33-38 (2004).
  44. Hagemann, D., Naumann, E., Maier, S., Becker, G., Lurken, A., Bartussek, D. The Assessment of Affective Reactivity Using Films: Validity, Reliability and Sex Differences. Pers. Ind. Dif. 26, 627-639 (1999).
  45. Hewig, J., Hagemann, D., Seifert, J., Gollwitzer, M., Naumann, E., Bartussek, D. Brief Report. A revised film set for the induction of basic emotions. Cogn. Emot. 19 (7), 1095-1109 (2005).
  46. Wranik, T., Scherer, K. R. Why Do I Get Angry? A Componential Appraisal Approach. International Handbook of Anger. , 243-266 (2010).
  47. Neumann, D., Zupan, B., Malec, J. F., Hammond, F. M. Relationships between alexithymia, affect recognition, and empathy after traumatic brain injury. J. Head Trauma Rehabil. 29 (1), E18-E27 (2013).
  48. Russell, J. A. Is there universal recognition of emotion from facial expression? A review of the cross-cultural studies. Psychol. Bull. 115 (1), 102-141 (1994).

