Here, we present a protocol that manipulates interlocutor visibility to examine its impact on gesture production in interpersonal communication. This protocol is flexible with respect to the tasks implemented, the gestures examined, and the communication modality. It is ideal for populations with communication challenges, such as second language learners and individuals with autism spectrum disorder.
Understanding why speakers modify their co-speech hand gestures when speaking to interlocutors provides valuable insight into how these gestures contribute to interpersonal communication in face-to-face and virtual contexts. The current protocols manipulate the visibility of speakers and their interlocutors in tandem in a face-to-face context to examine the impact of visibility on gesture production when communication is challenging. In these protocols, speakers complete tasks such as teaching words from an unfamiliar second language or recounting the events of cartoon vignettes to an interlocutor who is either another participant or a confederate. When performing these tasks, the speaker and interlocutor are either visible or not visible to one another. In the word learning task, speakers and interlocutors visible to one another produced more representational gestures, which convey meaning via handshape and motion, and deictic (pointing) gestures than speakers and interlocutors who were not visible to one another. In the narrative retelling protocol, adolescents with autism spectrum disorder (ASD) produced more gestures when speaking to visible interlocutors than to non-visible interlocutors. A major strength of the current protocol is its flexibility in terms of the tasks, populations, and gestures examined, and it can be implemented in videoconferencing as well as face-to-face contexts. Thus, the current protocol has the potential to advance the understanding of gesture production by elucidating its role in interpersonal communication in populations with communication challenges.
Co-speech gestures (henceforth, gestures) - meaningful hand movements produced concurrently with speech - contribute to interpersonal communication by conveying information that complements verbal content1. According to the most widely used taxonomy2,3, gestures can be divided into three categories: representational gestures, which convey referents via their form and motion (e.g., flapping the hands back and forth together to convey flying); beat gestures, which convey emphasis via simple punctate movements (e.g., moving the dominant hand downward slightly in conjunction with each word in a stressed phrase); and deictic gestures, which indicate referents by pointing to them.
All participants provided written consent, and all protocols were approved by the Institutional Review Boards at the host institution. The L2 word learning and cartoon retelling protocols were implemented in the studies on which the representative results are based21,33. Although these protocols have been conducted only in in-person contexts to date, a related protocol manipulating the visibility of the interlocutor and the participant independently via videoconferencing is also described.
L2 word learning
Implementation: The results reported below are based on data collected from 52 healthy participants (21 males, 31 females; age: M = 20.15, SD = 1.73, range = 18-28) according to the protocol outlined above. All speech and gestures were coded by a primary coder, who could not be blind to visibility condition because data from both the speaker and interlocutor were recorded using a single camera, as described in step 3.4. To establish inter-rater reliability, a second coder independently coded a subset of the data.
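Although the protocol does not prescribe particular analysis software, inter-rater agreement for categorical gesture coding of this kind is often quantified with Cohen's kappa. Below is a minimal Python sketch of that computation; the labels and data are illustrative placeholders rather than data from the study.

```python
# Minimal sketch: Cohen's kappa for two coders' gesture-category labels.
# The labels below are illustrative placeholders, not study data.
from sklearn.metrics import cohen_kappa_score

# One label per gesture token, aligned so index i refers to the same
# token for both coders.
primary = ["representational", "deictic", "beat", "representational", "beat"]
secondary = ["representational", "deictic", "representational", "representational", "beat"]

kappa = cohen_kappa_score(primary, secondary)
print(f"Cohen's kappa = {kappa:.2f}")  # values near 1 indicate strong agreement
```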
The current protocol manipulates the visibility of the speaker and interlocutor to one another, providing insight into its impact on gesture production under challenging circumstances: L2 word learning and narrative retelling by adolescents with ASD. This protocol can be implemented either in-person or virtually, permitting participant and interlocutor visibility to be manipulated in tandem or independently. It can accommodate a wide variety of experimental tasks, gestures, and populations, providing researchers with the flexibility to address diverse questions about the role of gesture in interpersonal communication.
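To make the visibility manipulation concrete, the sketch below generates counterbalanced condition assignments for both designs: in tandem (both parties visible or both occluded, as in the in-person studies) and independent (speaker and interlocutor visibility fully crossed, as is possible over videoconferencing). The function and its parameters are hypothetical illustrations, not part of the published protocol.

```python
# Illustrative sketch of counterbalanced assignment to visibility conditions.
import itertools
import random

def assign_conditions(n_participants: int, independent: bool = False, seed: int = 0):
    """Return a counterbalanced list of visibility-condition dicts."""
    rng = random.Random(seed)
    if independent:
        # 2 x 2: speaker and interlocutor visibility crossed independently.
        cells = [{"speaker_visible": s, "interlocutor_visible": i}
                 for s, i in itertools.product([True, False], repeat=2)]
    else:
        # In tandem: both visible or both occluded.
        cells = [{"speaker_visible": v, "interlocutor_visible": v}
                 for v in (True, False)]
    # Repeat cells to cover all participants, then shuffle assignment order.
    assignments = (cells * (n_participants // len(cells) + 1))[:n_participants]
    rng.shuffle(assignments)
    return assignments

print(assign_conditions(8, independent=True))
```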
Development and validation of the L2 word learning protocol was supported by a National Defense Science and Engineering Graduate (NDSEG) Fellowship (32 CFR 168a) issued by the US Department of Defense Air Force Office of Scientific Research. Development and validation of the cartoon retelling protocol with adolescents with ASD was supported by a Ruth L. Kirschstein Institutional National Research Service Award (T32) from the National Institute of Mental Health. The author thanks Rachel Fader, Theo Haugen, Andrew Lynn, Ashlie Caputo, and Marco Pilotta for assistance with data collection and coding.
| Name | Company | Catalog Number | Comments |
| --- | --- | --- | --- |
| Computer | Apple | Z131 | 24" iMac, 8-core CPU & GPU, M3 chip |
| Conference USB microphone | Tonor | B07GVGMW59 | |
| ELAN | The Language Archive | | Software application used to transcribe speech and gesture |
| Video Recorder | Vjianger | B07YBCMXJJ | FHD 2.7K 30 FPS 24 MP 16X digital zoom 3" touch screen video recorder with remote control and tripod |
| Wechsler Abbreviated Scale of Intelligence | Pearson | 158981561 | Used to verify full-scale IQ ≥ 80 in Morett et al. (2016) |
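Gesture annotations created in ELAN (listed above) can also be tallied programmatically after coding. The following sketch assumes the third-party pympi-ling package and a tier named "Gesture" holding category labels; the package choice, tier name, and file name are illustrative assumptions, not requirements of the protocol.

```python
# Sketch: counting gesture annotations per category in an ELAN (.eaf) file.
# Assumes the pympi-ling package (pip install pympi-ling) and a "Gesture" tier;
# these are illustrative choices, not specified by the protocol.
from collections import Counter
import pympi

eaf = pympi.Elan.Eaf("participant01_visible.eaf")  # hypothetical file name
# Each annotation is a (start_ms, end_ms, label) tuple.
annotations = eaf.get_annotation_data_for_tier("Gesture")

counts = Counter(label for _, _, label in annotations)
elapsed_min = (max(end for _, end, _ in annotations) -
               min(start for start, _, _ in annotations)) / 60000
for label, n in counts.most_common():
    print(f"{label}: {n} tokens ({n / elapsed_min:.1f} per minute)")
```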