GEstures and Head Movements in language (GEHM)

The GEHM network will support cooperation among eight leading research groups working in the area of gesture and language, and thereby foster new theoretical insights into the way hand gestures and head movements interact with speech in face-to-face multimodal communication.

The network focuses on three research strands:

  1. language-specific characteristics of gesture-speech interaction
  2. multimodal prominence
  3. multimodal behaviour modelling


  1. The first research strand, on language-specific characteristics of gesture-speech interaction, will work towards a theory that accounts for how speakers’ ability to process and produce gesture and speech is shaped by their language profile. Speech-gesture profiles of monolingual and bilingual speakers will be established by combining audio and video recordings with sensor output from motion capture. These rich multimodal data will provide fine-grained information about cross-linguistic differences in native and non-native speech-gesture coordination.
        
  2. The second research strand, on multimodal prominence, investigates the theoretical question of how linguistic prominence is expressed through combinations of kinematic and prosodic features. It is not yet well understood how gestures and pitch accents combine to create different types of multimodal prominence, nor how visual prominence cues specifically are used in spoken communication. This strand will create and analyse new datasets in order to arrive at a fine-grained and well-documented theory of multimodal prominence.
         
  3. The third research strand, on modelling multimodal behaviour, aims at conceptual and statistical modelling of multimodal contributions, with particular regard to head movements and gaze. This strand will develop models of multimodal behaviour based on the datasets created in the first two strands, and will also take advantage of existing corpora, including interaction data in which eye gaze has been tracked.


2023

Agirrezabal, M., Paggio, P., Navarretta, C., Jongejan, B. (2023). Multimodal Detection and Classification of Head Movements in Face-to-Face Conversations: Exploring Models, Features and Their Interaction. In Proceedings of Gesture and Speech in Interaction (GESPIN 2023). Max Planck Institute for Psycholinguistics, Nijmegen, 13-15 September 2023, 6 pages.

Ambrazaitis, G., & House, D. (2023, forthcoming). The multimodal nature of prominence: some directions for the study of the relation between gestures and pitch accents. Proceedings of the 13th International Conference of Nordic Prosody.  

Arbona, E., Seeber, K., & Gullberg, M. (2023). Semantically related gestures facilitate language comprehension during simultaneous interpreting. Bilingualism: Language and Cognition, 26(2), 425-439. doi: 10.1017/S136672892200058X. E-pub Oct. 2022.

Arbona, E., Seeber, K., & Gullberg, M. (2023). The role of manual gestures in second language comprehension: A simultaneous interpreting experiment. Frontiers in Psychology, 14, 1188628. doi: 10.3389/fpsyg.2023.1188628.

Gullberg, M. (in press). Gesture and second/foreign language acquisition. In A. Cienki (ed.), Cambridge handbook of gesture studies (pp. 398-422). Cambridge University Press.

Gullberg, M. (2023). Gesture analysis in second language acquisition. In Chapelle, C. (Ed.), The Encyclopedia of Applied Linguistics (2nd ed.). New York: Wiley-Blackwell. doi: 10.1002/9781405198431.wbeal0455.pub2.

Hofweber, J., Aumonier, L., Janke, V., Gullberg, M., & Marshall, C. (2023). Does visual motivation aid the learning of signs at first exposure? Language Learning, 73(S1), 33-63. doi: 10.1111/lang.12587. E-pub July 18, 2023.

Mesh, K., Cruz, E., & Gullberg, M. (2023). When attentional and politeness demands clash: The case of mutual gaze and chin pointing in Quiahije Chatino. Journal of Nonverbal Behavior, 47(2), 211-243. doi: 10.1007/s10919-022-00423-4. E-pub Feb. 2023.

Murali, Y. P. K., Vogel, C., & Ahmad, K. (2023, March). Head Orientation of Public Speakers: Variation with Emotion, Profession and Age. In Future of Information and Communication Conference (pp. 79-95). Cham: Springer Nature Switzerland.

Nabrotzky, J., Ambrazaitis, G., Zellers, M., & House, D. (2023a). Temporal alignment of manual gestures’ phase transitions with lexical and post-lexical accentual F0 peaks in spontaneous Swedish interaction. In Proceedings of Gesture and Speech in Interaction (GESPIN 2023).

Nabrotzky, J., Ambrazaitis, G., Zellers, M., & House, D. (2023b). Can segmental or syllabic durations be predicted by the presence of co-speech gestures? In Proceedings of the 20th International Congress of Phonetic Sciences, 4185-4189. ISBN: 978-80-908114-2-3.

Paggio, P., Vella, A., Mitterer, H., & Attard, G. (under review). Do Hand Gestures Increase Prominence in Naturally Produced Utterances? Submitted to a special issue of the journal Language and Cognition on multimodal prosody.

Vogel, C., Koutsombogera, M., & Reverdy, J. (2023). Aspects of Dynamics in Dialogue. Electronics, 12 (10), 2210. Special Issue: Virtual Reality, Augmented Reality and the Metaverse for Enhanced Human Cognitive Capabilities. https://doi.org/10.3390/electronics12102210 

Vogel, C., Koutsombogera, M., Murat, A. C., Khosrobeigi, Z., & Ma, X. (2023). Gestural linguistic context vectors encode gesture meaning. In W. Pouw, J. Trujillo, H. R. Bosker, L. Drijvers, M. Hoetjes, J. Holler, et al. (Eds.), Gesture and Speech in Interaction (GeSpIn) Conference. http://dx.doi.org/10.17617/2.3527176  

Vogel, C., & Khurshid, A. (2023). Agreement and disagreement between major emotion recognition systems. Knowledge-Based Systems, 276 (Sep 2023). https://doi.org/10.1016/j.knosys.2023.110759

2022

Ambrazaitis, G., & House, D. (2022). Probing effects of lexical prosody on speech-gesture integration in prominence production by Swedish news presenters. Laboratory Phonology, 13(1). doi: https://doi.org/10.16995/labphon.6430

Ambrazaitis, G., Frid, J., & House, D. (2022). Auditory vs. audiovisual prominence ratings of speech involving spontaneously produced head movements. In Proceedings of Speech Prosody 2022, Lisbon, Portugal, 352-356.

Berger, S., & Zellers, M. (2022). Multimodal prominence marking in semi-spontaneous YouTube monologs: The interaction of intonation and eyebrow movements. Frontiers in Communication, 7, 903015.

Debreslioska, S., & Gullberg, M. (2022). Information status predicts the incidence of gesture in discourse - an experimental study. Discourse Processes, 59(10), 791-827. doi: 10.1080/0163853X.2022.2085476.

Gullberg, M. (2022a). Bimodal convergence – how languages interact in multicompetent language users’ speech and gestures. In A. Morgenstern & S. Goldin-Meadow (Eds.), Gesture in language: Development across the lifespan (pp. 318-333). Mouton.

Gullberg, M. (2022b). Studying multimodal processing: The integration of speech and gestures. In A. Godfroid & H. Hopp (eds.), The Routledge handbook of second language acquisition and psycholinguistics (pp. 137-149). Routledge. doi: 10.4324/9781003018872-14.

Gullberg, M. (2022c). The relationship between gestures and speaking in L2 learning. In T. Derwing, M. Munro, & R. Thomson (eds.), The Routledge handbook on second language acquisition and speaking (pp. 386-398). Routledge.

Gullberg, M. (2022d). Why the SLA of sign languages matters to general SLA research. Language, Interaction, Acquisition, 13(2), 231–253. doi: 10.1075/lia.22022.gul.

Hofweber, J., Aumônier, L., Janke, V., Gullberg, M., & Marshall, C. (2022). Breaking into language in a new modality: the role of input and of individual differences in recognising signs. Frontiers in Psychology, 13, 895880. doi: 10.3389/fpsyg.2022.895880.

Khosrobeigi, Z., Koutsombogera, M., & Vogel, C. (2022). Gesture and Part-of-Speech Alignment in Dialogue. In E. Gregoromichelaki, J. Hough & J. D. Kelleher (eds.), Proceedings of the 26th Workshop on the Semantics and Pragmatics of Dialogue, August 22-24 2022, pp. 172-182.

Koutsombogera, M. & Vogel, C. (2022). Understanding Laughter in Dialog. Cognitive Computation, 14, 1405–1420, https://doi.org/10.1007/s12559-022-10013-7.

Murat, A., Koutsombogera, M., & Vogel, C. (2022). Mutual Gaze and Linguistic Repetition in a Multimodal Corpus. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 2771-2780, Marseille, France. European Language Resources Association.

Paggio, P., Gatt, A., & Tanti, M. (Eds.) (2022). Proceedings of LREC2022 Workshop "People in language, vision and the mind" (P-VLAM2022). 20 June 2022, Marseille, France.

2021

Ahmad, K., Wang, S., Vogel, C., Jain, P., O’Neill, O., Sufi, B.H. (2022). Comparing the Performance of Facial Emotion Recognition Systems on Real-Life Videos: Gender, Ethnicity and Age. In: Arai, K. (eds) Proceedings of the Future Technologies Conference (FTC) 2021, Volume 1. FTC 2021. Lecture Notes in Networks and Systems, vol 358. Springer, Cham. https://doi.org/10.1007/978-3-030-89906-6_14 

Gullberg, M. (2021). Bimodal convergence: How languages interact in multicompetent language users’ speech and gestures. In A. Morgenstern & S. Goldin-Meadow (Eds.), Gesture in language: Development across the lifespan (pp. 318-333). Berlin: Mouton de Gruyter.

Mesh, K., Cruz, E., van de Weijer, J., Burenhult, N., & Gullberg, M. (2021). Effects of scale on multimodal deixis: Evidence from Quiahije Chatino. Frontiers in Psychology, 11(584231). doi:10.3389/fpsyg.2020.584231

Paggio, P., Navarretta, C., Jongejan, B., & Aguirrezabal Zabaleta, M. (2021). Towards a Methodology Supporting Semiautomatic Annotation of Head Movements in Video-recorded Conversations. In Proceedings of The Joint 15th Linguistic Annotation Workshop (LAW) and 3rd Designing Meaning Representations (DMR) Workshop (pp. 151-159). Association for Computational Linguistics.

Svensson Lundmark, M., Frid, J., Ambrazaitis, G., & Schötz, S. (2021). Word-initial consonant–vowel coordination in a lexical pitch-accent language. Phonetica, 78(5-6), 515-569. doi: https://doi.org/10.1515/phon-2021-2014  

2020

Ambrazaitis, G., Frid, J. and House, D. (2020). Word prominence ratings in Swedish television news readings: effects of pitch accents and head movements. In Proceedings of Speech Prosody 2020, online https://sp2020.jpn.org/

Ambrazaitis, G., Zellers, M., House, D. (2020) Compounds in interaction: patterns of synchronization between manual gestures and lexically stressed syllables in spontaneous Swedish. In: Proceedings of Gesture and Speech in Interaction (GESPIN2020).

Debreslioska, S., & Gullberg, M. (2020a). The semantic content of gestures varies with definiteness, information status and clause structure. Journal of Pragmatics, 168, 36-52. doi:10.1016/j.pragma.2020.06.005

Debreslioska, S., & Gullberg, M. (2020b). What’s new? Gestures accompany inferable rather than brand-new referents in discourse. Frontiers in Psychology (Gesture-speech integration: Combining gesture and speech to create understanding), 11, 1935. doi:10.3389/fpsyg.2020.01935

McLaren, L., Koutsombogera, M. and Vogel, C. (2020) A Heuristic Method for Automatic Gaze Detection in Constrained Multi-Modal Dialogue Corpora. In Proceedings of the 11th IEEE International Conference on Cognitive Infocommunications (CogInfoCom), Mariehamn, Finland, 2020, pp. 55-60, doi: 10.1109/CogInfoCom50765.2020.9237883. 

McLaren, L., Koutsombogera, M. and Vogel, C. (2020) Gaze, Dominance and Dialogue Role in the MULTISIMO Corpus. In Proceedings of the 11th IEEE International Conference on Cognitive Infocommunications (CogInfoCom), Mariehamn, Finland, 2020, pp. 83-88, doi: 10.1109/CogInfoCom50765.2020.9237833.

Navarretta, C., & Paggio, P. (2020). Dialogue Act Annotation in a Multimodal Corpus of First Encounter Dialogues. In Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020) (pp. 627-636). European Language Resources Association.

Paggio, P., Agirrezabal, M., Jongejan, B. and C. Navarretta (2020). Automatic Detection and Classification of Head Movements in Face-to-Face Conversations. In Proceedings of ONION 2020: Workshop on peOple in laNguage, vIsiOn and the miNd, pages 15–21 Language Resources and Evaluation Conference (LREC 2020), Marseille, 11–16 May 2020. https://www.aclweb.org/anthology/2020.onion-1.3.pdf 

Paggio, P., Gatt, A., & Klinger, R. (Eds.) (2020). Proceedings of LREC2020 Workshop "People in language, vision and the mind" (ONION2020). European Language Resources Association.

Prieto, P., & Espinal, M. T. (2020). Prosody, Gesture, and Negation. In V. Deprez & M. T. Espinal (eds.), The Oxford Handbook of Negation (pp. 667-693). Oxford: Oxford University Press.

Reverdy, J., Koutsombogera, M., Vogel, C. (2020). Linguistic Repetition in Three-Party Conversations. In: Esposito, A., Faundez-Zanuy, M., Morabito, F., Pasero, E. (eds) Neural Approaches to Dynamics of Signal Exchanges. Smart Innovation, Systems and Technologies, vol 151. Springer, Singapore. https://doi.org/10.1007/978-981-13-8950-4_32

Vilà-Giménez, I., and Prieto, P. (in press, 2020). "Encouraging kids to beat: Children's beat gesture production boosts their narrative performance." Developmental Science. First online:  https://doi.org/10.1111/desc.12967

Vogel, C., & Esposito, A. (2020). Interaction Analysis and Cognitive Infocommunications. Infocommunications Journal, XII(1), 2-9. doi: 10.36244/ICJ.2020.1.1

Vogel, C., Koutsombogera, M., Costello, R. (2020). Analyzing Likert Scale Inter-annotator Disagreement. In: Esposito, A., Faundez-Zanuy, M., Morabito, F., Pasero, E. (eds) Neural Approaches to Dynamics of Signal Exchanges. Smart Innovation, Systems and Technologies, vol 151. Springer, Singapore. https://doi.org/10.1007/978-981-13-8950-4_34

Zhang,Y., Baills, F., and Prieto, P. (in press). "Hand-clapping to the rhythm of newly learned words improves L2 pronunciation: Evidence from training Chinese adolescents with French words". Language Teaching Research. First online:  https://doi.org/10.1177/1362168818806531  

2019

Cravotta, A., Busà, M. G., and Prieto, P. (2019). "Effects of Encouraging the Use of Gestures on Speech". Journal of Speech, Language, and Hearing Research, 62, 3204-3219.

Debreslioska, S., & Gullberg, M. (2019). Discourse is bimodal: How information status in speech interacts with presence and viewpoint of gestures. Discourse Processes, 56(1), 41-60. doi:10.1080/0163853X.2017.1351909

Debreslioska, S., van de Weijer, J., & Gullberg, M. (2019). Addressees are sensitive to the presence of gestures when tracking a single referent in discourse. Frontiers in Psychology, 10(1775). doi:10.3389/fpsyg.2019.01775

Hübscher, I., and Prieto, P. (2019). "Gestural and prosodic development act as sister systems and jointly pave the way for children’s sociopragmatic development". Frontiers in Psychology, 10:1259.

Nirme, J., Haake, M., Gulz, A., & Gullberg, M. (2019). Motion capture-based animated characters for the study of speech-gesture integration. Behavior Research Methods. doi:10.3758/s13428-019-01319-w

Paggio, P., & Navarretta, C. (2019). Multimodal feedback i social interaktion [Multimodal feedback in social interaction]. NyS, 56, 77-101.

Sandler, W., Gullberg, M., & Padden, C. (Eds.). (2019). Visual language. Lausanne: Frontiers Media.

Vilà-Giménez, I., Igualada, A., and Prieto, P. (2019). "Observing storytellers who use rhythmic beat gestures improves children’s narrative discourse performance". Developmental Psychology, 55(2), 250-262. A video abstract of this article can be viewed at: https://www.youtube.com/watch?v=t-EKJIQt20g.

Vogel, C., & Esposito, A. (2019). Linguistic and Behaviour Interaction Analysis within Cognitive Infocommunications. In Proceedings of the 10th IEEE International Conference on Cognitive Infocommunications (CogInfoCom), Naples, Italy, 23-25 Oct. 2019, pp. 47-52. doi: 10.1109/CogInfoCom47531.2019.9089904

Yuan, C., González-Fuente, S., Baills, F., and Prieto, P. (2019). "Observing pitch gestures favors the learning of Spanish intonation by Mandarin speakers". Studies in Second Language Acquisition, 41(1), 5-32.


Researchers

UCPH researchers

Name Title Phone
Bart Jongejan Software Developer +4535329075
Costanza Navarretta Senior Researcher +4535329079
Patrizia Paggio Associate Professor +4535329072

Funded by

Independent Research Fund Denmark

The network is funded by the Independent Research Fund Denmark with grant 9055-00004B.

Project period: 1 September 2019 - 31 December 2023.

Contact

Other network members

Department of Linguistics and Phonetics at Kiel University:

Division of Speech, Music and Hearing at KTH Royal Institute of Technology:

MIDI group at KU Leuven:

Centre for IMS at Linnaeus University:

Centre for Languages and Literature and Lund University Humanities Lab at Lund University:

Computational Linguistics Group at Trinity College Dublin:

GrEP at Universitat Pompeu Fabra:

University of Malta, Institute of Linguistics and Language Technology: