GEstures and Head Movements in language (GEHM)

The GEHM network will support cooperation among eight leading research groups working in the area of gesture and language, and thereby foster new theoretical insights into the way hand gestures and head movements interact with speech in face-to-face multimodal communication.

The network has a specific focus on three research strands:

  1. language-specific characteristics of gesture-speech interaction
  2. multimodal prominence
  3. multimodal behaviour modelling

  1. The first research strand, on language-specific characteristics of gesture-speech interaction, will work towards a theory that can account for how speakers’ ability to process and produce gesture and speech is affected and changed by their language profile. Speech-gesture profiles of monolingual and bilingual speakers’ production will be established by combining audio, video and sensor output from motion capture. These rich multimodal data will provide fine-grained information about cross-linguistic differences in native and non-native speech-gesture coordination.
  2. The second research strand, on multimodal prominence, investigates the theoretical question of how linguistic prominence is expressed through combinations of kinematic and prosodic features. It is not yet well understood how gestures and pitch accents combine to create different types of multimodal prominence, or how visual prominence cues specifically are used in spoken communication. This strand will create and analyse new datasets in order to arrive at a fine-grained and well-documented theory of multimodal prominence.
  3. The third research strand, on modelling multimodal behaviour, aims at conceptual and statistical modelling of multimodal contributions, with particular regard to head movements and the use of gaze. This strand will develop models of multimodal behaviour based on the datasets produced in the first two strands, but will also take advantage of existing corpora, including interaction data in which eye gaze has been tracked.

UCPH researchers

Name                        Title                 Phone
Bart Jongejan               Software Developer    +45 35 32 90 75
Costanza Navarretta         Senior Researcher     +45 35 32 90 79
Elisabeth Engberg-Pedersen  Professor             +45 30 29 86 64
Patrizia Paggio             Associate Professor   +45 35 32 90 72

Funded by

Independent Research Fund Denmark

The network is funded by the Independent Research Fund Denmark with grant 9055-00004B.

Project period: 1 September 2019 - 28 February 2023.


The MultiModal MultiDimensional (M3D) labeling system


The M3D labeling system is a joint effort between three gesture labs, one of which is a GEHM lab: the Prosodic Studies Group at Universitat Pompeu Fabra.


2nd International Workshop on Language Acquisition

23-06-2021

The Second International Workshop on Language Acquisition, University of Copenhagen, Denmark.


Other network members

Department of Linguistics and Phonetics at Kiel University

Division of Speech, Music and Hearing at KTH Royal Institute of Technology

MIDI group at KU Leuven

Centre for IMS at Linnaeus University

Centre for Languages and Literature and Lund University Humanities Lab at Lund University

Computational Linguistics Group at Trinity College Dublin

GrEP at Universitat Pompeu Fabra

Institute of Linguistics and Language Technology at the University of Malta