Multimodal corpus analysis in the Nordic countries (NOMCO)
The project period was 2009-2010.
The collaborative Nordic project NOMCO dealt with the analysis of multimodal spoken language corpora in the Nordic countries. Multimodal spoken language corpora are video resources in which the various modalities involved in human communication, or in human-computer interaction, are annotated at several levels. Such corpora make it possible to study how gestures (head movements, facial displays, hand gestures and body postures) interact with speech in face-to-face communication. The project was funded by the Nordic research councils for the Humanities and Social Sciences (NOS-HS) and ran from 2009 to 2010.
Main aims
The project's main aims were to (i) further develop research building on earlier results obtained in this field by the research group involved, (ii) create multimodal corpora for Danish, Swedish, Finnish and Estonian with a number of standardised coding features that make comparative studies possible, (iii) perform a number of studies testing hypotheses on multimodal communicative interaction, (iv) develop, extend and adapt models of multimodal communication management that could provide the basis for interactive systems, and (v) apply machine learning techniques to support automatic recognition of gestures with different communicative functions.
Theoretical foundation
The theoretical starting point was the MUMIN model (Allwood et al. 2006, 2007), which was designed to study multimodal communication, especially feedback, turn management and sequencing.
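To make the annotation idea concrete, the following minimal Python sketch shows how a MUMIN-style, time-aligned multimodal annotation record could be represented. It is an illustrative assumption only: the attribute names and value labels (e.g. "feedback-give", "turn-hold") are hypothetical placeholders, not the authoritative MUMIN feature inventory or the project's actual tooling.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class GestureAnnotation:
    """One annotated gesture, time-aligned to the video."""
    start: float                           # start time in seconds
    end: float                             # end time in seconds
    modality: str                          # e.g. "head", "face", "hand", "body"
    form: str                              # shape label, e.g. "nod", "smile"
    feedback: Optional[str] = None         # communicative function, e.g. "feedback-give"
    turn_management: Optional[str] = None  # e.g. "turn-hold", "turn-yield"

@dataclass
class Utterance:
    """A stretch of speech together with the gestures that overlap it."""
    start: float
    end: float
    speaker: str
    transcription: str
    gestures: List[GestureAnnotation] = field(default_factory=list)

# A speaker says "yes" while nodding; the nod is annotated as giving feedback,
# and its overlap with the speech can be read off the time stamps.
utt = Utterance(
    start=12.4, end=13.1, speaker="A", transcription="yes",
    gestures=[GestureAnnotation(12.5, 13.0, modality="head", form="nod",
                                feedback="feedback-give")],
)

Representing each gesture with both form features and communicative-function features mirrors the kind of comparative, cross-language studies of feedback and turn management that the project aimed at.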
Project participants
University of Gothenburg: Elisabeth Ahlsén and Jens Allwood (project manager)
University of Helsinki: Kristiina Jokinen
University of Copenhagen (Centre for Language Technology): Costanza Navarretta and Patrizia Paggio
CST contact
Patrizia Paggio (paggio@hum.ku.dk)
Costanza Navarretta (costanza@hum.ku.dk)