Distinguishing the communicative functions of gestures

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings · Research · peer-review

This paper presents the results of a machine learning experiment
conducted on annotated gesture data from two case studies (Danish
and Estonian). The data mainly concern facial displays, which are
annotated with attributes relating to shape and dynamics, as well as
communicative function. The results of the experiments show that the
granularity of the attributes used seems appropriate for the task of
distinguishing the desired communicative functions. This is a
promising result with a view to future automation of the annotation
task.
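The setup described in the abstract — learning to predict a gesture's communicative function from categorical shape and dynamics attributes — can be sketched as a simple supervised classification pipeline. The attribute names, values, and function labels below are purely illustrative assumptions, not the annotation scheme used in the paper, and the classifier choice is likewise a stand-in:

```python
# Hypothetical sketch: classifying communicative functions of facial
# displays from categorical shape/dynamics attributes. All attribute
# and label names are illustrative, not the paper's annotation scheme.
from sklearn.feature_extraction import DictVectorizer
from sklearn.tree import DecisionTreeClassifier

# Toy annotated examples: each gesture is a dict of categorical attributes
# paired with a communicative-function label.
data = [
    ({"eyebrows": "raised", "head": "nod"}, "feedback-give"),
    ({"eyebrows": "raised", "head": "tilt"}, "question"),
    ({"eyebrows": "neutral", "head": "nod"}, "feedback-give"),
    ({"eyebrows": "frown", "head": "shake"}, "feedback-reject"),
]
X_raw, y = zip(*data)

vec = DictVectorizer(sparse=False)  # one-hot encode categorical attributes
X = vec.fit_transform(X_raw)

clf = DecisionTreeClassifier(random_state=0).fit(X, y)
pred = clf.predict(vec.transform([{"eyebrows": "frown", "head": "shake"}]))
print(pred[0])
```

The granularity question raised in the abstract corresponds here to how finely the attribute values are distinguished: coarser value sets make feature vectors for different functions collide, while finer ones keep them separable.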
Original language: English
Title of host publication: Proceedings of the 5th International Workshop, MLMI 2008
Editors: Andrei Popescu-Belis, Rainer Stiefelhagen
Number of pages: 12
Publisher: Springer
Publication date: 2008
Pages: 38-49
ISBN (Print): 978-3-540-85852-2
Publication status: Published - 2008
Event: Machine Learning for Multimodal Interaction, 5th International Workshop, MLMI 2008 - Utrecht, Netherlands
Duration: 8 Sep 2008 - 10 Sep 2008
Conference number: 5

Series: Lecture Notes in Computer Science, LNCS 5237
ISSN: 0302-9743
