Predicting an Individual’s Gestures from the Interlocutor’s Co-occurring Gestures and Related Speech

Publication: Contribution to book/anthology/report › Article in proceedings › Research › peer-reviewed

Standard

Predicting an Individual’s Gestures from the Interlocutor’s Co-occurring Gestures and Related Speech. / Navarretta, Costanza.

Proceedings of the IEEE 7th International Conference on Cognitive Infocommunications. IEEE Signal Processing Society, 2016. pp. 233-237.


Harvard

Navarretta, C 2016, Predicting an Individual’s Gestures from the Interlocutor’s Co-occurring Gestures and Related Speech. in Proceedings of the IEEE 7th International Conference on Cognitive Infocommunications. IEEE Signal Processing Society, pp. 233-237.

APA

Navarretta, C. (2016). Predicting an Individual’s Gestures from the Interlocutor’s Co-occurring Gestures and Related Speech. In Proceedings of the IEEE 7th International Conference on Cognitive Infocommunications (pp. 233-237). IEEE Signal Processing Society.

Vancouver

Navarretta C. Predicting an Individual’s Gestures from the Interlocutor’s Co-occurring Gestures and Related Speech. In Proceedings of the IEEE 7th International Conference on Cognitive Infocommunications. IEEE Signal Processing Society. 2016. p. 233-237

Author

Navarretta, Costanza. / Predicting an Individual’s Gestures from the Interlocutor’s Co-occurring Gestures and Related Speech. Proceedings of the IEEE 7th International Conference on Cognitive Infocommunications. IEEE Signal Processing Society, 2016. pp. 233-237

Bibtex

@inproceedings{abda76795d6844e18d82487a8c653e63,
title = "Predicting an Individual{\textquoteright}s Gestures from the Interlocutor{\textquoteright}s Co-occurring Gestures and Related Speech",
abstract = "Overlapping speech and gestures are common in face-to-face conversations and have been interpreted as a sign of synchronization between conversation participants. A number of gestures are even mirrored or mimicked. Therefore, we hypothesize that the gestures of a subject can contribute to the prediction of gestures of the same type of the other subject. In this work, we also want to determine whether the speech segments to which these gestures are related contribute to the prediction. The results of our pilot experiments show that a Naive Bayes classifier trained on the duration and shape features of head movements and facial expressions contributes to the identification of the presence and shape of head movements and facial expressions, respectively. Speech only contributes to prediction in the case of facial expressions. The obtained results show that the gestures of the interlocutors are one of the numerous factors to be accounted for when modeling gesture production in conversational interactions, and this is relevant to the development of socio-cognitive ICT.",
author = "Costanza Navarretta",
year = "2016",
language = "English",
isbn = "978-1-5090-2644-9",
pages = "233--237",
booktitle = "Proceedings of the IEEE 7th International Conference on Cognitive Infocommunications",
publisher = "IEEE Signal Processing Society",

}

RIS

TY - GEN

T1 - Predicting an Individual’s Gestures from the Interlocutor’s Co-occurring Gestures and Related Speech

AU - Navarretta, Costanza

PY - 2016

Y1 - 2016

N2 - Overlapping speech and gestures are common in face-to-face conversations and have been interpreted as a sign of synchronization between conversation participants. A number of gestures are even mirrored or mimicked. Therefore, we hypothesize that the gestures of a subject can contribute to the prediction of gestures of the same type of the other subject. In this work, we also want to determine whether the speech segments to which these gestures are related contribute to the prediction. The results of our pilot experiments show that a Naive Bayes classifier trained on the duration and shape features of head movements and facial expressions contributes to the identification of the presence and shape of head movements and facial expressions, respectively. Speech only contributes to prediction in the case of facial expressions. The obtained results show that the gestures of the interlocutors are one of the numerous factors to be accounted for when modeling gesture production in conversational interactions, and this is relevant to the development of socio-cognitive ICT.

AB - Overlapping speech and gestures are common in face-to-face conversations and have been interpreted as a sign of synchronization between conversation participants. A number of gestures are even mirrored or mimicked. Therefore, we hypothesize that the gestures of a subject can contribute to the prediction of gestures of the same type of the other subject. In this work, we also want to determine whether the speech segments to which these gestures are related contribute to the prediction. The results of our pilot experiments show that a Naive Bayes classifier trained on the duration and shape features of head movements and facial expressions contributes to the identification of the presence and shape of head movements and facial expressions, respectively. Speech only contributes to prediction in the case of facial expressions. The obtained results show that the gestures of the interlocutors are one of the numerous factors to be accounted for when modeling gesture production in conversational interactions, and this is relevant to the development of socio-cognitive ICT.

M3 - Article in proceedings

SN - 978-1-5090-2644-9

SP - 233

EP - 237

BT - Proceedings of the IEEE 7th International Conference on Cognitive Infocommunications

PB - IEEE Signal Processing Society

ER -

ID: 171660512