"ACM SIGCHI General Interest Announcements (Mailing List)" <[log in to unmask]>
Jean-Claude MARTIN <[log in to unmask]>
Wed, 6 Jan 2010 10:49:43 +0100
text/plain (143 lines)
                       *** 1st Call for Papers ***
                          LREC 2010 Workshop on
Multimodal Corpora: Advances in Capturing, Coding and Analyzing Multimodality
                       *** 18 May 2010, Malta ***


A "Multimodal Corpus" involves the recording, annotation and analysis of
several communication modalities such as speech, hand gesture, facial
expression, body posture, etc. As many research areas are moving from
focused but single modality research to fully-fledged multimodality
research, multimodal corpora are becoming a core research asset and an
opportunity for interdisciplinary exchange of ideas, concepts and data.
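
As a purely illustrative picture of what such a corpus looks like once
annotated, the following minimal Python sketch represents a short recording
fragment as parallel, time-aligned annotation tiers, one per modality. All
tier names, labels and timestamps are invented for this example; real
corpora use far richer coding schemes and dedicated annotation tools.

    from dataclasses import dataclass

    @dataclass
    class Annotation:
        start: float   # seconds from the start of the recording
        end: float
        label: str

    # One tier per modality; each tier is a list of time-stamped labels.
    # All values below are invented for illustration.
    fragment = {
        "speech":  [Annotation(0.0, 1.2, "take the second left")],
        "gesture": [Annotation(0.3, 1.0, "pointing, left hand")],
        "gaze":    [Annotation(0.0, 0.8, "at listener")],
        "posture": [Annotation(0.0, 1.2, "leaning forward")],
    }

    def annotations_at(tiers, t):
        """Return the label of each modality that overlaps time t (seconds)."""
        return {m: a.label for m, anns in tiers.items()
                for a in anns if a.start <= t < a.end}

    print(annotations_at(fragment, 0.5))
    # -> one label per tier, since all four annotations overlap t = 0.5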

This workshop follows similar events held at LREC 00, 02, 04, 06, 08.
There is increasing interest in multimodal communication and multimodal
corpora, as evidenced by European Networks of Excellence and integrated
projects such as HUMAINE, SIMILAR, CHIL, AMI, CALLAS and SSPNet.
Furthermore, the success of recent conferences and workshops dedicated to
multimodal communication (ICMI-MLMI, IVA, Gesture, PIT, the Nordic
Symposium on Multimodal Communication, Embodied Language Processing) and
the creation of the Journal of Multimodal User Interfaces also testify to
the growing interest in this area and to the general need for data on
multimodal behaviour.

The 2010 full-day workshop is planned to result in a significant follow-up
publication, similar to previous post-workshop publications like the 2008
special issue of the Journal of Language Resources and Evaluation and the
2009 state-of-the-art book published by Springer.


In 2010, we are aiming for a wide cross-section of the field, with
contributions on collection efforts, coding, validation and analysis
methods, as well as actual tools and applications of multimodal corpora.
At the same time, we want to emphasize that there have been significant
advances in capture technology that make highly accurate data available
to the broader research community. Examples are the tracking of face,
gaze, hands and body, and the recording of articulated full-body motion
using motion capture. These data are much more accurate and complete than
the simple video recordings traditionally used in the field, and will
therefore have a lasting impact on multimodality research. However, the
richness of the signals and the complexity of the recording process
urgently call for an exchange of state-of-the-art information on recording
and coding practices, new visualization and coding tools, and advances in
the automatic coding and analysis of corpora.
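
To make automatic coding from such capture data concrete, here is a
minimal, hypothetical Python sketch that segments candidate gesture
strokes from tracked 3D hand positions with a simple velocity threshold.
The frame rate, threshold and toy trajectory are illustrative assumptions
rather than parameters of any particular corpus or tool.

    FPS = 100            # assumed capture rate (frames per second)
    THRESHOLD = 0.5      # assumed stroke threshold in metres per second

    def hand_speeds(positions, fps=FPS):
        """Per-frame speed (m/s) from a sequence of (x, y, z) positions."""
        speeds = []
        for (x0, y0, z0), (x1, y1, z1) in zip(positions, positions[1:]):
            dist = ((x1 - x0) ** 2 + (y1 - y0) ** 2 + (z1 - z0) ** 2) ** 0.5
            speeds.append(dist * fps)
        return speeds

    def stroke_intervals(speeds, fps=FPS, threshold=THRESHOLD):
        """Contiguous runs of above-threshold speed as (start_s, end_s)."""
        intervals, start = [], None
        for i, s in enumerate(speeds):
            if s >= threshold and start is None:
                start = i
            elif s < threshold and start is not None:
                intervals.append((start / fps, i / fps))
                start = None
        if start is not None:
            intervals.append((start / fps, len(speeds) / fps))
        return intervals

    # Toy trajectory: the hand rests, moves for half a second, rests again.
    trajectory = ([(0.0, 1.0, 0.0)] * 50
                  + [(0.01 * i, 1.0, 0.0) for i in range(50)]
                  + [(0.49, 1.0, 0.0)] * 50)
    print(stroke_intervals(hand_speeds(trajectory)))   # -> [(0.5, 0.99)]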


This LREC 2010 workshop on multimodal corpora will feature a special
session on databases of motion capture, trackers, inertial sensors,
biometric devices and image processing. Other topics to be addressed
include, but are not limited to:

    * Multimodal corpus collection activities (e.g. direction-giving
dialogues, emotional behaviour, human-avatar interaction, human-robot
interaction, etc.) and descriptions of existing multimodal resources

    * Relations between modalities in natural (human) interaction and in
human-computer interaction

    * Multimodal interaction in specific scenarios, e.g. group interaction
in meetings

    * Coding schemes for the annotation of multimodal corpora

    * Evaluation and validation of multimodal annotations

    * Methods, tools, and best practices for the acquisition, creation,
management, access, distribution, and use of multimedia and multimodal
corpora

    * Interoperability between multimodal annotation tools (exchange
formats, conversion tools, standardization)

    * Collaborative coding

    * Metadata descriptions of multimodal corpora

    * Automatic annotation, based e.g. on motion capture or image
processing, and the integration with manual annotations

    * Corpus-based design of multimodal and multimedia systems, in
particular systems that involve human-like modalities either in input
(Virtual Reality, motion capture, etc.) or in output (virtual characters)

    * Automated multimodal fusion and/or generation (e.g., coordinated
speech, gaze, gesture, facial expressions); see the fusion sketch after
this list

    * Machine learning applied to multimodal data

    * Multimodal dialogue modelling
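
As an illustration of the fusion topic above, the following minimal Python
sketch combines hypothetical per-modality label scores by weighted
averaging, i.e. late (decision-level) fusion. Labels, scores and weights
are invented; a real system would estimate them from corpus data.

    def fuse(scores_by_modality, weights):
        """Weighted average of per-modality {label: score} dictionaries."""
        fused = {}
        total = sum(weights[m] for m in scores_by_modality)
        for modality, scores in scores_by_modality.items():
            w = weights[modality] / total
            for label, score in scores.items():
                fused[label] = fused.get(label, 0.0) + w * score
        return fused

    # Hypothetical per-modality scores for an emotion-labelling task.
    scores = {
        "speech":  {"neutral": 0.6, "amused": 0.4},
        "face":    {"neutral": 0.2, "amused": 0.8},
        "gesture": {"neutral": 0.5, "amused": 0.5},
    }
    weights = {"speech": 0.5, "face": 0.3, "gesture": 0.2}

    fused = fuse(scores, weights)
    print(max(fused, key=fused.get), fused)   # -> amused, {...}

Weighted averaging is only one of many late-fusion schemes; the usual
alternative, early (feature-level) fusion, concatenates per-modality
features before a single classifier is trained.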


Important dates:

* Deadline for paper submission (complete paper):    12 February 2010
* Notification of acceptance:                        10 March 2010
* Final version of accepted paper:                   26 March 2010
* Final program:                                      7 April 2010
* Final proceedings:                                 14 April 2010
* Workshop:                                          18 May 2010


The workshop will consist primarily of paper presentations and
discussion/working sessions. Submissions should be 4 pages long, must be
written in English, and should follow the submission guidelines available
under

Submit your paper here:

Demonstrations of multimodal corpora and related tools are encouraged as
well (a demonstration outline of 2 pages can be submitted).


When submitting a paper through the START page, authors are kindly asked
to provide relevant information about the resources that have been used
for the work described in their paper, or that are the outcome of their
research. For further information on this new initiative, please refer to


Organizers:

Michael Kipp, DFKI, Germany
Jean-Claude Martin, LIMSI-CNRS, France
Patrizia Paggio, University of Copenhagen, Denmark
Dirk Heylen, University of Twente, The Netherlands
