CHI-ANNOUNCEMENTS Archives

ACM SIGCHI General Interest Announcements (Mailing List)

CHI-ANNOUNCEMENTS@LISTSERV.ACM.ORG

Subject:
From: Roman Bednarik <[log in to unmask]>
Reply-To: Roman Bednarik <[log in to unmask]>
Date: Fri, 4 May 2012 12:20:13 +0300
Content-Type: text/plain
                      We are pleased to announce the

                             D-META Grand Challenge
                           http://d-meta.inrialpes.fr

      at the International Conference on Multimodal Interaction (ICMI)
                            Santa Monica, October 2012.
======================================================================

The D-META Grand Challenge (Datasets for Multimodal Evaluation of Tasks
and Annotations) aims to establish a basis for the comparison, analysis,
and further improvement of multimodal data annotations and multimodal
interactive systems. Built on two coupled pillars, method benchmarking
and annotation evaluation, the D-META challenge is envisioned as a
starting point for transparent, publicly available evaluation of
applications and annotations on multimodal data sets. We expect papers
covering areas such as: (i) applications of an algorithm to one or more
data sets to solve specific tasks, (ii) benchmarks of several algorithms
using the same data set(s), (iii) extensions of the annotation scheme
with new relevant features, (iv) applications of the data to an
automatic system, (v) discussions of ecologically valid data sets, and
(vi) position papers on how to organize the next challenge. Papers may
target one of the following tasks (http://d-meta.inrialpes.fr/tasks/):

     AVRGR: Recognize gestures addressed to the robot using vision and
audio.
     AVSR: Detect, localize, and track multiple speakers using
audio-visual information.
     CEP: Estimate the level of engagement in a video-mediated communication.
     AVCGR: Recognize conversational gestures in first-encounter dialogues.
     AVFGR: Recognize feedback gestures in first-encounter dialogues.

Please see the web site for more information
(http://d-meta.inrialpes.fr) and keep in mind the important dates:

31-Jul-2012 Paper deadline
24-Aug-2012 Author notification
14-Sep-2012 Camera-ready
Oct-2012 Work presented at D-META’12

We hope you will take the time to participate and/or help disseminate
this information.

The D-META Grand Challenge Team.

-- 
------------------------------------------------------------------------
Roman Bednarik      http://cs.uef.fi/~rbednari      +358 13 251 7981
School of Computing, University of Eastern Finland
------------------------------------------------------------------------

