CHI-ANNOUNCEMENTS Archives

ACM SIGCHI General Interest Announcements (Mailing List)

CHI-ANNOUNCEMENTS@LISTSERV.ACM.ORG

Date: Mon, 16 Jul 2012 08:00:08 +0000
From: Bjoern Schuller <[log in to unmask]>
Reply-To: Bjoern Schuller <[log in to unmask]>
Sender: "ACM SIGCHI General Interest Announcements (Mailing List)" <[log in to unmask]>

Dear List,

We would like to bring to your attention the extended submission deadline of the

2nd International Audio/Visual Emotion Challenge and Workshop (AVEC 2012),

organised in conjunction with ACM ICMI 2012 this year.

See below for the CFP, including the updated dates. Apologies for potential cross-posting.


_____________________________________________________________

Call for Participation / Papers

2nd International Audio/Visual Emotion Challenge and Workshop (AVEC 2012)

in conjunction with ACM ICMI 2012, October 22, Santa Monica, California, USA

http://sspnet.eu/avec2012/
http://www.acm.org/icmi/2012/ 

Register and download data and features:
http://avec-db.sspnet.eu/accounts/register/ 

_____________________________________________________________

Scope

The Audio/Visual Emotion Challenge and Workshop (AVEC 2012) will be the second competition event aimed at comparing multimedia processing and machine learning methods for automatic audio, visual and audiovisual emotion analysis, with all participants competing under strictly the same conditions. The goal of the Challenge is to provide a common benchmark test set for multimodal information processing and to bring together the audio and video emotion recognition communities, in order to compare the relative merits of the two approaches under well-defined and strictly comparable conditions, and to establish to what extent their fusion is possible and beneficial. A second motivation is the need to advance emotion recognition systems so that they can handle naturalistic behaviour in large volumes of un-segmented, non-prototypical and non-preselected data, as this is exactly the type of data that both multimedia retrieval and human-machine/human-robot communication interfaces have to face in the real world.

We are calling for teams to participate in emotion recognition from acoustic audio analysis, linguistic audio analysis, video analysis, or any combination of these. The SEMAINE database of naturalistic video and audio of human-agent interactions, along with labels for four affect dimensions, will be used as the benchmarking database. Emotion has to be recognised in terms of continuous-time, continuous-valued dimensional affect in the dimensions arousal, expectation, power and valence. Two Sub-Challenges are addressed: the Word-Level Sub-Challenge requires participants to predict the level of affect at the word level, and only while the user is speaking; the Fully Continuous Sub-Challenge involves fully continuous affect recognition, where the level of affect has to be predicted for every moment of the recording.
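
To make the task concrete, the following is a minimal, purely illustrative baseline sketch for the Fully Continuous Sub-Challenge: one regressor per affect dimension, trained on frame-level audio/visual features and scored with Pearson's correlation. The feature and label arrays, the LinearSVR regressor and its hyper-parameter are placeholders chosen for the example; they are not the official challenge data format, baseline system or evaluation protocol.

# Illustrative sketch only (not the official AVEC 2012 baseline):
# one regressor per continuous affect dimension, scored with Pearson's r.
import numpy as np
from scipy.stats import pearsonr
from sklearn.svm import LinearSVR

DIMENSIONS = ["arousal", "expectation", "power", "valence"]

def train_and_evaluate(train_x, train_y, dev_x, dev_y):
    """Train one regressor per affect dimension and report Pearson's r on dev data.
    train_x/dev_x: (n_frames, n_features) arrays of audio/visual descriptors.
    train_y/dev_y: dicts mapping dimension name -> (n_frames,) label arrays.
    All shapes and data here are hypothetical stand-ins, not the SEMAINE format."""
    scores = {}
    for dim in DIMENSIONS:
        model = LinearSVR(C=0.1)          # placeholder model and hyper-parameter
        model.fit(train_x, train_y[dim])
        pred = model.predict(dev_x)
        scores[dim], _ = pearsonr(pred, dev_y[dim])
    return scores

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Random data with plausible shapes, only so the sketch runs end to end.
    X_tr, X_dev = rng.normal(size=(5000, 100)), rng.normal(size=(2000, 100))
    y_tr = {d: rng.normal(size=5000) for d in DIMENSIONS}
    y_dev = {d: rng.normal(size=2000) for d in DIMENSIONS}
    print(train_and_evaluate(X_tr, y_tr, X_dev, y_dev))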

Besides participation in the Challenge, we are calling for papers addressing the overall topics of this workshop, in particular work that addresses the differences between audio and video processing of emotive data and the issues concerning combined audio-visual emotion recognition.

Topics include, but are not limited to:

Audio/Visual Emotion Recognition:
. Audio-based Emotion Recognition
. Linguistics-based Emotion Recognition
. Video-based Emotion Recognition
. Social Signals in Emotion Recognition
. Multi-task learning of Multiple Dimensions 
. Novel Fusion Techniques, e.g. Fusion by Prediction 
. Cross-corpus Feature Relevance 
. Agglomeration of Learning Data 
. Semi- and Unsupervised Learning 
. Synthesized Training Material 
. Context in Audio/Visual Emotion Recognition 
. Multiple Rater Ambiguity

Application:
. Multimedia Coding and Retrieval
. Usability of Audio/Visual Emotion Recognition 
. Real-time Issues


Important Dates
___________________________________________

Paper submission
July 31, 2012

Notification of acceptance
August 14, 2012

Camera ready paper and final challenge result submission 
August 18, 2012

Workshop
October 22, 2012

Organisers
___________________________________________

Björn Schuller (Tech. Univ. Munich, Germany) 
Michel Valstar (University of Nottingham, UK) 
Roddy Cowie (Queen's University Belfast, UK) 
Maja Pantic (Imperial College London, UK)


Program Committee
___________________________________________

Elisabeth André, Universität Augsburg, Germany
Anton Batliner, Universität Erlangen-Nuremberg, Germany
Felix Burkhardt, Deutsche Telekom, Germany
Rama Chellappa, University of Maryland, USA
Fang Chen, NICTA, Australia
Mohamed Chetouani, Institut des Systèmes Intelligents et de Robotique (ISIR), France
Laurence Devillers, Laboratoire d'Informatique pour la Mécanique et les Sciences de l'Ingénieur (LIMSI), France
Julien Epps, University of New South Wales, Australia
Anna Esposito, International Institute for Advanced Scientific Studies, Italy
Raul Fernandez, IBM, USA
Roland Göcke, Australian National University, Australia
Hatice Gunes, Queen Mary University of London, UK
Julia Hirschberg, Columbia University, USA
Aleix Martinez, Ohio State University, USA
Marc Méhu, University of Geneva, Switzerland
Marcello Mortillaro, University of Geneva, Switzerland
Matti Pietikainen, University of Oulu, Finland
Ioannis Pitas, University of Thessaloniki, Greece
Peter Robinson, University of Cambridge, UK
Stefan Steidl, Universität Erlangen-Nuremberg, Germany
Jianhua Tao, Chinese Academy of Sciences, China
Fernando de la Torre, Carnegie Mellon University, USA
Mohan Trivedi, University of California San Diego, USA
Matthew Turk, University of California Santa Barbara, USA
Alessandro Vinciarelli, University of Glasgow, UK
Stefanos Zafeiriou, Imperial College London, UK


Please visit our website http://sspnet.eu/avec2012 regularly for more information, and please excuse any cross-postings.


Thank you very much and all the best,



Björn Schuller, Michel Valstar, Roddy Cowie, Maja Pantic

    ---------------------------------------------------------------
    For news of CHI books, courses & software, join CHI-RESOURCES
     mailto: [log in to unmask]

    To unsubscribe from CHI-ANNOUNCEMENTS send an email to
     mailto:[log in to unmask]

    For further details of CHI lists see http://listserv.acm.org
    ---------------------------------------------------------------
