ACM SIGCHI General Interest Announcements (Mailing List)


Bjoern Schuller <[log in to unmask]>
Wed, 7 Nov 2012 12:17:43 +0000
EmoSPACE 2013 - Call for Papers
2nd International Workshop on Emotion Representation, Analysis and Synthesis in Continuous Time and Space
In conjunction with IEEE FG 2013, Shanghai, China, 22 or 26 April 2013

Building upon the success of the first EmoSPACE workshop at IEEE FG'11, the second workshop in the EmoSPACE series aims to (i) focus on continuity in input, analysis and synthesis, both in time and in affective, mental and social dimensions and phenomena, and (ii) discuss the issues and challenges pertinent to sensing, recognising and responding to continuous human affective and social behaviour from diverse communicative cues and modalities.

The key aim of EmoSPACE'13 is to present cutting-edge research and new challenges in the automatic and continuous analysis and synthesis of human affective and social behaviour in time and/or space, in an interdisciplinary forum of affective and behavioural scientists. More specifically, the workshop aims (i) to bring forth existing efforts and major accomplishments in the modelling, analysis and synthesis of affective and social behaviour in continuous time and/or space, (ii) to encourage the design of novel applications in contexts as diverse as human-computer and human-robot interaction, clinical and biomedical studies, learning and driving environments, and entertainment technology, and (iii) to focus on current trends and future directions in the field.

Suggested workshop topics include, but are by no means limited to:
*       Cues for continuous affective, mental and social state recognition
o       facial expressions
o       head movements and gestures
o       body postures and gestures
o       audio (e.g., speech, non-linguistic vocalisations, etc.)
o       bio signals (e.g., heart, brain, thermal signals, etc.)
*       Automatic analysis and prediction
o       approaches for discretised and continuous prediction
o       identifying appropriate classification and prediction methods
o       introducing or identifying optimal strategies for fusion
o       techniques for modelling high inter-subject variation
o       approaches to determining duration of affective and social cues for automatic analysis
*       Data acquisition and annotation
o       elicitation of affective, mental and social states
o       individual variations (interpersonal and cognitive issues)
o       (multimodal) naturalistic data sets and annotations
o       (multimodal) annotation tools
o       modelling annotations from multiple raters and their reliability
*       Applications
o       interaction with robots, virtual agents, and games (including tutoring)
o       mobile affective computing
o       smart environments & digital spaces (e.g., in a car, or digital artworks)
o       implicit (multimedia) tagging
o       clinical and biomedical studies (e.g., autism, depression, pain, etc.)

Workshop Organisers
*       Hatice Gunes, Queen Mary University of London, UK, [log in to unmask]
*       Björn Schuller, Technische Universität München, Germany, [log in to unmask]
*       Maja Pantic, Imperial College London, UK, [log in to unmask]
*       Roddy Cowie, Queen's University Belfast, UK, [log in to unmask]

Program Committee
*       Anton Batliner, Technische Universität München, Germany
*       Nadia Bianchi-Berthouze, University College London, UK
*       Felix Burkhardt, Deutsche Telekom, Germany
*       Carlos Busso, University of Texas at Dallas, USA
*       Antonio Camurri, University of Genova, Italy
*       George Caridakis, National Technical University of Athens, Greece
*       Ginevra Castellano, University of Birmingham, UK
*       Sidney D'Mello, University of Memphis, USA
*       Dirk Heylen, University of Twente, The Netherlands
*       Eva Hudlicka, Psychometrix Associates, USA
*       Irene Kotsia, Queen Mary University London, UK
*       Gary McKeown, Queen's University Belfast, UK
*       Louis-Philippe Morency, University of Southern California, USA
*       Anton Nijholt, University of Twente, The Netherlands
*       Peter Robinson, University of Cambridge, UK
*       Albert Ali Salah, Bogazici University, Turkey
*       Stefan Steidl, FAU, Germany
*       Michel Valstar, University of Nottingham, UK
*       Dongrui Wu, GE Global Research, USA
*       Stefanos Zafeiriou, Imperial College London, UK

Important Dates
Paper Submission:       21 November 2012
Notification of Acceptance:     8 January 2013
Camera Ready Paper:     15 January 2013
Workshop:       22 or 26 April 2013 (t.b.a.)

In submitting a manuscript to this workshop, the authors acknowledge that no paper substantially similar in content has been submitted to another conference or workshop.
Manuscripts should be in the IEEE FG paper format.
Authors should submit papers as a PDF file.
Papers accepted for the workshop will be allocated 6 pages in the proceedings, with the option of having up to 2 extra pages.
EmoSPACE reviewing is double blind. Reviewing will be by members of the program committee. Each paper will receive at least two reviews. Acceptance will be based on relevance to the workshop, novelty, and technical quality.
Submission and reviewing will be handled via EasyChair.

Please submit your paper at


Dr. Björn Schuller

Technische Universität München
Institute for Human-Machine Communication
D-80333 München

[log in to unmask]
