ACM SIGCHI General Interest Announcements (Mailing List)


"ACM SIGCHI General Interest Announcements (Mailing List)" <[log in to unmask]>
Fri, 20 Aug 2010 12:09:01 +0200
"C. Peter" <[log in to unmask]>
text/plain; charset=ISO-8859-15; format=flowed
"C. Peter" <[log in to unmask]>
text/plain (138 lines)
*Call for Participation:
W3C Workshop on Emotion Markup Language*

*5-6 October 2010*

*Hosted by Telecom ParisTech, Paris, France*

This W3C Workshop takes place immediately before the workshop "Accounting
for Social Variables in Human Computer Interaction", which is linked to the
European project SSPNet (Social Signal Processing Network).

The W3C Emotion Markup Language (EmotionML) is a representation of
emotions and emotion-related states for use in technology. It aims to
strike a balance between practical applicability and scientific
well-foundedness. The present draft of EmotionML can be found at:

The workshop aims to gather feedback from the community on the current
EmotionML specification, with an emphasis on the following issues:

    * Is the current list of recommended vocabularies scientifically
      sound and defendable? Should descriptions be added, removed, or
      presented differently?
    * Does the specification have sufficient expressive power? Can it
      represent what people need to represent?
    * Is EmotionML easy enough to use, or should the syntax be changed
      to avoid confusion?
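For readers who have not yet looked at the specification, a minimal
annotation in the spirit of the EmotionML drafts might look as follows.
This is an illustrative sketch only: the vocabulary URI and attribute
names shown here may differ from the current draft, which remains the
authoritative reference.

```xml
<emotionml version="1.0"
           xmlns="http://www.w3.org/2009/10/emotionml"
           category-set="http://www.w3.org/TR/emotion-voc/xml#big6">
  <!-- One emotion annotation: a category label with an intensity-like value -->
  <emotion>
    <category name="happiness" value="0.7"/>
  </emotion>
</emotionml>
```

Questions such as those above concern exactly this kind of surface
syntax: which vocabularies the `category-set` may point to, and whether
the element and attribute structure is expressive and easy to use.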

Three broad types of use case that the language should support have been
identified:

    * Manual annotation of material involving emotionality, such as
      videos, speech recordings, faces, texts, etc.;
    * Automatic recognition of emotions from sensor data, speech
      recordings, facial expressions, etc., as well as from multi-modal
      combinations of such inputs;
    * Generation of emotion-related system responses, which may involve
      reasoning about the emotional implications of events, emotional
      expression in synthetic speech, facial expressions and gestures of
      embodied agents or robots, the choice of music and colors of
      lighting in a room, etc.
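As a concrete illustration of the first use case, a manual annotation
could also be produced programmatically. The following Python sketch
builds a minimal EmotionML-style document using only the standard
library; the namespace URI and the attribute names are taken from the
public drafts and may not match the final specification exactly.

```python
import xml.etree.ElementTree as ET

# Namespace URI from the EmotionML working drafts (illustrative).
NS = "http://www.w3.org/2009/10/emotionml"
ET.register_namespace("", NS)

def annotate(category: str, value: str) -> str:
    """Build a minimal EmotionML-style annotation as an XML string."""
    root = ET.Element(f"{{{NS}}}emotionml")
    emotion = ET.SubElement(root, f"{{{NS}}}emotion")
    # A single category label with an intensity-like value attribute.
    ET.SubElement(emotion, f"{{{NS}}}category",
                  name=category, value=value)
    return ET.tostring(root, encoding="unicode")

xml_text = annotate("amusement", "0.6")
print(xml_text)
```

Such a helper could, for example, be used by an annotation tool to emit
one `<emotion>` element per labelled segment of a video or recording.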

To address the above-mentioned questions, it might be helpful to develop
a concrete, prototypical use case scenario for each of the three
application types, against which the issues could be validated.
Participants are encouraged to outline such use cases in their position
papers as a basis for a common discussion during the workshop.

Position papers and the discussions at the workshop are expected to lead
to an understanding of whether revisions to the EmotionML specification
are needed before the formal standardisation process continues.

All workshop attendees must submit *a position paper of 1 to 5 pages*.
Position papers will be the basis for the discussions at the
workshop. They will be published on the workshop website; the authors of
selected position papers will be invited to present their position paper
at the workshop to foster discussion. Participation in the workshop is
conditional upon acceptance of the position paper by the program committee.

*Important dates:*

    * 30 August 2010: Deadline for position papers. Submit position
      papers to <[log in to unmask]>
    * 7 September 2010: Acceptance notification and registration
      instructions
    * 15 September 2010: Program and accepted position papers posted on
      the workshop website
    * 20 September 2010: Deadline for registration
    * 5-6 October 2010: Workshop

*Workshop Organizing Committee:*

    * Marc Schröder, Editor of EmotionML in the W3C Multimodal
      Interaction Working Group (DFKI), <[log in to unmask]>
    * Catherine Pelachaud, Member of the W3C Multimodal Interaction
      Working Group (CNRS, Telecom ParisTech), <[log in to unmask]>
    * Deborah Dahl, Chair of the W3C Multimodal Interaction Working
      Group (W3C Invited Expert), <[log in to unmask]>
    * Kazuyuki Ashimura, Multimodal Interaction/Voice Browser Activity
      Lead (W3C), <[log in to unmask]>

*Venue and Schedule:*

The workshop will be held at Institut Telecom ParisTech, 37-39 rue
Dareau, 75014 Paris, France.

The workshop program will run from 9:00 am to 6:00 pm on both days,
5 and 6 October 2010.

Christian Peter
Institute for Computer Graphics & Knowledge Visualization
Technical University Graz
Inffeldgasse 16c, A-8010 Graz
Tel +43(316)873-5407, Fax +43(316)873-105401
Email: [log in to unmask]
Please treat the information contained in this e-mail as confidential.

                To unsubscribe, send an empty email to
     mailto:[log in to unmask]