ACM SIGCHI General Interest Announcements (Mailing List)


From: Bjoern Schuller <[log in to unmask]>
Reply-To: Bjoern Schuller <[log in to unmask]>
Date: Thu, 15 Dec 2011 16:19:53 +0000
Dear List,

For those of you interested:

ES³ 2012
4th International Workshop on Emotion, Sentiment & Social Signals

Satellite of LREC 2012, ELRA
Full-day workshop on Saturday, 26 May 2012, Istanbul, Turkey


The fourth instalment of the workshop series on Corpora for Research on Emotion, held at LREC, aims at further cross-fertilisation between the highly related communities of emotion and affect processing based on acoustics of the speech signal, and linguistic analysis of spoken and written text, i.e., the field of sentiment analysis, including figurative language such as irony, sarcasm, satire, metaphor, parody, etc. At the same time, the workshop opens up to the emerging field of behavioural and social signal processing, covering signals such as laughs, smiles, sighs, hesitations, consents, etc.

Besides data from human-system interaction, dyadic and human-to-human data, its labelling and suitable models, as well as benchmark analyses and evaluation results on relevant corpora, are invited. In this way, we aim to bridge these larger and highly connected fields: emotion and sentiment are part of social communication, and social signals are highly relevant in helping to better understand affective behaviour and its context. For example, an understanding of a subject's personality is needed to make better sense of observed emotional patterns. At the same time, non-linguistic behaviour such as laughter, together with linguistic analysis, can give further insight into the state or personality traits of the subject.

All these fields further share a common trait: genuine emotion, sentiment and social signals are hard to collect, ambiguous to annotate, and tricky to distribute for privacy reasons. In addition, the few available corpora suffer from a number of issues owing to the peculiarities of these young and emerging fields: many different forms of modelling coexist, and the ground truth is never solid, because the usually very few annotators often perceive the data quite differently. Due to data sparseness, one frequently sees cross-validation without strict partitioning into training, development and test sets, and without strict separation of speakers and subjects across partitions.

Topics include, but are not limited to: 

+ Novel corpora of affective speech in audio & multimodal data 
+ Novel corpora for sentiment & opinion mining analysis
+ Novel corpora of audio & multimodal behavioural & social signals 
+ Novel corpora with combined annotation of the above
+ Analysis in speech, language & multimodal cues
+ Rich emotion and personality: dimensional, complex, categorical, etc.
+ Figurative languages: irony, sarcasm, satire, metaphor, parody, etc.
+ Social signals: laughs, smiles, sighs, hesitations, consents, etc.
+ Discussion of models for emotion, sentiment & social signals
+ Measures for quantitative corpus quality assessment 
+ Standardisation of corpora & labels for cross-corpus testing
+ Real-life applications of language & multimodal resources 
+ Long-term recordings of interactional & dyadic communication
+ Rich & novel annotations such as inclusion of situational context 
+ Communications on testing protocols
+ Evaluations on novel or multiple corpora
+ New methods for community or distributed annotation
+ Unsupervised learning techniques to exploit additional data
+ Synthesis of data for learning in sparse data tasks
+ Resources for underrepresented languages & cultures

Important Dates 

Abstract submission deadline (1500-2000 words)
20 February 2012

Notification of acceptance
12 March 2012 

Camera ready paper
20 March 2012

Workshop
26 May 2012


Organisers

Laurence Devillers
U. Paris-Sorbonne 4, France

Björn Schuller
TUM, Germany

Anton Batliner
FAU, Germany

Paolo Rosso
U. Politèc. Valencia, Spain

Ellen Douglas-Cowie 
Queen's Univ. Belfast, UK

Roddy Cowie
Queen's Univ. Belfast, UK

Catherine Pelachaud
CNRS - LTCI, France

Program Committee

Vered Aharonson, AFEKA, Israel
Alexandra Balahur, EC JRCentre, Italy
Felix Burkhardt, Deut. Telekom, Germany
Carlos Busso, UT Dallas, USA
Rafael Calvo, University Sydney, Australia
Erik Cambria, Nat. U. Singapore, Singapore
Antonio Camurri, Univ. Genova, Italy
Mohamed Chetouani, Univ. Paris 6, France
Thierry Dutoit, Univ. Mons, Belgium
Julien Epps, U. New South Wales, Australia
Anna Esposito, IIASS, Italy
Hatice Gunes, Queen Mary University, UK
Catherine Havasi, MIT Media Lab., USA
Bing Liu, Univ. Illinois at Chicago, USA
Florian Metze, CMU, USA
Shrikanth Narayanan, USC, USA
Maja Pantic, Imperial College London, UK
Antonio Reyes, Univ. Politèc. Valencia, Spain
Fabien Ringeval, Univ. Fribourg, Switzerland
Peter Robinson, Univ. Cambridge, UK
Florian Schiel, LMU, Germany
Jianhua Tao, Chinese Acad. Sciences, China
José A. Troyano, Univ. de Sevilla, Spain
Tony Veale, UCD, Ireland
Alessandro Vinciarelli, Univ. Glasgow, UK
Haixun Wang, Microsoft Research Asia, China

Submission Policy

Submitted abstracts of papers for oral and poster presentation must consist of 1500-2000 words.

Final submissions should be four pages long, must be in English, and must follow the LREC 2012 submission guidelines.

When submitting a paper from the START page, authors will be asked to provide essential information about the resources (in a broad sense, i.e. also technologies, standards, evaluation kits, etc.) that have been used for the work described in the paper or that are a new result of the research. For further information on this new initiative, please refer to

Contact: [log in to unmask]

Apologies for cross-posting.


Dr. Björn Schuller

Technische Universität München
Institute for Human-Machine Communication
D-80333 München

[log in to unmask]
