CHI-ANNOUNCEMENTS Archives

ACM SIGCHI General Interest Announcements (Mailing List)

CHI-ANNOUNCEMENTS@LISTSERV.ACM.ORG

Date: Wed, 16 Jan 2013 15:59:57 +0000
From: Bjoern Schuller <[log in to unmask]>
Reply-To: Bjoern Schuller <[log in to unmask]>
Sender: "ACM SIGCHI General Interest Announcements (Mailing List)" <[log in to unmask]>
Subject:

Dear List,

For those of you who are interested, we would like to announce the opening of the fourth and final Sub-Challenge of the
INTERSPEECH 2013 Computational Paralinguistics Challenge:



Call for Participation
INTERSPEECH 2013 ComParE:
COMPUTATIONAL PARALINGUISTICS CHALLENGE

Social Signals, Conflict, Emotion, Autism

http://emotion-research.net/sigs/speech-sig/is13-compare




Fourth Sub-Challenge now open - obtain the data:
http://emotion-research.net/sigs/speech-sig/IS13-Challenge-Agreements-SC2.pdf




The Challenge

After four consecutive Challenges at INTERSPEECH, there still exists a multiplicity of not yet covered, but highly relevant paralinguistic phenomena. In the last instalments, we focused on single speakers. With a new task in the Conflict Sub-Challenge, we now broaden the scope to the analysis of discussions among multiple speakers. A further novelty is introduced by the Social Signals Sub-Challenge: for the first time, non-linguistic events - laughter and fillers - have to be classified and localised. In the Emotion Sub-Challenge we are literally "going back to the roots"; however, we intentionally use acted material for the first time, to fuel the ongoing discussion on the differences between naturalistic and acted material and to highlight those differences. Finally, the Autism Sub-Challenge this year picks up on Autism Spectrum Condition in children's speech. Apart from intelligent and socially competent future agents and robots, the main applications lie in the medical domain and in surveillance.

The Challenge corpora feature rich annotation such as speaker meta-data, orthographic transcripts, phonemic transcripts, and segmentation. All four corpora come with distinct definitions of the test, development, and training partitions, incorporating speaker independence as needed in most real-life settings. Benchmark results of the most popular approaches will be provided, as in previous years. In these respects, the INTERSPEECH 2013 COMPUTATIONAL PARALINGUISTICS CHALLENGE (ComParE) shall help bridge the gap between excellent research on paralinguistic information in spoken language and the so far low compatibility of results.

In summary, four Sub-Challenges are addressed (an illustrative sketch follows the list):

* In the Social Signals Sub-Challenge, non-linguistic events of a speaker - laughter and fillers - have to be classified and localised based on acoustics.
* In the Conflict Sub-Challenge, group discussions have to be evaluated automatically, with the aim of retrieving conflicts.
* In the Emotion Sub-Challenge, the emotion in a speaker's voice has to be determined by a suitable learning algorithm and acoustic features.
* In the Autism Sub-Challenge, the type of pathology of a speaker has to be determined by a suitable classification algorithm and acoustic features.
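
To make the common task format concrete, the following is a minimal, purely illustrative sketch of a per-instance classification pipeline of the kind these Sub-Challenges invite: static acoustic feature vectors fed to a linear classifier. The file names, the CSV layout, and the use of scikit-learn are assumptions made here for illustration only; they are not part of the official Challenge package.

    # Illustrative sketch only - not the official baseline system.
    # Hypothetical layout: one row per audio instance, last column = class label.
    import numpy as np
    from sklearn.svm import LinearSVC

    train = np.loadtxt("train_features.csv", delimiter=",")
    devel = np.loadtxt("devel_features.csv", delimiter=",")
    X_train, y_train = train[:, :-1], train[:, -1]
    X_devel, y_devel = devel[:, :-1], devel[:, -1]

    # Linear SVM with a small complexity constant, a common choice for
    # high-dimensional acoustic feature vectors.
    clf = LinearSVC(C=1e-3)
    clf.fit(X_train, y_train)
    y_pred = clf.predict(X_devel)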

The measures of competition will be the Unweighted Average Area Under the receiver operating Curve (UAAUC) and the Unweighted Average Recall (UAR). All Sub-Challenges allow contributors to find their own features and use their own machine learning algorithms; however, a standard feature set will be provided per corpus that may be used. Participants must stick to the definition of the training, development, and test sets. They may report results obtained on the development set, but have only five trials to upload their results on the test sets, whose labels are unknown to them. Each participation has to be accompanied by a paper presenting the results, which undergoes peer review and has to be accepted for the conference in order to take part in the Challenge. The organisers reserve the right to re-evaluate the findings, but will not participate in the Challenge themselves. Participants are encouraged to compete in all Sub-Challenges.
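
For clarity on the second measure: Unweighted Average Recall is the mean of the per-class recalls, so that rare classes count as much as frequent ones, which matters for imbalanced paralinguistic data. A minimal sketch follows (illustrative only, not the Challenge's official scoring tooling; the scikit-learn call is merely a cross-check):

    # Unweighted Average Recall (UAR): mean of per-class recalls, so each
    # class counts equally regardless of how many test instances it has.
    import numpy as np
    from sklearn.metrics import confusion_matrix, recall_score

    def unweighted_average_recall(y_true, y_pred):
        cm = confusion_matrix(y_true, y_pred)
        per_class_recall = cm.diagonal() / cm.sum(axis=1)
        return per_class_recall.mean()

    y_true = [0, 0, 0, 0, 1, 1]   # imbalanced toy labels
    y_pred = [0, 0, 0, 0, 1, 0]
    print(unweighted_average_recall(y_true, y_pred))      # (1.0 + 0.5) / 2 = 0.75
    print(recall_score(y_true, y_pred, average="macro"))  # same value via scikit-learn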

Overall, contributions using the provided or equivalent data are sought, including (but not limited to):

* Participation in a Sub-Challenge
* Contributions focussing on Computational Paralinguistics centred around the Challenge topics

The results of the Challenge will be presented at Interspeech 2013 in Lyon, France. Prizes will be awarded to the Sub-Challenge winners. If you are interested and planning to participate in INTERSPEECH 2013 ComParE, or if you want to be kept informed about the Challenge, please send the organisers an e-mail to indicate your interest and visit the homepage: http://emotion-research.net/sigs/speech-sig/is13-compare


Organisers:

Björn Schuller (TUM, Germany)
Stefan Steidl (FAU Erlangen-Nuremberg, Germany)
Anton Batliner (TUM, Germany)
Alessandro Vinciarelli (University of Glasgow, UK)
Klaus Scherer (Swiss Center for Affective Sciences, Switzerland)
Fabien Ringeval (University of Fribourg, Switzerland)
Mohamed Chetouani (Université Pierre et Marie Curie, France)


Dates:

Paper Submission       18 March 2013
Final Result Upload    24 May 2013
Camera-ready Paper     29 May 2013


Sponsors:

HUMAINE Association    (http://emotion-research.net/)
SSPNet                 (http://sspnet.eu/)
ASC-Inclusion          (http://www.asc-inclusion.eu/)





___________________________________________

PD Dr. habil. DI Björn W. Schuller

Head Machine Intelligence & Signal Processing Group
Institute for Human-Machine Communication
Technische Universität München
D-80333 München
Germany

+49-(0)89-289-28548

[log in to unmask]
http://www.mmk.ei.tum.de/~sch
___________________________________________


