The second Sub-Challenge of this year's Interspeech Computational Paralinguistics Challenge has just opened:
Call for Participation
INTERSPEECH 2017 ComParE:
Computational Paralinguistics challengE
Addressee, Cold & Snoring
Björn Schuller (University of Passau, Germany & Imperial College London, UK)
Stefan Steidl (FAU Erlangen-Nuremberg, Germany)
Anton Batliner (University of Passau, Germany)
Elika Bergelson (Duke University, USA)
Jarek Krajewski (University of Wuppertal, Germany)
Christoph Janott (Technische Universität München, Germany)
Paper Submission: 14 March 2017
Final Result Upload: 1 June 2017
Camera-ready Paper: 5 June 2017
The INTERSPEECH 2017 Computational Paralinguistics ChallengE (ComParE) is an open Challenge dealing with states and traits of speakers as manifested in the acoustic properties of their speech signal. There have so far been eight consecutive Challenges at INTERSPEECH since 2009 (cf. the Challenge series' repository at http://www.compare.openaudio.eu), but a multiplicity of highly relevant paralinguistic phenomena remains uncovered. Thus, we introduce three new tasks in this year's edition. The following Sub-Challenges are addressed:
* In the Addressee Sub-Challenge, the conversational partner (child/adult) has to be identified.
* In the Cold Sub-Challenge, it has to be determined, for the first time, whether a speaker is suffering from a cold.
* In the Snoring Sub-Challenge, four types of snoring sounds have to be recognised.
All Sub-Challenges allow contributors to use their own features and their own machine learning algorithms; however, a standard feature set will be provided that may be used. Participants will have to adhere to the given partitioning into training, development, and test sets. They may report results obtained on the development sets, but are limited to five upload trials per Sub-Challenge for results on the test set, whose labels are unknown to them. Each participation has to be accompanied by a paper presenting the results; the paper undergoes the normal INTERSPEECH peer review and has to be accepted for the conference in order for the authors to participate in the Challenge. The organisers reserve the right to re-evaluate the findings, but will not participate in the Challenge themselves.
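For orientation, the sketch below illustrates this workflow in Python: train on the training partition, tune and report on the development partition, and produce test predictions for one of the (at most five) upload trials. The file names, the CSV layout, and the choice of a linear SVM are illustrative assumptions only, not part of the official Challenge package.

    # Illustrative Sub-Challenge workflow (assumed file names and layout):
    # one CSV per partition, one row per instance, a 'label' column for
    # train/devel, and the provided acoustic features in the other columns.
    import pandas as pd
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import LinearSVC
    from sklearn.metrics import recall_score

    def load(split):
        df = pd.read_csv(f"features_{split}.csv")
        y = df.pop("label") if "label" in df.columns else None
        return df, y

    X_train, y_train = load("train")
    X_devel, y_devel = load("devel")

    scaler = StandardScaler().fit(X_train)
    clf = LinearSVC(C=1e-3).fit(scaler.transform(X_train), y_train)

    # Unweighted average recall (macro-averaged recall) is the usual
    # ComParE evaluation measure; it may be reported on the development set.
    devel_pred = clf.predict(scaler.transform(X_devel))
    print("UAR on devel:", recall_score(y_devel, devel_pred, average="macro"))

    # Test labels are withheld; predictions are uploaded to the organisers.
    X_test, _ = load("test")
    pd.Series(clf.predict(scaler.transform(X_test))).to_csv(
        "test_predictions.csv", index=False)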
In these respects, the INTERSPEECH 2017 Computational Paralinguistics ChallengE (ComParE) shall help to bridge the gap between excellent research on paralinguistic information in spoken language and the low compatibility of results. We encourage both contributions aiming at the highest performance w.r.t. the baselines provided by the organisers and contributions aiming at finding new and interesting insights w.r.t. these data. Overall, contributions using the provided or equivalent data are sought, including (but not limited to):
· Participation in a Sub-Challenge
· Contributions focussing on Computational Paralinguistics centred around the Challenge topics
The results of the Challenge will be presented at Interspeech 2017 in Stockholm, Sweden.
Prizes will be awarded to the Sub-Challenge winners. If you are interested and plan to participate in INTERSPEECH 2017 ComParE, or if you want to be kept informed about the Challenge, please send the organisers an e-mail ([log in to unmask]) to indicate your interest and visit the homepage: http://emotion-research.net/sigs/speech-sig/is17-compare
On behalf of the organisers,
Univ.-Prof. Dr.-Ing. habil.
Björn W. Schuller
Chair of Complex and Intelligent Systems
University of Passau
Passau / Germany
Reader (Associate Professor)
Department of Computing
Imperial College London
London / U.K.
Gilching / Germany
School of Computer Science and Technology
Harbin Institute of Technology
Harbin / P.R. China
Centre Interfacultaire en Sciences Affectives
Université de Genève
Geneva / Switzerland
Editor in Chief
IEEE Transactions on Affective Computing
[log in to unmask]