This year's 10th anniversary Interspeech Computational Paralinguistics Challenge (ComParE) has now also opened the Atypical and the Self-Assessed Affect Sub-Challenges.
All Sub-Challenges are now open for participation.
Call for Participation
INTERSPEECH 2018 ComParE
Computational Paralinguistics challengE
Atypical & Self-Assessed Affect, Crying & Heart Beats
Björn Schuller (University of Augsburg, Germany & Imperial College, UK)
Stefan Steidl (FAU Erlangen-Nuremberg, Germany)
Anton Batliner (University of Augsburg, Germany)
Peter B. Marschik (MU Graz, Austria / UMC Göttingen, Germany / KI, Sweden)
Harald Baumeister (University of Ulm, Germany)
Fengquan Dong (Shenzhen University General Hospital, PR China)
Important dates:
16 March 2018  Paper abstract registration
23 March 2018  Final paper submission
11 June 2018   Final result upload
17 June 2018   Camera-ready paper
The Interspeech 2018 Computational Paralinguistics ChallengE (ComParE) is an open Challenge dealing with states and traits of speakers as they are manifested in their speech signal's acoustic properties. There have so far been nine consecutive Challenges at INTERSPEECH since 2009 (cf. the Challenge series' repository at http://www.compare.openaudio.eu), but a multiplicity of highly relevant paralinguistic phenomena has not yet been covered. Thus, in this year's 10th anniversary edition, we introduce four new tasks. The following Sub-Challenges are addressed:
* In the Atypical Affect Sub-Challenge, the emotion of disabled speakers is to be recognised.
* In the Self-Assessed Affect Sub-Challenge, self-assessed affect shall be determined.
* In the Crying Sub-Challenge, mood-related types of infant vocalisation have to be classified.
* In the Heart Beats Sub-Challenge, types of heart beat sounds need to be distinguished.
All Sub-Challenges allow contributors to find their own features and to use their own machine learning algorithms. However, a standard feature set and standard tools will be provided that may be used. Participants will have to stick to the definition of training, development, and test sets as given. They may report results obtained on the development sets, but have only five trials per Sub-Challenge to upload their results on the test set, whose labels are unknown to them. Each participation has to be accompanied by a paper presenting the results; the paper undergoes the normal Interspeech peer review and has to be accepted for the conference in order for the authors to participate in the Challenge. The organisers reserve the right to re-evaluate the findings, but will not participate themselves in the Challenge.
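For orientation, below is a minimal sketch in Python of what a protocol-compliant experiment looks like: train on the training partition, tune only on the development partition, and leave the test labels untouched. All file names, the column layout, and the choice of a linear SVM with the unweighted average recall (UAR) measure used in previous ComParE editions are assumptions for illustration; the actual data format and baseline tools will be specified with the Challenge package.

  # Minimal sketch (assumptions: per-partition CSV files with a
  # 'label' column; file names are hypothetical placeholders).
  import pandas as pd
  from sklearn.pipeline import make_pipeline
  from sklearn.preprocessing import StandardScaler
  from sklearn.svm import LinearSVC
  from sklearn.metrics import recall_score

  train = pd.read_csv("features_train.csv")  # hypothetical name
  devel = pd.read_csv("features_devel.csv")  # hypothetical name
  X_tr, y_tr = train.drop(columns="label"), train["label"]
  X_dv, y_dv = devel.drop(columns="label"), devel["label"]

  # Tune the complexity parameter on the development set only;
  # the test set labels remain unknown to participants.
  for C in (1e-4, 1e-3, 1e-2, 1e-1, 1.0):
      clf = make_pipeline(StandardScaler(), LinearSVC(C=C))
      clf.fit(X_tr, y_tr)
      uar = recall_score(y_dv, clf.predict(X_dv), average="macro")
      print("C=%g: UAR on devel = %.3f" % (C, uar))

Only the development-set results of such a search would be reported in the paper; the best configuration is then allowed at most five scored uploads on the test set per Sub-Challenge.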
In these respects, the INTERSPEECH 2018 Computational Paralinguistics challengE (ComParE) shall help bridge the gap between excellent research on paralinguistic information in spoken language and the so far low compatibility of results. We encourage both contributions aiming at the highest performance with respect to the baselines provided by the organisers and contributions aiming at finding new and interesting insights with respect to these data. Overall, contributions using the provided or equivalent data are sought, including (but not limited to):
* Participation in a Sub-Challenge
* Contributions focussing on Computational Paralinguistics centred around the Challenge topics
The results of the Challenge will be presented at Interspeech 2018 in Hyderabad, India.
Prizes will be awarded to the Sub-Challenge winners. If you are interested in and planning to participate in INTERSPEECH 2018 ComParE, or if you want to be kept informed about the Challenge, please send the organisers an e-mail ([log in to unmask]) to indicate your interest and visit the homepage at http://www.compare.openaudio.eu.
With best wishes,
On behalf of the organisers
Professor Dr habil.
Björn W. Schuller
Chair of Embedded Intelligence for Health Care and Wellbeing
University of Augsburg / Germany
Head GLAM - Group on Audio, Language & Music
Imperial College London / UK
CEO audEERING GmbH
Gilching / Germany
Harbin Institute of Technology / China
Editor in Chief
IEEE Transactions on Affective Computing
[log in to unmask]