CHI-ANNOUNCEMENTS Archives

ACM SIGCHI General Interest Announcements (Mailing List)

CHI-ANNOUNCEMENTS@LISTSERV.ACM.ORG

Subject:
From: Bjoern Schuller <[log in to unmask]>
Reply-To: Bjoern Schuller <[log in to unmask]>
Date: Thu, 22 Jan 2015 20:58:45 +0000
Content-Type: text/plain
This is to announce the opening of all Sub-Challenges of Interspeech 2015 ComParE:


Call for Participation

INTERSPEECH 2015 ComParE:

Computational Paralinguistics challengE



Degree of Nativeness, Parkinson's & Eating Condition

http://emotion-research.net/sigs/speech-sig/is15-compare

Organisers:

Björn Schuller (University of Passau, Germany & Imperial College London, UK)

Stefan Steidl (FAU Erlangen-Nuremberg, Germany)

Anton Batliner (TUM, Germany)

Florian Hönig (FAU Erlangen-Nuremberg, Germany)

Rafael Orozco (FAU Erlangen-Nuremberg, Germany & U. de Antioquia, Colombia)

Dates:

Paper Submission       20 March 2015
Final Result Upload    05 June 2015
Camera-ready Paper     10 June 2015

The Challenge

[Image attachments omitted; linked sites: http://www.isca-speech.org/ and http://ihearu.eu/]

The Interspeech 2015 Computational Paralinguistics ChallengE (ComParE) is an open Challenge dealing with states of speakers as manifested in the acoustic properties of their speech signal. There have been six consecutive Challenges at INTERSPEECH since 2009 (cf. the Challenge series' repository at http://www.compare.openaudio.eu), yet many highly relevant paralinguistic phenomena remain uncovered. We therefore introduce three new tasks. For all of them, the data are provided by the organisers; they comprise a high diversity of speakers and cover several languages: (non-native) English, Spanish, and German. The following Sub-Challenges are addressed:



*              In the Degree of Nativeness (DN) Sub-Challenge, the degree of nativeness of a speaker has to be determined from the acoustics.

*              In the Parkinson's Condition (PC) Sub-Challenge, the degree of Parkinson's condition has to be recognised by speech analysis.

*              In the Eating Condition (EC) Sub-Challenge, the eating condition of a speaker, i.e. the type of consumed food (seven classes), has to be determined for the first time.



The measure of competition will be Spearman correlation for DN and PC, and unweighted accuracy for EC. Orthographic transcriptions of the sets will be made available. All three Sub-Challenges allow contributors to use their own features and their own machine learning algorithms; however, a standard feature set will be provided that may be used. Participants have to stick to the definition of training and test sets. They may report results obtained on the training sets, but have only five trials to upload their results on the test sets, whose labels are unknown to them. Each participation has to be accompanied by a paper presenting the results; the paper undergoes the normal Interspeech peer review and has to be accepted for the conference in order for the entry to count in the Challenge. The organisers reserve the right to re-evaluate the findings, but will not participate in the Challenge themselves.
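
To make the evaluation concrete, the following is a minimal sketch of how the two competition measures could be computed with standard Python tools (scipy and scikit-learn are assumed; the labels and predictions are purely illustrative, and "unweighted accuracy" is taken here as per-class recall averaged without class weighting, as is customary in this Challenge series):

  # Minimal sketch of the two competition measures (illustrative data only).
  import numpy as np
  from scipy.stats import spearmanr
  from sklearn.metrics import recall_score

  # DN / PC: Spearman correlation between gold scores and continuous predictions.
  gold_scores = np.array([0.1, 0.5, 0.3, 0.9])   # hypothetical labels
  pred_scores = np.array([0.2, 0.4, 0.35, 0.8])  # hypothetical predictions
  rho, _ = spearmanr(gold_scores, pred_scores)
  print("Spearman's rho: %.3f" % rho)

  # EC: unweighted accuracy over the seven classes, i.e. every class
  # contributes equally regardless of how many instances it has.
  gold_labels = ["apple", "crisps", "no food", "apple"]   # hypothetical
  pred_labels = ["apple", "no food", "no food", "apple"]  # hypothetical
  ua = recall_score(gold_labels, pred_labels, average="macro")
  print("Unweighted accuracy: %.3f" % ua)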



In these respects, the INTERSPEECH 2015 Computational Paralinguistics challengE (ComParE) shall help to bridge the gap between excellent research on paralinguistic information in spoken language and the low compatibility of results.
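
As one illustration of such a comparable workflow, here is a hypothetical sketch of how a participant might train on the provided standard feature set and produce a test-set upload. The file names, the CSV layout, and the choice of a linear SVM are assumptions for illustration only, not part of the official protocol:

  # Hypothetical end-to-end sketch for a classification Sub-Challenge such
  # as EC; for the DN and PC regression tasks a regressor (e.g. LinearSVR)
  # would be used instead. Assumed layout: last CSV column holds the label.
  import numpy as np
  import pandas as pd
  from sklearn.pipeline import make_pipeline
  from sklearn.preprocessing import StandardScaler
  from sklearn.svm import LinearSVC

  train = pd.read_csv("train_features.csv")   # assumed: features + labels
  test = pd.read_csv("test_features.csv")     # assumed: features only
  X_train = train.iloc[:, :-1].to_numpy()
  y_train = train.iloc[:, -1].to_numpy()
  X_test = test.to_numpy()

  # Standardise the acoustic features, then fit a linear SVM, a common
  # choice for large per-utterance feature vectors.
  model = make_pipeline(StandardScaler(), LinearSVC(C=1e-3))
  model.fit(X_train, y_train)

  # One prediction per test instance, written out for a result upload.
  np.savetxt("predictions.txt", model.predict(X_test), fmt="%s")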



We encourage both contributions aiming at the highest performance with respect to the baselines provided by the organisers and contributions aiming at new and interesting insights with respect to these data. Overall, contributions using the provided or equivalent data are sought, including (but not limited to):



*              Participation in a Sub-Challenge

*              Contributions focussing on Computational Paralinguistics centred around the Challenge topics



The results of the Challenge will be presented at Interspeech 2015 in Dresden, Germany.

Prizes will be awarded to the Sub-Challenge winners. In addition, we plan a "best paper award" for the paper with the most innovative and/or interesting approach.



If you are interested and planning to participate in INTERSPEECH 2015 ComParE, or if you want to be kept informed about the Challenge, please send the organisers an e-mail ([log in to unmask]) to indicate your interest and visit the homepage: http://emotion-research.net/sigs/speech-sig/is15-compare




___________________________________________

Univ.-Prof. Dr.-Ing. habil.
Björn W. Schuller

Head
Chair of Complex and Intelligent Systems
University of Passau
Passau / Germany

Senior Lecturer in Machine Learning
Department of Computing
Imperial College London
London / U.K.

CEO
audEERING UG (limited)
Gilching / Germany

Visiting Professor
School of Computer Science and Technology
Harbin Institute of Technology
Harbin / P.R. China

Associate
Institute for Information and Communication Technologies
Joanneum Research
Graz / Austria

Associate
Centre Interfacultaire en Sciences Affectives
Université de Genève
Geneva / Switzerland

Editor in Chief
IEEE Transactions on Affective Computing

President
Association for the Advancement of Affective Computing
AAAC

[log in to unmask]
http://www.schuller.it
___________________________________________



