CHI-ANNOUNCEMENTS Archives

ACM SIGCHI General Interest Announcements (Mailing List)

CHI-ANNOUNCEMENTS@LISTSERV.ACM.ORG

From:
Bjoern Schuller <[log in to unmask]>
Date:
Sat, 6 Feb 2016 08:09:58 +0000
Subject:

Dear Colleagues,

The Interspeech 2016 Computational Paralinguistics ChallengE (ComParE) has opened the Native Language Sub-Challenge.

Please find details at:
http://emotion-research.net/sigs/speech-sig/is16-compare

With best wishes

Björn Schuller
On behalf of the organisers



Call for Participation

INTERSPEECH 2016 ComParE:

Computational Paralinguistics challengE



Deception, Sincerity & Native Language



Organisers:

Björn Schuller (University of Passau, Germany & Imperial College London, UK)

Stefan Steidl (FAU Erlangen-Nuremberg, Germany)

Anton Batliner (TUM, Germany)

Julia Hirschberg (Columbia University, USA)

Judee K. Burgoon (University of Arizona, USA)

Eduardo Coutinho (University of Liverpool, UK)



Dates:

Paper Submission       23 March 2016

Final Result Upload    16 June 2016

Camera-ready Paper     24 June 2016



The Challenge

The Interspeech 2016 Computational Paralinguistics ChallengE (ComParE) is an open Challenge dealing with states and traits of speakers as manifested in their speech signal's acoustic properties. Seven consecutive Challenges have been held at INTERSPEECH since 2009 (cf. the Challenge series' repository at http://www.compare.openaudio.eu), yet many highly relevant paralinguistic phenomena remain uncovered. We therefore introduce three new tasks in this year's edition. The following Sub-Challenges are addressed:



* In the Deception Sub-Challenge, deceptive speech has to be identified.

* In the Sincerity Sub-Challenge, perceived sincerity of speakers has to be determined for the first time.

* In the Native Language Sub-Challenge, the native language of non-native English speakers from eleven countries has to be recognised.



All Sub-Challenges allow contributors to use their own features and machine learning algorithms; however, a standard feature set will be provided that may be used. Participants have to stick to the definition of training, development, and test sets as given. They may report results obtained on the development sets, but have only five trials per Sub-Challenge to upload results on the test sets, whose labels are unknown to them. Each participation has to be accompanied by a paper presenting the results; the paper undergoes the normal Interspeech peer review and has to be accepted for the conference in order to participate in the Challenge. The organisers reserve the right to re-evaluate the findings, but will not themselves participate in the Challenge.
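
For orientation, here is a minimal sketch of what an entry pipeline could look like under these rules: train a linear classifier on the provided standard features for the fixed training set and report unweighted average recall (UAR, the usual ComParE measure) on the development set. The file names (train.arff, dev.arff), the ARFF layout (instance name first, class label last), and all parameter values below are illustrative assumptions, not part of the official Challenge package.

# Minimal ComParE-style baseline sketch (Python, scikit-learn).
# Assumptions: feature files are ARFF with the instance name in the
# first column and the class label in the last column; the file names
# "train.arff" and "dev.arff" are hypothetical.
import numpy as np
from scipy.io import arff
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC
from sklearn.metrics import recall_score

def load_arff(path):
    # Return the numeric feature matrix and the label vector.
    data, _ = arff.loadarff(path)
    X = np.array([list(row)[1:-1] for row in data], dtype=float)
    y = np.array([row[-1].decode() for row in data])
    return X, y

X_train, y_train = load_arff("train.arff")
X_dev, y_dev = load_arff("dev.arff")

# Standardisation is fitted on the training set only, respecting the
# fixed training/development/test partitioning.
scaler = StandardScaler().fit(X_train)

# A small C is a common choice for high-dimensional acoustic feature sets.
clf = LinearSVC(C=1e-4, max_iter=10000)
clf.fit(scaler.transform(X_train), y_train)

# UAR equals macro-averaged recall.
y_pred = clf.predict(scaler.transform(X_dev))
print("Development-set UAR: %.3f" % recall_score(y_dev, y_pred, average="macro"))

Only once such a pipeline performs satisfactorily on the development set would one of the five test-set upload trials sensibly be spent.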

In these respects, the INTERSPEECH 2016 Computational Paralinguistics challengE (ComParE) shall help bridge the gap between excellent research on paralinguistic information in spoken language and the low comparability of results. We encourage both contributions aiming at the highest performance with respect to the baselines provided by the organisers and contributions aiming at new and interesting insights with respect to these data. Overall, contributions using the provided or equivalent data are sought, including (but not limited to):



* Participation in a Sub-Challenge

* Contributions focussing on Computational Paralinguistics centred around the Challenge topics



The results of the Challenge will be presented at Interspeech 2016 (http://www.interspeech2016.org/) in San Francisco, U.S.A.

Prizes will be awarded to the Sub-Challenge winners. If you are interested in participating in INTERSPEECH 2016 ComParE, or want to be kept informed about the Challenge, please send the organisers an e-mail ([log in to unmask]) to indicate your interest, and visit the homepage: http://emotion-research.net/sigs/speech-sig/is16-compare



___________________________________________

Univ.-Prof. Dr.-Ing. habil.
Björn W. Schuller

Head (Full Professor)
Chair of Complex and Intelligent Systems
University of Passau
Passau / Germany

Reader (Associate Professor)
Department of Computing
Imperial College London
London / U.K.

CEO
audEERING UG
Gilching / Germany

Visiting Professor
School of Computer Science and Technology
Harbin Institute of Technology
Harbin / P.R. China

Associate
Institute for Information and Communication Technologies
Joanneum Research
Graz / Austria

Associate
Centre Interfacultaire en Sciences Affectives
Université de Genève
Geneva / Switzerland

Editor in Chief
IEEE Transactions on Affective Computing

[log in to unmask]
http://www.schuller.one
___________________________________________




