ACM SIGCHI General Interest Announcements (Mailing List)


From: Bjoern Schuller <[log in to unmask]>
Reply-To: Bjoern Schuller <[log in to unmask]>
Date: Thu, 15 Apr 2021 10:16:39 +0000
Dear Colleagues,

The 2nd International Multimodal Sentiment Analysis in Real-life Media Challenge and Workshop (MuSe 2021)
@ ACM Multimedia, October 2021, Chengdu, China

is now open.


The Multimodal Sentiment Analysis Challenge and Workshop (MuSe) focuses on multimodal sentiment recognition of data sourced from user-generated content and stress-induced situations. The competition aims to compare multimedia processing and deep learning methods for automatic audio-visual, biological-signal, and text-based sentiment and emotion sensing under a common set of experimental conditions.

The goal of the challenge is to provide a common, benchmarkable test set for multimodal information processing and to bring together the Affective Computing, Sentiment Analysis, and Health Informatics communities to compare the merits of multimodal fusion across a large number of modalities under well-defined conditions. A further motivation is the need to advance sentiment and emotion recognition systems so that they can handle previously unexplored naturalistic behaviour in large volumes of in-the-wild data. The raw video recordings, transcriptions, pre-processed features, and model baselines are available on our website.

We are calling for teams to participate in four Sub-Challenges:

Multimodal Continuous Emotions in-the-Wild Sub-challenge (MuSe-Wilder)
Predicting the level of emotional dimensions (valence, arousal) in a time-continuous manner from audio-visual recordings.

Multimodal Sentiment Classification Sub-challenge (MuSe-Sent)
Predicting five intensity classes of emotion, derived from valence and arousal, for segments of audio-visual recordings.

Multimodal Emotional Stress Sub-challenge (MuSe-Stress)
Predicting the level of emotion (dimensions of arousal and valence) in a time-continuous manner from biological signals and audio-visual recordings.

Multimodal Biosignal Affect Sub-challenge (MuSe-Physio)
Predicting the combined signal of human-annotated arousal and electrodermal activity (i.e., physiological arousal) in a time-continuous manner from audio-visual-text data and biological signals.

Important Dates

Challenge opening: 01 April 2021
Paper submission: late July 2021
Notification of acceptance: late August 2021
Camera-ready paper: early September 2021
Workshop (at ACM Multimedia): 20-24 October 2021


Organisers

Björn W. Schuller, Imperial College London, UK, [log in to unmask]
Erik Cambria, NTU/SenticNet, SG, [log in to unmask]
Eva-Maria Meßner, Ulm University, DE, [log in to unmask]
Guoying Zhao, University of Oulu, FI, [log in to unmask]
Lukas Stappen, University of Augsburg, DE, [log in to unmask]

Welcome to the Challenge!

Best wishes,

Björn Schuller 
On behalf of the organisers


Univ.-Prof. mult. Dr. habil. Björn W. Schuller

Professor and Chair of Embedded Intelligence for Health Care and Wellbeing
University of Augsburg / Germany

Professor of Artificial Intelligence
Head GLAM - Group on Language, Audio & Music
Imperial College London / UK


Field Chief Editor Frontiers in Digital Health

[log in to unmask]

    To unsubscribe from CHI-ANNOUNCEMENTS send an email to:
     mailto:[log in to unmask]

    To manage your SIGCHI Mailing lists or read our policies see: