Co-located with
ACL 2020
Website:
http://multicomp.cs.cmu.edu/acl2020multimodalworkshop/
=================================================================
The ACL 2020 Second Grand-Challenge and Workshop on Multimodal Language offers a unique opportunity for interdisciplinary
researchers to study and model interactions between the language, vision, and acoustic modalities. Modeling multimodal language is a growing research area in NLP that pushes the boundaries of multimodal learning and requires advanced neural modeling
of all three constituent modalities. Advances in this area allow the field of NLP to take a leap toward better generalization to real-world communication (as opposed to being limited to textual applications) and better downstream performance in Conversational
AI, Virtual Reality, Robotics, HCI, Healthcare, and Education.
There are two tracks for submission: Grand-Challenge and Workshop (the workshop allows both archival and non-archival submissions). The Grand-Challenge focuses on multimodal sentiment and emotion
recognition on the CMU-MOSEI and MELD datasets (with a grand prize worth over $1,000 for the winner). The workshop accepts publications in the research areas listed below. Archival-track papers will be published in the ACL workshop proceedings; non-archival papers will only be
presented during the workshop (not published in the proceedings). We invite researchers from NLP, Computer Vision, Speech Processing, Robotics, HCI, and Affective Computing to submit their papers.
We accept the following types of submissions:
Workshop Organizers: