CHI-ANNOUNCEMENTS Archives

ACM SIGCHI General Interest Announcements (Mailing List)

CHI-ANNOUNCEMENTS@LISTSERV.ACM.ORG

From: Amirali Bagher Zadeh <[log in to unmask]>
Date: Fri, 31 Jan 2020 08:00:00 -0500
ACL 2020 Second Grand-Challenge and Workshop on Multimodal Language
(Challenge-HML)

Website: http://multicomp.cs.cmu.edu/acl2020multimodalworkshop/

Keynotes:

   - Rada Mihalcea – University of Michigan (USA)
   - Ruslan Salakhutdinov – Carnegie Mellon University (USA)
   - M. Ehsan Hoque – University of Rochester (USA)
   - Yejin Choi – University of Washington (USA)

Important Dates

   - Grand-challenge test data release: February 15th
   - Paper deadline: April 25th (Workshop) and May 1st (Grand-Challenge)
   - Notification of acceptance: May 9th
   - Camera-ready: May 21st
   - Workshop location: ACL 2020, Seattle, USA

**All deadlines are at 11:59 pm, Anywhere on Earth (AoE), 2020.**

Supported by:

   - National Science Foundation (NSF)
   - Intel

=================================================================

The ACL 2020 Second Grand-Challenge and Workshop on Multimodal Language
(Challenge-HML) offers a unique opportunity for interdisciplinary researchers
to study and model interactions between the language, vision, and acoustic
modalities. Modeling multimodal language is a growing research area in NLP
that pushes the boundaries of multimodal learning and requires advanced
neural modeling of all three constituent modalities. Advances in this area
allow the field of NLP to move beyond purely textual applications toward
better generalization to real-world communication, and to better downstream
performance in Conversational AI, Virtual Reality, Robotics, HCI, Healthcare,
and Education.

There are two tracks for submission: Grand-Challenge and Workshop (the
workshop allows both archival and non-archival submissions). The
Grand-Challenge focuses on multimodal sentiment and emotion recognition on
the CMU-MOSEI and MELD datasets, with a grand prize worth more than $1,000
for the winner. The workshop accepts papers in the research areas listed
below. Archival-track papers will be published in the ACL workshop
proceedings; non-archival papers will be presented at the workshop but not
included in the proceedings. We invite researchers from NLP, Computer
Vision, Speech Processing, Robotics, HCI, and Affective Computing to submit
their papers.

   - Neural Modeling of Multimodal Language
   - Multimodal Dialogue Modeling and Generation
   - Multimodal Sentiment Analysis and Emotion Recognition
   - Language, Vision, and Speech
   - Multimodal Artificial Social Intelligence Modeling
   - Multimodal Commonsense Reasoning
   - Multimodal RL and Control
   - Multimodal Healthcare
   - Multimodal Educational Systems
   - Multimodal Affective Computing
   - Multimodal Robot/Computer Interaction
   - Multimodal and Multimedia Resources
   - Creative Applications of Multimodal Learning in E-commerce, Art, and
     other Impactful Areas

We accept the following types of submissions:

   - Grand-challenge papers: 6-8 pages, with unlimited pages for references.
   - Full and short workshop papers: 6-8 pages and 4 pages respectively,
     with unlimited pages for references.

Submissions must be formatted according to the ACL 2020 style files:
https://acl2020.org/calls/papers/#paper-submission-and-templates

Workshop Organizers

   - Amir Zadeh (Language Technologies Institute, Carnegie Mellon University)
   - Louis-Philippe Morency (Language Technologies Institute, Carnegie
     Mellon University)
   - Paul Pu Liang (Machine Learning Department, Carnegie Mellon University)
   - Soujanya Poria (Singapore University of Technology and Design)

    ---------------------------------------------------------------
    For news of CHI books, courses & software, join CHI-RESOURCES
     mailto: [log in to unmask]

    To unsubscribe from CHI-ANNOUNCEMENTS send an email to
     mailto:[log in to unmask]

    For further details of CHI lists see http://listserv.acm.org
    ---------------------------------------------------------------
