Sender: "ACM SIGCHI General Interest Announcements (Mailing List)" <[log in to unmask]>
Date: Fri, 17 Apr 2020 01:03:35 +0000
Reply-To: Louis-Philippe Morency <[log in to unmask]>
MIME-Version: 1.0
Message-ID: <[log in to unmask]>
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="us-ascii"
From: Louis-Philippe Morency <[log in to unmask]>
Parts/Attachments: text/plain (77 lines)
2nd Grand-Challenge and Workshop on Multimodal Language (Challenge-HML)



Co-located with ACL 2020

Website: http://multicomp.cs.cmu.edu/acl2020multimodalworkshop/



Keynotes:

  *   Yejin Choi - University of Washington
  *   M. Ehsan Hoque - University of Rochester
  *   Rada Mihalcea - University of Michigan
  *   Ruslan Salakhutdinov - Carnegie Mellon University

Important Dates:

  *   Paper Deadline: May 18th (Workshop) and May 20th (Grand-Challenge)
  *   Notification of Acceptance: May 26th
  *   Camera-ready: June 2nd

Supported by:

  *   National Science Foundation (NSF)
  *   Intel

=================================================================



The ACL 2020 Second Grand-Challenge and Workshop on Multimodal Language (Challenge-HML) offers a unique opportunity for interdisciplinary researchers to study and model interactions between the linguistic, visual, and acoustic modalities. Modeling multimodal language is a growing research area in NLP that pushes the boundaries of multimodal learning and requires advanced neural modeling of all three constituent modalities. Advances in this area allow NLP to move beyond purely textual applications towards better generalization to real-world communication, and towards better downstream performance in Conversational AI, Virtual Reality, Robotics, HCI, Healthcare, and Education.
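
To make the modeling setup concrete, below is a minimal sketch of one common approach: encode each modality separately and fuse the summaries to predict a sentiment score. It assumes PyTorch; the class name, encoder choice, and feature dimensions are illustrative assumptions, not anything prescribed by the workshop.

# Illustrative trimodal late-fusion model (a sketch, not the official
# challenge baseline). Feature dimensions are placeholder values.
import torch
import torch.nn as nn

class LateFusionSentiment(nn.Module):
    def __init__(self, text_dim=300, audio_dim=74, vision_dim=35, hidden=64):
        super().__init__()
        # One LSTM encoder per modality summarizes its feature sequence.
        self.text_enc = nn.LSTM(text_dim, hidden, batch_first=True)
        self.audio_enc = nn.LSTM(audio_dim, hidden, batch_first=True)
        self.vision_enc = nn.LSTM(vision_dim, hidden, batch_first=True)
        # Fuse the three final hidden states by concatenation.
        self.head = nn.Linear(3 * hidden, 1)  # scalar sentiment score

    def forward(self, text, audio, vision):
        _, (h_t, _) = self.text_enc(text)
        _, (h_a, _) = self.audio_enc(audio)
        _, (h_v, _) = self.vision_enc(vision)
        fused = torch.cat([h_t[-1], h_a[-1], h_v[-1]], dim=-1)
        return self.head(fused)

# Example: batch of 2 utterances with per-modality sequence lengths.
model = LateFusionSentiment()
score = model(torch.randn(2, 20, 300),
              torch.randn(2, 50, 74),
              torch.randn(2, 30, 35))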



There are two submission tracks: Grand-Challenge and Workshop (the workshop accepts both archival and non-archival submissions). The Grand-Challenge focuses on multimodal sentiment and emotion recognition on the CMU-MOSEI and MELD datasets, with a grand prize worth more than $1,000 for the winner; a brief evaluation sketch follows the list below. Archival-track papers will be published in the ACL workshop proceedings; non-archival papers will be presented at the workshop but not included in the proceedings. We invite researchers from NLP, Computer Vision, Speech Processing, Robotics, HCI, and Affective Computing to submit papers in the following research areas:

  *   Neural Modeling of Multimodal Language
  *   Multimodal Dialogue Modeling and Generation
  *   Multimodal Sentiment Analysis and Emotion Recognition
  *   Language, Vision, and Speech
  *   Multimodal Artificial Social Intelligence Modeling
  *   Multimodal Commonsense Reasoning
  *   Multimodal RL and Control
  *   Multimodal Healthcare
  *   Multimodal Educational Systems
  *   Multimodal Affective Computing
  *   Multimodal Robot/Computer Interaction
  *   Multimodal and Multimedia Resources
  *   Creative Applications of Multimodal Learning in E-commerce, Art, and other Impactful Areas
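
As an illustration of how grand-challenge predictions might be scored, the snippet below computes accuracy and weighted F1, two metrics commonly reported on these datasets in prior work. It assumes scikit-learn, and the label arrays are hypothetical; the official challenge metrics are those defined on the workshop website.

# Illustrative scoring of emotion predictions. The label arrays are
# hypothetical; see the workshop website for the official metrics.
from sklearn.metrics import accuracy_score, f1_score

y_true = ["anger", "joy", "neutral", "neutral", "sadness"]  # gold labels
y_pred = ["anger", "neutral", "neutral", "joy", "sadness"]  # model output

print("accuracy:", accuracy_score(y_true, y_pred))
# Weighted F1 accounts for the strong class imbalance in emotion data.
print("weighted F1:", f1_score(y_true, y_pred, average="weighted"))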

We accept the following types of submissions:

  *   Grand-challenge papers: 6-8 pages, with an unlimited number of additional pages for references.
  *   Workshop papers: 6-8 pages (long) or 4 pages (short), with an unlimited number of additional pages for references.

Workshop Organizers:

  *   Amir Zadeh (Language Technologies Institute, Carnegie Mellon University)
  *   Louis-Philippe Morency (Language Technologies Institute, Carnegie Mellon University)
  *   Paul Pu Liang (Machine Learning Department, Carnegie Mellon University)
  *   Soujanya Poria (Singapore University of Technology and Design)



