CHI-ANNOUNCEMENTS Archives

ACM SIGCHI General Interest Announcements (Mailing List)

CHI-ANNOUNCEMENTS@LISTSERV.ACM.ORG

Content-Type:
text/plain; charset="us-ascii"
Date:
Wed, 14 Feb 2018 21:14:51 +0000
Reply-To:
Louis-Philippe Morency <[log in to unmask]>
Subject:
MIME-Version:
1.0
Message-ID:
Content-Transfer-Encoding:
quoted-printable
Sender:
"ACM SIGCHI General Interest Announcements (Mailing List)" <[log in to unmask]>
From:
Louis-Philippe Morency <[log in to unmask]>
Parts/Attachments:
text/plain (50 lines)
First Workshop on Computational Modeling of Human Multimodal Language
Co-located with ACL 2018 conference, Melbourne, Australia
Date: 20 July 2018 (submission deadline: 20 April 2018)
  *** workshop includes a Grand Challenge with new shared tasks on multimodal language modeling ***
http://multicomp.cs.cmu.edu/acl2018multimodalworkshop
The first ACL 2018 Workshop on Computational Modeling of Human Multimodal Language offers a unique opportunity for interdisciplinary researchers to study and model interactions between language, vision, and voice. The workshop also hosts a Grand Challenge that introduces new shared tasks built on the recently released CMU-MOSEI dataset: more than 23,000 annotated videos from more than 1,000 different speakers, covering more than 200 topics. Two shared tasks are presented: (1) multimodal sentiment analysis and (2) multimodal emotion recognition. The workshop also introduces the CMU Multimodal Data SDK, which lets the scientific community conveniently load large-scale multimodal datasets into formats suitable for TensorFlow and PyTorch.
The focus of this workshop is on joint analysis of language (spoken text), vision (gestures and expressions) and acoustic (paraverbal) modalities. We seek the following types of submissions:
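As a purely illustrative sketch of the kind of joint modeling the workshop targets (this is not the CMU Multimodal Data SDK, and all feature dimensions and names below are hypothetical), per-utterance features from the three modalities can be combined by early fusion, concatenating them before a downstream sentiment model:

```python
import numpy as np

# Hypothetical per-utterance features for the three modalities the
# workshop targets; shapes and names are illustrative assumptions.
rng = np.random.default_rng(0)
n_utterances = 4
text_feats = rng.normal(size=(n_utterances, 300))     # e.g. averaged word embeddings
visual_feats = rng.normal(size=(n_utterances, 35))    # e.g. facial-expression features
acoustic_feats = rng.normal(size=(n_utterances, 74))  # e.g. paraverbal/prosodic features

# Early fusion: concatenate modality features for each utterance.
fused = np.concatenate([text_feats, visual_feats, acoustic_feats], axis=1)
assert fused.shape == (n_utterances, 300 + 35 + 74)

# A linear sentiment scorer over the fused vector (weights would be
# learned from annotated data such as CMU-MOSEI; random here).
weights = rng.normal(size=fused.shape[1])
sentiment_scores = fused @ weights  # one continuous score per utterance
print(sentiment_scores.shape)       # one score for each of the 4 utterances
```

More sophisticated fusion strategies (late fusion, tensor fusion, attention over modalities) are exactly the design space the shared tasks are meant to explore.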

  *   Grand challenge papers: Papers summarizing research efforts on the CMU-MOSEI shared tasks on multimodal sentiment analysis and/or emotion recognition. Grand challenge papers are up to 8 pages, including references.
  *   Full and short papers: These papers present substantial, original, and unpublished research on human multimodal language. Full papers are up to 8 pages including references; short papers are 4 pages plus 1 page for references.
Topics of interest for full and short papers include:

  *   Multimodal sentiment analysis
  *   Multimodal emotion recognition
  *   Multimodal affective computing
  *   Multimodal speaker traits recognition
  *   Dyadic multimodal interactions
  *   Multimodal dialogue modeling
  *   Cognitive modeling and multimodal interaction
  *   Statistical analysis of human multimodal language
Submissions must be formatted according to the ACL 2018 style files: http://acl2018.org/call-for-papers/#paper-submission-and-templates
Important Dates

  *   Grand challenge data release: 18 January 2018
  *   Grand challenge test set available: 1 March 2018
  *   Paper deadline [grand challenge, full and short]: 20 April 2018
  *   Notification of Acceptance: 14 May 2018
  *   Camera ready: 28 May 2018
  *   Workshop date and location: 20 July 2018, ACL 2018, Melbourne, Australia
Workshop Organizers
  Amir Zadeh              (Language Technologies Institute, Carnegie Mellon University)
  Louis-Philippe Morency  (Language Technologies Institute, Carnegie Mellon University)
  Paul Pu Liang           (Machine Learning Department, Carnegie Mellon University)
  Soujanya Poria          (Temasek Laboratories, Nanyang Technological University)
  Erik Cambria            (Temasek Laboratories, Nanyang Technological University)
  Stefan Scherer          (Institute for Creative Technologies, University of Southern California)




    ---------------------------------------------------------------
    For news of CHI books, courses & software, join CHI-RESOURCES
     mailto: [log in to unmask]

    To unsubscribe from CHI-ANNOUNCEMENTS send an email to
     mailto:[log in to unmask]

    For further details of CHI lists see http://listserv.acm.org
    ---------------------------------------------------------------
