=====================================================
NIPS 2015 Workshop: Multimodal Machine Learning
Montreal, Quebec, Canada
https://sites.google.com/site/multiml2015/
=====================================================

IMPORTANT DATES

*       Submission Deadline: October 9th, 2015, 11:59pm PDT

*       Author Notification:  October 24th, 2015

*       Workshop: Friday, December 11, 2015

KEYNOTE SPEAKERS

*       Dhruv Batra (Virginia Tech)

*       Shih-Fu Chang (Columbia University)

*       Li Deng (Microsoft Research)

*       Raymond Mooney (University of Texas, Austin)

*       Ruslan Salakhutdinov (Carnegie Mellon University)

OVERVIEW
Multimodal machine learning aims to build models that can process and relate information from multiple modalities. From early research on audio-visual speech recognition to the recent explosion of interest in models that map images to natural language, multimodal machine learning is a vibrant multidisciplinary field of increasing importance and extraordinary potential.
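
For readers outside the area, the sketch below illustrates the simplest form of such a model: "early fusion", where per-modality features are concatenated into a joint representation before prediction. It is purely illustrative and not part of this call; the dimensions are arbitrary and the random features stand in for real audio and video encoders.

    import numpy as np

    # Illustrative sketch only: early fusion of two modalities.
    # Random features stand in for the outputs of real encoders.
    rng = np.random.default_rng(0)
    audio_feats = rng.standard_normal((8, 40))   # 8 clips x 40 audio features
    video_feats = rng.standard_normal((8, 64))   # 8 clips x 64 visual features

    fused = np.concatenate([audio_feats, video_feats], axis=1)  # joint representation
    W = rng.standard_normal((fused.shape[1], 2))  # toy linear classifier, 2 classes
    logits = fused @ W

    # Numerically stable softmax over classes.
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    print(probs.shape)  # (8, 2): one class distribution per clip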

Learning from paired multimodal sources offers the possibility of capturing correspondences between modalities and gaining an in-depth understanding of natural phenomena. Multimodal data thus provides a means of reducing our dependence on the standard supervised learning paradigm, which is inherently limited by the availability of labeled examples.
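
One common way to exploit such pairings, sketched below, is a contrastive objective that scores each matched pair above all mismatched ones, so that two encoders learn corresponding embeddings without any labels. This is an illustration rather than a prescribed method; the embeddings are random stand-ins for hypothetical image and text encoders, and the temperature value is arbitrary.

    import numpy as np

    # Illustrative sketch: a contrastive loss over N paired samples.
    # Random embeddings stand in for the outputs of two modality encoders.
    rng = np.random.default_rng(1)
    N, d = 16, 32
    img = rng.standard_normal((N, d))
    txt = rng.standard_normal((N, d))

    # L2-normalize so dot products are cosine similarities.
    img /= np.linalg.norm(img, axis=1, keepdims=True)
    txt /= np.linalg.norm(txt, axis=1, keepdims=True)

    sim = img @ txt.T / 0.1  # N x N similarities, temperature 0.1 (arbitrary)
    log_probs = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    loss = -np.mean(np.diag(log_probs))  # matched pairs lie on the diagonal
    print(f"contrastive loss: {loss:.3f}")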

This research field poses unique challenges for machine learning researchers, given the heterogeneity of the data and the complementarity often found between modalities. The workshop will facilitate progress in multimodal machine learning by bringing together researchers from natural language processing, multimedia, computer vision, speech processing, and machine learning to discuss current challenges and identify the research infrastructure needed to enable stronger multidisciplinary collaboration.

TOPICS
We invite contributed papers that apply machine learning to multimodal data. We are interested both in application-oriented papers and in more fundamental algorithmic or theoretical work.

A non-exhaustive list of relevant topics:

*       Automatic image and video description

*       Multimodal signal processing

*       Audio-visual speech recognition

*       Multimodal affect recognition

*       Cross-modal multimedia retrieval

*       Multi-view multi-task learning

*       Multimodal representation learning

*       Multi-sensory computational modeling

*       Multilingual, multimodal language processing

*       Multimodal modeling for robotics control

*       Multimodal human behavior modeling

SUBMISSIONS
Authors should submit an extended abstract of 4 to 6 pages (including references). To emphasize the multidisciplinary aspect of this research area, we encourage submissions of work previously published outside the machine learning community (i.e., not at NIPS or ICML). We also encourage submissions of relevant work in progress.

Submitted abstracts may be a shortened version of a longer paper or technical report, in which case the longer paper should be referenced in the submission. Reviewers will be asked to judge the submission solely on the basis of the submitted extended abstract.

All submissions must be in PDF format, and we encourage authors to follow the style guidelines of NIPS 2015 at: https://nips.cc/Conferences/2015/PaperInformation/AuthorSubmissionInstructions

Submissions must be made through:
https://cmt.research.microsoft.com/MMML2015/

Submissions will be reviewed for relevance, quality, and novelty. Accepted submissions will be presented as posters during the poster session (before the lunch break), and a handful will be selected for short talks.

ORGANIZERS

*       Louis-Philippe Morency ([log in to unmask])

*       Tadas Baltrušaitis ([log in to unmask])

*       Aaron Courville ([log in to unmask])

*       KyungHyun Cho ([log in to unmask])


