MM-INTEREST Archives

ACM SIGMM Interest List

MM-INTEREST@LISTSERV.ACM.ORG

From:     XAVIER ANGUERA MIRO <[log in to unmask]>
Date:     Fri, 10 Dec 2010 17:28:12 +0100
Sender:   ACM SIGMM Interest List <[log in to unmask]>
Reply-To: XAVIER ANGUERA MIRO <[log in to unmask]>

                                (Apologies if you receive multiple copies)

**************************************************************
                               Workshop on Multimodal Audio-based Multimedia
                                       Content Analysis (MAMCA-2011)
                                      website: http://www.mamca2011.com

                               In Conjunction with the IEEE International Conference
                                        on Multimedia and Expo (ICME)
                                      Barcelona, Spain, July 11-15, 2011

                                             Call for Papers

**************************************************************


By definition, multimedia content is composed of multiple forms, including
audio, video, text/subtitles, and others. Traditionally, applications and
algorithms that work with such content have considered only a single modality,
allowing, for example, searches over textual tags while ignoring any information
available from other modalities. The limitations of this approach are obvious,
and there is a recent trend towards multimodal processing, in which different
content modalities complement each other or are used to bootstrap the analysis
of new modalities.

Audio is a prominent part of multimedia content and is backed by extensive
research from the speech and music communities, although that research has
usually been carried out on audio-only systems. The utility of audio-only
systems is often limited by the quality of the acoustic environment or the
information it contains, so such systems can benefit from multimodal analysis
of multimedia data to improve performance, robustness, and efficiency.

The main goal of the workshop is to explore ways in which audio processing can
be enhanced, bootstrapped, or facilitated by other available information modalities.
We are interested not only in applications that demonstrate successful combinations
of audio and other sources of information, but also in algorithms that effectively
integrate them and leverage complementary information from each modality to obtain
an enhanced result, in terms of level of detail, coverage of the corpus, or other
enabling factors.

The workshop will provide a forum for publication of high-quality, novel research
on multimedia applications and multimodal processing, with a special focus on the
audio modality.

Paper submission
-----------------

MAMCA 2011 solicits regular technical papers of up to 6 pages following the ICME
author guidelines. The proceedings of the workshop will be published as part of
the IEEE ICME 2011 main conference proceedings and will be indexed by IEEE Xplore.
Papers must be original and must not have been submitted to or accepted by any
other conference or journal.

Papers submitted to the workshop will be peer-reviewed by members of the community
with extensive experience in audio processing as well as in the other relevant
modalities. The review process will be semi-blind, with papers assigned to
reviewers manually so that each submission generally receives three thorough
reviews.

Papers can be submitted through the ICME submissions website at http://www.icme2011.org/submission.php


Topics of interest
------------------

Topics include, but are not limited to:
- Effective fusion of audio with other modalities
- Multimodal input applications, where one input is audio
- Multimodal databases
- Bootstrapping of multimodal systems
- Co-training for labeling new data
- User-in-the-loop methods to detect preferences
- Games with a purpose for labeling new data
- Improving robustness through multimodality
- Prediction of modality preference
- Applications that utilize multimodality


Important dates
---------------

- Paper submission deadline: February 20th 2011
- Paper acceptance notification: April 10th 2011
- Camera-ready paper: April 20th 2011
- Workshop day: tentatively July 11th or 15th, 2011


Organizing committee
--------------------

Xavier Anguera (Telefonica Research)
Gerald Friedland (ICSI)
Florian Metze (CMU)


