CHI-ANNOUNCEMENTS Archives

ACM SIGCHI General Interest Announcements (Mailing List)

CHI-ANNOUNCEMENTS@LISTSERV.ACM.ORG

From: "Sheng-Wei (Kuan-Ta) Chen" <[log in to unmask]>
Reply-To: Sheng-Wei (Kuan-Ta) Chen
Date: Mon, 7 May 2012 13:35:39 +0800

[Apologies if you receive this more than once]

Call For Papers for CrowdMM 2012 - International ACM Workshop on
Crowdsourcing for Multimedia
---------------------------------------


CrowdMM 2012
International ACM Workshop on Crowdsourcing for Multimedia
held in conjunction with ACM Multimedia 2012, Oct 29 - Nov 2, 2012, Nara,
Japan

http://crowdmm.org

Crowdsourcing--leveraging a large number of human contributors and the
capabilities of human computation--has enormous potential to address key
challenges in the area of multimedia research. Applications of
crowdsourcing range from the exploitation of unsolicited user
contributions, such as using tags to aid image understanding, to utilizing
crowdsourcing platforms and marketplaces to micro-outsource tasks such as
semantic video annotation. Further, crowdsourcing offers a time- and
resource-efficient method for collecting large volumes of input for system
design or evaluation, making it possible to optimize multimedia systems
more rapidly and to address human factors more effectively.

CrowdMM 2012 solicits novel contributions to multimedia research that make
use of human intelligence, but also take advantage of human plurality. This
workshop especially encourages contributions that propose solutions to the
key challenges facing widespread adoption of crowdsourcing paradigms in the
multimedia research community. These include: identifying optimal crowd
members (e.g., by user expertise or worker reliability), providing effective
explanations (i.e., good task design), controlling noise and quality in the
results, designing incentive structures that do not encourage cheating or
adversarial behavior, gathering necessary background information about crowd
members without violating their privacy, and formulating clear task
descriptions. Particular emphasis will be put on contributions that
successfully combine human and automatic methods in order to address
multimedia research challenges.
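
As a purely illustrative sketch of the quality-control challenge above (not
part of this call), redundant labels can be aggregated by a majority vote
weighted by each worker's accuracy on "gold" questions with known answers;
all function names and data below are hypothetical:

    # Hypothetical sketch: estimate worker reliability from gold questions,
    # then aggregate redundant task labels by reliability-weighted voting.
    from collections import defaultdict

    def worker_accuracy(gold, worker_answers):
        """Fraction of gold questions each worker answered correctly."""
        acc = {}
        for worker, answers in worker_answers.items():
            graded = [ans == gold[q]
                      for q, ans in answers.items() if q in gold]
            acc[worker] = sum(graded) / len(graded) if graded else 0.5
        return acc

    def weighted_majority(labels, acc):
        """Pick the label with the highest accuracy-weighted vote mass."""
        votes = defaultdict(float)
        for worker, label in labels.items():
            votes[label] += acc.get(worker, 0.5)  # unknown worker: neutral
        return max(votes, key=votes.get)

    # Toy data: three workers label task "t1"; w3 fails the gold check "g1".
    gold = {"g1": "cat"}
    answers = {"w1": {"g1": "cat", "t1": "dog"},
               "w2": {"g1": "cat", "t1": "dog"},
               "w3": {"g1": "bird", "t1": "cat"}}
    acc = worker_accuracy(gold, answers)
    task_labels = {w: a["t1"] for w, a in answers.items()}
    print(weighted_majority(task_labels, acc))  # -> dog

How many gold questions to use and how to set the vote weights are exactly
the kind of open design questions on which the workshop invites analysis.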

This workshop encourages theoretical, experimental, and/or methodological
developments advancing state-of-the-art knowledge of crowdsourcing
techniques for multimedia research. Topics include, but are not limited to,
the use of crowds, the wisdom of crowds, or human computation in the
following areas of multimedia research:

* Creation: content synthesis, authoring, editing, collaboration,
summarization, and storytelling
* Evaluation: evaluation of multimedia signal processing algorithms,
multimedia analysis and retrieval algorithms, or multimedia systems and
applications
* Retrieval: analysis of user multimedia queries, evaluating multimedia
search algorithms and interactive multimedia retrieval
* Annotation: generating semantic annotations for multimedia content,
collecting large-scale input on user affective reactions
* Human factors: designing or evaluating user interfaces for multimedia
systems, usability studies, multi-modal environments, human recognition and
perception
* Novel applications (e.g., human as an element in the loop of computation)
* Effective learning from crowd-annotated or crowd-augmented datasets (see
the sketch after this list)
* Quality assurance and cheat detection
* Economics and incentive structures
* Programming languages, tools, and platforms providing enhanced support
for crowdsourcing
* Inherent biases, limitations and trade-offs of crowd-centered approaches
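
As a second hypothetical sketch (again, not part of this call), the
learning-from-crowds topic above can be illustrated by keeping annotator
disagreement as a soft label, i.e., per-class vote fractions, rather than
forcing a single hard label; the data below is invented:

    # Hypothetical sketch: turn redundant crowd votes into soft targets.
    from collections import Counter

    def soft_label(votes, classes):
        """Map a list of crowd votes to per-class vote fractions."""
        counts = Counter(votes)
        return {c: counts.get(c, 0) / len(votes) for c in classes}

    crowd_votes = {"img1": ["cat", "cat", "dog"],
                   "img2": ["dog", "dog", "dog"]}
    for item, votes in crowd_votes.items():
        print(item, soft_label(votes, ["cat", "dog"]))
    # img1 -> {'cat': 0.67, 'dog': 0.33} (approx.; disagreement preserved)
    # img2 -> {'cat': 0.0, 'dog': 1.0}

A learner trained against such targets (e.g., with a cross-entropy loss)
treats disagreement as label uncertainty rather than as noise to discard.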


==========
Submission
==========

CrowdMM 2012 welcomes submissions of full papers, as well as short papers
reporting work-in-progress. Full papers must be no longer than 6 pages
(inclusive of all figures, references, and appendices). Short papers are
limited to 2 pages and will be presented as posters in an interactive
setting.

All submissions must be written in English and must be formatted according
to the ACM Proceedings style. They must contain no information identifying
the author(s) or their organization(s). Reviews will be double-blind.
Papers will be judged on their relevance, technical content and correctness,
and the clarity of presentation of the research.

Details of the submission procedures will soon be available at
http://crowdmm.org.

===========
Publication
===========

Accepted full and short papers will appear in the ACM Multimedia 2012
Workshop Proceedings and in the ACM Digital Library.

Outstanding workshop papers will qualify for submission in extended form to
the special issue "Crowdsourcing for Multimedia" of IEEE Transactions on
Multimedia.  The special issue is scheduled to be published in late 2013.


===============
Important Dates
===============

Submission due:      June 29, 2012
Author notification: July 24, 2012
Camera ready due:    August 15, 2012
Workshop date:       October 29, 2012


==========
Organizers
==========

Wei-Ta Chu, National Chung Cheng University, Taiwan
Martha Larson, Delft University of Technology, Netherlands
Wei Tsang Ooi, National University of Singapore, Singapore
Kuan-Ta Chen, Academia Sinica, Taiwan

=======
Contact
=======

CrowdMM Website: http://crowdmm.org

For any questions or more information, please contact the workshop
co-chairs:
Wei-Ta Chu ([log in to unmask]), Martha Larson ([log in to unmask]),
Wei Tsang Ooi ([log in to unmask]), or Kuan-Ta Chen (
[log in to unmask]).
