ACM SIGMM Interest List


From: Ooi Wei Tsang <[log in to unmask]>
Date: Thu, 23 May 2013 02:16:30 +0800
                         CROWDMM 2013
    International ACM Workshop on Crowdsourcing for Multimedia 

                       IDEA COMPETITION

Submit a short description of a multimedia crowdsourcing
experiment that you would run, if you had US$ 250 of
crowdsourcing credit on the Microworkers crowdsourcing platform.
Please submit your idea (200-500 words) by June 28, 2013 by
filling out the competition form at

Ten ideas meeting the criteria below will be chosen, and each
will be awarded US$ 250 worth of free crowdsourcing credit.

The ideas will be judged using the following three criteria and
the decision will be sent shortly after your submission:

- The clarity and simplicity with which the idea is explained. (A
  clear and simple-to-understand task will run the most
  successfully on the crowdsourcing platform.)

- The relationship between your idea and the areas of ACM
  CrowdMM 2013.

- Innovation and contribution to the state of the art in
  crowdsourcing for multimedia.

Additional US$ 500 for further crowdsourcing tests will be given
to the best crowdsourcing idea submitted to ACM CrowdMM 2013.

We look forward to receiving your idea and we wish you happy
crowdsourcing!
We thank Microworkers for donating the credit that makes this
competition possible.

If you have any questions, please contact Tobias Hossfeld at:
   [log in to unmask]


                       CALL FOR PAPERS
                         CROWDMM 2013
    International ACM Workshop on Crowdsourcing for Multimedia 

          held in conjunction with ACM Multimedia 2013
                     Oct 21 - Oct 25 2013
                       Barcelona, Spain

The power of crowds -- leveraging a large number of human
contributors and the capabilities of human computation -- has
enormous potential to address key challenges in the area of
multimedia research.  Applications range from the exploitation of
unsolicited user contributions, such as using tags to aid
understanding of the visual content of yet-unseen images, to
utilizing crowdsourcing platforms and marketplaces like Amazon's
Mechanical Turk and CrowdFlower, which micro-outsource tasks such
as semantic video annotation to a large population of workers.
Further, crowdsourcing offers a time- and resource-efficient
method for collecting large volumes of input for system design
and evaluation, making it possible to optimize multimedia systems
more rapidly and to address human factors more effectively.

CrowdMM 2013 solicits novel contributions to multimedia
research that make use of human intelligence, but also take
advantage of human plurality.  We will especially encourage
contributions that propose solutions for the key challenges that
face widespread adoption of crowdsourcing paradigms in the
multimedia research community.  These include: identifying
optimal crowd members (e.g., by user expertise or worker
reliability), providing effective explanations (i.e., good task
design), controlling noise and quality in the results, designing
incentive structures that do not invite cheating, coping with
adversarial environments, gathering necessary background
information about crowd members without violating their privacy,
and controlling task descriptions.
Particular emphasis will be put on contributions that
successfully combine human and automatic methods in order to
address multimedia research challenges.
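As one concrete illustration of the reliability and quality-control challenges listed above, the following sketch (not part of the call; all names, data, and thresholds are illustrative assumptions) aggregates noisy crowd labels by majority vote after screening workers on gold (known-answer) questions:

```python
# Illustrative sketch only: majority-vote label aggregation with
# gold-question screening of workers. The 0.7 accuracy threshold
# and all names here are assumptions, not part of the CFP.
from collections import Counter, defaultdict

def worker_accuracy(answers, gold):
    """Fraction of a worker's gold-question answers that are correct."""
    graded = [(q, a) for q, a in answers.items() if q in gold]
    if not graded:
        return 0.0
    return sum(a == gold[q] for q, a in graded) / len(graded)

def aggregate(all_answers, gold, min_accuracy=0.7):
    """Majority vote over workers who pass the gold-question check."""
    votes = defaultdict(Counter)
    for worker, answers in all_answers.items():
        if worker_accuracy(answers, gold) < min_accuracy:
            continue  # likely spammer: discard this worker's labels
        for q, a in answers.items():
            if q not in gold:
                votes[q][a] += 1
    return {q: c.most_common(1)[0][0] for q, c in votes.items()}

# Toy example: worker w3 fails the gold checks, so its vote is ignored.
gold = {"g1": "cat", "g2": "dog"}
all_answers = {
    "w1": {"g1": "cat", "g2": "dog", "q1": "cat"},
    "w2": {"g1": "cat", "g2": "dog", "q1": "cat"},
    "w3": {"g1": "dog", "g2": "cat", "q1": "dog"},
}
print(aggregate(all_answers, gold))  # {'q1': 'cat'}
```

Real deployments typically go beyond this, e.g. with probabilistic worker models, but even simple gold-question filtering captures the cheat-detection idea the workshop topics refer to.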

This workshop encourages theoretical, experimental, and
methodological developments advancing state-of-the-art knowledge
of crowdsourcing techniques for multimedia research and novel
applications using crowdsourcing to solve traditional challenges
in multimedia research.  Topics include, but are not limited to,
the use of crowds, wisdom of crowds, or human computation in
multimedia, in the following areas of research:

- Creation: content synthesis, authoring, editing, and
  collaboration, summarization and storytelling

- Evaluation: evaluation of multimedia signal processing
  algorithms, multimedia analysis and retrieval algorithms, or
  multimedia systems and applications

- Retrieval: analysis of user multimedia queries, evaluation of
  multimedia search algorithms, and interactive multimedia
  retrieval

- Annotation: generating semantic annotations for multimedia
  content, collecting large-scale input on user affective
  responses

- Human factors: designing or evaluating user interfaces for
  multimedia systems, usability studies, multi-modal
  environments, human recognition and perception

- Novel applications (e.g., humans as elements in the loop of
  computation)

- Effective learning from crowd-annotated or crowd-augmented
  data

- Quality assurance and cheat detection

- Economics and incentive structures

- Programming languages, tools, and platforms providing enhanced
  support for crowdsourcing

- Inherent biases, limitations, and trade-offs of crowd-centered
  approaches

CrowdMM 2013 welcomes submissions of full papers, as well as
short papers reporting work-in-progress. Full papers must be no
longer than 6 pages (inclusive of all figures, references, and
appendices).  Short papers are limited to 2 pages and will be
presented as posters in an interactive setting.

All submissions must be written in English and must be formatted
according to the ACM Proceedings style.  They must contain no
information identifying the author(s) or their organization(s).
Reviews will be double-blind.  Papers will be judged on their
relevance, technical content and correctness, and the clarity of
presentation of the research.

Details of the submission procedure will soon be available on our
workshop website.

Accepted full and short papers will appear in the ACM Multimedia
2013 Workshop Proceedings and in the ACM Digital Library.
Outstanding workshop papers will qualify for submission in
extended form for a fast-track review at IEEE Transactions on
Multimedia.

Important dates:
Submission due:      June 28, 2013
Author notification: July 15, 2013
Camera ready due:    July 25, 2013
Workshop date:       October 22, 2013

Workshop co-chairs:
- Wei-Ta Chu, National Chung Cheng University, Taiwan
- Martha Larson, Delft University of Technology, Netherlands
- Kuan-Ta Chen, Academia Sinica, Taiwan

Idea competition:
- Tobias Hossfeld, University of Wuerzburg, Germany

Publicity and Web: 
- Wei Tsang Ooi, National University of Singapore, Singapore 

For questions or more information, please contact workshop
co-chairs: Wei-Ta Chu ([log in to unmask]), Martha Larson
([log in to unmask]), or Kuan-Ta Chen ([log in to unmask])


To unsubscribe from the MM-INTEREST list:
write to: mailto:[log in to unmask]