CHI-ANNOUNCEMENTS Archives

ACM SIGCHI General Interest Announcements (Mailing List)

CHI-ANNOUNCEMENTS@LISTSERV.ACM.ORG

Sender: "ACM SIGCHI General Interest Announcements (Mailing List)" <[log in to unmask]>
From: Umair ulHassan <[log in to unmask]>
Reply-To: Umair ulHassan <[log in to unmask]>
Date: Fri, 31 Jul 2015 13:05:13 +0100

==== CALL FOR PAPERS ====
** Abstract registration deadline: November 20, 2015 **

*CROWDBENCH 2016* - http://crowdbench.insight-centre.org/
*International Workshop on Benchmarks for Ubiquitous Crowdsourcing:
Metrics, Methodologies, and Datasets*

In conjunction with IEEE PerCom 2016 - https://www.percom.org
IEEE International Conference on Pervasive Computing and Communications
14-18 March 2016
Sydney, Australia


WORKSHOP SCOPE:

The primary goal of this workshop is to synthesize existing research in
ubiquitous crowdsourcing and crowdsensing in order to establish
guidelines and methodologies for evaluating crowd-based algorithms and
systems.
This goal will be achieved by bringing together researchers from the
community to discuss and disseminate ideas for comparative analysis and
evaluation on shared tasks and datasets.
A variety of views on the evaluation of crowdsourcing has emerged
across research communities, but so far there has been little effort to
clarify their key differences and commonalities in a common forum.
This workshop aims to provide such a forum, creating the time and
engagement required to subject these different views to rigorous
discussion.
We expect the workshop to result in a set of short papers that clearly
argue positions on these issues.
These papers will serve as a base resource for consolidating research
in the field and moving it forward.
Further, we expect the discussions at the workshop to yield basic
specifications for metrics, benchmarks, and evaluation campaigns that
can then be considered by the wider community.

We invite short papers that identify and motivate comparative analysis
and evaluation approaches for crowdsourcing.
We encourage submissions that identify and clearly articulate problems
in evaluating crowdsourcing approaches, or algorithms designed to
improve the crowdsourcing process.
We welcome early work, and we particularly encourage visionary position
papers that suggest directions for improving the validity of
evaluations and benchmarks.
Topics include but are not limited to:

 - Domain or application specific datasets for the evaluation of
crowdsourcing/crowdsensing techniques
 - Generalized metrics for task aggregation methods in
crowdsourcing/crowdsensing (a minimal sketch of one such metric appears
after this list)
 - Generalized metrics for task assignment techniques in
crowdsourcing/crowdsensing
 - Online evaluation methods for task aggregation and task assignment
 - Simulation methodologies for testing crowdsourcing/crowdsensing
algorithms
 - Agent-based modeling methods for using existing simulation tools
 - Benchmarking tools for comparing crowdsourcing/crowdsensing
platforms or services
 - Mobile-based datasets for crowdsourcing/crowdsensing
 - Data sets with detailed spatio-temporal information for
crowdsourcing/crowdsensing
 - Using online collected data for offline evaluation
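
To make the flavor of these topics concrete, the following minimal
sketch (in Python) simulates noisy workers and measures the accuracy of
majority-vote aggregation against gold labels. It is purely
illustrative: the worker model, function names, and parameter values
are our own assumptions, not a prescribed benchmark or dataset.

    # Illustrative sketch only; all names and numbers are hypothetical.
    import random
    from collections import Counter

    def simulate_labels(truth, n_workers, accuracy, labels=(0, 1)):
        # Each simulated worker reports the true label with probability
        # `accuracy`, and otherwise a uniformly random wrong label.
        wrong = [l for l in labels if l != truth]
        return [truth if random.random() < accuracy else random.choice(wrong)
                for _ in range(n_workers)]

    def majority_vote(worker_labels):
        # Aggregate one task's labels by plurality vote.
        return Counter(worker_labels).most_common(1)[0][0]

    def aggregation_accuracy(gold, n_workers=5, worker_accuracy=0.7):
        # A candidate "generalized" metric: the fraction of tasks whose
        # aggregated label matches the gold label.
        hits = sum(
            1 for truth in gold
            if majority_vote(simulate_labels(truth, n_workers,
                                             worker_accuracy)) == truth)
        return hits / len(gold)

    random.seed(0)
    gold = [random.choice((0, 1)) for _ in range(1000)]
    # With 5 workers at 70% accuracy, expect roughly 0.84.
    print(aggregation_accuracy(gold))

A submission might, for instance, argue whether such a simulation-based
metric generalizes across task types, application domains, and
platforms, or propose a shared dataset that replaces the simulated
workers with real ones.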

Each submitted paper should focus on one dimension of evaluation and
benchmarks in crowdsourcing/crowdsensing. Multiple submissions per
author are encouraged, each articulating a distinct topic for
discussion at the workshop. Papers may argue the merits of an approach
or problem already published in earlier work by the author (or by
others). Papers should clearly identify the analytical and practical
aspects of evaluation methods and their specificity in terms of
crowdsourcing tasks, application domains, and/or types of platforms.
During the workshop, papers will be grouped into tracks, with each
track elaborating upon a particular critical area that merits further
work and study.


IMPORTANT DATES:

Abstract registration:     20 November 2015
Paper submissions:         27 November 2015
Paper notifications:       2 January 2016
Camera-ready submissions:  15 January 2016
Author registration:       15 January 2016
Workshop date:             14-18 March 2016


PAPER SUBMISSION AND PUBLICATION:

Authors should submit their papers via EDAS at https://edas.info/N21129.
Accepted papers will be included in the IEEE PerCom Workshop Proceedings
and will be indexed in the IEEE Xplore digital library.
All submissions will be reviewed by the Technical Program Committee for
relevance, originality, significance, validity, and clarity.
We are also exploring opportunities to publish extended versions of
workshop papers as journal articles and book chapters.

FORMATTING GUIDELINES:

Submissions are limited to 6 pages and must adhere to the IEEE format
(2 columns, 10 pt font).
LaTeX and Microsoft Word templates are available on the IEEE Computer
Society website at http://www.computer.org/web/cs-cps/authors and on
the conference website at https://www.percom.org/.

CONFERENCE REGISTRATION:

At least one author of each accepted paper must register for PerCom 2016
and must present the paper at the workshop.
Failure to present the paper at the workshop will result in the withdrawal
of the paper from the PerCom Workshop Proceedings.
For detailed venue and registration information, please consult the
conference website https://www.percom.org/.


ORGANIZING COMMITTEE:

 - Umair ul Hassan, Insight Centre for Data Analytics, Ireland
 - Edward Curry, University College Dublin, Ireland
 - Daqing Zhang, Institut Mines-Telecom/Telecom SudParis, France

TECHNICAL PROGRAM COMMITTEE:

 - Afra Mashhadi, Bell Labs, Ireland
 - Alessandro Bozzon, Delft University of Technology, Netherlands
 - Amrapali Zaveri, University of Leipzig, Germany
 - Bin Guo, Northwestern Polytechnical University, China
 - Brian Mac Namee, University College Dublin, Ireland
 - David Coyle, University College Dublin, Ireland
 - Fan Ye, Stony Brook University, United States
 - Gianluca Demartini, University of Sheffield, United Kingdom
 - Hien To, University of Southern California, United States
 - John Krumm, Microsoft Research, United States
 - Lora Aroyo, VU University Amsterdam, Netherlands
 - Matt Lease, University of Texas at Austin, United States
 - Raghu K. Ganti, IBM T. J. Watson Research Center, United States
 - Wang Yasha, Peking University, China


Please address queries related to this call to Umair ul Hassan at
umair.ulhassan(at)insight-centre.org

    ---------------------------------------------------------------
    For news of CHI books, courses & software, join CHI-RESOURCES
     mailto: [log in to unmask]

    To unsubscribe from CHI-ANNOUNCEMENTS send an email to
     mailto:[log in to unmask]

    For further details of CHI lists see http://listserv.acm.org
    ---------------------------------------------------------------
