ACM SIGCHI General Interest Announcements (Mailing List)


From:     Alejandro Bellogin Kouki <[log in to unmask]>
Reply-To: Alejandro Bellogin Kouki <[log in to unmask]>
Date:     Mon, 21 Jul 2014 09:28:17 +0200
Content:  text/plain (222 lines)
------------------ Final Call for Papers - REDD 2014 ------------------

                 International ACM RecSys Workshop on

  Recommender Systems Evaluation: Dimensions and Design - REDD 2014

          Foster City, Silicon Valley, CA, USA, October 2014



* Extended deadline: 28 July 2014 *



Evaluation is a cardinal issue in recommender systems; as in almost any
other technical discipline, it highlights to a large extent the problems
that need to be solved by the field and, hence, leads the way for
algorithmic research and development in the community. Yet, in the field
of recommender systems, there still exists considerable disparity in
evaluation methods, metrics, and experimental designs, as well as a
significant mismatch between evaluation methods in the lab and what
constitutes an effective recommendation for real users and businesses.
This workshop aims to provide an informal forum to tackle such issues
and move towards better understood and commonly agreed evaluation
methodologies, allowing the efforts of the academic community to be
focused on directions that are meaningful and relevant to real-world
developments.

REDD 2014 places a specific focus, on the one hand, on the
identification and measurement of different recommendation quality
dimensions that go beyond the monolithic concept of simply matching user
preferences. Novelty and diversity, for instance, have been recognized as
key components of the utility of recommendations for users in real-world
scenarios, with a direct positive effect on business performance.
Considering the business perspective, performance metrics related to
sales, revenues, and user engagement along the recommendation funnel
should also be used. Additionally, from an engineering point of view,
aspects such as efficiency, scalability, robustness, and user interface
design are typically major concerns, often prioritized over the
effectiveness of the internal algorithms at the core of the system. On
the other hand, once a relevant target quality has been defined, a clear
evaluation protocol should be specified in detail and agreed upon,
allowing for the comparison, replicability, and reproducibility of
results and experiments by different authors, and enabling incremental
progress.
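To make the beyond-accuracy dimensions mentioned above concrete, here is a minimal sketch (illustrative only; the function names and data layout are our own, not part of the workshop materials) of two commonly used metrics: item novelty as the self-information of an item's popularity, and intra-list diversity as the average pairwise Jaccard distance between the feature sets of recommended items.

```python
import math
from itertools import combinations

def novelty(recommended, item_popularity, n_users):
    """Mean self-information of the recommended items:
    -log2(fraction of users who interacted with the item).
    Rarer items score higher, i.e. are more novel."""
    return sum(-math.log2(item_popularity[i] / n_users)
               for i in recommended) / len(recommended)

def intra_list_diversity(recommended, item_features):
    """Average pairwise Jaccard distance between the feature
    sets of the recommended items; 0 = identical, 1 = disjoint."""
    def jaccard_dist(a, b):
        fa, fb = item_features[a], item_features[b]
        return 1 - len(fa & fb) / len(fa | fb)
    pairs = list(combinations(recommended, 2))
    return sum(jaccard_dist(a, b) for a, b in pairs) / len(pairs)
```

A list of obscure, mutually dissimilar items scores high on both metrics even when a plain accuracy measure would not distinguish it from a list of popular near-duplicates.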

REDD 2014 aims at gathering researchers and practitioners interested in
better understanding the unmet needs of the field in terms of evaluation
methodologies and experimental practices. The main goal of this workshop
is to provide an informal setting for discussing and exchanging ideas,
experiences, and viewpoints. REDD seeks to identify and better
understand the current gaps in recommender system evaluation
methodologies, help lay directions for progress in addressing them, and
foster the consolidation and convergence of experimental methods and
practices.


We invite the submission of papers reporting original research, studies,
advances, or experiences that focus on recommender system utility
evaluation. The topics that the workshop seeks to address
include--though need not be limited to--the following:

* Recommendation quality dimensions

  - Effective accuracy, ranking quality

  - Novelty, diversity, unexpectedness, serendipity

  - Utility, gain, cost, risk, benefit

  - Robustness, confidence, coverage, ease of use, persuasiveness, etc.

* Matching metrics to tasks, needs, and goals

  - User satisfaction, user perception, human factors

  - Business-oriented evaluation

  - Multiple objective optimization, user engagement

  - Quality of service, quality of experience

* Evaluation methodology and experimental design

  - Definition and evaluation of new metrics, studies of existing ones

  - Adaptation of methodologies from related fields: IR, Machine
Learning, HCI, etc.

  - Evaluation theory

* Practical aspects of evaluation

  - Offline and online experimental approaches

  - Simulation-based evaluation

  - Datasets and benchmarks

  - Validation of metrics

  - Efficiency and scalability

  - Open evaluation platforms and infrastructures
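As an illustration of the "offline experimental approaches" topic above, the following sketch (a hypothetical protocol of our own, not one prescribed by the workshop) shows a per-user hold-out split and a precision@k measurement, the kind of design whose details (split ratio, seed, cut-off k) workshops like this seek to standardize.

```python
import random

def holdout_split(user_items, test_ratio=0.2, seed=42):
    """Split each user's interactions into train/test sets by
    holding out a fixed fraction (at least one item) per user."""
    rng = random.Random(seed)
    train, test = {}, {}
    for user, items in user_items.items():
        items = list(items)
        rng.shuffle(items)
        cut = max(1, int(len(items) * test_ratio))
        test[user] = set(items[:cut])
        train[user] = set(items[cut:])
    return train, test

def precision_at_k(recommended, held_out, k=5):
    """Fraction of the top-k recommended items found in the
    user's held-out test set."""
    hits = sum(1 for item in recommended[:k] if item in held_out)
    return hits / k
```

Seemingly minor choices here (shuffled vs. temporal splits, the value of k, how ties are broken) can change reported results substantially, which is exactly the reproducibility concern the workshop addresses.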



Two submission types are accepted: technical papers up to 6 pages long,
and position papers up to 3 pages. Each paper will be evaluated by at
least two reviewers from the Programme Committee. The papers will be
evaluated for their originality, significance of contribution,
soundness, clarity, and overall quality. Subject to a required quality
standard, position papers will be evaluated on the new perspectives and
insights they present, and on their potential for provoking thought and
stimulating discussion.

All submissions shall adhere to the standard ACM SIG proceedings format.
The accepted papers will be published in the CEUR Workshop Proceedings
series.

Submissions shall be sent as a PDF file through the online submission
system, now open at:

Important dates


Paper submission deadline: 28 July 2014 (extended)

Author notification:       21 August 2014

Camera ready version due:  5 September 2014

REDD 2014 workshop:        October 2014

Programme Committee


Linas Baltrunas, Telefonica Research, Spain

Marcel Blattner, Univ. of Applied Sciences, Switzerland

Iván Cantador, Universidad Autónoma de Madrid, Spain

Charles Clarke, University of Waterloo, Canada

Juan Manuel Fernández, Universidad de Granada, Spain

Zeno Gantner, Nokia, Germany

Ido Guy, IBM Haifa Research Lab, Israel

Juan Huete, Universidad de Granada, Spain

Kris Jack, Mendeley, UK

Dietmar Jannach, University of Dortmund, Germany

Jaap Kamps, University of Amsterdam, Netherlands

Alexandros Karatzoglou, Telefonica Research, Spain

Bart Knijnenburg, University of California, Irvine, USA

Till Plumbaum, TU Berlin, Germany

Filip Radlinski, Microsoft, Canada

Alan Said, TU Delft, Netherlands

Yue Shi, Yahoo! Labs, USA

Fabrizio Silvestri, Yahoo!, Spain

David Vallet, Google Inc., Australia

Arjen de Vries, CWI, The Netherlands

Jun Wang, University College London, UK

Xiang-Jun Wang, Netflix

Xiaoxue Zhao, University College London, UK



Organisers


Panagiotis Adamopoulos, New York University, USA

Alejandro Bellogín, Universidad Autónoma de Madrid, Spain

Pablo Castells, Universidad Autónoma de Madrid, Spain

Paolo Cremonesi, Politecnico di Milano, Italy

Harald Steck, Netflix, USA

Contact email: [log in to unmask]

More info at:

    For news of CHI books, courses & software, join CHI-RESOURCES
     mailto: [log in to unmask]

    To unsubscribe from CHI-ANNOUNCEMENTS send an email to
     mailto:[log in to unmask]

    For further details of CHI lists see