ACM SIGCHI General Interest Announcements (Mailing List)
From: Bart Knijnenburg <[log in to unmask]>
Reply-To: Bart Knijnenburg <[log in to unmask]>
Date: Fri, 20 Apr 2018 13:05:50 +0000
[Our apologies if you receive multiple copies of this CfP]


1st International Workshop on Multi-Method Evaluation of Personalized Systems (MuMe 2018)

held in conjunction with UMAP 2018 (User Modeling, Adaptation and Personalization)

8-11 July 2018 at Nanyang Technological University, Singapore


Are you using multiple methods in the evaluation of recommender systems and other personalized systems?

Then please consider submitting to this workshop!

The primary goal of this workshop is to build a community around the multi-method evaluation topic and to develop a long-term research agenda for the topic.

We have extended the submission deadline to April 25 (Wednesday).

Position papers welcome!
(…meaning you do not need to write a full paper for a chance to attend UMAP in beautiful Singapore in July! 😊)

Feel free to contact the workshop chairs if you have questions: [log in to unmask]

We solicit position and research papers (4 pages excluding references, UMAP 2018 format) that address challenges in the multi-method evaluation of recommender systems and other personalized systems. This includes:
- "lessons learned" from the successful application of multi-method evaluations,
- "post mortem" analyses describing specific evaluation strategies that failed to uncover decisive elements,
- "overview papers" analyzing patterns of challenges or obstacles to multi-method evaluation, and
- "solution papers" presenting solutions towards identified challenges.

Possible questions addressed may include (but are not limited to):
- How can we select evaluation methods that allow us to identify blind spots in user experience? What criteria could be used to compare and evaluate the suitability of methods for given evaluation objectives, and how can we develop such criteria?
- How can we integrate and combine the results of multiple methods to get a comprehensive picture of user experience?
- What are the challenges and limitations of single- or multi-method evaluation of RecSys? How can we overcome such hurdles?
- What are viable user-centric multi-method study designs (guidelines) for evaluating RecSys? What are the lessons learned from successful or unsuccessful user-centric multi-method study designs?

Important Dates
Submission deadline: April 25, 2018 (EXTENDED)
Notification: May 15, 2018
Deadline for camera ready version: May 27, 2018
Workshop date: July 8, 2018, Singapore
(all deadlines are AoE)

Workshop Chairs
Christine Bauer, Johannes Kepler University Linz, Austria
Eva Zangerle, University of Innsbruck, Austria
Bart P. Knijnenburg, Clemson University, USA

For details, visit the workshop’s website.
