ACM SIGCHI General Interest Announcements (Mailing List)


From: Alison Smith <[log in to unmask]>
Reply-To: Alison Smith <[log in to unmask]>
Date: Mon, 11 Dec 2017 14:10:55 -0500

ExSS2018: Workshop on Explainable Smart Systems (ExSS)
Intelligent User Interfaces (IUI)

Tokyo, Japan
March 8-11, 2018

Topics: machine learning, autonomous systems, explanations

Smart systems that apply complex reasoning to make decisions and plan behavior, such as clinical decision support systems, personalized recommenders, and machine learning classifiers, are difficult for users to understand. While research on making systems more explainable, and therefore more intelligible and transparent, is gaining pace, numerous issues surrounding these systems demand further attention. The goal of this workshop is to bring together researchers and industry practitioners to address these issues, such as when and how to provide an explanation to a user. The workshop will include a keynote, poster panels, and group activities, with the goal of developing concrete approaches to the challenges of designing and developing explainable smart systems.


Researchers in academia or industry with an interest in making smart systems explainable to users are invited to submit 2-4 page position papers in ACM SIGCHI Paper Format describing their previous research and experience with explainable models and interfaces, and the challenges they have encountered. Participants are encouraged to ground their positions in real application scenarios. Suggested topics include, but are not limited to:

• What is an explanation? What should it look like?
• Are explanations always a good idea? Can explanations “hurt” the user experience, and in what circumstances?
• What are the optimal points at which to provide explanations for a particular system?
• How can we measure the value of explanations and of how they are provided? What human factors influence the value of explanations?
• What are “more explainable” models that still perform well in terms of speed and accuracy?

Papers should be submitted via EasyChair by the end of 17 December 2017 and will be reviewed by committee members. At least one author of each accepted position paper must attend the workshop. Paper authors will prepare posters to be presented as part of a thematic panel. All attendees must register for the workshop and at least one day of the IUI conference.


Important dates:

Submission deadline: 17 December 2017
Notification to authors: 23 January 2018
Camera-ready copies due: 6 February 2018


Organizers:

Brian Lim - National University of Singapore
Alison Smith - Decisive Analytics Corporation, USA; University of Maryland, College Park, USA
Simone Stumpf - City, University of London, UK


Program committee:

Enrico Bertini - New York University, USA
Fan Du - University of Maryland, College Park, USA
Dave Gunning - DARPA, USA
Judy Kay - University of Sydney, Australia
Bran Knowles - Lancaster University, UK
Todd Kulesza - Microsoft, USA
Mark W. Newman - University of Michigan, USA
Deokgun Park - University of Maryland, College Park, USA
Forough Poursabzi-Sangdeh - University of Colorado, Boulder, USA
Jo Vermeulen - Aarhus University, Denmark

    For news of CHI books, courses & software, join CHI-RESOURCES
     mailto: [log in to unmask]

    To unsubscribe from CHI-ANNOUNCEMENTS send an email to
     mailto:[log in to unmask]

    For further details of CHI lists see