CHI-ANNOUNCEMENTS Archives

ACM SIGCHI General Interest Announcements (Mailing List)

CHI-ANNOUNCEMENTS@LISTSERV.ACM.ORG

Date: Thu, 17 Dec 2020 11:17:17 -0500
From: Alison Renner <[log in to unmask]>
Reply-To: Alison Renner <[log in to unmask]>
Sender: "ACM SIGCHI General Interest Announcements (Mailing List)" <[log in to unmask]>
*** Extended submission deadline: January 4th, 2021 ***

Workshop on Transparency and Explanations in Smart Systems (TExSS)

Explainable AI for Fairness and Social Justice

Held in conjunction with ACM Intelligent User Interfaces (IUI) 2021, April
13-17, Virtual.

Smart systems that apply complex reasoning to make decisions and plan
behavior, such as decision support systems and personalized
recommendations, are difficult for users to understand. Algorithms can
exploit rich and varied data sources to support human decision-making
and/or to take direct action; however, there are increasing concerns about
their transparency and accountability, as these processes are typically
opaque to the user, e.g., because they are too technically complex to be
explained or are protected trade secrets. The topics of transparency and
accountability have attracted increasing interest as means to more
effective system training, better reliability, and improved usability. This
workshop will provide a venue for exploring issues that arise in designing,
developing, and evaluating intelligent user interfaces that provide system
transparency or explanations of their behavior. We will focus specifically
on explaining systems and models toward ensuring fairness and social
justice, such as approaches to detecting or mitigating algorithmic biases
or discrimination (e.g., awareness, data provenance, and validation).

Suggested themes include, but are not limited to:

- What are explanations? What should they look like? What should be
included in explanations and how (and to whom) should they be presented?

- Is transparency (or explainability) always a good idea? Can transparent
algorithms or explanations “hurt” the user experience, and in what
circumstances?

- How can we build (good) algorithmic systems, particularly those that
demonstrate that they are fair, accountable, and unbiased?

- What are the optimal points at which explanations are needed for
transparency?

- What are more transparent models that still have good performance in
terms of speed and accuracy?

- What is important in user modeling for system transparency and
explanations?

- What are possible metrics that can be used when evaluating transparent
systems and explanations?

- How can we evaluate explanations and their ability to accurately explain
underlying algorithms and overall systems’ behavior, especially for the
goals of fairness and accountability?

- How can explanations allow human evaluators to select model(s) that are
unbiased, such as by revealing traits or outcomes of the underlying learned
system?

- What are important social aspects in interaction design for system
transparency and explanations?

- How can we detect biases and discrimination in transparent systems?

- Through explanations, transparency, or other means, how can we raise
stakeholders’ awareness of the potential risk for biases and social harms
that could result from developing and using intelligent systems?

Researchers and practitioners in academia or industry who have an interest
in these areas are invited to submit papers up to 6 pages (not including
references) in the ACM SIGCHI Paper Format (see
http://iui.acm.org/2021/call_for_papers.html). These submissions must be
original and relevant contributions. Examples include, but are not limited
to, position papers summarizing authors’ existing research in this area and
how it relates to the workshop theme, papers offering an industrial
perspective on the workshop theme or a real-world approach to the workshop
theme, papers that review the related literature and offer a new
perspective, and papers that describe work-in-progress research projects.

Papers should be submitted via EasyChair (
https://easychair.org/conferences/?conf=texss2021) by the end of January
4th, 2021 (extended deadline) and will be reviewed by committee members. Position papers do
not need to be anonymized. At least one author of each accepted position
paper must register for and attend the workshop. It is anticipated that
accepted contributions will be published in dedicated workshop proceedings.
For further questions please contact the workshop organizers at <
[log in to unmask]>.

The workshop will feature a keynote by Timnit Gebru (
https://ai.stanford.edu/~tgebru/) who co-leads the Ethical Artificial
Intelligence Team at Google. Paper authors will then present their work as
part of thematic panels. The remainder of the workshop will consist of
smaller group activities related to the workshop theme. For more
information visit our website at
http://explainablesystems.comp.nus.edu.sg/2021.

Important Dates

==============

Submission deadline   Jan 4, 2021 (extended)

Notifications sent    Jan 31, 2021

Camera-ready          Feb 28, 2021

Workshop date         April 13, 2021

Organizing Committee

===================

Alison Smith-Renner, Machine Learning Visualization Lab, DAC/WBB, United
States

Styliani Kleanthous Loizou, Cyprus Centre for Algorithmic Transparency,
Open University of Cyprus, Nicosia, Cyprus

Jonathan Dodge, Oregon State University, Corvallis, Oregon, United States

Casey Dugan, IBM Research, Cambridge, Massachusetts, United States

Min Kyung Lee, University of Texas at Austin, Austin, Texas, United States

Brian Y Lim, Department of Computer Science, National University of
Singapore, Singapore, Singapore

Tsvi Kuflik, Information Systems, The University of Haifa, Haifa, Israel

Advait Sarkar, Microsoft Research, Cambridge, United Kingdom

Avital Shulner-Tal, Information Systems, The University of Haifa, Haifa,
Israel

Simone Stumpf, Centre for HCI Design, City, University of London, London,
United Kingdom

    ----------------------------------------------------------------------------------------
    To unsubscribe from CHI-ANNOUNCEMENTS send an email to:
     mailto:[log in to unmask]

    To manage your SIGCHI Mailing lists or read our policies see:
     https://sigchi.org/operations/listserv/
    ----------------------------------------------------------------------------------------
