CHI-ANNOUNCEMENTS Archives

ACM SIGCHI General Interest Announcements (Mailing List)

CHI-ANNOUNCEMENTS@LISTSERV.ACM.ORG

Sender: "ACM SIGCHI General Interest Announcements (Mailing List)" <[log in to unmask]>
Date: Mon, 7 Mar 2022 14:19:52 +0000
Reply-To: "Riener, Andreas" <[log in to unmask]>
From: "Riener, Andreas" <[log in to unmask]>
[our apologies if you receive multiple copies of this CfP]

========================================================================
  Call for Papers

  2nd ACM CHI Workshop on Human-Centered Explainable AI (HCXAI)
  Fully Virtual
  May 12-13, 2022 (half-day each)
  https://hcxai.jimdosite.com/
========================================================================

We are interested in a wide range of topics, from sociotechnical aspects of XAI to human-centered evaluation techniques to the responsible use of XAI. We are especially interested in the discourse around one or more of the following questions: who (e.g., clarifying who the human is in XAI, and how different "who's" interpret explainability), why (e.g., how social and individual factors influence explainability goals), and where (e.g., how explainability needs differ across application areas). Beyond these, we invite work on topics including, but not limited to, weaponizing AI explanations (e.g., inducing over-trust in AI), harmful effects of XAI, appropriate trust calibration, designing for accountability, and avoiding "ethics washing" in XAI. The following list of guiding questions is by no means exhaustive; rather, it is provided as a source of inspiration:
*         How might we chart the landscape of different "who's" (relevant stakeholders) in XAI and their respective explainability needs?
*         What user goals should XAI aim to support, for whom, and why?
*         How can we address value tensions amongst stakeholders in XAI?
*         How do user characteristics (e.g., educational background, profession) impact needs around explainability?
*         Where, or in what categories of AI applications, should we prioritize our XAI efforts?
*         How might we develop transferable evaluation methods for XAI? What key constructs need to be considered?
*         Given the contextual nature of explanations, what are the potential pitfalls of evaluation metrics standardization?
*         How might we take into account the who, why, and where in the evaluation methods?
*         How might we stop AI explanations from being weaponized (e.g., inducing dark patterns)?
*         Not all harms are intentional. How might we address unintentional negative effects of AI explanations (e.g., inadvertently triggering cognitive biases that lead to over-trust)?
*         What steps should we take to hold organizations/creators of XAI systems accountable and prevent "ethics washing" (the practice of ethical window dressing where 'lip service' is provided around AI ethics)?
*         From an AI governance perspective, how can we address perverse incentives in organizations that might lead to harmful effects (e.g., privileging growth and AI adoption above all else)?
*         How do we address power dynamics in the XAI ecosystem to promote equity and diversity?
*         What are issues in the Global South that impact Human-centered XAI? Why? How might we address them?
Researchers, practitioners, and policymakers in academia or industry who have an interest in these areas are invited to submit papers of up to 4 pages (not including references) in the two-column (landscape) Extended Abstract format that CHI workshops have traditionally used. Templates: [Overleaf<https://www.overleaf.com/latex/templates/chi2020-extended-abstract/hvnyhtvgqhwc>] [Word<https://chi2020.acm.org/sigchi-chi20-sample-ea>] [PDF<https://chi2020.acm.org/wp-content/uploads/2019/12/extended-abstract.pdf>]

Submissions are single-blind reviewed; i.e., submissions must include the authors' names and affiliations. The workshop's organizing and program committees will review the submissions, and accepted papers will be presented at the workshop. We ask that at least one author of each accepted position paper attend the workshop. Presenting authors must register for the workshop and for at least one full day of the conference.

Submissions must be original and relevant contributions to the workshop's theme. Each paper should directly and explicitly address how it speaks to the workshop's goals and themes. Pro-tip: a direct mapping to a question or goal posed above will help. We are looking for position papers that take a well-justified stance and can generate productive and lively discussion during the workshop. Examples include, but are not limited to, position papers presenting research summaries, literature reviews, industrial perspectives, real-world approaches, study results, or work-in-progress research projects.

Since this workshop will be held virtually, which reduces visa- and travel-related burdens, we aim for global and diverse participation. In an effort towards equitable conversations, we welcome participation from perspectives and communities under-represented in XAI (e.g., lessons from the Global South, civil liberties and human rights perspectives).

Important Dates
Submission Deadline: March 15, 2022 | 11:59pm AoE (Extended!)
Submit your paper here: https://easychair.org/conferences/?conf=hcxai2022
Workshop Dates: May 12-13, 2022 (half-day each)
Timing TBD based on time zones of accepted papers

Organizers
Upol Ehsan, Georgia Institute of Technology
Q. Vera Liao, IBM Research AI
Elizabeth Anne Watkins, Princeton University
Mark Riedl, Georgia Institute of Technology
Andreas Riener, Technische Hochschule Ingolstadt
Carina Manger, Technische Hochschule Ingolstadt
Hal Daumé III, University of Maryland
Philipp Wintersberger, TU Wien

    ----------------------------------------------------------------------------------------
    To unsubscribe from CHI-ANNOUNCEMENTS send an email to:
     mailto:[log in to unmask]

    To manage your SIGCHI Mailing lists or read our policies see:
     https://sigchi.org/operations/listserv/
    ----------------------------------------------------------------------------------------
