Hi all -
Please consider submitting to the CHI 2020 workshop on Fair & Responsible AI<http://fair-ai.owlstown.com/>.
The workshop will be held on Sunday, April 26, 2020, in Honolulu, Hawaii, right before the main conference.
Min (on behalf of the organizing committee)
Call for Participation
As AI changes the way decisions are made in organizations and governments, it is ever more important to ensure that these systems work according to the values that diverse users and groups find important. Researchers have proposed numerous algorithmic techniques to formalize statistical fairness notions, but emerging work suggests that AI systems must account for the real-world contexts in which they will be embedded in order to actually work fairly. These findings call for an expanded research focus beyond statistical fairness that includes fundamental understandings of human uses and the social impacts of AI systems, a theme central to the HCI community.
We are hosting a one-day workshop on the topic of Fair and Responsible AI <http://fair-ai.owlstown.com/> at CHI 2020 in Honolulu, Hawaii, on April 26, 2020, and we invite academic and industry researchers and practitioners in the fields of HCI, machine learning (ML) and AI, and the social sciences to participate in developing a cross-disciplinary agenda for creating fair and responsible AI systems. We aim to achieve the following outcomes:
1) Synthesis of emerging research discoveries and methods. An emerging line of work seeks to systematically study human perceptions of algorithmic fairness, explain algorithmic decisions to promote trust and a sense of fairness, understand human use of algorithmic decisions, and develop methods to incorporate them into AI design. How can we map the current research landscape to identify gaps and opportunities for fruitful future research?
2) Design guidelines for fair and responsible AI. Existing fairness AI toolkits aim to support algorithm developers, and existing human-AI interaction guidelines mainly focus on usability and experience. Can we create design guidelines for HCI and user experience (UX) practitioners and educators to design fair and responsible AI?
How to participate
To participate, submit a 2-4 page position paper in CHI extended abstract format<https://chi2020.acm.org/authors/chi-proceedings-format> via EasyChair<https://easychair.org/account/signin?l=pwq69v0jBTaeMln9go4yvJ#> by February 11, 2020.
We are open to diverse forms of submission, including reports on empirical research findings on fair and responsible AI, essays that offer critical stances and/or visions for future work, and show-and-tell case studies of industry projects.
Potential topics include:
- Human biases in human-in-the-loop decisions
- Human perceptions of algorithmic fairness
- Human-centered evaluation of fair ML models
- Explanations & transparency of algorithmic decisions
- Methods for stakeholder participation in AI design
- Decision-support system design
- Algorithm auditing techniques
- Ethics of AI
- Sociocultural studies of AI in practice
For more details on the participation and review process, please see: http://fair-ai.owlstown.com/
Position paper deadline: February 11, 2020
Paper notification: February 28, 2020
Workshop at CHI 2020: April 26 (Sunday), 2020
[log in to unmask]
Submit your paper via EasyChair<https://easychair.org/account/signin?l=pwq69v0jBTaeMln9go4yvJ#>.
Organizers
Min Kyung Lee, University of Texas at Austin, United States;
Nina Grgic-Hlaca, Max Planck Institute for Software Systems, Germany;
Michael Carl Tschantz, International Computer Science Institute, United States;
Reuben Binns, University of Oxford, United Kingdom;
Adrian Weller, University of Cambridge, United Kingdom;
Michelle Carney, Google AI, United States;
Kori Inkpen, Microsoft Research, United States
Min Kyung Lee
School of Information, University of Texas at Austin