We are happy to announce that we have received conference approval to extend
the deadline of the NeurIPS 2021 workshop on Human-Centered AI to 25
September 2021 AoE.
Human-Centered AI workshop at NeurIPS 2021
Monday 13 December 2021, online
EXTENDED DEADLINE: 25 September 2021 AoE
Human-Centered AI (HCAI) is an emerging discipline that aims to create AI
systems that amplify and augment human abilities and preserve human control
in order to make AI partnerships more productive, enjoyable, and fair. Our
workshop aims to bring together researchers and practitioners from the
NeurIPS and HCI communities and others with convergent interests in HCAI.
With an emphasis on diversity and discussion, we will explore research
questions that stem from the increasingly widespread usage of machine
learning algorithms across all areas of society, with a specific focus on
understanding both technical and design requirements for HCAI systems, as
well as how to evaluate the efficacy and effects of HCAI systems.
Details at https://sites.google.com/view/hcai-human-centered-ai-neurips/home
Submissions to the workshop may address one or more of the following themes,
or other relevant themes of interest:
Theoretical frameworks, disciplines and disciplinarity. How we approach AI
and data science depends on the "lenses" that we bring, based in theory and
in practice. Through what perspectives do you approach this complex domain?
Experiences and cases with AI systems. Theories suggest studies and
experience reports. Studies and experience reports inform theories. What
cases or experiences of human-AI interactions can you contribute to our
inter-disciplinary knowledge and discussion?
Design frameworks for human initiative and AI initiative. Scholars have
debated the question of who should have initiative or control between human
and AI for over 70 years. What forms of discrete or shared initiative are
possible now, and how can we include these possibilities in our systems?
Experiences and cases with human-AI collaboration. Design frameworks can
inform applications. Experiences with applications can challenge
frameworks, or lead to new frameworks. What cases or experiences of
human-AI collaborations can you contribute to our inter-disciplinary
knowledge and discussion?
Fairness and bias. Machine learning-based decision-making systems have the
potential to replicate or even exacerbate social inequities and
discrimination. As a result, there is a surge of recent work on developing
machine learning algorithms with fairness constraints or guarantees.
However, for these tools to have positive real-world impact, their design
and implementation should be informed by a clear understanding of human
behavior and real needs. What is the interplay between algorithmic fairness
and these human-centered considerations?
Privacy. In many important machine learning tasks – e.g. those related to
healthcare – there is much to be gained from training on personal
information, but we must take care to respect individuals’ privacy
appropriately. In this workshop, we are particularly interested in
understanding specific use cases and considering costs and benefits to
individuals and society of making use of private data.
Transparency, explainability, interpretability, and trust. We are
interested in understanding what specific types of explainability or
interpretability are helpful to whom in concrete settings, and in exploring
the tradeoffs that inevitably arise.
User research. What do we need to know in order to create or enhance an
AI-based system? Our engineering heritage suggests that we seek user needs
and resolve user pain points. How does our user research for these concepts
change with AI systems? Are there other user research goals that are now
possible with more sophisticated AI resources and implementations?
Accountability. When people engineer (or create) an AI system and its data,
how do we hold them and ourselves accountable for design decisions and their
consequences?
Automation of AI. It is tempting to apply AI to AI, in the form of
automated AI. Is this a credible approach? Does human discernment play a
role in creating AI systems? Is this a necessary role?
Evaluation. What are the appropriate measurement concepts and resulting
metrics to assess our AI systems? How do we balance among efficiency,
explainability, understandability, user satisfaction, and user hedonics?
Governance. Consequential machine learning systems impact the lives of
millions of people in areas such as criminal justice, healthcare,
education, credit scoring or hiring. Key concepts in the governance of such
systems include algorithmic discrimination, transparency, veracity,
explainability and the preservation of privacy. What is the role of HCI in
relation to the governance of such systems?
Problematizing data. Data initially seem to be simple and "objective."
However, a growing body of evidence shows the often-hidden role of humans
in shaping the data in AI. Should we design our systems to strengthen human
engagement with data, or to reduce human impact on data?
Qualitative data in data science. Quantitative data analyses may be
powerful, but often decontextualized and potentially shallow. Qualitative
data analyses may be insightful, but often limited to a narrow sample. How
can we combine the strengths of these two approaches?
Values and ethics of AI. Values and ethics are necessarily entangled with
localized, situated, and culturally-informed human perspectives. What are
useful frameworks for a comparative analysis of values and ethics in AI?
How to Apply
We invite your submission based on the workshop themes (above) or related
themes from your own work.
Your submission may take either of the following two forms, using the
NeurIPS conference templates:
Short abstracts up to 2 pages
Longer papers up to 4 pages
Please send your submissions to [log in to unmask]
EXTENDED Due date: 25 September 2021, Anywhere on Earth
Notification date: 15 October 2021
Michael Muller, PhD, IBM Research, Cambridge MA USA
ACM Distinguished Scientist
ACM SIGCHI Academy
To unsubscribe from CHI-ANNOUNCEMENTS send an email to:
mailto:[log in to unmask]
To manage your SIGCHI Mailing lists or read our policies see: