From 'Explainable AI' to 'Graspable AI'
Held in conjunction with ACM Tangible, Embedded, and Embodied Interaction
(TEI), Virtual, February 19th, 2021
Studio website: https://sites.google.com/view/graspable-ai/
Intelligent systems promise to support diverse use situations, improve task
performance and accuracy, increase transparency, and broaden participation
in computing. However, one interaction modality that remains underexplored
for intelligent systems is Tangible, Embedded, and Embodied Interaction
(TEI). This studio
brings together designers, artists and HCI researchers around topics of
tangible and explainable AI/ML.
In the studio, we will map the opportunities that TEI in its broadest
sense, including TUI, embodied interaction, and physical computing, can
offer to the design of intelligent systems. We use the phrase Graspable AI,
referencing two senses of the word *to grasp*: taking something into one's
hand, and comprehending an idea with the mind.
This studio focuses on three successive approaches to Graspable AI:
- Graspable forms: synthesized and unified wholes capable of conveying a
meaning, a message, or a state (classifying, learning, or explaining)
manifested in physical form; they are often self-explanatory, intuitive,
and relatable. This studio will explore how such forms of ML/AI models can
become basic units of analysis for *graspable* AI.
- Graspable forms over time: In a learning system, meaningful units
change over time, increasing the complexity of the outcomes and
predictions. We ask how temporality influences the tangible forms of
algorithms, considering metaphors of biological growth, emergence, and
structure-preserving transformations.
- Graspable forms in the world: Design criticism suggests that artifact
form structures and integrates diverse dimensions of design, including
intentionality, user experiences, and the sociocultural contexts in which
the artifact was produced or consumed. We ask how tangible forms, as they
undergo change and transformation, acquire and shape meanings in the world.
Participation is open to everyone, with or without a position paper;
however, we recommend that interested participants submit a position paper
(2–4 pages, excluding references) in the ACM Extended Abstracts
Submission Template (Word or LaTeX)
<https://www.acm.org/publications/proceedings-template>, or visual
artifacts, to [log in to unmask] by the end of January 25th.
Possible contributions include (but are not limited to):
- Theoretical considerations: using TEI theoretical frameworks to
analyze, interpret, and/or frame explainable AI
- Case studies at any stage: proposed, preliminary, or completed case
studies that explore TEI with AI/ML systems
- Prepared data: any physicalizations of data, preparations of data, or
data sets
- Methods/approaches: new/adapted methods, design tactics, or approaches
to making data more graspable
- Design collections: annotated and visual collections of artifacts,
design concepts, critical writing about specific designs or aspects of
designs
- Demonstrations/Performances: visual stories, illustrations, physical
objects, and videos (e.g., dance or music performances, short movies,
animations) that visualize or perform the concept of Graspable AI, or
whose graspability is relevant to the studio's three main topic areas
mentioned above
Important Dates:
Submission deadline: January 25th
Workshop date (virtual): February 19th
Organizers:
Maliheh Ghajargar, Malmö University, Sweden
Jeffrey Bardzell, Pennsylvania State University, USA
Alison Smith-Renner, Decisive Analytics Corporation, USA
Peter Gall Krogh, Aarhus University, Denmark
Kristina Höök, KTH, Sweden
David Cuartielles, Malmö University, Sweden
Laurens Boer, ITU, Denmark
Mikael Wiberg, Umeå University, Sweden