CHI-ANNOUNCEMENTS Archives

ACM SIGCHI General Interest Announcements (Mailing List)

CHI-ANNOUNCEMENTS@LISTSERV.ACM.ORG

Subject:
From:
John Wenskovitch <[log in to unmask]>
Reply To:
John Wenskovitch <[log in to unmask]>
Date:
Fri, 10 Jul 2020 10:20:57 -0400
*Call for Participation:*  Machine Learning from User Interactions (MLUI)
workshop at IEEE VIS 2020

*Workshop Date:*  October 25 or 26, 2020
*Workshop URL:*  learningfromusersworkshop.github.io
*Conference Status:*  IEEE VIS has announced that it will be a fully
virtual conference this year in response to the COVID-19 outbreak.

*Paper submission deadline:*  July 24, 2020
*Author notification:*  August 12, 2020

The Machine Learning from User Interactions (MLUI) workshop seeks to bring
together researchers to share their knowledge and build collaborations at
the intersection of the Machine Learning and Visualization fields, with a
focus on learning from user interaction. Rather than focusing on what
visualization can do to support machine learning (as in current Explainable
AI research), this workshop seeks contributions on how machine learning can
support visualization. Such support encompasses human-centric sensemaking
processes, user-driven analytical systems, and helping users gain insight
from data.
Our intention in this workshop is to generate open discussion about how we
currently learn from user interaction, how to build intelligent
visualization systems, and how to proceed with future research in this
area. We hope to foster discussion regarding systems, interaction models,
and interaction techniques. Further, we hope to extend last year’s
collaborative creation of a research agenda that explores the future of
machine learning with user interaction.

We invite research and position papers between 5 and 10 pages in length
(NOT including references). All submissions must be formatted according to
the VGTC conference style template (i.e., NOT the journal style template
that full papers use). All papers accepted for presentation at the workshop
will be published and linked from the workshop website. All papers should
contain full author names and affiliations. These papers are considered
archival; reuse of the content in a follow-up publication is permitted only
in a proper journal, and any extended version must extend the original
paper by at least 30%. If applicable, a link to a short video (up to 5 min.
in length) may also be submitted. The papers will be juried by the
organizers and selected external reviewers and will be chosen according to
relevance, quality, and likelihood that they will stimulate and contribute
to the discussion. At least one author of each accepted paper needs to
register for the conference (even if only for the workshop). Papers should
be submitted to the "VIS 2020 MLUI 2020
<https://new.precisionconference.com/submissions>" track in PCS under the
VGTC Society.

Relevant topics include but are not limited to:

   - How are machine learning algorithms currently learning from user
   interaction, and what other possibilities exist?
   - What kinds of interactions can provide feedback to machine learning
   algorithms?
   - What can machine learning algorithms learn from interactions?
   - Which machine learning algorithms are most applicable in this domain?
   - How can machine learning algorithms be designed to enable user
   interaction and feedback?
   - How can visualizations and interactions be designed to exploit machine
   learning algorithms?
   - How can visualization system architectures be designed to support
   machine learning?
   - How should we manage conflicts between the user's intent and the data
   or machine learning algorithm capabilities?
   - How can we evaluate systems that incorporate both machine learning
   algorithms and user interaction together?
   - How can machine learning and user interaction together make both
   computation and user cognition more efficient?
   - How can we support the sensemaking process by learning from user
   interaction?


*Organizers*

   - John Wenskovitch, Pacific Northwest National Lab and Virginia Tech
   - Michelle Dowling, Grand Valley State University
   - Eli T. Brown, DePaul University
   - Kris Cook, Pacific Northwest National Lab
   - Ab Mosca, Tufts University
   - Conny Walchshofer, Johannes Kepler University Linz
   - Marc Streit, Johannes Kepler University Linz
   - Kai Xu, Middlesex University


*Steering Committee*

   - Chris North, Virginia Tech
   - Remco Chang, Tufts University
   - Alex Endert, Georgia Tech
   - David H. Rogers, Los Alamos National Lab

    ---------------------------------------------------------------
    For news of CHI books, courses & software, join CHI-RESOURCES
     mailto: [log in to unmask]

    To unsubscribe from CHI-ANNOUNCEMENTS send an email to
     mailto:[log in to unmask]

    For further details of CHI lists see http://listserv.acm.org
    ---------------------------------------------------------------
