**The submission page is now open!**
We invite submissions to our IEEE VIS 2020 workshop, "TREX: Workshop on TRust and EXperience in Visual Analytics," which will take place on October 25 or 26, 2020. Following the conference's decision, the workshop will be held virtually.
For more information, please refer to the workshop website https://trexvis.github.io/Workshop2020/index.html.
Please help us spread the word by forwarding this call for papers to anyone who may find it relevant.
Important Dates
Submission Deadline (extended): July 24, 2020
Author Notification: August 12, 2020
Camera-ready Deadline: August 21, 2020
Visual analytics (VA) systems combine computational support with human cognitive and perceptual skills to explore and analyze data. Many of these systems incorporate machine learning (ML) models and algorithms to introduce some level of automation into the analytical process. Within this human-machine relationship, however, several factors can affect the effectiveness of the teaming, including 1) people's domain and system expertise; 2) human biases, including cognitive and perceptual biases; and 3) trust in ML models and in visual representations of data. Expertise, bias, and trust are intrinsically intertwined.

Visual analytics systems are used across many fields and by people from diverse backgrounds, with varying levels of domain expertise and experience with machine learning and visual analytics tools. This variety of experience and expertise has opened the door to new research directions and challenges in visual analytics and machine learning. Designers who fail to account for it risk undermining both analysis effectiveness and user experience. Furthermore, experience and domain expertise may affect user trust in visual analytics tools, although how and why they do so remains an open question. Trust, in turn, affects how much users rely on and use a tool. While users can draw on prior experience to make better decisions with analytic support, they may also carry cognitive biases that negatively influence their decision-making or analysis process. Recent research shows that trust in and reliance on visual analytics systems, as well as user strategies and biases, can be directly influenced by domain and system expertise (or the lack thereof).
The goal of this workshop is to bring together researchers and practitioners from different disciplines to discuss and uncover challenges in ML-supported visual analytics tools, and to set the stage for future research directions and collaborations on these issues through design guidelines, empirical findings, and VA techniques.
Topics of Interest
Suggested workshop topics include:
1. Trust considerations based on different areas of domain expertise (e.g., medical, security, scientific, financial domains).
2. Trust and bias considerations based on different levels of user familiarity with machine learning and visual analytics systems.
3. Detecting and preventing cognitive biases in visual analytics and machine learning for users.
4. User trust in machine learning models and visual explanations of model decisions in visual analytics systems.
5. The correlation between trust, domain knowledge, and potential cognitive biases.
6. The relationship between domain expertise and trust with model transparency, human interpretability, and model understandability in visual data analysis.
7. The relationship between model interpretability, domain expertise, and trust.
We welcome diverse forms of research (as listed below) within the realm of VA and ML, with appropriate emphasis on human trust, expertise, and cognitive factors. We encourage submissions of research involving:
1. Human empirical research/findings
2. Design guidelines and surveys
3. Technique papers
4. Position papers
5. Case studies
The workshop accepts papers 2 to 6 pages in length, plus up to 1 page of references. All submissions must use the IEEE VGTC Conference Style Template. (You can download the appropriate templates here<http://junctionpublishing.org/vgtc/Tasks/camera.html>.)
We also encourage authors to submit a short 30-second video of their work along with their submission, if applicable. By the camera-ready deadline, all poster submissions must prepare a 30-second video for the poster fast-forward session. These videos may include audio for archival purposes, but during the workshop fast-forward, authors will use the video only as a visual aid. Submissions are made through the PCS (Precision Conference Solutions<https://new.precisionconference.com/user/login?next=https%3A//new.precisionconference.com/submissions>) website. At least one author of each accepted paper must register for the conference (at minimum, the workshop registration). Submissions need not be anonymized and may include the authors' full names, emails, and affiliations, as well as an acknowledgments section.
Organizers
Mahsan Nourani, University of Florida (https://mahsan.online<https://mahsan.online/>)
Eric Ragan, University of Florida (https://www.cise.ufl.edu/~eragan/)
Emily Wall, Georgia Tech (https://emilywall.github.io/)
John Goodall, Oak Ridge National Lab (https://jgoodall.me/)
Aritra Dasgupta, New Jersey Institute of Technology (https://aedeegee.github.io/)
Kris Cook, Pacific Northwest National Laboratory (https://www.pnnl.gov/science/staff/staff_info.asp?staff_num=8639)
For more information, you can email us at [log in to unmask]<mailto:[log in to unmask]> or visit our workshop website https://trexvis.github.io/Workshop2020/index.html.