This is the final call for submissions to the upcoming TREX workshop on TRust and EXpertise in Visualization, which will take place **virtually** at IEEE VIS 2021. Please help us spread the word!
Following TREX 2020, this year's workshop will focus on **human-centered aspects of visualization and visual analytics systems that are powered by machine learning and artificial intelligence algorithms, including user trust, cognitive biases, domain expertise, transparency, and human-in-the-loop considerations**.
For more information and updates, please follow us on Twitter @TrexInVIS (https://twitter.com/TrexInVIS) or visit our website: https://trexvis.github.io/Workshop2021/.
Below is the formal call-for-papers for TREX 2021:
Submission Deadline: Jul 30, 2021
Author Notification: Sep 3, 2021
Camera-Ready Deadline: Sep 17, 2021
TREX 2021 Workshop: October 24 or 25, 2021 (Virtual or hybrid, depending on the IEEE VIS decision on conference format)
Visual analytics (VA) systems combine computational support with human cognitive and perceptual skills to explore and analyze data. Many of these systems incorporate machine learning (ML) models and algorithms to introduce some level of automation into the analytical process. Within this relationship, however, several aspects can impact the effectiveness of human-machine teaming, including (1) people’s domain and system expertise, (2) human biases, including cognitive and perceptual biases, and (3) trust in ML models and visual representations of data. Expertise, bias, and trust are intrinsically intertwined. Additionally, visual analytics systems are used in different fields and by people from various backgrounds, with different levels of domain expertise and experience with machine learning and visual analytics tools. This variety of experience and domain expertise among users has opened the door to new research directions and challenges in visual analytics and machine learning. Designers who fail to consider this diversity risk harming both analysis effectiveness and user experience. Furthermore, experience and domain expertise may affect user trust in visual analytics tools, although how and why they affect trust remains an open question. Trust, in turn, affects how much users rely on and use a tool. While users draw on their prior experience to make better decisions with the assistance of analytic support, they may also carry cognitive biases that negatively influence their decision-making or analysis process. Recent research shows that trust in and reliance on visual analytics systems, as well as user strategies and biases, can be directly influenced by domain and system expertise (or the lack thereof).
The goal of this workshop is to bring together researchers and practitioners from different disciplines to discuss and discover challenges in ML-supported visual analytics tools and to set the stage for future research.
**TOPICS OF INTEREST**
1. Trust considerations based on different areas of domain expertise (e.g., medical, security, scientific, financial domains)
2. Trust and bias considerations based on different levels of user familiarity with machine learning and visual analytics systems
3. Detecting and mitigating users' cognitive biases in visual analytics and machine learning
4. User trust in machine learning models and visual explanations of model decisions in visual analytics systems
5. The correlation between trust, domain knowledge, and potential cognitive biases
6. The relationship of domain expertise and trust to model transparency, human interpretability, and model understandability in visual data analysis
7. The relationship between model interpretability, domain expertise, and trust
8. Human-centered considerations in human-in-the-loop visualization tools and interpretable models
We welcome diverse forms of research (as listed below) within the realm of visualization and machine learning, with appropriate emphasis on and relation to human trust, expertise, and cognitive factors. We encourage submissions of research involving:
1. Human empirical research/findings
2. Design guidelines and surveys
3. Technique papers
4. Position papers
5. Case studies
The workshop will accept papers of 2 to 6 pages in length, plus up to 2 pages of references. All submissions must use the IEEE VGTC Conference Style Template (download the templates here: http://junctionpublishing.org/vgtc/Tasks/camera.html). We also encourage authors to submit a short 30-second video of their work (for publication purposes) along with their submission, if applicable. Depending on the conference format this year (i.e., virtual or hybrid), authors of accepted papers might be required to submit a longer video (e.g., 5 to 10 minutes) in place of a live presentation; in that event, we will send further instructions closer to the workshop. Submissions are made through the PCS (Precision Conference Solutions) website. At least one author of each accepted paper must register for the conference (please refer to the IEEE VIS website http://ieeevis.org/ for registration requirements). Submissions are not required to be anonymized and may include the authors' full names, emails, and affiliations, as well as acknowledgments.
The workshop accepts papers in two formats, described below. When submitting, authors are asked to indicate their desired paper format:
1. Archival short papers, published through IEEE Xplore. Each paper in this format will be published and available through IEEE Xplore with an assigned DOI and will be citable. While papers published in this format can be extended into a journal publication (with at least 30% new content, subject to the rules of that journal), their content cannot be reused for a future conference submission, such as IEEE VIS, EuroVis, or CHI.
2. Short paper statements or work-in-progress notes, which will be non-archival. In other words, the authors retain the copyright of their paper and are allowed to reuse the material in future publications.
**ORGANIZERS**
Mahsan Nourani -- University of Florida
Eric Ragan -- University of Florida
Emily Wall -- Emory University
Alireza Karduni -- Northwestern University
Aritra Dasgupta -- New Jersey Institute of Technology
John Goodall -- Oak Ridge National Lab
Lane Harrison -- Worcester Polytechnic Institute
Chad Steed -- Regions Bank
Cindy Xiong -- University of Massachusetts Amherst
For more information, you can email us at [log in to unmask], follow us on Twitter @TrexInVIS, or visit our workshop website https://trexvis.github.io/Workshop2021/index.html.
Graduate Research Assistant, INDIE Lab,
Department of Computer and Information Science and Engineering,
University of Florida