ACM SIGCHI General Interest Announcements (Mailing List)


From: Cataldo Musto <[log in to unmask]>
Reply-To: Cataldo Musto <[log in to unmask]>
Date: Thu, 30 Sep 2021 19:15:51 +0200
Content: text/plain (265 lines)
*** Apologies for cross-postings *** Workshop @AIxIA 2021 - CALL FOR PAPERS


2nd Italian Workshop on Explainable Artificial Intelligence

December 1-3, 2021


co-located with AIxIA 2021 - Milano, Italy




Submission: (track: Italian Explainable Artificial Intelligence Workshop)

For any information: [log in to unmask]






- CEUR conference proceedings






Nowadays we are witnessing a new summer of Artificial Intelligence, as AI-based algorithms are being adopted in a growing number of contexts and application domains, ranging from media and entertainment to medical, financial, and legal decision-making. While the very first AI systems were easily interpretable, recent years have seen the rise of opaque methodologies such as those based on Deep Neural Networks (DNNs), whose (very good) effectiveness comes at the cost of enormous model complexity, due to the huge number of layers and parameters that characterize these models.


As intelligent systems become more and more widely applied (especially in very sensitive domains), it is no longer acceptable to adopt opaque or inscrutable black-box models, or to ignore the general rationale that guides the algorithms in the tasks they carry out. Moreover, the metrics usually adopted to evaluate the effectiveness of these algorithms reward very opaque methodologies that maximize the accuracy of the model at the expense of its transparency and explainability.


This issue is felt even more keenly in light of recent developments, such as the General Data Protection Regulation (GDPR) and DARPA's Explainable AI Project, which further emphasized the need and the right for scrutable and transparent methodologies that can guide the user to a complete comprehension of the information held and managed by AI-based systems.


Accordingly, the main motivation of the workshop is simple and straightforward: how can we deal with the dichotomy between the need for effective intelligent systems and the right to transparency and interpretability?


This question triggers several research lines that are particularly relevant for current research in AI. The workshop tries to address these research lines and aims to provide a forum for the Italian community to discuss problems, challenges, and innovative approaches in the area.






Several research questions are triggered by this workshop:

1. How to design more transparent and more explainable models that maintain high performance?

2. How to allow humans to understand and trust AI-based systems and methods?

3. How to evaluate the overall transparency and explainability of the models?


Topics of interest:

- Explainable Artificial Intelligence

- Interpretable and Transparent Machine Learning Models

- Strategies to Explain Black Box Decision Systems

- Designing new Explanation Styles

- Evaluating Transparency and Interpretability of AI Systems

- Technical Aspects of Algorithms for Explanation

- Theoretical Aspects of Explanation and Interpretability

- Ethics in Explainable AI

- Argumentation Theory for Explainable AI





We encourage the submission of original contributions investigating novel methodologies to build transparent and scrutable AI systems and algorithms. In particular, authors can submit:


(A) Regular papers (max. 12 pages + references - Springer LNCS format);

(B) Short/position papers (max. 6 pages + references - Springer LNCS format).


Submission Site


All submitted papers will be evaluated by at least two members of the program committee, based on originality, significance, relevance, and technical quality. Papers should be formatted according to the Springer LNCS style.

Submissions should be single-blind, i.e., authors' names should be included in the submissions.

Submissions must be made through the EasyChair conference system prior to the specified deadline (all deadlines refer to GMT) by selecting the Italian Explainable Artificial Intelligence Workshop track.

At least one of the authors should register for and take part in the conference to present the work.





Important dates:

* Paper submission deadline: October 11th, 2021 (11:59 PM UTC-12)

* Notification to authors: November 8th, 2021

* Camera-Ready submission: November 22nd, 2021





All accepted papers will be published in the AIxIA series of CEUR-WS. Authors of selected papers accepted to the workshop will be invited to submit a revised and extended version of their work.





Workshop organizers:

Cataldo Musto - Università di Bari, Italy

Riccardo Guidotti - Università di Pisa, Italy

Anna Monreale - Università di Pisa, Italy

Giovanni Semeraro - Università di Bari, Italy






Program committee:

Davide Bacciu, Università di Pisa

Matteo Baldoni, Università di Torino

Valerio Basile, Università di Torino

Federico Bianchi, Università Bocconi - Milano

Ludovico Boratto, Università di Cagliari

Roberta Calegari, Università di Bologna

Federica Cena, Università di Torino

Roberto Capobianco, Università di Roma La Sapienza


Nicolò Cesa-Bianchi, Università di Milano

Roberto Confalonieri, Libera Università di Bozen-Bolzano

Luca Costabello, Accenture

Rodolfo Delmonte, Università Ca' Foscari

Mauro Dragoni, Fondazione Bruno Kessler

Stefano Ferilli, Università di Bari

Fabio Gasparetti, Roma Tre University

Alessandro Giuliani, Università di Cagliari

Andrea Iovine, Università di Bari

Antonio Lieto, Università di Torino

Alessandro Mazzei, Università di Torino

Stefania Montani, Università di Roma La Sapienza

Daniele Nardi, Università di Roma La Sapienza

Andrea Omicini, Università di Bologna 

Andrea Passerini, Università di Trento

Roberto Prevete, Università di Napoli Federico II

Antonio Rago, Imperial College London

Amon Rapp, Università di Torino

Salvatore Rinzivillo, ISTI - CNR

Gaetano Rossiello, IBM Research

Salvatore Ruggieri, Università di Pisa

Giuseppe Sansonetti, Roma Tre University

Lucio Davide Spano, Università di Cagliari

Stefano Teso, Katholieke Universiteit Leuven

Francesca Toni, Imperial College London


    To unsubscribe from CHI-ANNOUNCEMENTS send an email to:
     mailto:[log in to unmask]

    To manage your SIGCHI Mailing lists or read our policies see: