ACM SIGCHI General Interest Announcements (Mailing List)


Cataldo Musto <[log in to unmask]>
Reply To:
Cataldo Musto <[log in to unmask]>
Thu, 23 Jul 2020 21:02:23 +0200
text/plain (304 lines)
*** Apologies for cross postings *** Workshop @AIxIA 2020 - CALL FOR PAPERS


Italian Workshop on Explainable Artificial Intelligence

November 25-27, 2020


co-located with AIxIA 2020 - Virtual




Submission: via EasyChair (track:
Italian Explainable Artificial Intelligence Workshop)

For any information: [log in to unmask]


NEWS: AIxIA 2020 will be FREE OF CHARGE. To attend the conference, it is
only necessary to register with the Italian Artificial Intelligence
Association (45 euros).






Nowadays we are witnessing a new summer of Artificial Intelligence, as
AI-based algorithms are being adopted in a growing number of contexts
and application domains, ranging from media and entertainment to medical,
financial, and legal decision-making.

While the very first AI systems were easily interpretable, the current trend
has seen the rise of opaque methodologies such as those based on Deep Neural
Networks (DNNs), whose (very good) effectiveness comes at the price of
enormous model complexity, due to the huge number of layers and parameters
that characterize these models.


As intelligent systems become more and more widely applied (especially in
very “sensitive” domains), it is no longer acceptable to adopt opaque or
inscrutable black-box models, or to ignore the general rationale that guides
the algorithms in the tasks they carry out. Moreover, the metrics usually
adopted to evaluate the effectiveness of these algorithms reward very opaque
methodologies that maximize the accuracy of the model at the expense of its
transparency and explainability.


This issue is felt even more keenly in light of recent developments, such as
the General Data Protection Regulation (GDPR) and DARPA's Explainable AI
Project, which have further emphasized the need and the right for scrutable
and transparent methodologies that can guide the user to a complete
comprehension of the information held and managed by AI-based systems.


Accordingly, the main motivation of the workshop is simple and
straightforward: how can we deal with such a dichotomy between the need for
effective intelligent systems and the right to transparency and
explainability?

This question triggers several research lines that are particularly relevant
to current research in AI. The workshop addresses these research lines and
aims to provide a forum for the Italian community to discuss problems,
challenges, and innovative approaches in the area.






Several research questions are triggered by this workshop:

1. How to design more transparent and more explainable models that maintain
high performance?

2. How to allow humans to understand and trust AI-based systems and methods?

3. How to evaluate the overall transparency and explainability of the
models?


Topics of interest:

- Explainable Artificial Intelligence

- Interpretable and Transparent Machine Learning Models

- Strategies to Explain Black Box Decision Systems

- Designing new Explanation Styles

- Evaluating Transparency and Interpretability of AI Systems

- Technical Aspects of Algorithms for Explanation

- Theoretical Aspects of Explanation and Interpretability

- Ethics in Explainable AI

- Argumentation Theory for Explainable AI



Submission details:

We encourage the submission of original contributions investigating novel
methodologies to build transparent and scrutable AI systems and
algorithms. In particular, authors can submit:


(A) Regular papers (max. 12 pages + references – Springer LLNCS format);

(B) Short/Position papers (max. 6 pages + references – Springer LLNCS
format).


Submission Site


All submitted papers will be evaluated by at least two members of the
program committee, based on originality, significance, relevance and
technical quality. Papers should be formatted according to the Springer
LLNCS Style.

Submissions should be single-blind, i.e., authors' names should be included
in the submission.

Submissions must be made through the EasyChair conference system prior to
the specified deadline (all deadlines refer to GMT), by selecting the
"Italian Explainable Artificial Intelligence Workshop" submission track.

At least one author of each accepted paper must register and take part in
the conference to present the work. AIxIA 2020 will be FREE OF CHARGE: to
attend the conference, it is only necessary to register with the Italian
Artificial Intelligence Association (45 euros).



Important dates:

* Paper submission deadline: September 20th, 2020 (11:59PM UTC-12)

* Notification to authors: October 16th, 2020

* Camera-Ready submission: October 30th, 2020

* Video and slides of the presentation: November 10th, 2020




Publication:

All accepted papers will be published in the AIxIA series of CEUR-WS.
Authors of selected papers accepted at the workshops will be invited to
submit a revised and extended version of their work, to appear in a volume
published by an important international publisher. A selection of the best
papers accepted for presentation at the workshops will be invited to submit
an extended version for publication in "Intelligenza Artificiale", the
International Journal of the Italian Association for Artificial
Intelligence, published by IOS Press and indexed in Thomson Reuters'
"Emerging Sources Citation Index" and Elsevier's Scopus.



Video presentations:

Authors of papers accepted at the workshops will be requested to provide a
video-recorded presentation of their work. Presentations will be available
to all members of the association. Videos must be between 5 and 8 minutes
long, according to the rules given by the workshop organizers, and should be
in MP4 format (1280x720). More details will be published here.



Workshop organizers:

Cataldo Musto - University of Bari, Italy 

Daniele Magazzeni - JP Morgan, UK

Salvatore Ruggieri - University of Pisa, Italy

Giovanni Semeraro - University of Bari, Italy



Program committee:

Luca Maria Aiello, Nokia Bell Labs

Matteo Baldoni, University of Torino

Federico Bianchi, Università Bocconi - Milano

Ludovico Boratto, EURECAT

Cristina Conati, University of British Columbia

Roberto Confalonieri, Free University of Bozen-Bolzano

Alessandro Giuliani, University of Cagliari

Riccardo Guidotti, University of Pisa

Andrea Iovine, University of Bari

Kyriaki Kalimeri, ISI Foundation

Antonio Lieto, University of Turin

Francesca Lisi, University of Bari

Anna Monreale, University of Pisa

Stefania Montani, Università Piemonte Orientale

Andrea Omicini, University of Bologna

Marco Polignano, Università degli Studi di Bari Aldo Moro

Gaetano Rossiello, IBM Research

Stefano Teso, Katholieke Universiteit Leuven


    For news of CHI books, courses & software, join CHI-RESOURCES
     mailto: [log in to unmask]

    To unsubscribe from CHI-ANNOUNCEMENTS send an email to
     mailto:[log in to unmask]

    For further details of CHI lists see