*** Apologies for cross postings ***
XAI.it Workshop @AIxIA 2023 - EXTENDED DEADLINE
4th Italian Workshop on Explainable Artificial Intelligence (XAI.it 2023)
November 6-9, 2023
co-located with AIxIA 2023 (http://www.aixia2023.cnr.it/) - Rome, Italy
For any information: [log in to unmask]
- EXTENDED DEADLINE: September 15, 2023
- CEUR conference proceedings
Nowadays we are witnessing a new summer of Artificial Intelligence, since AI-based algorithms are being adopted in a growing number of contexts and application domains, ranging from media and entertainment to medical, financial, and legal decision-making.
While the very first AI systems were easily interpretable, the current trend has seen the rise of opaque methodologies such as those based on Deep Neural Networks (DNNs), whose (very good) effectiveness comes at the cost of enormous model complexity, due to the huge number of layers and parameters that characterize these models.
As intelligent systems become more and more widely applied (especially in very "sensitive" domains), it is no longer acceptable to adopt opaque or inscrutable black-box models, nor to ignore the general rationale that guides the algorithms in the tasks they carry out. Moreover, the metrics usually adopted to evaluate the effectiveness of these algorithms reward very opaque methodologies that maximize the accuracy of the model at the expense of transparency and explainability.
This issue is felt even more keenly in the light of recent developments, such as the General Data Protection Regulation (GDPR) and DARPA's Explainable AI program, which further emphasized the need for, and the right to, scrutable and transparent methodologies that can guide the user to a complete comprehension of the information held and managed by AI-based systems.
Accordingly, the main motivation of the workshop is simple and straightforward: how can we deal with the dichotomy between the need for effective intelligent systems and the right to transparency and interpretability?
This question opens several research lines that are particularly relevant to current research in AI. The workshop aims to address these lines and to provide a forum for the Italian community to discuss problems, challenges, and innovative approaches in the area.
Several research questions are triggered by this workshop:
1. How can we design more transparent and explainable models that maintain high performance?
2. How can we enable humans to understand and trust AI-based systems and methods?
3. How can we evaluate the overall transparency and explainability of such models?
Topics of interest:
- Explainable Artificial Intelligence
- Interpretable and Transparent Machine Learning Models
- Strategies to Explain Black Box Decision Systems
- Designing new Explanation Styles
- Evaluating Transparency and Interpretability of AI Systems
- Technical Aspects of Algorithms for Explanation
- Theoretical Aspects of Explanation and Interpretability
- Ethics in Explainable AI
- Argumentation Theory for Explainable AI
- Natural Language Generation for Explainable AI
- Human-Machine Interaction for Explainable AI
- Fairness and Bias Auditing
- Privacy-Preserving Explanations
- Privacy by Design Approaches for Human Data
- Monitoring and Understanding System Behavior
- Successful Applications of Interpretable AI Systems
We encourage the submission of original contributions that investigate novel methodologies to build transparent and scrutable AI systems and algorithms. In particular, authors can submit:
(A) Regular papers (max. 12 pages + references - CEUR.ws format);
(B) Short/Position papers (max 8 pages + references - CEUR.ws format);
Submission Site https://easychair.org/conferences/?conf=xaiit2023
All submitted papers will be evaluated by at least two members of the program committee, based on originality, significance, relevance and technical quality. Papers should be formatted according to the CEUR.ws format (http://ceur-ws.org/Vol-XXX/CEURART.zip).
Submissions should be single-blind, i.e., authors' names should be included in the submission.
Submissions must be made through the EasyChair conference system before the specified deadline (all deadlines refer to GMT) by selecting XAI.it as the submission track.
At least one author of each accepted paper should register for and attend the conference to present the work.
IMPORTANT DATES (EXTENDED SUBMISSION DEADLINE)
* Paper submission deadline: September 15, 2023 (11:59 PM UTC-12)
* Notification to authors: October 9, 2023
* Camera-Ready submission: October 18, 2023
PROCEEDINGS AND POST-PROCEEDINGS
All accepted papers will be published in the AIxIA series of CEUR-WS. Authors of selected papers accepted to the workshop will be invited to submit a revised and extended version of their work.
WORKSHOP ORGANIZERS
Cataldo Musto - University of Bari, Italy
Riccardo Guidotti - University of Pisa, Italy
Anna Monreale - University of Pisa, Italy
Erasmo Purificato - Otto von Guericke University Magdeburg, Germany
Giovanni Semeraro - University of Bari, Italy
PROGRAM COMMITTEE (TBC)
Davide Bacciu - University of Pisa, Italy
Valerio Basile - University of Turin, Italy
Roberta Calegari - Alma Mater Studiorum–Università di Bologna, Italy
Federica Cena - University of Turin, Italy
Tania Cerquitelli - Politecnico di Torino, Italy
Roberto Confalonieri - University of Padua, Italy
Rodolfo Delmonte - Università Ca' Foscari, Italy
Mauro Dragoni - Fondazione Bruno Kessler - FBK-IRST, Italy
Alessandro Giuliani - University of Cagliari, Italy
Kyriaki Kalimeri - ISI Foundation, Italy
Francesca Alessandra Lisi - University of Bari, Italy
Andrea Omicini - Alma Mater Studiorum–Università di Bologna, Italy
Ruggero G. Pensa - University of Turin, Italy
Claudio Pomo - Politecnico di Bari, Italy
Antonio Rago - Imperial College London, UK
Amon Rapp - University of Turin, Italy
Salvatore Ruggieri - University of Pisa, Italy
Giuseppe Sansonetti - Roma Tre University, Italy
Mattia Setzu - University of Pisa, Italy
Fabrizio Silvestri - University of Rome La Sapienza, Italy