From: Cataldo Musto <[log in to unmask]>
Date: Thu, 9 Jul 2020 09:44:55 +0200

*** Apologies for cross-postings ***

 

XAI.it Workshop @AIxIA 2020 - CALL FOR PAPERS

----------------------------------------------------------------------

Italian Workshop on Explainable Artificial Intelligence (XAI.it)

November 25-27, 2020

 

co-located with AIxIA 2020 (https://aixia2020.di.unito.it/) - Virtual
Conference

 

Twitter: https://twitter.com/XAI_Workshop

Web: http://www.di.uniba.it/~swap/xai-it

Submission: https://easychair.org/conferences/?conf=aixia2020 (track:
Italian Explainable Artificial Intelligence Workshop)

For any information: [log in to unmask]

 

=========

ABSTRACT

=========

 

Nowadays we are witnessing a new summer of Artificial Intelligence, as AI-based algorithms are being adopted in a growing number of contexts and application domains, ranging from media and entertainment to medical, financial, and legal decision-making.

While the very first AI systems were easily interpretable, the current trend has seen the rise of opaque methodologies such as those based on Deep Neural Networks (DNNs), whose (very good) effectiveness is offset by the enormous complexity of the models, due to the huge number of layers and parameters that characterize them.

 

As intelligent systems become more and more widely applied (especially in very “sensitive” domains), it is no longer acceptable to adopt opaque or inscrutable black-box models, or to ignore the general rationale that guides the algorithms in the tasks they carry out. Moreover, the metrics that are usually adopted to evaluate the effectiveness of the algorithms reward very opaque methodologies that maximize the accuracy of the model at the expense of transparency and explainability.

 

This issue is felt even more strongly in light of recent developments, such as the General Data Protection Regulation (GDPR) and DARPA's Explainable AI Project, which have further emphasized the need for, and the right to, scrutable and transparent methodologies that can guide the user to a complete comprehension of the information held and managed by AI-based systems.

 

Accordingly, the main motivation of the workshop is simple and straightforward: how can we deal with this dichotomy between the need for effective intelligent systems and the right to transparency and interpretability?

 

These questions trigger several research lines that are particularly relevant to current AI research. The workshop aims to address these research lines and to provide a forum for the Italian community to discuss problems, challenges, and innovative approaches in the area.

 

 

======

TOPICS

======

Several research questions are addressed by this workshop:

1. How to design more transparent and more explainable models that maintain
high performance?

2. How to allow humans to understand and trust AI-based systems and methods?

3. How to evaluate the overall transparency and explainability of the models?

 

Topics of interest:

- Explainable Artificial Intelligence

- Interpretable and Transparent Machine Learning Models

- Strategies to Explain Black Box Decision Systems

- Designing new Explanation Styles

- Evaluating Transparency and Interpretability of AI Systems

- Technical Aspects of Algorithms for Explanation

- Theoretical Aspects of Explanation and Interpretability

- Ethics in Explainable AI

- Argumentation Theory for Explainable AI

 

 

============

SUBMISSIONS

============

We encourage the submission of original contributions investigating novel
methodologies to build transparent and scrutable AI systems and
algorithms. In particular, authors can submit:

 

(A) Regular papers (max. 12 pages + references - Springer LNCS format);

(B) Short/Position papers (max. 6 pages + references - Springer LNCS format).

 

Submission site: https://easychair.org/conferences/?conf=aixia2020

 

All submitted papers will be evaluated by at least two members of the program committee, based on originality, significance, relevance, and technical quality. Papers should be formatted according to the Springer LNCS style.

Submissions should be single-blind, i.e., authors' names should be included in the submission.

Submissions must be made through the EasyChair conference system prior to the specified deadline (all deadlines refer to GMT), selecting XAI.it as the submission track.

At least one author of each paper should register for and attend the conference to present the work.

 

 

===============

IMPORTANT DATES

===============

* Paper submission deadline: September 20th, 2020 (11:59PM UTC-12)

* Notification to authors: October 16th, 2020

* Camera-Ready submission: October 30th, 2020

* Video and slides of the presentation: November 10th, 2020

 

 

================================

PROCEEDINGS AND POST-PROCEEDINGS

================================

All accepted papers will be published in the AIxIA series of CEUR-WS.
Authors of selected papers accepted at the workshop will be invited to
submit a revised and extended version of their work, to appear in a volume
published by an important international publisher. A selection of the best
papers accepted for presentation at the workshop will be invited to
submit an extended version for publication in “Intelligenza Artificiale”,
the International Journal of the Italian Association for Artificial
Intelligence, published by IOS Press and indexed by Thomson Reuters'
"Emerging Sources Citation Index" and Elsevier's Scopus.

 

 

===================

PRESENTATION FORMAT

===================

Authors of papers accepted at the workshop will be requested to provide a
video-recorded presentation of their work. Presentations will be made
available to all members of the association. Videos must be between 5 and
8 minutes long, according to the rules given by the workshop organizers,
and should be in MP4 format (1280x720). More details will be published on
the workshop website.

 

 

=============

ORGANIZATION

=============

Cataldo Musto - University of Bari, Italy 

Daniele Magazzeni - JP Morgan, UK

Salvatore Ruggieri - University of Pisa, Italy

Giovanni Semeraro - University of Bari, Italy

 

 

=======================

PROGRAM COMMITTEE (TBC)

=======================

Luca Maria Aiello, Nokia Bell Labs

Federico Bianchi, Università Bocconi - Milano

Cristina Conati, University of British Columbia

Roberto Confalonieri, Free University of Bozen-Bolzano

Alessandro Giuliani, University of Cagliari

Riccardo Guidotti, University of Pisa

Andrea Iovine, University of Bari

Kyriaki Kalimeri, ISI Foundation

Antonio Lieto, University of Turin

Francesca Lisi, University of Bari

Stefania Montani, Università Piemonte Orientale

Andrea Omicini, University of Bologna

Marco Polignano, Università degli Studi di Bari Aldo Moro

Gaetano Rossiello, IBM Research

Stefano Teso, Katholieke Universiteit Leuven

 

