ACM SIGMM Interest List


Werner Bailer <[log in to unmask]>
Thu, 9 Feb 2023 03:45:22 -0500
Call for Papers

Explainability in Multimedia Analysis (ExMA)

Special Session at CBMI 2023
20-22 September 2023
Orléans, France

The rise of machine learning approaches, and in particular deep learning, has led to a significant increase in the performance of AI systems. However, it has also raised questions about the reliability and explainability of their predictions for decision-making (e.g., the black-box nature of deep models). Such shortcomings raise ethical and political concerns that prevent wider adoption of this potentially highly beneficial technology, especially in critical areas such as healthcare, self-driving cars, or security.
It is therefore critical to understand how the predictions of these systems correlate with information perception and expert decision-making. The objective of eXplainable AI (XAI) is to open this black box by proposing methods to understand and explain how such systems produce their decisions.

Among the multitude of relevant multimedia data, face information is an important feature when indexing image and video content containing humans. Annotations based on faces span from the presence of faces (and thus persons), over localizing and tracking them and analyzing features (e.g., determining whether a person is speaking), to the identification of persons from a pool of potential candidates or the verification of assumed identities. Unlike many other types of metadata or features commonly used in multimedia applications, the analysis of faces involves sensitive personal information. This raises both legal issues, e.g., concerning data protection and the requirements of the emerging European AI regulation, and ethical issues, related to potential bias in the system or misuse of these technologies.

This special session focuses on AI-based explainability technologies in multimedia analysis, and in particular on:

- the analysis of the influencing factors relevant for the final decision as an essential step to understand and improve the underlying processes involved;
- information visualization for models or their predictions;
- interactive applications for XAI;
- performance assessment metrics and protocols for explainability;
- sample-centric and dataset-centric explanations;
- attention mechanisms for XAI;
- XAI-based pruning;
- applications of XAI methods; and
- open challenges from industry or emerging legal frameworks.

This special session aims to collect scientific contributions that will help improve the trust and transparency of multimedia analysis systems, with important benefits for society as a whole.

We invite the submission of long papers describing novel methods or their adaptation to specific applications, as well as short papers describing emerging work or open challenges. The review process is single-blind, i.e., submissions do not need to be anonymized.

Important dates:

Paper submission: April 12, 2023 
Notification of acceptance: June 1, 2023 
Camera ready paper: June 15, 2023
Conference dates: 20-22 September 2023


Organizers:

Chiara Galdi, EURECOM, France ([log in to unmask])
Werner Bailer, JOANNEUM RESEARCH, Austria ([log in to unmask])
Romain Giot, University of Bordeaux, France ([log in to unmask])
Romain Bourqui, University of Bordeaux, France ([log in to unmask])
