Subject: CFP IEEE Software Special Issue "Explainable AI for Software Engineering (XAI4SE)"
From: SEWORLD Moderator <[log in to unmask]>
Reply-To: SEWORLD Moderator <[log in to unmask]>
Date: Tue, 12 Jul 2022 23:43:53 -0000
Content-Type: text/plain

*Call for Papers for an IEEE Software Special Issue on Explainable AI for
Software Engineering (XAI4SE)*
More details:
https://www.computer.org/digital-library/magazines/so/cfp-expainable-ai-software-engineering
*Submission Deadline: 15 November 2022*

*Guest Editors:*

   - Chakkrit (Kla) Tantithamthavorn, Monash University, Australia.
   [log in to unmask]
   - Jürgen Cito, TU Wien, Austria. [log in to unmask]
   - Hadi Hemmati, York University, Canada. [log in to unmask]
   - Satish Chandra, Meta, USA. [log in to unmask]

Artificial Intelligence/Machine Learning (AI/ML) has been widely used in
software engineering to automatically provide recommendations that improve
developer productivity, software quality, and decision-making. This
includes tools for code completion (e.g., GitHub’s Copilot and Amazon’s
CodeWhisperer), code search, automated task recommendation, automated
developer recommendation, automated defect/vulnerability/malware
prediction, detection, localization, and repair, and many more.

However, many of these solutions are still not practical, explainable, or
actionable. A lack of explainability often leads to a lack of trust in the
predictions of AI/ML models in SE, which in turn hinders the adoption of
AI/ML models in real-world software development practices [1,2,3,7,13].
This problem is especially pronounced for modern pre-trained language
models of code, which are large, black-box, and complex in nature, such as
CodeBERT, GraphCodeBERT, CodeGPT, and CodeT5. Therefore, Explainable AI
for SE is a pressing concern for the software industry and academia. In
light of predictions made in SE contexts, practitioners would like to
know: Why was this code generated? Why is this person best suited for this
task? Why is this file predicted as defective? Why does this task require
the highest development effort?
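
To make the last question concrete, here is a minimal sketch (not part of
the CFP material; all data, metrics, and feature names are hypothetical)
of how a model-agnostic explanation technique such as LIME could be asked
why one file is predicted as defective, assuming a toy tabular defect
model built with scikit-learn and the third-party lime package:

    # Sketch only: a synthetic file-level defect dataset with three
    # hypothetical metrics (lines of code, recent churn, prior bug count).
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from lime.lime_tabular import LimeTabularExplainer

    rng = np.random.default_rng(0)
    feature_names = ["loc", "churn", "prior_bugs"]
    X = rng.integers(0, 500, size=(200, 3)).astype(float)
    y = (X[:, 1] + X[:, 2] > 500).astype(int)  # synthetic "defective" label

    model = RandomForestClassifier(random_state=0).fit(X, y)

    # LIME fits a local surrogate around one prediction, yielding signed
    # feature contributions: which metrics pushed this particular file
    # toward the "defective" class, and by how much.
    explainer = LimeTabularExplainer(
        X, feature_names=feature_names,
        class_names=["clean", "defective"], mode="classification")
    explanation = explainer.explain_instance(
        X[0], model.predict_proba, num_features=3)
    print(explanation.as_list())  # e.g. [('churn > 374.25', 0.29), ...]

The ranked, signed feature list it prints is one candidate form of
explanation; whether such forms are understandable and actionable for
practitioners is exactly what several of the topics below ask.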

Recently, a survey of practitioners [3] found that explanations from
AI/ML models in SE are critically needed, yet remain largely unexplored.
Recent work has also shown that explainable AI techniques can make AI/ML
models for software engineering more practical, explainable, and
actionable, while improving model quality [12,13] and fairness, and
mitigating discrimination and bias [14,15]. However, XAI4SE is an
emerging research topic. Thus, this theme issue calls for papers that
broadly cover, but are not limited to, the following topics:

   - Empirical studies on the need, motivation, and challenges of
   explainable AI for SE.
   - Novel theories, tools, and techniques for generating textual or visual
   explanations for SE tasks (e.g., what form of explanation for software
   engineering tasks is most understandable to software practitioners?)
   - Empirical studies or short reflection articles on the fundamentals of
   human-centric XAI design that incorporate aspects of psychology, learning
   theories, cognitive science, and social sciences.
   - Novel explainable AI techniques or applications to new SE tasks that
   serve various purposes, e.g., testing, debugging, visualizing,
   interpreting, and refining AI/ML models in SE.
   - Explainable AI methods to detect and explain potential biases when
   applying AI tools in SE.
   - Novel evaluation frameworks of explainable AI techniques for SE tasks.
   - Empirical studies to investigate if different stakeholders need
   different explanations.
   - Empirical studies on the impact of using explainable AI techniques in
   software development practices.
   - Empirical studies of human-centric explainable AI for software
   engineering.
   - Practical guidelines for increasing the explainability of AI/ML models
   in software engineering.
   - Visions, reflections, industrial case studies, experience reports, and
   lessons learned of explainable AI for software engineering.
   - Papers reporting negative or inconclusive results in any of the above
   areas, accompanied by lessons learned and implications for future
   studies, are also encouraged.

-- 
*Dr. "Kla" Chakkrit Tantithamthavorn*
ARC DECRA Fellow
Senior Lecturer in Software Engineering
<https://research.monash.edu/en/persons/chakkrit-tantithamthavorn>
Director of Engagement and Impact

*Monash University*

*Faculty of Information Technology*

Room 217, 20 Exhibition Walk

Clayton VIC 3800 Australia

E: [log in to unmask]
W: http://chakkrit.com

CRICOS Provider 00008C/01857J

We acknowledge and pay respects to the Elders and Traditional Owners of the
land on which our four Australian campuses stand. Information for
Indigenous Australians
<https://www.monash.edu/brandbook/empowering-tools/monash.edu/indigenous-australians>

We're committed to diversity and inclusion
<https://www.monash.edu/about/diversity-inclusion>

============================================================
To contribute to SEWORLD, send your submission to
mailto:[log in to unmask]

http://sigsoft.org/resources/seworld.html provides more
information on SEWORLD as well as links to a complete
archive of messages posted to the list.
============================================================
