Explainable Artificial Intelligence for Sentiment Analysis
Erik Cambria, Nanyang Technological University, Singapore
Akshi Kumar, Delhi Technological University, India
Mahmoud Al-Ayyoub, Jordan University of Science and Technology, Jordan
Newton Howard, Oxford University, UK
Knowledge-Based Systems (impact factor: 5.921)
BACKGROUND AND MOTIVATION
Artificial-intelligence-driven models, especially deep learning models, have achieved state-of-the-art results on various natural language processing tasks, including sentiment analysis. Combined with large datasets, these models yield highly accurate predictions, yet offer little insight into the internal features and representations a model uses to assign sentiment categories. Most techniques do not disclose how or why decisions are made. In other words, these black-box algorithms lack transparency and explainability.
Explainable artificial intelligence (XAI) is an emerging field in machine learning that aims to address how artificial-intelligence systems make decisions. It refers to artificial-intelligence methods and techniques that produce human-comprehensible solutions. XAI solutions combine high prediction accuracy with decision understanding and traceability of the actions taken. XAI aims to improve human understanding, determine whether the machine's decisions are justifiable, build trust, and reduce bias.
This special issue aims to stimulate discussion on the design, use, and evaluation of XAI models as key knowledge-discovery drivers to recognize, interpret, process, and simulate human emotion across various sentiment analysis tasks. We invite theoretical work and review articles on practical use-cases of XAI that discuss adding a layer of interpretability and trust to powerful algorithms, such as neural networks and ensemble methods (e.g., random forests), that deliver near-real-time intelligence.
Concurrently, we encourage work on social computing, emotion recognition, and affective computing research methods that help mediate, understand, and analyze aspects of social behavior, interactions, and affective states based on observable actions. Full-length, original, and unpublished research papers making theoretical or experimental contributions to understanding, visualizing, and interpreting deep learning models for sentiment analysis, as well as to interpretable machine learning for sentiment analysis, are also welcome.
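As a concrete illustration of the post-hoc XAI approaches this issue solicits, the sketch below implements leave-one-out occlusion attribution for a sentiment classifier. It is a minimal, hypothetical example: the toy lexicon and the names `score` and `occlusion_attributions` are assumptions for illustration, standing in for a real black-box model; the occlusion idea itself transfers to any scorer.

```python
# Toy "black-box" sentiment scorer: sums word weights from a tiny lexicon.
# In practice this would be a trained neural model; the lexicon is a stand-in.
LEXICON = {"great": 2.0, "good": 1.0, "fine": 0.5,
           "bad": -1.0, "terrible": -2.0}

def score(tokens):
    """Return a sentiment score for a token list (positive = favorable)."""
    return sum(LEXICON.get(t, 0.0) for t in tokens)

def occlusion_attributions(tokens):
    """Post-hoc explanation: attribute the prediction to each token by
    measuring how the score changes when that token is removed."""
    base = score(tokens)
    attributions = {}
    for i, tok in enumerate(tokens):
        occluded = tokens[:i] + tokens[i + 1:]
        attributions[tok] = base - score(occluded)
    return attributions

tokens = "the movie was great but the ending was terrible".split()
print(score(tokens))                   # overall polarity
print(occlusion_attributions(tokens))  # per-token contributions
```

Because the attribution only queries the scorer, it treats the model strictly as a black box, which is exactly the setting where post-hoc explanations are needed.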
TOPICS OF INTEREST
- XAI for sentiment and emotion analysis in social media
- XAI for aspect-based sentiment analysis
- XAI for multimodal sentiment analysis
- XAI for multilingual sentiment analysis
- XAI for conversational sentiment analysis
- Ante-hoc and post-hoc XAI approaches to sentiment analysis
- Semantic models for sentiment analysis
- Linguistic knowledge of deep neural networks for sentiment analysis
- Explaining sentiment predictions
- Trust and interpretability in classification
- SenticNet 6 and other XAI-based knowledge bases for sentiment analysis
- Sentic LSTM and other XAI-based deep nets for sentiment analysis
- Emotion categorization models for polarity detection
- Paraphrase detection in opinionated text
- Sarcasm and irony detection in online reviews
- Bias propagation and opinion diversity on online forums
- Opinion spam detection and intention mining
Submission Deadline: 25th December 2020
Peer Review Due: 1st April 2021
Revision Due: 15th July 2021
Final Decision: 30th September 2021