CHI-ANNOUNCEMENTS Archives

ACM SIGCHI General Interest Announcements (Mailing List)

CHI-ANNOUNCEMENTS@LISTSERV.ACM.ORG

From: Andreas Holzinger <[log in to unmask]>
Reply-To: Andreas Holzinger <[log in to unmask]>
Date: Wed, 21 Mar 2018 12:28:14 +0100
MAKE-Explainable AI (MAKE – eXAI)

CD-MAKE 2018 Workshop on explainable Artificial Intelligence

GOAL

This workshop aims to bring together international cross-domain experts 
interested in artificial intelligence/machine learning to stimulate 
research, engineering and evaluation in explainable AI – towards making 
machine decisions transparent, re-traceable, comprehensible, 
interpretable, explainable and reproducible. Accepted papers will be 
presented at the workshop orally or as a poster and published in the 
IFIP CD-MAKE volume of Springer Lecture Notes in Artificial 
Intelligence (LNAI). All submissions will be peer-reviewed by at least 
three experts; see the author instructions here: 
https://cd-make.net/authors-area/submission

BACKGROUND

Explainable AI is NOT a new field. Actually, the problem of 
explainability is as old as AI itself, and perhaps a consequence of it. 
While early expert systems consisted of handcrafted knowledge, which 
enabled reasoning over at least a narrow, well-defined domain, such 
systems had no learning capabilities and were poor at handling 
uncertainty when trying to solve real-world problems. The big success 
of current AI solutions and ML algorithms is due to the practical 
applicability of statistical learning approaches in arbitrarily 
high-dimensional spaces. Despite their huge successes, their 
effectiveness is still limited by their inability to "explain" their 
decisions in an understandable and retraceable way. Even if we 
understand the underlying mathematical theories, it is complicated and 
often impossible to get insight into the internal workings of the 
models, algorithms and tools, and to explain how and why a result was 
achieved. Future AI needs contextual adaptation, i.e. systems that help 
to construct explanatory models for solving real-world problems. Here 
it would be beneficial not to exclude human expertise, but to augment 
human intelligence with artificial intelligence.

TOPICS

In line with the general theme of the CD-MAKE conference, augmenting 
human intelligence with artificial intelligence, and its motto "Science 
is to test crazy ideas – Engineering is to bring these ideas into 
Business", we encourage submissions on topics including, but not 
limited to:

Frameworks, architectures, algorithms and tools for post-hoc/ante-hoc 
explainability (a minimal code sketch follows this list)
Theoretical approaches to explainability and transparent AI
Human intelligence vs. artificial intelligence (HCI-KDD)
Interactive machine learning with human(s)-in-the-loop (crowd intelligence)
Explanation user interfaces and Human-Computer Interaction (HCI) for 
explainable AI
Fairness, accountability and trust
Ethical aspects, law and social responsibility
Business aspects of transparent AI
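
To make the first topic concrete, here is a minimal sketch of one 
well-known post-hoc technique: approximating an opaque classifier 
locally with a weighted linear surrogate, in the spirit of LIME 
(Ribeiro et al., 2016). The black_box model, the sampling scale and the 
kernel width below are all hypothetical placeholders, not something 
prescribed by this call:

import numpy as np

def black_box(X):
    # Hypothetical opaque model: probability of the class "cat".
    return 1.0 / (1.0 + np.exp(-(2.0 * X[:, 0] - 1.5 * X[:, 1]
                                 + 0.5 * X[:, 2])))

def explain_locally(predict, x, n_samples=5000, sigma=1.0, seed=0):
    # Fit a weighted linear surrogate model around the instance x.
    rng = np.random.default_rng(seed)
    Z = x + rng.normal(scale=0.5, size=(n_samples, x.size))  # perturbations
    y = predict(Z)
    # Weight each sample by its proximity to x (RBF kernel).
    w = np.exp(-np.sum((Z - x) ** 2, axis=1) / (2.0 * sigma ** 2))
    A = np.hstack([Z, np.ones((n_samples, 1))])  # add an intercept column
    sw = np.sqrt(w)
    # Weighted least squares: minimise sum_i w_i * (A_i c - y_i)^2.
    coef, *_ = np.linalg.lstsq(sw[:, None] * A, sw * y, rcond=None)
    return coef[:-1]  # per-feature local importance, intercept dropped

x = np.array([1.0, 0.2, -0.3])
print("local feature importance:",
      np.round(explain_locally(black_box, x), 3))

The sign and magnitude of each returned coefficient indicate how the 
corresponding feature pushes the prediction in the neighbourhood of x – 
the kind of post-hoc, model-agnostic explanation the workshop solicits 
work on.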

MOTIVATION

The grand goal of future explainable AI is to make results 
understandable and transparent, and to answer questions of how and why 
a result was achieved. In short: "Can we explain how and why a specific 
result was achieved by an algorithm?" In the future it will be 
essential not only to answer the question "Which of these animals is a 
cat?", but also to answer "Why is it a cat?" [Youtube Video] and "What 
are the underlying explanatory facts on which the machine learning 
algorithm based this decision?"
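
As an illustration (not a method prescribed by the workshop), one 
simple way to approach the "Why is it a cat?" question for an image 
classifier is occlusion sensitivity: cover each image region in turn 
and record how much the predicted score drops. The cat_score function 
below is a hypothetical stub standing in for a trained model:

import numpy as np

def cat_score(img):
    # Hypothetical classifier stub: responds to bright central pixels.
    h, w = img.shape
    return float(img[h // 4:3 * h // 4, w // 4:3 * w // 4].mean())

def occlusion_map(score, img, patch=8):
    # Score drop per occluded patch; large values mark image regions
    # that serve as evidence for the "cat" decision.
    base = score(img)
    h, w = img.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = img.copy()
            occluded[i:i + patch, j:j + patch] = 0.0  # black out one patch
            heat[i // patch, j // patch] = base - score(occluded)
    return heat  # a coarse saliency map over image regions

img = np.random.default_rng(0).random((32, 32))
print(np.round(occlusion_map(cat_score, img), 3))

Regions with large score drops are, at the level of pixel evidence, the 
"underlying explanatory facts" in the sense above.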

This emerging area is highly relevant for all application domains, 
ranging from health informatics [1] to cyber defense [2], [3]. A 
particular focus is on novel HCI and user interfaces for interactive 
machine learning [4].

[1] Andreas Holzinger, Chris Biemann, Constantinos S. Pattichis & 
Douglas B. Kell (2017). What do we need to build explainable AI systems 
for the medical domain? arXiv:1712.09923.
[2] David Gunning (2016). DARPA program on Explainable Artificial 
Intelligence (XAI).
[3] Katharina Holzinger, Klaus Mak, Peter Kieseberg & Andreas Holzinger 
(2018). Can we trust Machine Learning Results? Artificial Intelligence 
in Safety-Critical Decision Support. ERCIM News, 112(1), 42-43.
[4] Todd Kulesza, Margaret Burnett, Weng-Keen Wong & Simone Stumpf 
(2015). Principles of explanatory debugging to personalize interactive 
machine learning. Proceedings of the 20th International Conference on 
Intelligent User Interfaces (IUI 2015), Atlanta. ACM, 126-137. 
doi:10.1145/2678025.2701399.

Example: One motivation is the new European General Data Protection 
Regulation (GDPR; see also ISO/IEC 27001), which enters into force on 
May 25, 2018 and affects practically all machine learning and 
artificial intelligence applied to business. It will, for instance, be 
difficult to apply black-box approaches for professional use in certain 
business applications, because they are not re-traceable and are rarely 
able to explain on demand why a decision has been made.

Note: The GDPR replaces the Data Protection Directive 95/46/EC of 1995. 
The regulation was adopted on 27 April 2016 and becomes enforceable on 
25 May 2018, after a two-year transition period. Unlike a directive, it 
does not require national governments to pass enabling legislation and 
is thus directly binding, which affects practically all data-driven 
businesses and particularly machine learning and AI technology.

WORKSHOP ORGANIZERS

Ajay CHANDER, Stanford University and Fujitsu Labs of America, Sunnyvale, US
Randy GOEBEL, University of Alberta, Edmonton, CA
Katharina HOLZINGER, Secure Business Austria, SBA-Research Vienna, AT
Freddy LECUE, Accenture Technology Labs, Dublin, IE and INRIA Sophia 
Antipolis, FR
Zeynep AKATA, University of Amsterdam, NL
Simone STUMPF, City, University London, UK
Peter KIESEBERG, Secure Business Austria, SBA-Research Vienna, AT
Andreas HOLZINGER, Medical University Graz, AT

SCIENTIFIC PROGRAMME COMMITTEE

See the conference main committee:
https://cd-make.net/committees
We are also seeking additional reviewers with a special interest in 
this field; if you want to volunteer as a reviewer, please contact 
Andreas Holzinger.

David W. AHA, Naval Research Laboratory, Navy Center for Applied Research
in Artificial Intelligence, Washington, DC, US
Christian BAUCKHAGE, Fraunhofer Institute for Intelligent Analysis and 
Information Systems IAIS, Sankt Augustin, and University of Bonn, DE
Bryce GOODMAN, Oxford Internet Institute and San Francisco Bay Area, CA, US
Marco Tulio RIBEIRO, Guestrin Group, University of Washington, Seattle, 
WA, US
Brian RUTTENBERG, Charles River Analytics, Cambridge, MA, US
Sameer SINGH, University of California, Irvine (UCI), CA, US
Alison SMITH, University of Maryland, MD, US
Mohan SRIDHARAN, University of Auckland, NZ
Ramya MALUR SRINIVASAN, Fujitsu Labs of America, Sunnyvale, CA, US

-- 
- Towards Augmenting Human Intelligence  with Artificial Intelligence -
-----------------------------------------------------------------------
Assoc.Prof. Dr. Andreas HOLZINGER, Group Leader, Research Unit, HCI-KDD
Institute for Medical Informatics / Statistics, Medical University Graz
Auenbruggerplatz 2/V,  A-8036 Graz, AUSTRIA,  Phone: ++43 316 385 13883
Group Homepage: http://hci-kdd.org   Personal: http://www.aholzinger.at
MAKE Conf: https://cd-make.net  3-Min MAKE Video: https://goo.gl/0hcPOY
Visiting Prof. for Machine Learning in Health Informatics at  TU Vienna
-----------------------------------------------------------------------
Science is testing crazy ideas - Engineering is bringing it to Business

