CHI-ANNOUNCEMENTS Archives

ACM SIGCHI General Interest Announcements (Mailing List)

CHI-ANNOUNCEMENTS@LISTSERV.ACM.ORG

Lionel Robert <[log in to unmask]>
Mon, 1 Jul 2019 05:44:15 -0400
*AIS Transactions on HCI (THCI) <https://aisel.aisnet.org/thci/>*
*Special issue on AI Fairness, Trust and Ethics*


*Special Issue Editors:*
Lionel P. Robert Jr., University of Michigan
Gaurav Bansal, University of Wisconsin-Green Bay
Nigel Melville, University of Michigan
Tom Stafford, Louisiana Tech University


*Submission Deadline: Full papers due February 15, 2020*
AI is rapidly changing every aspect of our society, from how we conduct
business to how we socialize and exercise. AI has amplified our productivity
as well as our biases. John Giannandrea, who leads AI at Google, recently
lamented in the MIT Technology Review that the dangers posed by the ability
of AI systems to learn human prejudices were far greater than those posed by
killer robots. This is problematic because AI systems are making millions of
decisions every minute, many of which are invisible to users and
incomprehensible to designers. Their opacity is a significant cause for
worry and leaves many questions unanswered.

Fairness, Trust and Ethics are at the core of many of the issues raised by
AI. Fairness is undermined when managers rely blindly on “objective” AI
outputs to “augment” or replace their decision making. Managers often ignore
the limitations of their assumptions and the relevance of the data used to
train and test AI models, resulting in biased decisions that are hard to
detect or appeal. Trust is undercut when AI is used to render false or
misleading images of individuals saying or doing things that never happened.
These false images make it difficult for people to trust what they see or
hear. Ethical challenges arise when decisions made by AI lead to further
inequalities in society. Examples include displaced workers and shortages of
affordable housing when rental apartments and housing units are diverted to
higher-paying Airbnb short-term vacationers.
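
To make the bias-detection concern concrete, below is a minimal illustrative
sketch (not part of the call itself) of how one simple group-fairness check,
the demographic parity difference, might be computed over an AI system's
decisions. The decision data, group labels and function name here are all
hypothetical:

# Minimal sketch: auditing hypothetical AI decisions for group bias
# via the demographic parity difference (the gap in positive-decision
# rates across groups). All data below are made up for illustration.

def demographic_parity_difference(decisions, groups):
    """Return the gap between the highest and lowest positive-decision rates."""
    rates = {}
    for g in set(groups):
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)  # share of positive decisions
    return max(rates.values()) - min(rates.values())

# Hypothetical loan decisions (1 = approved) for applicants in groups A and B.
decisions = [1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"]

print(f"Demographic parity difference: "
      f"{demographic_parity_difference(decisions, groups):.2f}")
# Prints 0.33 here; 0.00 would indicate parity between the groups.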

Despite these potentially transformative effects, research on AI in the
Information Systems field is still scarce, and as a result our knowledge of
the impacts of AI is far from conclusive. Yet it is very important, from
both business and technical perspectives, that we research and examine
issues of fairness, trust and ethics with AI. This examination is critical
because these issues lie at the heart of addressing the new challenges
facing the development and use of AI throughout our society. This is
especially true as AI is being applied in a rapidly growing number of new
areas. In all, AI has the potential to disrupt and dramatically change the
interactions between humans and technologies.

This Special Issue on AI Fairness, Trust and Ethics calls for research that
can unpack the potential, challenges, impacts, and theoretical implications
of AI. We welcome research from different perspectives regardless of the
approach or methodology. Submissions with novel theoretical implications
that span disciplines are strongly encouraged. We seek submissions that can
improve our understanding of the impacts of AI in organizations and in our
broader society.

*Potential topics include (but are not limited to):*

   - Defining fair, ethical and trustworthy AI
   - Antecedents and consequences of fair, ethical and trustworthy AI
   - Designing, implementing and deploying fair, ethical and trustworthy AI
   - Theories of fair, ethical and trustworthy AI
   - Policy and governance for fair, ethical and trustworthy AI
   - Appropriate and inappropriate applications of AI
   - Legal responsibilities for decisions made by AI
   - AI biases
   - AI algorithm transparency and how to improve it
   - The dark side of AI
   - AI equality vs AI equity
   - Implications of unfair, unethical and untrustworthy AI


*Key Dates:*
Optional one page abstract submissions: Oct 1, 2019
Selected abstracts invited for poster presentation at the Pre-ICIS 2019
SIGHCI workshop on Dec 15, 2019
First round submissions: Feb 15, 2020
First round decisions: April 15, 2020
Second round submissions: July 15, 2020
Second round decisions to authors: Sep 15, 2020
Third and final round submissions: November 1, 2020
Final decisions to authors: November 15, 2020
Targeted publication date: December 31, 2020

To submit a manuscript, read the "Information for Authors" and "THCI
Policy" pages, then go to http://mc.manuscriptcentral.com/thci.

*Contact:*
All questions about submissions should be emailed to *[log in to unmask]*.


Link to: *Call For Papers: Special issue on AI Fairness, Trust and Ethics
<https://bit.ly/2JcrDT7>*


Best regards,

Lionel


*New Paper(s):*
Du, N., Haspiel, J., Zhang, Q., Tilbury, D., Pradhan, A., Yang, X. J. and
*Robert, L. P.* (Accepted 2019). *Look Who’s Talking Now: Implications of
AV’s Explanations on Driver’s Trust, AV Preference, Anxiety and Mental
Workload*, *Transportation Research Part C: Emerging Technologies*,
forthcoming. (pdf <http://hdl.handle.net/2027.42/149154>), copies provided
by the author: http://hdl.handle.net/2027.42/149154 and
http://arxiv.org/abs/1905.08878.

*Robert, L. P.* (2019). *Are Automated Vehicles Safer than Manually Driven
Cars?*, *AI & Society*, (pdf
<https://deepblue.lib.umich.edu/bitstream/handle/2027.42/149146/AI%26S%20Are%20Automated%20Vehicles%20Safe%20Final%20Version.pdf?sequence=1&isAllowed=y>
), link to publisher's site: https://doi.org/10.1007/s00146-019-00894-y,
copy provided by the author: http://hdl.handle.net/2027.42/149146.

*Robert, L. P.* (2019). *The Future of Pedestrian-Automated Vehicle
Interactions*, *XRDS: Crossroads*, 25(3), pp. 30-33. (pdf
<https://deepblue.lib.umich.edu/bitstream/handle/2027.42/148533/Robert%202019.pdf?sequence=1&isAllowed=y>
), DOI: https://doi.org/10.1145/3313115, copies provided by the author:
http://hdl.handle.net/2027.42/148533, http://arxiv.org/abs/1904.06417 and
http://ssrn.com/abstract=3370618.

Petersen, L., *Robert, L. P.*, Yang, X. J. and Tilbury, D. (2019).
*Situational Awareness, Driver’s Trust in Automated Driving Systems and
Secondary Task Performance*, *SAE International Journal of Connected and
Automated Vehicles*, 2(2), (pdf
<https://deepblue.lib.umich.edu/bitstream/handle/2027.42/148141/SA%20Trust%20-%20SAE-%20Public.pdf?sequence=1&isAllowed=y>),
DOI: 10.4271/12-02-02-0009, link to the article:
https://saemobilus.sae.org/content/12-02-02-0009/, copy provided by the
author: http://hdl.handle.net/2027.42/148141 and
http://arxiv.org/abs/1903.05251.



Lionel P. Robert Jr.
Associate Professor, School of Information
<https://www.si.umich.edu/people/lionel-robert>
Core Faculty, Michigan Robotics Institute
<https://robotics.umich.edu/core-faculty/>
Affiliate Faculty, National Center for Institutional Diversity
<https://lsa.umich.edu/ncid>
Affiliate Faculty, Michigan Interactive and Social Computing
<http://misc.si.umich.edu/>
Director of MAVRIC <https://mavric.si.umich.edu>
Co-Director of DOW Lab
University of Michigan
Email: [log in to unmask]
UMSI Website <https://www.si.umich.edu/directory/lionel-robert> | Personal
Website  <https://sites.google.com/a/umich.edu/lionelrobert/home>
MAVRIC: https://mavric.si.umich.edu

