CHI-ANNOUNCEMENTS Archives

ACM SIGCHI General Interest Announcements (Mailing List)

CHI-ANNOUNCEMENTS@LISTSERV.ACM.ORG

Subject:
From: Antonella De Angeli <[log in to unmask]>
Reply-To: Antonella De Angeli <[log in to unmask]>
Date: Thu, 22 Feb 2007 09:15:35 -0500
Content-Type: text/plain
Parts/Attachments: text/plain (100 lines)

CALL FOR PAPERS
                          ABUSE AND MISUSE OF SOCIAL AGENTS
                    Special Issue of Interacting with Computers

For decades, science fiction writers have envisioned a world in which
robots and computers act like human assistants, virtual companions, and
artificial helpmates. Nowadays, for better or for worse, that vision is
becoming reality. A number of human-like interfaces and machines are under
development in research centers around the world, and several prototypes
have already been deployed on the Internet and in businesses. Even in our
homes, service robots, such as vacuum cleaners and lawn mowers, are
becoming increasingly common. These creatures are the first generation of
social agents: machines designed to build relationships with users while
performing tasks with some degree of autonomy. Social agents display a
range of human-like behaviors: they communicate using natural language and
gesture, display and recognize emotions, and are even designed to mirror
our facial expressions and show empathy.

Until recently, scientific investigations into the psychological aspects of
human relationships with social agents have mainly addressed the positive
effects of these relationships, such as increased trust and task
facilitation (e.g., Bickmore & Picard, 2005). Nevertheless, as the
interaction bandwidth evolves to encompass a broader range of social and
emotional expressiveness, there is a growing possibility of users and
social agents displaying anti-social, hostile, and disinhibited behaviors.
Workshops held at Interact 2005 and CHI 2006 (De Angeli, Brahnam, & Wallis,
2005; De Angeli, Brahnam, Wallis, & Dix, 2006) have suggested that
anthropomorphic metaphors can inadvertently rouse users to express
dissatisfaction through angry interactions, sexual harassment, and volleys
of verbal abuse.

At first glance, verbally or even physically abusing social agents and
service robots may not appear to pose much of a problem: since computers
and machines are not people, they cannot be harmed, and such behavior
arguably cannot accurately be labeled abuse. Nevertheless, the fact that
abuse, or the threat of it, is part of the interaction opens important
moral, ethical, and design issues. As machines begin to look and behave
more like people, it is important to ask how they should behave when
threatened and verbally or physically attacked. Another concern is the
potential that socially intelligent agents have for taking advantage of
users, especially children, who are prone to attribute to these characters
more warmth and human qualities than they actually possess. Many parents,
for instance, are disturbed by the amount of information social agents can
obtain in their interactions with children. It is feared that these
relationship-building agents could be used as potent means of marketing
and advertising.

For this special issue of Interacting with Computers, we are soliciting
papers from a range of disciplines (psychology, HCI, robotics, and cultural
studies) that address the negative side of human-computer interaction.
Papers on all aspects of the topic are welcome, but we are particularly
interested in papers that address the following questions:
* How does the misuse and abuse of social agents affect the user’s
computing experience?
* How does disinhibition with social agents differ from Internet
disinhibition?
* What are the psychological, sociological, and technological factors
involved in negative interactions between users and social agents?
* What design factors (e.g., embodiment, communication styles,
functionality) trigger or restrain disinhibited behaviors?
* What are the social and psychological consequences of negative human-
computer interactions?
* How can we develop machines that learn to avoid abuse from users?
* How can agent technology be exploited to take advantage of users, and
what ethical and design mechanisms can counteract this possibility?
* Is it appropriate for machines to ignore aggression? If conversational
agents do not acknowledge verbal abuse, will this only serve to aggravate
the situation?
* If potential clients are abusing virtual business representatives, then
to what extent are they abusing the businesses or the social groups the
human-like interfaces represent?

** Guest Editors:
Antonella De Angeli (University of Manchester, UK)
Sheryl Brahnam (Missouri State University, US)

** Submissions:
Contributions should not exceed 10,000 words and should follow the IwC
Guidelines for Authors, available at
http://authors.elsevier.com/JournalDetail.html?PubID=525445&Precis=
Authors intending to submit a paper should first send an abstract to
Sheryl Brahnam: sbrahnam (*) facescience (*) org (replace * with the
appropriate email symbols).

All submitted papers will undergo double-blind peer review.

** Important Dates:
March 9, 2007: Abstract submission
April 5, 2007: Paper submission
May 25, 2007: Notification of acceptance
June 15, 2007: Camera-ready copy

** Contact:
Sheryl Brahnam: sbrahnam (*) facescience (*) org (replace * with the
appropriate email symbols).

    ---------------------------------------------------------------
                To unsubscribe, send an empty email to
     mailto:[log in to unmask]
    For further details of CHI lists see http://sigchi.org/listserv
    ---------------------------------------------------------------
