ACM SIGCHI General Interest Announcements (Mailing List)


Tue, 28 Jun 2022 12:38:55 +0200
Alessandra Rossi <[log in to unmask]>
"ACM SIGCHI General Interest Announcements (Mailing List)" <[log in to unmask]>
Alessandra Rossi <[log in to unmask]>
Dear colleagues,

This is a gentle reminder that paper submissions for the fifth edition
of the SCRITA Workshop at IEEE RO-MAN 2022 are open!

This workshop will be a full-day event on 29 August 2022, held in
conjunction with the IEEE RO-MAN 2022 conference in sunny and beautiful
Naples, Italy.

The workshop is open to a broad audience from academia and industry
researching social robotics, machine learning, robot behavioural control,
and user recommendation. It will foster the exchange of insights on past
and ongoing research and, with invited speakers and a panel of experts in
the field, contribute to the discussion of innovative ideas and new,
inspirational directions for tackling unresolved issues.

Please find attached below the call for papers.

For any questions and information, do not hesitate to contact us.
Kind regards,

Dr. Alessandra Rossi
Assistant Professor
PRISCA research lab
Department of Electrical Engineering and Information Technology
University of Naples Federico II, Naples, Italy


*** SCRITA Workshop: Trust, Acceptance and Social Cues in Human-Robot
Interaction ***
31st IEEE International Conference on Robot & Human Interactive
Communication (RO-MAN 2022)
August 29th - September 2nd, 2022
Naples, Italy

**Important Dates**

**Submission deadline**: 15 July 2022
**Acceptance**: 31 July 2022

People's ability to accept and trust robots is fundamental for a
fruitful and successful coexistence between humans and robots. While
considerable progress has been made in studying and evaluating the
factors affecting people's acceptance of and trust in robots in
controlled or short-term (repeated-interaction) settings, developing
service and personal robots that people accept and trust where operator
supervision is not possible remains an open challenge for scientists in
the robotics, AI and HRI fields. In such unstructured, static and
dynamic human-centred environments, robots should be able to learn and
adapt their behaviours not only to the situational context, but also to
people's prior experiences and learned associations, their expectations,
and their and the robot's ability to predict and understand each other's
behaviours. This workshop focuses on addressing the challenges in the
dynamics between people and robots in order to foster both short
interactions and long-lasting relationships in fields ranging from
educational, service, collaborative and companion robotics to care-home
and medical robotics. While previous editions valued the participation
of leading researchers in the field and several exceptional invited
speakers who tackled fundamental points in this research domain, we wish
to continue exploring the role of trust in robotics and to present
groundbreaking research on effectively designing and developing socially
acceptable and trustworthy robots to be deployed "in the wild".

*Topics of interest include but are not limited to:*
  * Impact of Social Cues on Trust in HRI
  * Measuring Trust in HRI
  * Trust Violation and Recovery Mechanism in HRI
  * Effects of Humans’ Acceptance on Trust of Robots
  * Humans' Sense of Control and Trust in Robots
  * Trust and Assistive Robotics
  * Overtrust in Robots
  * Antecedents of Trust and Robot Trust
  * Enhancing Humans' Trust in Robots
  * Enhancing Trust in a Robot Companion
  * Privacy Implications on Trust in HRI
  * Mental Models and Trust in HRI
  * Trust and Safety in HRI
  * Ethics Implications on Trust in HRI
  * Trustworthy AI
  * XAI in HRI
  * Legal Frameworks for Trustworthy Robotics

    *Templates and Submission Procedure*

We encourage participants to submit two-page abstracts or full papers (up
to 6 pages) on original and unpublished research. We also welcome
submissions of two-page position papers on topics covering the scope of
the workshop. All accepted papers will have an oral presentation.

We further invite authors of accepted papers to present a video or a
demonstration of their work and achievements. Video demonstrations should
be accompanied by an abstract of up to two pages describing the work.

Authors should submit their papers formatted according to the IEEE
two-column format, which is also used for contributions to the main
conference. Use the LaTeX or MS Word templates to create the paper and
generate or export a PDF file.

PDF submission will be possible via EasyChair. All papers are
reviewed using a single-blind review process: authors declare their names
and affiliations in the manuscript for the reviewers to see, but reviewers
do not know each other's identities, nor do the authors receive information
about who has reviewed their manuscript.

Authors who want to include a video in their submission can indicate
their plans via EasyChair and will be sent a link to confidentially
upload the video file. At least one author of each accepted paper must
register for the workshop.

    *Invited Speakers*

The following keynote speakers have already agreed to participate in this
workshop:
  * Alan Wagner, Penn State University, USA
  * Moojan Ghafurian, University of Waterloo, Canada
  * Takayuki Kanda, Kyoto University, Japan

    *Panel Session*

The following experts have already agreed to participate in this workshop:
  * Alessandro Di Nuovo, Sheffield Hallam University, UK
  * Kerstin Sophie Haring, University of Denver, USA
  * Guillem Alenyà, Institut de Robòtica i Informàtica Industrial, Spain

    To unsubscribe from CHI-ANNOUNCEMENTS send an email to:
     mailto:[log in to unmask]

    To manage your SIGCHI Mailing lists or read our policies see: