CHI-ANNOUNCEMENTS Archives

ACM SIGCHI General Interest Announcements (Mailing List)

CHI-ANNOUNCEMENTS@LISTSERV.ACM.ORG

Subject:
From: "Marvel, Jeremy A. (Fed)" <[log in to unmask]>
Reply-To: Marvel, Jeremy A. (Fed)
Date: Mon, 10 Aug 2020 09:55:44 +0000
Content-Type: text/plain

Apologies for cross-posting.



Due to a large number of requests, we have extended the submission deadlines for the THRI Special Issue on Test methods for human-robot teaming performance evaluations.  The updated submission schedule is as follows:

  *   Submission period begins: 30 November, 2019
  *   Submission deadline: 15 October, 2020 (initially 15 August, 2020)
  *   Notification of initial reviews: 15 February, 2021 (initially 15 December, 2020)
  *   Paper revision/resubmission deadline: 30 March, 2021 (initially 30 January, 2021)
  *   Notification of final decisions: 30 May, 2021 (initially 30 March, 2021)
  *   Tentative publication date: June 2021

Special Issue CFP Website:  https://www.nist.gov/el/intelligent-systems-division-73500/transactions-human-robot-interaction-special-issue

Submission Website:  https://mc.manuscriptcentral.com/thri



The impact of technology in collaborative human-robot teams is both driven and limited by its performance and ease of use.  As robots become increasingly common in society, exposure to and expectations of robots are ever-increasing.  However, the means by which performance can be measured have not kept pace with the rapid evolution of human-robot interaction (HRI) technologies.  The result is a situation in which people demand more from robots, but have relatively few mechanisms for assessing the market when making purchasing decisions, or for integrating systems already acquired.  As such, robots specifically intended to interact with people are frequently met with enthusiasm, but ultimately fall short of expectations.

HRI research is focused on developing new and better theories, algorithms, and hardware specifically intended to push innovation.  Yet determining whether these advances actually drive the technology forward is a particular challenge.  Few repeatability studies are ever performed, and the test methods and metrics used to demonstrate effectiveness and efficiency are often based on qualitative measures for which all external factors may not be accounted; or, worse, on measures specifically chosen to highlight the strengths of new approaches without also exposing their limitations.  As such, despite the rapid progression of HRI technology in the research realm, advances in applied robotics lag behind.  Without verification and validation, the gap between the cutting edge and the state of practice will continue to widen.

The need for validated test methods and metrics for HRI is driven by the desire for repeatable, consistent, and informative evaluations of HRI methodologies that demonstrably prove functionality.  Such evaluations are critical for advancing the underlying models of HRI, and for providing guidance to developers and consumers of HRI technologies that tempers expectations while promoting adoption.
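To make the notion of a repeatable, quantitative evaluation concrete, consider chance-corrected inter-rater agreement, one common ingredient of such evaluations.  The sketch below computes Cohen's kappa for two raters scoring the same human-robot teaming trials.  It is a minimal illustration only: the ratings are hypothetical, and the helper name cohen_kappa is ours rather than a reference implementation.

    from collections import Counter

    def cohen_kappa(rater_a, rater_b):
        # Chance-corrected agreement between two raters over the same trials.
        n = len(rater_a)
        observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        counts_a = Counter(rater_a)
        counts_b = Counter(rater_b)
        # Expected agreement if both raters assigned labels independently.
        expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
        return (observed - expected) / (1 - expected)

    # Hypothetical 1-5 ratings of eight teaming trials by two observers.
    rater_a = [5, 4, 4, 3, 5, 2, 4, 4]
    rater_b = [5, 4, 3, 3, 5, 2, 4, 5]
    print("Cohen's kappa: %.2f" % cohen_kappa(rater_a, rater_b))

Values near 1 indicate agreement well beyond chance; reporting such statistics alongside qualitative measures is one way to make HRI evaluations comparable across raters, laboratories, and studies.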

This special issue of the ACM Transactions on Human-Robot Interaction<https://thri.acm.org/>, Test methods for human-robot teaming performance evaluations, is specifically intended to highlight the test methods, metrics, artifacts, and measurement systems designed to assess and assure HRI performance in human-robot teams.  The topic of human-robot teaming spans a broad spectrum of application domains, including medical, field, service, personal care, and manufacturing applications, and special attention will be paid to test methods that are broadly applicable across multiple domains.  This special issue will focus on highlighting the metrics used in HRI metrology, and on identifying the underlying issues of traceability, objective repeatability and reproducibility, benchmarking, and transparency in HRI.

List of Topics

For this special issue, topics of interest include but are not limited to:

  *   Test methods and metrics for evaluating human-robot teams
  *   Case studies in industry, medicine, service, and personal care robot applications, with particular attention to use cases that have verifiable analogues across multiple application domains
  *   Documented HRI data set generation, formatting, and dissemination for human-robot performance benchmarking and repeatability studies
  *   Design and evaluation of human-centric robot interfaces, including wearable technologies
  *   Repeatability and replication studies of previously published HRI research, specifically including metrics for evaluation that take into account demographics and cultural impacts
  *   Studies exploring the cultural impact of HRI performance measures
  *   Validated, quantitative analogues of qualitative HRI metrics
  *   Quantitative and statistical models of human performance (e.g., Fitts's law) for offline HRI evaluation; a brief sketch of such a model appears after this list
  *   Best practices and real-world case studies in human-robot teaming
  *   Verification and validation of HRI studies involving human-robot teams
  *   Evaluation of novel human-robot team designs and methods
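As one concrete illustration of the quantitative human-performance models mentioned above, the sketch below fits a simple Fitts's-law model, MT = a + b * log2(D/W + 1), to pointing data.  Again, this is a minimal example under stated assumptions: the trial data are hypothetical, and the helper names (fitts_index_of_difficulty, fit_fitts_model) are ours, not part of any established toolkit.

    import math

    def fitts_index_of_difficulty(distance, width):
        # Shannon formulation of Fitts's index of difficulty, in bits.
        return math.log2(distance / width + 1)

    def fit_fitts_model(ids, times):
        # Ordinary least-squares fit of MT = a + b * ID; returns (a, b).
        n = len(ids)
        mean_id = sum(ids) / n
        mean_mt = sum(times) / n
        b = (sum((i - mean_id) * (t - mean_mt) for i, t in zip(ids, times))
             / sum((i - mean_id) ** 2 for i in ids))
        a = mean_mt - b * mean_id
        return a, b

    # Hypothetical pointing trials: (target distance, target width) in mm,
    # with the observed movement times in seconds.
    trials = [(100, 20), (200, 20), (400, 10), (800, 10)]
    times = [0.45, 0.58, 0.92, 1.10]
    ids = [fitts_index_of_difficulty(d, w) for d, w in trials]
    a, b = fit_fitts_model(ids, times)
    print("MT = %.3f + %.3f * ID (seconds)" % (a, b))

Once fitted, such a model lets an evaluation predict expected human performance offline and flag interface conditions whose observed times deviate substantially from the prediction.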

Accepted contributors to the 2020 Workshop on Metrics & Test Methods for Human-Robot Teaming<https://hri-methods-metrics.github.io/> at the 2020 ACM/IEEE HRI Conference<https://humanrobotinteraction.org/2020/> are also invited to submit longer-form papers for the special issue.  Workshop acceptances were sent at the beginning of 2020, and those contributors' participation was confirmed at that time.

Important Dates

  *   Submission period begins:  30 November, 2019
  *   Submission deadline:  15 October, 2020
  *   Notification of initial reviews:  15 February, 2021
  *   Paper revision/resubmission deadline: 30 March, 2021
  *   Notification of final decisions:  30 May, 2021
  *   Tentative publication date:  June 2021

With best regards,
~Jeremy


Jeremy A. Marvel, Ph.D.
Computer Scientist, Project Leader, Performance of Human-Robot Interaction
U.S. Department of Commerce
National Institute of Standards and Technology
Engineering Laboratory, Intelligent Systems Division
100 Bureau Drive, Stop 8230
Gaithersburg, MD 20899 USA
Tel:  301-975-4592
Fax:  301-990-9688
[log in to unmask]


