Apologies for cross-posting.
Workshop website: https://hri-methods-metrics.github.io/
* Abstracts of systems, test methods, data sets, replicability studies, and metrics for evaluating HRI in human-robot teams
* Abstracts of proposals for repeatability/replicability studies in HRI
* 14 February 2020: Submission Deadline for Extended Abstracts
* 28 February 2020: Notification of acceptance for presentations
* 23 March 2020: Full-day Workshop
* THRI Special Issue: Test methods for human-robot teaming performance evaluations<https://www.nist.gov/el/intelligent-systems-division-73500/transactions-human-robot-interaction-special-issue>
Despite large advances in robot interfaces and user-centric robot designs, the need for effective HRI continues to present challenges for the field of robotics. A key barrier to achieving effective human-robot teaming in a multitude of domains is that there are few consistent test methods and metrics for assessing HRI effectiveness. The necessity for validated metrology is driven by the desire for repeatable and consistent evaluations of HRI methodologies.
This full-day workshop at the 2020 ACM/IEEE HRI Conference<https://humanrobotinteraction.org/2020/> will address the issues surrounding the development of test methods and metrics for evaluating HRI performance across a multitude of system and application domains, including industrial, social, medical, field, and service robotics. The workshop is driven by the need to establish consistent standards for evaluating HRI in real-world applications, and to understand how interfaces, technologies, and underlying theories impact the effective collaboration of human-robot teams. Specific goals include the following:
* to develop and encourage the use of consistent test methods and metrics in evaluating HRI technologies, producing quality data sets of pragmatic applications, and validating human subject studies for HRI;
* to establish benchmarks and baselines along a spectrum of key performance indicators for assessing and comparing novel HRI systems and applications;
* to support a discussion about best practices in metrology and what features should be measured as the underlying theory of HRI advances;
* to encourage the creation and sharing of high-quality, consistently formatted datasets for HRI research; and
* to promote the development of reproducible, metrics-oriented studies that seek to understand and model the human element of HRI teams.
Presentations by contributing authors will focus on the documentation of the test methods, metrics, and data sets used in their respective studies. Keynote and invited speakers will be selected from a targeted list of HRI researchers across a broad spectrum of application domains. Poster session participants will be selected from contributors reporting late-breaking evaluations and their preliminary results.
Discussions are intended to highlight the various approaches, requirements, and opportunities of the research community toward assessing HRI performance, enabling advances in HRI research, and establishing trust in HRI technologies. Specific topics of discussion will include:
* reproducible and repeatable studies with quantifiable test methods and metrics;
* systems papers discussing applications and task-specific metrics;
* human-robot collaboration and teaming test methods;
* human data set content, transferability, and traceability;
* HRI metrics (e.g., situation and cultural awareness);
* human-machine interface metrics; and
* industry-specific metrology requirements.
Peer-reviewed, full-paper submissions by contributing authors will automatically be submitted to a special issue of the ACM Transactions on Human-Robot Interaction<https://www.nist.gov/el/intelligent-systems-division-73500/transactions-human-robot-interaction-special-issue>, scheduled for publication in March of 2021. Authors of abstracts accepted to the workshop are strongly encouraged to also submit full manuscripts to the journal.
A workshop report documenting the presentations, discussions, and resulting take-aways and action items will be produced and made publicly available. An additional summary paper will be written for publication in the proceedings of the 2021 ACM/IEEE HRI conference.
Finally, this workshop is the second in a series of workshops leading toward formalized HRI performance standards. The IEEE Robotics and Automation Society (RAS) will be hosting and supporting this standardization effort. The early workshops focus on community and consensus building, and on establishing a culture of repeatable, reproducible, metrology-based research in HRI. A third workshop is planned for the 2021 ACM/IEEE International Conference on Human-Robot Interaction, and will specifically address the action items identified in this year's workshop.
With best regards,
Dr. Jeremy A. Marvel, National Institute of Standards and Technology (NIST), USA
Shelly Bagchi, National Institute of Standards and Technology (NIST), USA
Megan Zimmerman, National Institute of Standards and Technology (NIST), USA
Murat Aksu, National Institute of Standards and Technology (NIST), USA
Brian Antonishek, National Institute of Standards and Technology (NIST), USA
Dr. Yue Wang, Clemson University
Dr. Ross Mead, Semio
Dr. Terry Fong, National Aeronautics and Space Administration (NASA), USA
Dr. Heni Ben Amor, Arizona State University
Jeremy A. Marvel, Ph.D.
Computer Scientist, Project Leader Performance of Human-Robot Interaction
U.S. Department of Commerce
National Institute of Standards and Technology
Engineering Laboratory, Intelligent Systems Division
100 Bureau Drive, Stop 8230
Gaithersburg, MD 20899 USA
[log in to unmask]