Apologies for cross-posting.
The need for effective human-robot interaction (HRI) continues to present challenges for the field of robotics. As new technologies are integrated into human-robot teams across myriad application domains, both exposure to, and expectations of, robots are growing rapidly. A key factor limiting the success of human-robot teams is the lack of principled metrics for assessing the effectiveness of HRI.
The necessity for validated test methods and metrics for human-robot teaming is driven by the desire for repeatable and consistent evaluations of HRI methodologies. Such evaluations are critical for advancing underlying models of HRI, as well as establishing traceable mechanisms for vendors and consumers of HRI technologies to assess and assure functionality.
This full-day workshop will address the issues surrounding the development of test methods and metrics for evaluating the performance of human-robot teams across the multitude of human-centered application domains, including industrial, social, medical, field and service robotics. This workshop will focus on establishing the diversity of approaches for addressing HRI metrology in different robotic domains, and identifying the underlying issues of traceability, objective repeatability, and transparency in HRI metrology.
The workshop organizers are seeking full paper submissions for presentation. Papers should be limited to 6 pages in the IEEE Proceedings format<https://www.ieee.org/conferences/publishing/templates.html>. Authors of accepted papers will also be invited to submit extended versions to a special journal issue to be published at a later date. Extended abstracts and posters will also be accepted for consideration for presentation during a poster session.
Solicited paper topics include:
* Test methods and metrics for evaluating human-robot teams
* Best practices and real-world case studies in human-robot teaming
* HRI data set generation, formatting, and dissemination for human-robot teams
* Verification and validation of HRI studies involving human-robot teams
* Benchmarking performance of human-robot teams
* Evaluation of novel human-robot team designs and methods
Submit papers via EasyChair<https://easychair.org/my/conference.cgi?conf=2019hrimetrology>. More information, including a tentative schedule of events, can be found on the workshop home page<https://www.nist.gov/news-events/events/2019/03/test-methods-and-metrics-effective-hri-collaborative-human-robot-teams>.
Important dates:
* Paper submission deadline: Friday, 25 January, 2019
* Notification of paper acceptance: Friday, 8 February, 2019
* Poster and extended abstract submission deadline: Friday, 8 February, 2019
* Notification of poster acceptance: Friday, 15 February, 2019
* Workshop date: Monday, 11 March, 2019
With best regards,
Dr. Jeremy A. Marvel, National Institute of Standards and Technology (NIST), USA
Shelly Bagchi, National Institute of Standards and Technology (NIST), USA
Megan Zimmerman, National Institute of Standards and Technology (NIST), USA
Murat Aksu, National Institute of Standards and Technology (NIST), USA
Brian Antonishek, National Institute of Standards and Technology (NIST), USA
Dr. Yue Wang, Clemson University
Dr. Ross Mead, Semio
Dr. Terry Fong, National Aeronautics and Space Administration (NASA), USA
Dr. Heni Ben Amor, Arizona State University
Jeremy A. Marvel, Ph.D.
Computer Scientist, Project Leader Performance of Human-Robot Interaction
U.S. Department of Commerce
National Institute of Standards and Technology
Engineering Laboratory, Intelligent Systems Division
100 Bureau Drive, Stop 8230
Gaithersburg, MD 20899 USA
[log in to unmask]<mailto:[log in to unmask]>