CHI-ANNOUNCEMENTS Archives

ACM SIGCHI General Interest Announcements (Mailing List)

CHI-ANNOUNCEMENTS@LISTSERV.ACM.ORG

From: David Michael Sirkin <[log in to unmask]>
Date: Thu, 2 Feb 2017 21:26:44 +0000
Call for Participation:

What Actors Can Teach Robots - CHI 2017 Workshop, Denver CO, May 6, 2017

EXTENDED Submission deadline: February 10th, 2017
EXTENDED Notification date: February 12th, 2017
Finalized submission deadline: February 17th, 2017

For more information and to submit: https://hri.stanford.edu/minimal/

This workshop will be a forum for discussing minimal social robots and prototyping new ones, building on methods from actor training. The program includes invited and accepted participant presentations, improvisational and video-prototyping exercises, and a design challenge held during the workshop. Sociability, though challenging to operationalize technologically, is a remarkably efficient channel for communicating with people. While much previous work in social robotics has explored complex platforms, the premise of this workshop is to take a simple concept and push it as far as it can go.

The intent of this workshop is to share knowledge among researchers who use minimalist design strategies in their own work, as well as anyone in related areas who is curious about how sociability can play a role in technological interfaces. While the workshop activities will center on robotic technology, we encourage submissions examining any computational system that interprets or interfaces with people, as such research can offer useful perspective on the complex communication afforded by simple sensors and actions.

Prospective participants should prepare either (1) a 1-page extended abstract and a brief video (we can accept the video after submission) or (2) a 3-page extended abstract (including sketches or storyboards). The extended abstract should present research related to minimal social robots, and will be scored on its relevance to the workshop theme, novelty, insightfulness, and writing quality. The video, if included, should show a 30-60 second interaction sequence involving one or more cardboard boxes. Storytelling should occur through illustrative sequences of motion, for example; speech, text, and indications of facial expression may not be included. Videos will be rated on relevance to the workshop theme and entertainment value, and may be informal and playful.

Accepted authors will be expected to present a 5-minute spotlight talk during the morning of the workshop, participate in a poster session in the afternoon, and prototype a minimal robot interaction during the course of the workshop. At least one author of each accepted paper must attend the workshop and at least one day of the conference.

The paper submission page and additional information can be found on the workshop website: https://hri.stanford.edu/minimal/

Please email the workshop organizers if you have any questions:

  *   Naomi Fitter <[log in to unmask]>
  *   Heather Knight <[log in to unmask]>
  *   Nik Martelaro <[log in to unmask]>
  *   David Sirkin <[log in to unmask]>

    ---------------------------------------------------------------
    For news of CHI books, courses & software, join CHI-RESOURCES
     mailto: [log in to unmask]

    To unsubscribe from CHI-ANNOUNCEMENTS send an email to
     mailto:[log in to unmask]

    For further details of CHI lists see http://listserv.acm.org
    ---------------------------------------------------------------
