ACM SIGCHI General Interest Announcements (Mailing List)


Stephanie Gross <[log in to unmask]>
Tue, 12 Jun 2018 21:05:32 +0200
* Apologies for cross postings *


Workshop on Cognitive Architectures for Situated Multimodal Human Robot 
Language Interaction (ICMI 2018)
October 16th, in Boulder, Colorado
Paper submission deadline: June 29, 2018


The workshop will take place in conjunction with the 20th ACM 
International Conference on Multimodal Interaction (ICMI 2018) in 
Boulder, Colorado on the 16th of October.
In many application fields of human robot interaction, robots need to 
adapt to changing contexts and thus be able to learn tasks from 
non-expert humans through verbal and non-verbal interaction. Inspired by 
human cognition, we are interested in various aspects of learning, 
including multimodal representations, mechanisms for the acquisition of 
concepts (words, objects, actions), memory structures etc., up to full 
models of socially guided, situated, multimodal language interaction. 
These models can then be used to test theories of human situated 
multimodal interaction, as well as to inform computational models in 
this area of research.

Call for Papers

The workshop aims to bring together linguists, computer scientists, 
cognitive scientists, and psychologists with a particular focus on 
embodied models of situated natural language interaction. Workshop 
submissions should address at least one of the following questions:

* Which kinds of data are adequate for developing socially guided models 
of language acquisition, e.g. multimodal interaction data, audio, video, 
motion tracking, eye tracking, force data (individual or joint object 
manipulation)?
* How should empirical data be collected and preprocessed in order to 
develop cognitively inspired models of language acquisition, e.g. should 
human-human (HH) or human-robot (HR) data be collected?
* Which mechanisms does an artificial system need to deal with the 
multimodal complexity of human interaction? How can the information 
transmitted via different modalities be combined at a higher level of 
abstraction?
* Models of language learning through multimodal interaction: What 
should semantic representations or mechanisms for language acquisition 
look like to allow extension through multimodal interaction?
* Based on the above representations, which machine learning approaches 
are best suited to handle multimodal, time-varying, and possibly 
high-dimensional data? How can the system learn incrementally in an 
open-ended fashion?

Invited Speakers

Keynotes will be given by John Laird, Professor in the Computer Science 
and Engineering Division of the Electrical Engineering and Computer 
Science Department at the University of Michigan, and Chen Yu, Professor 
at the Computational Cognition and Learning Lab at Indiana University.

Important Dates

Paper submission deadline: June 29, 2018
Notification of acceptance: July 20, 2018
Final version: August 3, 2018
Workshop: October 16, 2018

Submission Instructions

Articles should be 4-6 pages, formatted using the ACM template of the 
ICMI conference. For each accepted contribution, at least one of the 
authors is required to attend the workshop.


Organizers

Stephanie Gross, Austrian Research Institute for Artificial 
Intelligence, Vienna, Austria
Brigitte Krenn, Austrian Research Institute for Artificial Intelligence, 
Vienna, Austria
Matthias Scheutz, Department of Computer Science at Tufts University, 
Massachusetts, USA
Matthias Hirschmanner, Automation and Control Institute at Vienna 
University of Technology, Vienna, Austria
