ACM SIGCHI General Interest Announcements (Mailing List)


From: Naoto Iwahashi <[log in to unmask]>
Date: Mon, 2 Jul 2007 15:10:15 +0900

Workshop on Multimodal Interfaces in Semantic Interaction 

at International Conference on Multimodal Interfaces 

Date: November 15, 2007
Venue: Nagoya Noh Theater, Japan

With advances in ubiquitous networks, data mining, communication robots,
and sensing technologies, various kinds of real-world information have
become available in real time. Presenting this information to the user not
only supports his or her intellectual activities but can also serve as
context, opening up great possibilities for achieving situated intelligent
functions.

Information systems and robots that support human activities in everyday
life should ideally be able to interact with humans adaptively according to
context, such as the situation in the real world and each human's
individual characteristics. For instance, such functions might include the
ability to understand the user's intentions from his or her utterances, as
well as the ability to provide suitable information at the appropriate
time.
To realize such interaction, which we call semantic interaction, it is
necessary to extract from the obtained real-world information the context
information needed for understanding the interaction. This context
information is multimodal and exists at several levels: 1) raw information
obtained from sensors, 2) information obtained through categorization, and
3) relationships between categories.

In semantic interaction, it is important for the user and the machine to
share knowledge and an understanding of a given situation. Thus, it is
necessary to infer the user's intention and to represent the machine's inner
state naturally through speech, images, graphics, manipulators, and so on.
This is achieved based on the multimodal context information. Accordingly,
the development of multimodal interfaces is a very important research theme.

The goal of this workshop is to gather researchers active in the above
field, or related domains, to discuss theories, basic technologies, and
application systems. We are looking for position papers as well as research
papers that debate or contribute to the following (and other related) areas:

* Extraction of context information from the real world
* Situated interaction using context information
* Theories and basic technologies on the grounding of language
* Situated dialogue systems
* Human-robot interaction
* Inference of intention and mental state from the user's behavior
* Processing of emotion and paralinguistic information
* Embodiment in semantic interaction
* Adaptation, learning, and development for semantic interaction
* Active sensing for semantic interaction

The program committee welcomes the submission of long papers for full
plenary presentation, as well as short papers and demonstrations. Short
papers and demo descriptions will be featured in short plenary
presentations.
* Long papers must be no longer than 8 pages, including title, examples,
references, etc.
* Short papers and demo descriptions should aim to be 4 pages or less
(including title, examples, references, etc.).
Please use the official ACM format.
Any questions regarding submissions can be sent to the organizers.
Authors are encouraged to make illustrative materials available, on the web
or otherwise. 

Invited Speakers:

Deb Roy (Massachusetts Institute of Technology, USA) 
Gerhard Sagerer (Bielefeld University, Germany) 

Important Dates:

Paper submission deadline: August 24, 2007
Acceptance notification: September 14, 2007
Full-version camera-ready paper due: September 30, 2007


Organizers:

Naoto Iwahashi (National Institute of Information and Communications
Technology) [log in to unmask]
Mikio Nakano (Honda Research Institute Japan) [log in to unmask]

Program Committee:

Hideki Asoh (National Institute of Advanced Industrial Science and
Technology, Japan) 
Masahiro Araki (Kyoto Institute of Technology, Japan) 
Michael Beetz (Munich University of Technology, Germany) 
James Glass (Massachusetts Institute of Technology, USA) 
Christian Goerick (Honda Research Institute Europe, Germany) 
Tetsunari Inamura (National Institute of Informatics, Japan) 
Tetsunori Kobayashi (Waseda University, Japan) 
Helen Meng (Chinese University of Hong Kong, Hong Kong) 
Takayuki Nagai (The University of Electro-Communications, Japan) 
Tsuneo Nitta (Toyohashi University of Technology, Japan) 
Natsuki Oka (Kyoto Institute of Technology, Japan) 
Hiroyuki Okada (Tamagawa University, Japan) 
Hiroshi G. Okuno (Kyoto University, Japan) 
Candace Sidner (BAE Systems, USA) 
David Traum (University of Southern California, USA)
