CHI-ANNOUNCEMENTS Archives

ACM SIGCHI General Interest Announcements (Mailing List)

CHI-ANNOUNCEMENTS@LISTSERV.ACM.ORG

Sender: "ACM SIGCHI General Interest Announcements (Mailing List)" <[log in to unmask]>
Date: Mon, 13 Aug 2007 10:56:50 +0900
From: Iwahashi Naoto <[log in to unmask]>
Reply-To: Iwahashi Naoto <[log in to unmask]>
Cc: NAKANO Mikio <[log in to unmask]>
===== Call For Papers ===== 
Deadline Extension: New Deadline AUGUST 31 
The submission site is open: https://precisionconference.com/~icmi/ 

Workshop on Multimodal Interfaces in Semantic Interaction
  http://www.slc.atr.jp/iwmisi/ 

at International Conference on Multimodal Interfaces
  http://www.acm.org/icmi/2007/ 

Date: November 15, 2007
Venue: Nagoya Noh Theater, Japan

Invited Speakers: 
Gerhard Sagerer (Bielefeld University, Germany) 
Jordan Zlatev (Lunds University, Sweden) 

Important Dates:
Paper Submission deadline: August 31 (Extended) 
Acceptance notification: September 14 
Full-version camera-ready paper due: September 30 

With advances in ubiquitous networks, data mining, communication robots,
and sensing technologies, a wide range of real-world information has become
available in real time. Presented to the user, this information not only
supports his or her intellectual activities but can also serve as context,
opening great possibilities for realizing situated intelligent functions.

Information systems and robots that support human activities in everyday
life should ideally be able to interact with humans adaptively according to
context, such as the situation in the real world and each human's individual
characteristics. For instance, they might understand the user's intention
from his or her utterances and provide suitable information at the
appropriate time.
To realize such interaction, that is, semantic interaction, it is necessary
to extract from the obtained real-world information the context information
needed for understanding the interaction. This context information is
multimodal and spans several levels: 1) raw information obtained from
sensors, 2) information obtained through categorization, and 3) the
relationships between categories.

In semantic interaction, it is important for the user and the machine to
share knowledge and an understanding of a given situation. This requires
inferring the user's intention and representing the machine's inner state
naturally through speech, images, graphics, manipulators, and so on, based
on the multimodal context information. The development of multimodal
interfaces is therefore a very important research theme.

The goal of this workshop is to gather researchers active in the above
field, or related domains, to discuss theories, basic technologies, and
application systems. We are looking for position papers as well as research
papers that debate or contribute to the following (and other related) areas:

* Extraction of context information from the real world
* Situated interaction using context information
* Theories and basic technologies on the grounding of language
* Situated dialogue systems
* Human-robot interaction
* Inference of intention and mental state from the user's behavior
* Processing of emotion and paralinguistic information
* Embodiment in semantic interaction
* Adaptation, learning, and development for semantic interaction
* Active sensing for semantic interaction

The program committee welcomes the submission of long papers for full
plenary presentation as well as short papers and demonstrations. Short
papers and demo descriptions will be featured in short plenary
presentations.
* Long papers must be no longer than 8 pages, including title, examples,
references, etc. 
* Short papers and demo descriptions must be no longer than 4 pages
(including title, examples, references, etc.).
Please use the official ACM format: 
http://www.acm.org/sigs/pubs/proceed/template.html
Papers must be submitted electronically in PDF format via:
https://precisionconference.com/~icmi/

Any questions regarding submissions can be sent to the organizers.
Authors are encouraged to make illustrative materials available, on the web
or otherwise. 

Organizers:
Naoto Iwahashi (National Institute of Information and Communications
Technology) [log in to unmask] 
Mikio Nakano (Honda Research Institute Japan) [log in to unmask] 

Program Committee:
Hideki Asoh (National Institute of Advanced Industrial Science and
Technology, Japan)
Masahiro Araki (Kyoto Institute of Technology, Japan) 
Michael Beetz (Munich University of Technology, Germany) 
James Glass (Massachusetts Institute of Technology, USA) 
Christian Goerick (Honda Research Institute Europe, Germany) 
Tetsunari Inamura (National Institute of Informatics, Japan) 
Frederic Kaplan (Ecole Polytechnique Federale de Lausanne, France)
Tetsunori Kobayashi (Waseda University, Japan) 
Helen Meng (Chinese University of Hong Kong, Hong Kong) 
Takayuki Nagai (The University of Electro-Communications, Japan) 
Tsuneo Nitta (Toyohashi University of Technology, Japan) 
Natsuki Oka (Kyoto Institute of Technology, Japan) 
Hiroyuki Okada (Tamagawa University, Japan) 
Hiroshi G. Okuno (Kyoto University, Japan) 
Candace Sidner (BAE Systems, USA) 
David Traum (University of Southern California, USA) 

=======================

    ---------------------------------------------------------------
                To unsubscribe, send an empty email to
     mailto:[log in to unmask]
    For further details of CHI lists see http://sigchi.org/listserv
    ---------------------------------------------------------------
