CHI-ANNOUNCEMENTS Archives

ACM SIGCHI General Interest Announcements (Mailing List)

CHI-ANNOUNCEMENTS@LISTSERV.ACM.ORG

Subject:
From: Jonathan Grudin <[log in to unmask]>
Reply To: Jonathan Grudin <[log in to unmask]>
Date: Thu, 24 Jan 2002 05:43:49 -0800
Content-Type: text/plain
Parts/Attachments: text/plain (102 lines)
Submission due date: June 1, 2002

ACM Transactions on Computer-Human Interaction is soliciting
manuscripts for a special issue on Mobile and Adaptive Conversational
Interfaces, described in detail below by the guest editors.

Spoken language and multimodal interfaces have always been part of
HCI and CHI, although papers in the area have not been numerous. This
may be related to the fact that despite high expectations and
expenditures in many quarters, these interfaces have not succeeded in
the discretionary-use contexts that CHI has focused on. Sharon Oviatt
is among the few researchers in this area who have contributed
regularly to CHI. Work of Oviatt and her collaborators has been
inspired and rigorous, and I hope that this issue edited by her and
longtime MIT researcher Stephanie Seneff draws equally thoughtful and
rigorous submissions that provide the rest of the CHI community a
view of the current state of this field.

Jonathan Grudin
Editor, ACM Transactions on Computer-Human Interaction



MOBILE AND ADAPTIVE CONVERSATIONAL INTERFACES

A Special Issue of ACM Transactions on Computer-Human Interaction (TOCHI)

Editors:   Sharon Oviatt ([log in to unmask]) and Stephanie Seneff
([log in to unmask])

Conversational interfaces represent a new direction and challenge for
the interface design community. While past spoken language interfaces
have focused on command and control and dictation-style interaction,
research is now attempting to design interfaces that can process
increasingly spontaneous, interactive, and natural conversational
spoken language. As part of this trend, some newer interfaces are
beginning to recognize speech in combination with related input
modalities (e.g., touch, gesture, gaze, facial expressions) and
sensors (e.g., location, proximity). Multimodal or sensor-enhanced
conversational interfaces aim to expand the interface's flexibility
and expressive power, while also supporting the system's capacity to
process information more reliably and transparently when users are
mobile in adverse field environments. Broadly defined, conversational
interfaces also encompass those designed to support the transmission
of interpersonal or human-computer conversation, as in telephony
systems, without necessarily aiming to recognize a user's spoken
language or other input. For example, innovative cell phone
interfaces may be designed to manage conversation in public places,
permit multimodal interaction, or enhance user safety while mobile.
All of these types of conversational interfaces focus on supporting
users as they engage in communication-intensive tasks, whether
conversation is recognized or simply transmitted via the interface,
and all would be appropriate submissions for this special issue.
Submissions on interfaces for transmitting conversation should
specifically address mobile and adaptive processing issues.

Another key trend in conversational interface design is the need for
tailoring to meet the needs and usage patterns of individual users,
especially while mobile. Successful personalization of conversational
interfaces and system processing requires new interfaces, techniques,
and architectures that are capable of strategically adapting
processing to a particular user, task or activity, dialogue, input
modes, and environmental context. Research demonstrating advances in
this area, which often represents the intersection of human-computer
interaction and artificial intelligence, is also welcome for this
special issue. For example, this may include research aimed at
adapting system processing to accommodate an individual user's
cognitive load, or to direct and adapt services to accommodate a
user's current location and availability for interruption. In
addition, modeling of the adaptive patterns that occur naturally
during communication between interlocutors, for example between a
user and an animated character that responds with text-to-speech and
nonverbal behavior, can provide an empirical foundation for
developing effective adaptive strategies used in next-generation
conversational systems.

Submissions on conversational interface design in any of the above
areas may focus on design, empirical, or system implementation
issues, as long as they make an original, high-quality contribution
to the understanding and realization of effective conversational
systems. Since advances on these topics ideally require
cross-fertilization of perspectives and techniques from different
research areas, including human-computer interaction, speech and
multimodal technologies, telecommunications, ubiquitous and mobile
computing, and artificial intelligence, submissions that span or
"bridge" these and related areas are especially welcome.
Surveys and reviews will also be considered. In all cases, however,
submissions to this TOCHI special issue should be written for a
generalist HCI audience, including definition of terms, explanatory
background, and HCI-relevant literature on the main research topic
addressed.

Further information regarding guidelines for preparing and formatting
manuscripts, as well as general TOCHI submission procedures, is available at:
http://www.acm.org/tochi/. Please indicate in your cover letter that
your manuscript is being submitted to the special issue on "Mobile
and Adaptive Conversational Interfaces." The deadline for receiving
submissions is June 1st, 2002. All contributions will be peer
reviewed to the usual standard of TOCHI. For further information or
to discuss a possible contribution, please contact Sharon Oviatt
([log in to unmask]).
