CHI-RESOURCES Archives

ACM SIGCHI Resources (Mailing List)

CHI-RESOURCES@LISTSERV.ACM.ORG

Subject:
From: Brent Beckley <[log in to unmask]>
Reply To: Brent Beckley <[log in to unmask]>
Date: Thu, 4 May 2017 11:23:51 -0400
Content-Type: text/plain
Parts/Attachments: text/plain (77 lines)
Morgan & Claypool Publishers and ACM Books are proud to announce the recent
publication of a new book in the ACM Books series. Be sure to visit our
booth at CHI 2017!

 

The Handbook of Multimodal-Multisensor Interfaces, Volume 1: Foundations,
User Modeling, and Common Modality Combinations

By Sharon Oviatt, Björn Schuller, Philip Cohen, Daniel Sonntag, Gerasimos
Potamianos, Antonio Krüger

ISBN: 9781970001648  | PDF ISBN: 9781970001655 | Hardcover ISBN:
9781970001679

Copyright © 2017 | 636 Pages | Publication Date: May 2017
Print (Individual eBook):
http://www.morganclaypoolpublishers.com/catalog_Orig/product_info.php?products_id=1067
Prepub Backorders Available

 

The Handbook of Multimodal-Multisensor Interfaces, Volume 1 provides the
first authoritative resource on what has become the dominant paradigm for
new computer interfaces: user input involving new media (speech, multi-touch,
gestures, writing) embedded in multimodal-multisensor interfaces. These
interfaces support smartphones, wearables, in-vehicle and robotic
applications, and many other areas that are now highly competitive
commercially. This edited collection, written by international experts and
pioneers in the field, provides a textbook, reference, and technology
roadmap for professionals working in this and related areas.

This first volume of the handbook presents relevant theory and neuroscience
foundations for guiding the development of high-performance systems.
Additional chapters discuss approaches to user modeling and interface
designs that support user choice, that synergistically combine modalities
with sensors, and that blend multimodal input and output. The volume also
takes an in-depth look at the most common multimodal-multisensor
combinations, for example touch and pen input, haptic and non-speech audio
output, and speech-centric systems that co-process gestures, pen input,
gaze, or visible lip movements. A common theme throughout these chapters is
supporting mobility and individual differences among users. The chapters
provide walkthrough examples of system design and processing, information
on tools and practical resources for developing and evaluating new systems,
and terminology and tutorial support for mastering this emerging field. In
the final section of the volume, experts exchange views on a timely and
controversial challenge topic and on how they believe multimodal-multisensor
interfaces should be designed in the future to most effectively advance
human performance.

 

If you have any questions, please contact me directly at
[log in to unmask]

 

Brent Beckley
Direct Marketing Manager
Morgan & Claypool Publishers

@MorganClaypool (Twitter): https://twitter.com/MorganClaypool

 




    ---------------------------------------------------------------
                To unsubscribe, send an empty email to
       mailto:[log in to unmask]
    For further details of CHI lists see http://listserv.acm.org
    ---------------------------------------------------------------
