CHI-ANNOUNCEMENTS Archives

ACM SIGCHI General Interest Announcements (Mailing List)

CHI-ANNOUNCEMENTS@LISTSERV.ACM.ORG

From: Dirk Schnelle-Walka <[log in to unmask]>
Reply-To: Dirk Schnelle-Walka <[log in to unmask]>
Date: Thu, 25 Jan 2018 10:11:12 +0100
Content-Type: text/plain
Parts/Attachments: text/plain (142 lines)
Multimodal Interaction in Automotive Applications 

================================================= 

 

With smartphones becoming ubiquitous, pervasive distributed computing is
becoming a reality, and the Internet of Things is finding its way into many
aspects of our daily lives. Users interact multimodally with their
smartphones, and their expectations regarding natural interaction have risen
dramatically in recent years. Moreover, users have started to project these
expectations onto every kind of interface they encounter in daily life. Car
manufacturers do not yet fully meet these expectations, since automotive
development cycles are still much longer than those in the software industry.
The clear trend, however, is that manufacturers add technology to cars to
deliver on their vision and promise of a safer drive. Multiple modalities are
already available in today’s dashboards, including haptic controllers, touch
screens, 3D gestures, voice, secondary displays, and gaze.

In fact, car manufacturers are aiming for a personal assistant with a deep
understanding of the car and the ability to meet both driving-related demands
and non-driving-related needs. For instance, such an assistant can naturally
answer any question about the car and help schedule service when needed. It
can find the preferred gas station along the route, or even better, plan a
stop and ensure the driver still arrives in time for a meeting. It understands
that a perfect business meal involves more than finding a sponsored
restaurant: it also considers unbiased reviews, availability, budget, and
trouble-free parking, and it notifies all invitees of the meeting time and
location. Moreover, multimodality can serve as a source for fatigue detection.
The main goal of multimodal interaction and driver assistance systems is to
ensure that the driver can focus on the primary task of driving safely.

 

This is why the biggest innovations in today’s cars have happened in the way
we interact with integrated devices such as the infotainment system. For
instance, voice-based interaction has been shown to be less distracting than
interaction with a visual-haptic interface, but it is only one piece of how we
interact multimodally in today’s cars as they shift away from the GUI as the
only means of interaction. This shift also demands additional effort to
establish a mental model for the user: with a plethora of available modalities
requiring multiple mental maps, learnability has decreased considerably.
Multimodality may also help here to reduce distraction. In this special issue
we will present the challenges and opportunities of multimodal interaction for
reducing cognitive load and increasing learnability, as well as current
research that has the potential to be employed in tomorrow’s cars.

For this special issue, we invite researchers, scientists, and developers to
submit contributions that are original, unpublished, and not under submission
to any other journal, magazine, or conference. We expect at least 30% novel
content. We are soliciting original research related to multimodal smart and
interactive media technologies in areas including, but not limited to, the
following:

* In-vehicle multimodal interaction concepts 

* Multimodal Head-Up Displays (HUDs) and Augmented Reality (AR) concepts 

* Reducing driver distraction and cognitive load and demand with multimodal
interaction 

* (Pro-active) in-car personal assistant systems 

* Driver assistance systems 

* Information access (search, browsing, etc.) in the car 

* Interfaces for navigation 

* Text input and output while driving 

* Biometrics and physiological sensors as a user interface component 

* Multimodal affective intelligent interfaces 

* Multimodal automotive user-interface frameworks and toolkits 

* Naturalistic/field studies of multimodal automotive user interfaces 

* Multimodal automotive user-interface standards 

* Detecting and estimating user intentions employing multiple modalities 

 

Guest Editors 

============= 

Dirk Schnelle-Walka, Harman International, Connected Car Division, Germany 

Phil Cohen, Voicebox, USA 

Bastian Pfleging, Ludwig-Maximilians-Universität München, Germany 

 

Submission Instructions 

======================= 

 

1-page abstract submission: 05.02.2018 

Invitation for full submission: 15.03.2018 

Full Submission: 28.04.2018 

Notification about acceptance: 15.06.2018 

Final article submission: 15.07.2018 

Tentative Publication: ~ 09/2018 

 

Companion website: https://sites.google.com/view/multimodalautomotive/ 

 

Authors are requested to follow the manuscript submission instructions of the
Journal of Multimodal User Interfaces
(http://www.springer.com/computer/hci/journal/12193) and to submit their
manuscripts at the following link:
https://easychair.org/conferences/?conf=mmautomotive2018.

 


    ---------------------------------------------------------------
    For news of CHI books, courses & software, join CHI-RESOURCES
     mailto: [log in to unmask]

    To unsubscribe from CHI-ANNOUNCEMENTS send an email to
     mailto:[log in to unmask]

    For further details of CHI lists see http://listserv.acm.org
    ---------------------------------------------------------------
