CHI-ANNOUNCEMENTS Archives

ACM SIGCHI General Interest Announcements (Mailing List)

CHI-ANNOUNCEMENTS@LISTSERV.ACM.ORG

Subject:
From:
Maria De Marsico <[log in to unmask]>
Reply To:
Maria De Marsico <[log in to unmask]>
Date:
Tue, 26 Apr 2016 13:27:05 +0200
*Call for Papers (apologies for multiple copies) *



*MHMC 2016 – International Workshop on Multimodal Interaction in Industrial
Human-Machine Communication*



*In connection with the 21st IEEE International Conference on Emerging
Technologies and Factory Automation*



*September 6, 2016, Berlin*



*http://etfa2016.org/images/track-cfp/MHMC_CfP.pdf*



*Aims and Objectives*

Nowadays, industrial environments are full of sophisticated
computer-controlled machines. In addition, recent developments in pervasive
and ubiquitous computing provide further support for advanced activity
control. Even though the operation of these technologies is very often
entrusted to specialized workers, who have been purposely trained to
use complex equipment, easy and effective interaction is a key factor that
can bring many benefits – from faster task completion to error prevention,
cognitive load reduction and higher employee satisfaction.

Multimodal interaction means using “non-conventional” input and/or output
tools and modalities to communicate with a device. The main purpose
of multimodal interfaces is to combine multiple input modes — usually
more “natural” than traditional input devices, such as touch, speech, hand
gestures, head/body movements and eye gaze — with solutions in which
different output modalities are used in a coordinated manner — such as
visual displays (e.g. virtual and augmented reality), auditory cues (e.g.
conversational agents) and haptic systems (e.g. force feedback
controllers). Besides handling input fusion, multimodal interfaces can also
handle output fission, in an essentially dynamic process. Sophisticated
multimodal interfaces can integrate complementary modalities to exploit the
strengths of each mode and overcome their weaknesses. In addition, they can
adapt to different environmental situations as well as to different user
(sensory/motor) abilities.

Although multimodal interaction is becoming more and more common in our
everyday life, industrial applications are still rather few, in spite of
their potential advantages. For example, a camera could allow a machine to
be controlled through hand gesture commands, or the user might be monitored
in order to detect potentially dangerous behaviors. On the other hand, an
augmented or virtual reality system could be employed to provide an
equipment operator with sophisticated visual cues, while auditory/olfactory
displays might be used as an additional alerting mechanism in risky
environments. Besides being used in real working situations to increase the
amount and quality of available information, augmented/virtual reality
interaction can also be exploited to implement an effective and safe
training plan.



This workshop aims at gathering works presenting different forms of
multimodal interaction in industrial processes, equipment and settings with
a twofold purpose:

·         Taking stock of the current state of multimodal systems for
industrial applications.

·         Being a showcase to demonstrate the potential of multimodal
communication to those who have never considered its application in
industrial settings.



Both proposals of novel applications and papers describing user studies are
welcome.




*Summary of topics*

Topics of interest include, but are not limited to, the following:



*1. Multimodal Input*

·         *Vision-based input*

·         *Speech input*

·         *Tangible interfaces*

·         *Motion tracking sensors*

*2. Multimodal Output*

·         *Virtual Reality*

·         *Augmented Reality*

·         *Auditory displays*

·         *Haptic (or tactile) interfaces*

·         *Olfactory displays*

*3. Combination of “traditional” input and output modalities and
multimodal solutions*

Any form of integration of conventional input and output modalities (e.g.
keyboard, mouse, buttons, standard LCD monitors, audiovisual content, etc.)
with multimodal communication.



*Important dates*

Deadline for submission of workshop papers: *May 20*

Notification of acceptance of workshop papers: *July 10*

Deadline for submission of final workshop papers: *July 30*



*Submission of papers*

The working language of the workshop is English. Papers are limited to 8
double-column pages in a font no smaller than 10 points.

Manuscripts can be submitted here:
http://etfa2016.org/2015-08-23-13-16-59/submit-papers.



*Workshop Co-Chairs:*

·         Maria De Marsico, “Sapienza” University of Rome, Italy
Email: [log in to unmask]

·         Giancarlo Iannizzotto, University of Messina, Italy
Email: [log in to unmask]

·         Marco Porta, University of Pavia, Italy
Email: [log in to unmask]


More information can be found on the workshop page (
http://etfa2016.org/etfa-2016/workshops) and on the conference website (
http://etfa2016.org/).


-- 
---------------------------------------------------------------------------
Maria De Marsico
Associate Professor
Sapienza University of Rome
Department of Computer Science
Via Salaria 113 - 00198 Rome - Italy
email: [log in to unmask]
tel: +39 06 49918312

    ---------------------------------------------------------------
    For news of CHI books, courses & software, join CHI-RESOURCES
     mailto: [log in to unmask]

    To unsubscribe from CHI-ANNOUNCEMENTS send an email to
     mailto:[log in to unmask]

    For further details of CHI lists see http://listserv.acm.org
    ---------------------------------------------------------------
