ACM SIGCHI General Interest Announcements (Mailing List)


Fri, 15 May 2015 13:36:36 +0000
Marco Porta <[log in to unmask]>
Call for Papers (apologies for multiple copies)

MHMC 2015 – 1st International Workshop on Multimodal Interaction in 
Industrial Human-Machine Communication

In connection with the 20th IEEE International Conference on Emerging 
Technologies and Factory Automation

September 8, 2015, Luxembourg

(deadline extended to May 31)

Aims and scope

Industrial environments today are full of sophisticated 
computer-controlled machines, and pervasive and ubiquitous computing 
provides further support for advanced activity control. Although 
operating this equipment is usually entrusted to specialized workers 
who have been purposely trained for it, easy and effective interaction 
remains a key factor that can bring many benefits – from faster task 
completion to error prevention, reduced cognitive load and higher 
employee satisfaction.

Multimodal interaction means using “non-conventional” input and/or 
output tools and modalities to communicate with a device. Multimodal 
contents are built, organized and displayed through multiple sensory 
channels at the same time; the defining feature of multimodal 
interfaces, however, is that they combine multiple input modes — 
usually more “natural” than traditional input devices, such as touch, 
speech, hand gestures, head/body movements and eye gaze — with 
solutions in which different output modalities are used in a 
coordinated manner — such as visual displays (e.g. virtual and 
augmented reality), auditory cues (e.g. conversational agents) and 
haptic systems (e.g. force feedback controllers). Besides handling 
input fusion, multimodal interfaces can also handle output fission, in 
an essentially dynamic process. Sophisticated multimodal interfaces 
can integrate complementary modalities to exploit the strengths of 
each mode and overcome its weaknesses. In addition, they can adapt to 
different environmental situations as well as to different user 
(sensory/motor) abilities.

Although multimodal interaction is becoming more and more common in 
everyday life, industrial applications are still rather few, in spite 
of their potential advantages. For example, a camera could allow a 
machine to be controlled through hand gesture commands, or the user 
might be monitored in order to detect potentially dangerous behaviors. 
Likewise, an augmented or virtual reality system could provide an 
equipment operator with sophisticated visual cues, while auditory or 
olfactory displays might serve as an additional alerting mechanism in 
risky environments. Besides being used in real working situations to 
increase the amount and quality of available information, 
augmented/virtual reality interaction can also be used to implement 
effective and safe training plans. Being relieved of the anxiety 
associated with using real equipment, which is often expensive and/or 
dangerous, can improve trainees' experience, increase their 
proficiency and lead to better results.

This workshop aims to gather works presenting different forms of 
multimodal interaction in industrial processes, equipment and 
settings, with a twofold purpose:

·         Taking stock of the current state of multimodal systems for 
industrial applications.

·         Being a showcase to demonstrate the potential of multimodal 
communication to those who have never considered its application in 
industrial settings.

Both proposals of novel applications and papers describing user studies 
are welcome.

Important dates

Deadline for submission of workshop papers: May 31

Notification of acceptance of workshop papers: June 20

Deadline for submission of final workshop papers: July 1

Summary of topics

Topics of interest include, but are not limited to, the following:

1.    Multimodal Input

·         Vision-based input

·         Speech input

·         Tangible interfaces

·         Motion tracking sensors

2.    Multimodal Output

·          Virtual Reality

·          Augmented Reality

·          Auditory displays

·          Haptic (or tactile) interfaces

·          Olfactory displays

3.    Combination of “traditional” input and output modalities and 
multimodal solutions

Any form of integration of conventional input and output modalities 
(e.g. keyboard, mouse, buttons, standard LCD monitors, audiovisual 
content, etc.) with multimodal communication.

Submission of papers

The working language of the workshop is English. Papers are limited to 
8 double-column pages in a font no smaller than 10 points. Manuscripts 
must be submitted electronically in PDF format, PDF version 1.4 
(Acrobat 5) through 1.7 (Acrobat 9). Guidelines for preparing 
proceedings-style manuscripts are available as templates for Microsoft 
Word, OpenOffice and LaTeX users.

You can find information about the formatting specifications for IEEE 
Xplore at the following places:

·         Creating manuscripts for IEEE IES conferences 

·         Instructions on how to convert Microsoft Word .doc files to 
an IEEE-compliant PDF

Paper Acceptance

Each accepted paper must be presented at the workshop by one of the 
authors. Workshop papers will be included in IEEE Xplore. The final 
manuscript must be accompanied by a registration form and proof of 
payment of the registration fee. All conference attendees, including 
authors and session chairpersons, must pay the conference registration 
fee and cover their own travel expenses.

No-show Policy

The ETFA 2015 Organizing Committee reserves the right to exclude a 
paper from post-conference distribution on IEEE Xplore if the paper is 
not presented at the workshop.

Workshop Co-Chairs:

·         Maria De Marsico, “Sapienza” University of Rome, Italy
Email: [log in to unmask]

·         Giancarlo Iannizzotto, University of Messina, Italy
Email: [log in to unmask]

·         Marco Porta, University of Pavia, Italy
Email: [log in to unmask]

More information is available on the workshop page and on the 
conference website.

Marco Porta, Ph.D.
Computer Vision & Multimedia Lab
Dipartimento di Ingegneria Industriale e dell'Informazione
Università di Pavia - Via Ferrata 5 - 27100 - Pavia - Italy
Phone: +39 0382 985486, Fax: +39 0382 985373
E-mail: [log in to unmask]
Skype: marco.porta_00

    For news of CHI books, courses & software, join CHI-RESOURCES
     mailto: [log in to unmask]

    To unsubscribe from CHI-ANNOUNCEMENTS send an email to
     mailto:[log in to unmask]

    For further details of CHI lists see