CHI-ANNOUNCEMENTS Archives

ACM SIGCHI General Interest Announcements (Mailing List)

CHI-ANNOUNCEMENTS@LISTSERV.ACM.ORG

Date: Mon, 25 Jul 2016 03:43:29 -0700
From: Francesca Bonin <[log in to unmask]>
Reply-To: Francesca Bonin <[log in to unmask]>
Sender: "ACM SIGCHI General Interest Announcements (Mailing List)" <[log in to unmask]>
 ****Apologies for multiple posting****


We’d like to draw your attention to the upcoming third edition of the
Multimodal Analyses enabling Artificial Agents in Human-Machine Interaction
(MA3HMI 2016) workshop.

This year, the workshop will be held in conjunction with ICMI
(http://icmi.acm.org/2016/) in Tokyo, Japan. We expect a diverse audience to
present and discuss current challenges and research in multimodal analyses
for artificial agents.

 

Please find below the call for papers.

We hope to see you in Tokyo!

 
 
*3rd International Workshop on Multimodal Analyses enabling Artificial Agents
in Human-Machine Interaction (MA3HMI 2016)*
 

November 16th, 2016 in Tokyo, Japan. In conjunction with ICMI2016.

http://MA3HMI.cogsy.de

 

Scope

One of the aims of building multimodal user interfaces and combining them
with technical devices is to make the interaction between user and system as
natural as possible. The most natural form of interaction may be the way we
interact with other humans. Current technology is still far from human-like,
and existing systems reflect a wide range of technical solutions.
Transferring insights from the analysis of human-human communication to
human-machine interaction remains challenging: the multimodal inputs from
the user (e.g., speech, gaze, facial expressions) must be recorded and
interpreted, and this interpretation has to occur at both the semantic and
affective levels, covering aspects such as the personality, mood, or
intentions of the user. These processes have to be performed in real time so
that the system can respond without delays and keep the interaction smooth.

The MA3HMI workshop aims at bringing together researchers working on the
analysis of multimodal data as a means to develop technical devices that can
interact with humans. Artificial agents are understood here in their
broadest sense, including virtual chat agents, empathic speech interfaces
and lifestyle coaches on a smartphone. More generally, multimodal analyses
support any technical system in the research area of human-machine
interaction. We focus on the real-time aspects of human-machine interaction
and address the development and evaluation of multimodal, real-time systems.
We solicit papers that concern the different phases of the development of
such interfaces. Tools and systems that address real-time conversations with
artificial agents and technical systems are also within the scope of the
workshop.

 

Topics

a) Multimodal Annotation

- Representation formats for merged multimodal annotations

- Best practices for multimodal annotation procedures

- Innovative multimodal annotation schemes

- Annotation and processing of multimodal data sets

- Real-time or on-the-fly annotation approaches

b) Multimodal Analyses

- Multimodal understanding of user behavior and affective state

- Dialogue management using multimodal output

- Evaluation and benchmarking of human-machine conversations

- Novel strategies of human-machine interactions

- Using multimodal data sets for human-machine interaction

c) Applications, Tools, and Systems

- Novel application domains and embodied interaction

- Prototype development and uptake of technology

- User studies with (partial) functional systems

- Tools for the recording, annotation and analysis of conversations

 

Important Dates

Submission Deadline: August 28th, 2016

Notification of Acceptance: October 2nd, 2016

Camera-ready Deadline: October 9th, 2016

Workshop Date: November 16th, 2016

 

Submissions

Prospective authors are invited to submit full papers (8 pages) and short
papers (5 pages) in ACM format as specified by ICMI 2016. Accepted papers
will be published as post-proceedings in the ACM Digital Library. All
submissions should be anonymous.

 

Organizers

Ronald Böck, Otto von Guericke University Magdeburg, Germany

Francesca Bonin, IBM Research, Ireland

Nick Campbell, Trinity College Dublin, Ireland

Ronald Poppe, Utrecht University, the Netherlands




