CHI-ANNOUNCEMENTS Archives

ACM SIGCHI General Interest Announcements (Mailing List)

CHI-ANNOUNCEMENTS@LISTSERV.ACM.ORG

Date: Tue, 11 Feb 2014 09:57:44 +0100
From: Gabriel Skantze <[log in to unmask]>
Sender: "ACM SIGCHI General Interest Announcements (Mailing List)" <[log in to unmask]>

*Aims and Scope*

Speech-based communication with robots faces important challenges when
deployed in real-world scenarios. In contrast to conventional interactive
systems, a talking robot must always take its physical environment into
account when communicating with users. This environment is typically
unstructured, dynamic and noisy, which raises important challenges. The
objective of this special issue is to highlight research that applies
speech and language processing to robots that interact with people through
speech as the main modality of interaction. For example, a robot may need
to communicate with users via distant speech recognition and understanding
under constantly changing noise conditions. Alternatively, the robot may
need to coordinate its verbal turn-taking behaviour with its non-verbal
behaviour, such as generating speech and gestures at the same time. Speech
and language technologies have the potential to equip robots to interact
more naturally with humans, but their effectiveness remains to be
demonstrated. This special issue aims to help fill this gap.

The topics listed below indicate the range of work relevant to this
special issue; each article will normally address one or more of these
topics. If in doubt about the relevance of your topic, please contact the
special issue associate editors.

*Topics*

   - sound source localization
   - voice activity detection
   - speech recognition and understanding
   - speech emotion recognition
   - speaker and language recognition
   - spoken dialogue management
   - turn-taking in spoken dialogue
   - spoken information retrieval
   - spoken language generation
   - affective speech synthesis
   - multimodal communication
   - evaluation of speech-based human-robot interactions

*Special Issue Associate Editors*

Heriberto Cuayáhuitl, Heriot-Watt University, UK (contact: [log in to unmask])
Kazunori Komatani, Nagoya University, Japan
Gabriel Skantze, KTH Royal Institute of Technology, Sweden

*Paper Submission*

All manuscripts and any supplementary materials must be submitted through
the Elsevier Editorial System at http://ees.elsevier.com/csl/. Detailed
submission guidelines are available in the "Guide for Authors" at
http://www.elsevier.com/journals/computer-speech-and-language/0885-2308/guide-for-authors.
Please select "SI: SL4IR" as the Article Type when submitting manuscripts.
For further details, or a more in-depth discussion about topics or
submission, please contact the special issue associate editors.

*Dates*

23 May 2014: Submission of manuscripts
23 August 2014: Notification about decisions on initial submissions
23 October 2014: Submission of revised manuscripts
10 January 2015: Notification about decisions on revised manuscripts
01 March 2015: Submission of manuscripts with final minor changes
31 March 2015: Announcement of the special issue articles on the CSL
website http://www.journals.elsevier.com/computer-speech-and-language/

    ---------------------------------------------------------------
    For news of CHI books, courses & software, join CHI-RESOURCES
     mailto: [log in to unmask]

    To unsubscribe from CHI-ANNOUNCEMENTS send an email to
     mailto:[log in to unmask]

    For further details of CHI lists see http://listserv.acm.org
    ---------------------------------------------------------------
