Please note: The submission deadline for this workshop has been extended to 9 May 2005.


JULY 5, 2005 (envisaged time: 3 - 6 p.m.)

ICAD05, University of Limerick, Ireland (www.icad.org)


Peter Froehlich ([log in to unmask])

ftw. Telecommunications Research Centre Vienna, Austria

Michael Pucher

ICSI, International Computer Science Institute, Berkeley, California

ftw. Telecommunications Research Centre Vienna, Austria


Designers of user interfaces for the auditory modality often face a fundamental decision: when to use speech, when to use non-speech sound, and how to combine the two. Related design questions concern the constraints and opportunities of sound and speech for different types of systems, e.g. multimedia systems, mobile applications, speech telephony services, or assistive applications for people with disabilities.

We have five goals for the workshop:

1. To provide a comparative up-to-date overview of research into speech and sound in the user interface.

2. To inform the auditory display community about the current status of speech technology and opportunities for auditory interfaces.

3. To stimulate more empirical evidence about the usefulness and usability of speech and sound in various usage contexts.

4. To demonstrate current best-practice examples for combining speech and sound in the interface.

5. To foster an informed discussion about ways to integrate speech and sound into auditory user interfaces.

The workshop will consist of introductory talks by the organisers, a session for papers and demonstrations (see submission details below), and a concluding panel discussion.


We are interested in two types of submissions:

-Papers: Papers should provide empirical evidence about the usefulness and usability of speech, sound, and combinations thereof in various usage contexts. We particularly encourage systematic user-based investigations (experiments, focus groups, usability evaluations, etc.). The influence of environmental factors, application areas, and user groups deserves much more investigation, as do paralinguistic cues, mainly prosody, but also the use of different voices in speech synthesis.

-Demonstrations: Demonstrations should showcase applications and systems that combine speech and sound in an interesting way. Contributions could include, but are not limited to: designs for improving user interface aesthetics, enabling technologies (e.g. mark-up languages), auditory interface concepts that combine speech and non-speech audio input and output, or speech-based interfaces with earcons or auditory icons.

The deadline for paper and demonstration submissions has been extended to 9 May 2005. Please send ideas, drafts, and results to [log in to unmask]

For more information, please see the original text of our workshop proposal at http://userver.ftw.at/~frohlich/workshop.htm 

Please do not hesitate to contact us for further information:


Mag. Peter Fröhlich

ftw. Telecommunications Research Centre Vienna

Tech Gate Vienna | Donau-City-Str. 1 | A-1220 Wien

tel: +43 1 5052 830 85

fax: +43 1 5052 830 99

email: [log in to unmask]

www: http://userver.ftw.at/~frohlich/

