ACM SIGCHI General Interest Announcements (Mailing List)


From: Vincent Aleven <[log in to unmask]>
Date: Fri, 1 Mar 2002 19:11:12 -0500
With apologies for multiple postings.


                            ITS 2002 Workshop on
                 Empirical Methods for Tutorial Dialogue Systems

                         June 3 or 4 (to be announced)
                              San Sebastian, Spain

To be held in conjunction with ITS 2002, the Sixth International
Conference on Intelligent Tutoring Systems, June 5-8, 2002, Biarritz,
France.


Deadline for paper submissions: March 31, 2002.

Tutorial Dialogue Systems are currently an area of great emphasis in
the research field of Intelligent Tutoring Systems. This workshop
will focus on:
    * issues surrounding the evaluation of tutorial dialogue systems;
    * issues surrounding the analysis and annotation of naturalistic
      tutorial dialogue corpora, as well as logfiles of human-computer
      tutoring dialogues;
    * publicly available resources for corpus analysis and for
      building and evaluating tutorial dialogue systems.

As a sign that the area of Tutorial Dialogue Systems is maturing,
many implemented systems have already undergone a rigorous evaluation
and others are approaching this stage. Nevertheless, it is far from
clear what the best approach is for evaluating these types of
systems. While student learning is key, just as it is when evaluating
other types of intelligent tutoring systems, some answers and methods
will be different when dealing with dialogue systems, due to the fine
grain size and linguistic nature of the data. For example, what are
key indicators of dialogue effectiveness and how do they relate to
learning? We encourage the submission of papers describing overall
system evaluations, specific component evaluations, and position
papers discussing issues related to evaluating tutorial dialogue
systems.

In addition to the obvious value of successful system evaluations,
much can be learned from "failed evaluations". Thus, as part of our
system evaluation segment, we would like to provide a forum in which
failed evaluations can be discussed so that we can learn from each
other's experiences in this regard. To further provide a venue for
learning from one another's experiences, we would like to invite the
submission of papers describing experimental designs for planned
evaluations to be discussed by the workshop participants.

Finally, a great deal of work has been done in the Tutorial Dialogue
community recently to collect and analyse corpora of human-human and
human-machine tutorial dialogues. This has brought to the foreground
open questions related to the coding schemes being developed for
tutorial dialogue, the kinds of features of tutorial dialogue that
should be captured in the coding schemes, and the level of analysis
that is most useful, for example, to help designers of
Tutorial Dialogue Systems. We would like to provide a forum in which
experiences in annotating corpora, findings from corpus studies,
alternative annotation approaches, and surrounding issues can be
discussed.

The format of the workshop will be short paper presentations followed
by commentary by designated commentators and group discussion. We
will also have demo and poster presentations, and possibly panel
discussions. We encourage the submission of papers, posters, and
demos related to any of the three focus areas of the workshop.

We invite the submission of long papers (up to 10 pages), short
papers (up to 4 pages), and demo/poster abstracts (up to 2 pages).
Page limits include tables, figures, and references but exclude the
cover page. Each submission must include a cover page listing:
    * Title of the paper with an abstract of no more than 500 words;
    * A few keywords giving a clear indication of topic and subtopic;
    * Author names with affiliations, addresses, and phone numbers;
    * Email address of the principal author.

Accepted papers will be published in the proceedings of the workshop.

The deadlines are:
March 31: submission of proposed papers,
April 14: paper acceptance notification,
April 22: final version of accepted submissions.
Electronic submissions are preferred, ideally in PDF format. Send
submissions by e-mail to [log in to unmask] or by post to: Vincent
Aleven, HCI Institute, Carnegie Mellon University, 5000 Forbes Ave,
Pittsburgh, PA 15213, USA.

Organizing committee:
Carolyn Penstein Rose (co-chair), University of Pittsburgh, USA
Vincent Aleven (co-chair), Carnegie Mellon University, USA
Jeff Rickel, University of Southern California, USA
Johanna Moore, University of Edinburgh, UK
Art Graesser, University of Memphis, USA
Pamela Jordan, University of Pittsburgh, USA
Diane Litman, University of Pittsburgh, USA
Barbara Di Eugenio, University of Illinois at Chicago, USA
Jack Mostow, Carnegie Mellon University, USA
Mark Core, University of Edinburgh, UK
Claus Zinn, University of Edinburgh, UK