Designing Sketch Recognition Interfaces
A CHI 2010 Workshop
Sunday, April 11, 2010, Atlanta, GA
Tracy Hammond, Texas A&M University
Edward Lank, University of Waterloo
Aaron Adler, BBN Technologies
With help from selected members of the Sketch Recognition Lab at Texas
A&M University: Paul Taele, Josh Peschel, Marty Field, Manoj Prasad.
Workshop website: http://srl.csdl.tamu.edu/workshops/2010/chi/
Sketch recognition is the automated understanding of hand-drawn
diagrams. Many open problems still exist in designing sketch
recognition systems, including, but not limited to:
• How much user instruction and/or how many user-supplied training
examples are acceptable before a system becomes unusable? How does the
domain factor into the tradeoff between a ready-to-use system and
one requiring learning or training by the user?
• Feedback is invariably an important feature in sketch recognition
systems. What are various possible methods of feedback? Which types
of feedback are appropriate in which situations?
• Erasing is challenging to implement, especially when considering
aspects such as stroke-level versus bitmap-level erasing, and the
effects on re-recognition. What is the best way to implement erasing
in order to provide a truly usable drawing experience?
• The images displayed on the screen are of primary importance to the
user. Displaying a user’s original strokes has significant benefits,
as does displaying the system’s interpretation. How can sketch
recognition systems harness the benefits of both methods of display?
• Errors, caused by a recognizer or by a human, will occur. What is
the best way to deal with error detection and recovery?
• Sketches of a single design contain a combination of drawing,
editing, and attention strokes. An area of continuing debate has been
modal versus modeless systems. What is the best way to distinguish
between these stroke types from a user interface perspective?
• What editing capabilities are necessary to make a truly usable experience?
• Sketch recognition systems have mostly been limited to a few
specific domains. What other domains could benefit from diagram
understanding? How could sketch recognition be applied to these
domains? What are the difficulties (and possible solutions) in
applying sketch recognition to such a domain?
• Sketching is not just about drawing. When people sketch, they talk,
they interact, they touch, they gesture. How can the benefits of
multimodal interfaces be incorporated to create a usable application?
What are some of the pitfalls of multimodal interaction? How can the
pitfalls be overcome?
• Repeatedly, the question “What is the killer application?” is posed
to the sketch recognition community. What factors of design and
usability are necessary before any such killer application can exist?
(Okay, then if you want to suggest a killer application after that, go
ahead, but it is not necessary.)
• What else is holding back sketch recognition systems from being
truly positive computer-human interaction experiences?
This workshop expects to dedicate one day to the examination of these
open research problems. The day will start with a 2-minute madness
where each contributor presents his or her past experience and/or
research contributions in the field of designing sketch recognition
interfaces. Then, interesting and diverse opinions on the above
topics will be chosen to briefly present their ideas and then lead a
panel on the topic. Many of these sessions will involve people
breaking up into groups of a manageable size to provoke innovative
discussions. If there are enough graduate student participants, then
at some point, we plan to divide the group into students and
non-students, and ask each group to discuss the future of the
field. We expect that the students of today will be the leaders of the
field tomorrow, and as such will provide an interesting perspective on
the future of the field. We hope that the results of this workshop
will provide interesting fodder for a book on the future of sketch
recognition interfaces.
Researchers are invited to submit position papers detailing their
perspective on one or more of the currently open questions in
designing sketch recognition interfaces. Each position paper should
be 2-4 pages in length, and should include a brief one-paragraph
(approximately 250 words) bio describing the authors' experience with
sketch recognition interfaces. Participants will be selected based on
the innovativeness and new directions of thought provided in the essays.
Answers that provide new insight to the community will be valued more
than contributions that either simply state ‘it depends’ or conform to
the current state of ideas.
• Submission Date: January 6th, 2010
• Notification Date: January 30th, 2010
• Final Paper Submission: February TBD, 2010
• Please email submission at or before the due date to [log in to unmask]
At least one author of the accepted paper must attend the workshop and
one or more days of the CHI 2010 conference. If more than one author
per paper is planning to attend they must contact the organizers
first, as there will be a limit on the total number of workshop
participants.
We look forward to your participation.
Tracy Hammond
Director, Sketch Recognition Lab
Assistant Professor, Computer Science Department
Texas A&M University