ACM SIGCHI General Interest Announcements (Mailing List)


"ACM SIGCHI General Interest Announcements (Mailing List)" <[log in to unmask]>
Tue, 27 May 2008 13:15:48 +0100
Adrian Clear <[log in to unmask]>

2nd International Workshop on Ubiquitous Systems Evaluation (USE '08)

Sunday, 21st September, 2008 - Seoul, South Korea
In conjunction with UbiComp 2008

Paper submissions due: 06 July 2008

Following on from last year's workshop in Innsbruck, USE '08 aims to
bring together practitioners from a wide range of disciplines to
discuss best practice and challenges in the evaluation of ubiquitous
systems. Recognised evaluation strategies are essential so that the
contribution of new techniques can be quantified objectively.
Experience has shown that evaluating ubiquitous systems is extremely
difficult: approaches tend to be subjective, piecemeal, or both.
Individual approaches to evaluation risk being incomplete, and
comparisons between systems can be difficult.

Several interesting questions and discussion points arose as a result  
of last year's workshop:

- We have a pressing need for realistic, interesting, non-trivial
demonstrator scenarios.
- How do we overcome difficulties in developing repeatable experiments
and user studies that rely on real-world context?
- Is it possible to categorise the features of UbiComp systems? Can we
match a set of evaluation techniques to each feature?
- How can we evaluate systems that adapt to their users (e.g., by
learning their preferences)?
- How might we address the impact on privacy when we release datasets?
- How can we evaluate user response to technology they are not supposed
to be aware of (i.e., the disappearing computer)?
- Is it possible to define a suite of techniques, and guidelines for
their application, that together form a general framework for the
evaluation of UbiComp systems?
- How can we make techniques available and accessible for others to use
in evaluating their research?

USE '08 will consist of presentations of peer-reviewed papers,
selected on the basis of their technical merit and potential to
stimulate discussion. Presentations will be followed by structured
discussion of the merits of the proposed techniques and their
appropriateness for inclusion in a UbiComp systems evaluation
framework. We seek submissions of 4-6 pages in length. A summary of
last year's workshop will be published later this year in IEEE
Pervasive Computing, and the workshop organisers intend to do the
same this year.

Topics of interest:
Given the diversity of work in UbiComp systems, this workshop will  
cover a broad range of themes. We solicit submissions that address  
issues including, but not limited to, the following:

- Demonstrator scenarios (with associated metrics for evaluation)
     * personal area networks
     * smart spaces
     * metropolitan area systems
     * fixed and ad hoc infrastructures
     * sensor-rich and sensor-sparse environments
- Frameworks and methodologies that enable comparative evaluation of  
features of UbiComp middleware and systems, such as:
     * scalability
     * security and privacy
     * data management
     * data distribution
- Experience papers
     * working with experimental environments
     * difficulties encountered and lessons learned while evaluating  
UbiComp projects
     * results and comparison techniques
- Human factors
     * comparing the benefits and limitations of using in-situ/lab- 
based/virtual-based techniques for studying user interaction with  
different classes of application
     * techniques for the comparative evaluation of personal or user- 
driven experiences
     * measuring user load, ease of use, learning curves, etc.
     * evaluating user understanding of system behaviour
     * quantifying acceptable system reaction times
- Promoting comparative evaluation
    * Public data sets for UbiComp
    * Benchmarks
    * Matching evaluation techniques to features of UbiComp applications

We solicit full papers of 4-6 pages. Submission details are yet to
be finalised; please visit http:// for updates.

Important Dates:
July 06, 2008  Submission deadline
July 25, 2008  Notification of acceptance
August 08, 2008  Camera-ready for accepted papers
September 21, 2008  Workshop date

Programme Chairs:
Graeme Stevenson (University College Dublin)
Steve Neely (University College Dublin)
Christian Kray (University of Newcastle)

Publicity Chair:
Adrian Clear (University College Dublin)

Programme Committee:
Kay Connelly (Indiana University)
Lorcan Coyle (University College Dublin)
Richard Glassey (University of Strathclyde)
Robert Grimm (New York University)
Jeffrey Hightower (Intel Research)
Marc-Olivier Killijian (LAAS-CNRS)
Ingrid Mulder (Rotterdam University & Telematica Instituut)
Nitya Narasimhan (Motorola)
Trevor Pering (Intel Research)
Aaron Quigley (University College Dublin)
Katie A. Siek (University of Colorado)
Ian Wakeman (University of Sussex)
