CHI-ANNOUNCEMENTS Archives

ACM SIGCHI General Interest Announcements (Mailing List)

CHI-ANNOUNCEMENTS@LISTSERV.ACM.ORG

Subject:
From:
Effie Law <[log in to unmask]>
Reply To:
Effie Law <[log in to unmask]>
Date:
Fri, 8 Dec 2006 05:41:46 +0000
Content-Type:
text/plain
Parts/Attachments:
text/plain (344 lines)
Please find below the Call for Papers for the workshop
"Review, Report and Refine Usability Evaluation Methods (R3 UEMs)",
to be held in Athens, Greece on 5th March 2007.

Apologies for cross-posting.
-----------------------------------------------------


WORKSHOP TITLE:

Review, Report and Refine Usability Evaluation Methods (R3 UEMs)

 

CHAIR AND CO-CHAIR

Dominique Scapin

Institut National de Recherche en Informatique et en Automatique (INRIA)

France

Email: [log in to unmask]

 

Effie Lai-Chong Law

Computer Engineering and Networks Laboratory (TIK)

ETH Zürich

Switzerland

Email: [log in to unmask]

 

BASIC INFORMATION:

A full-day workshop will be held under the auspices of COST294-MAUSE
(http://www.cost294.org)

Date:             5th March 2007 (Monday)

Location:         Athens, Greece

Website:          http://www.cost294.org/

 

MOTIVATION:

There exists a variety of usability evaluation methods (UEMs), which are
employed in a wide spectrum of contexts by people with different
backgrounds, goals and needs.  Selecting appropriate UEMs that meet
contextual requirements and constraints is the first and most crucial step
towards useful evaluation outcomes and, presumably, effective
redesign of the system of interest.  Furthermore, emerging information
technologies (IT) such as ambient intelligence, pervasive computing and
universal accessibility have triggered the development of new evaluation
methods and tools, which have been adopted or trialled by local
research groups but not yet well disseminated to the wider usability
community. A refined and consolidated knowledge pool about established
as well as emerging UEMs, based on the expertise and experience of usability
practitioners and researchers, is therefore desirable. It will not only
enable the selection of the right methods but also serve as a valuable
resource for informing experienced members of the usability community
about new UEMs and for training newcomers in the development
of UEMs.  With the aim of building such a knowledge pool, WG1 (Working
Group 1) of the COST294-MAUSE project has undertaken the challenge of
developing an approach to critically reviewing and analyzing UEMs.

 

Specifically, WG1 has developed several instruments:

(i) A classification scheme of UEMs

Three major categories are DGMM (Data Gathering & Modelling Methods); 
UIEM (User Interactions Evaluation Methods); CMs (Collaborative 
Methods), each of which is further divided into sub-categories.

(ii) Two templates:

*     «Generic Methods» - to support descriptions of widely used UEMs at 
the generic level, i.e. drawing mainly on reference material such as 
publications, courses, etc.

*     «Case Studies» - to support descriptions of actual cases of UEM 
implementation, i.e. details on how a specific method was used in practice, 
including its context and the precise way the method was applied.

(iii) A guidance document for these templates.

 

Selected UEMs are categorized, critically reviewed and analyzed across 
different aspects, from bibliographical references to 
advantages/disadvantages, using a set of methodological attributes. 
Individual reviews are documented as a set of records in the MAUSE 
Digital Library.  Best practices of existing UEMs, covering operational, 
organizational and cultural dimensions, can be derived from these 
records and rendered accessible to the usability community.

 

To date, a number of UEMs have been systematically reviewed by 
COST294-MAUSE partners using the scheme and templates 
described above, including:

* Cognitive Walkthrough

* CASSM (Concept-based Analysis of Surface and Structural Misfits)

* Ergonomic Criteria

* K-MADe (Kernel of Model for Activity Description environment)

* MOT (Metaphors Of human Thinking)

* CUT (Cooperative Usability Testing)

* Personas

* Heuristic Evaluation

* User Performance Testing

* EU-CON II Evaluation

* Abstract Task Inspection

* A set of modelling methods: GOMS, KLM, NGOMSL, TAG, HTA, TKS, GTA and CTT

These reviews were compiled (August 2006) into a document entitled 
«Usability Evaluation Methods Classification, Description and Template: 
COST294-MAUSE WG1 2nd Interim Report» (hereafter - UEM Reviews).

 

To further enrich the scope and quality of this knowledge-pool, we aim 
to invite more contributions from the entire usability community.  Case 
studies from industry are particularly welcome.

 

GOALS:

 The workshop R3 UEMs aims to achieve four major goals:

* To invite more systematic, critical reviews on a variety of UEMs, 
including those listed above and many others;

* To perform meta-review of existing reviews listed above;

* To derive or refine best practices of established UEMs;

* To explore emerging UEMs, identifying their applicability and 
potential.

 

EXPECTED NUMBER OF PARTICIPANTS:

30: Usability researchers and practitioners, including:

* Contributors who have already submitted reviews on UEMs (see «UEM 
Reviews»);

* Contributors who will submit reviews on UEMs that have not yet been 
covered;

* Meta-reviewers of existing reviews documented in the report;

* Creators of emerging UEMs;

 

CATEGORIES OF CONTRIBUTIONS:

Two major types of contributions are:

 

(a) Review of a UEM not currently covered (i.e., complete a template for 
a UEM not currently covered in «UEM Reviews»)

* If you are interested in participating, please send an expression of 
interest to the Chair and Co-chair of the Workshop who will provide 
access to draft documents and templates for new method coverage

* To participate, you would need to complete a template for your 
selected UEM

 

(b) Meta-review of UEMs already described in «UEM Reviews»

* If you are interested in participating, please send an expression of 
interest to the Chair and Co-chair of the Workshop who will provide 
access to draft documents and templates for new method coverage and 
reviews of existing methods

* To participate, you would need to complete at least three meta-reviews 
of current method descriptions in «UEM Reviews»

 

 

 

 

IMPORTANT DATES:

15th January 2007:      Deadline for submission

31st January 2007:      Authors of accepted contributions notified

15th February 2007:     Deadline for application for sponsorship [1]

 

All submissions will be assessed on their relevance to the Workshop. At 
least one author of each contribution is expected to attend the 
workshop.

 

WORKFLOW AND ACTIVITIES:

Prior to the Workshop, accepted contributions will be integrated into an 
enlarged and revised version of the report «UEM Reviews», which will be 
distributed to the Workshop's participants. A questionnaire will also be 
administered to collect the participants' opinions, comments and 
questions about the report.  The data thus collected will be 
consolidated and addressed in the Workshop. 

 

On the day of the Workshop, the following activities will be conducted:

(i) Invited Talk (~ 1 hour)

An invited expert will present a talk on the revision of ISO standards 
and its impact on industry practice.

 

(ii) Presentations of emerging UEMs (~ 2 hours):

Quality contributions that review emerging UEMs will be selected for 
presentation; 15 minutes are allocated to each emerging UEM. By 
emerging UEMs, we refer to methods that have recently been 
developed (i.e. less than ten years old), applied and validated, but are 
not yet widely used in the usability community and not yet described in 
the current WG1 interim report.

 

(iii) Group Discussions (~ 3 hours):

      Contributors who have reviewed or meta-reviewed a specific UEM 
will form working groups (approx. 5 members each) to discuss various 
aspects of the UEM:

* Scoping

* Strengths and weaknesses

* Refinement suggestions

* Best practices and recommendations

* Proposed changes to the current review templates and review procedures

Depending on the number and nature of contributions, parallel sessions 
on different UEMs may be organized.

 

(iv) Plenary Reporting and Forum (~ 1 hour):

      Each working group will present its findings (~ 10 minutes each, 
with follow-up questions from the audience)

 

FOLLOW-ON ACTIVITIES:

Authors of quality contributions will be invited to engage in the 
follow-on activities listed below:

* A tutorial of usability evaluation methods;

* A practical handbook/guidebook on usability evaluation methods;

* An open library of usability evaluation methods;

* A special issue in an HCI journal on methodologies of usability 
evaluation;

Proposals on other follow-on activities are welcome.

 

INTENDED AUDIENCE:

UI designers, usability researchers and practitioners, and advanced 
postgraduates in HCI

 

 

 

SUBMISSION:

Interested participants are required to write up their contributions 
using the given templates (Word files) and submit them as an email 
attachment to:

Chair (Dominique Scapin: [log in to unmask]) and

Co-chair (Effie Law: [log in to unmask]). 

Further enquiries can be sent to the Chair or Co-chair.

[1] For non-COST294 members: A fixed-rate or a full sponsorship covering 
travel and accommodation will be granted, depending on the number of 
eligible applicants. 

R3UEM-COST294


    ---------------------------------------------------------------
                To unsubscribe, send an empty email to
     mailto:[log in to unmask]
    For further details of CHI lists see http://sigchi.org/listserv
    ---------------------------------------------------------------
