From: Effie Law <[log in to unmask]>
Date: Sat, 1 Sep 2007 18:55:13 +0100
COST294-MAUSE Open Workshop

Downstream Utility: The Good, the Bad, and the Utterly Useless Usability 
Evaluation Feedback

(Call for Papers)

 

Date: 6th November 2007 (Tuesday)
Place: IRIT Lab, Université Paul Sabatier (Toulouse 3), France
Organizers: Effie Law, Marta Lárusdóttir, Mie Nørgaard

 

Motivation and Background

Downstream utility, in the context of usability evaluation methods (UEMs), 
has been described as follows:

"...downstream utility of UEM outputs ... depends on the quality of 
usability problem reports ... the persuasiveness of problem reports, a 
measure of how many usability problems led to implemented changes 
...evaluating the downstream ability of UEM outputs to suggest effective 
redesign solutions through usability testing of the redesigned target 
system interface."  (Hartson, Andre & Williges, 2003, p. 168)

"The extent to which the improved or deteriorated usability of a system 
can directly be attributed to fixes that are induced by the results of 
usability evaluations performed on the system" (Law, 2006, p.148)

These descriptions converge on a common basic idea: usability evaluations 
of a system lead to redesign proposals whose effectiveness can be assessed 
by re-testing the changed system.
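
To make these descriptions concrete, below is a minimal sketch in Python of 
how persuasiveness and downstream utility might be operationalized over a 
set of usability problem records. The record fields and function bodies are 
illustrative assumptions, not measures prescribed by the project:

  from dataclasses import dataclass
  from typing import List, Optional

  @dataclass
  class UsabilityProblem:
      """One problem reported by a usability evaluation (hypothetical record)."""
      problem_id: str
      fix_implemented: bool                     # did the report lead to an implemented change?
      severity_before: Optional[float] = None   # e.g., task error rate before the fix
      severity_after: Optional[float] = None    # same measure after re-testing the redesign

  def persuasiveness(problems: List[UsabilityProblem]) -> float:
      """Share of reported problems that led to implemented changes
      (cf. the Hartson, Andre & Williges, 2003 description quoted above)."""
      if not problems:
          return 0.0
      return sum(p.fix_implemented for p in problems) / len(problems)

  def downstream_utility(problems: List[UsabilityProblem]) -> float:
      """Mean improvement attributable to implemented fixes, measured by
      re-testing the changed system (cf. Law, 2006)."""
      deltas = [p.severity_before - p.severity_after
                for p in problems
                if p.fix_implemented
                and p.severity_before is not None
                and p.severity_after is not None]
      return sum(deltas) / len(deltas) if deltas else 0.0

For instance, a set in which 6 of 10 reported problems led to implemented 
changes would score a persuasiveness of 0.6, while downstream_utility would 
reflect how much the re-tested severity measures actually improved.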

Wixon's (2003) radical claim that it is irrelevant whether a system's 
total set of usability problems (UPs) can be uncovered, because the true 
goal of usability testing lies not in finding defects but in fixing them, 
has set off growing interest in the topic of downstream utility (cf. John 
& Marks, 1997). Though Cockton (2005) states that assessing downstream 
utility is beyond the scope of pure evaluation methods, we argue that its 
critical element, persuasiveness, is partly determined by the choice of 
UEM and how it is executed. For instance, the outcomes of heuristic 
evaluation are presumably less persuasive than those of user tests, and 
user tests performed by designers/developers themselves seem more 
persuasive than those performed by usability professionals. Consequently, 
it is meaningful to compare the downstream utility of different UEMs and 
to investigate how developers, designers, and project managers, the 
intended beneficiaries of usability evaluation feedback, assess a method's 
utility and how contextual factors influence such assessments. Note that 
we can learn not only from stories of success but also from stories of 
failure!

 
Previous Work:

The COST294-MAUSE project hosts four working groups, of which Group 2 in 
particular focuses on comparing UEMs. The group, led by Gilbert Cockton, 
held a project-based workshop in June 2007 (Salzburg, Austria) to describe 
and revise the coding constructs used for comparing different instances of 
UEMs (e.g., persuasiveness, value to development, redesign complexity). 
These constructs were applied to sets of usability problems from various 
domains. The coding construct definitions and problem sets are accessible 
via our project instrument, the MAUSE Wiki. Potential contributors 
interested in further refining the coding constructs, proposing new ones, 
and/or applying them to selected problem sets can apply for an account by 
writing to Effie Law ([log in to unmask]).
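
As a rough sketch of what coding a problem against such constructs might 
look like, assuming a simple ordinal scale (the scale and field names below 
are illustrative guesses, not the scheme actually defined in Cockton & 
MAUSE, 2007):

  from dataclasses import dataclass

  # Hypothetical ordinal scale; the actual coding scheme is defined in
  # Cockton & MAUSE (2007) on the MAUSE Wiki.
  SCALE = ("none", "low", "medium", "high")

  @dataclass
  class CodedProblem:
      problem_id: str
      persuasiveness: str        # how compelling the problem report is
      value_to_development: str  # how useful the report is to the project
      redesign_complexity: str   # how much effort a fix would require

      def __post_init__(self) -> None:
          for code in (self.persuasiveness, self.value_to_development,
                       self.redesign_complexity):
              if code not in SCALE:
                  raise ValueError(f"code must be one of {SCALE}, got {code!r}")

  # Coding a single usability problem from a problem set:
  example = CodedProblem(problem_id="UP-07",
                         persuasiveness="high",
                         value_to_development="medium",
                         redesign_complexity="low")

Codes of this form can then be aggregated across a problem set to compare 
different UEM instances.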

 
Goals:

The workshop will seek to:

(1) Identify what type of information developers (and other 
stakeholders) find worthwhile or worthless in usability evaluation 
reports, and why;
(2) Identify which format of usability evaluation feedback developers 
(and other stakeholders) find useful and usable (e.g., video vs. 
written, specific ways of data clustering), and why;
(3) Study existing quantitative and qualitative methods for evaluating 
different UEMs' downstream utility, such as the quality of usability 
feedback, a UEM's ability to generate redesign solutions, and the 
effectiveness of such solutions;
(4) Validate the scope, reliability and usability of the pre-defined 
coding constructs and coding scheme (i.e., Cockton & MAUSE, 2007) for 
evaluating downstream utility;
(5) Refine the notion of downstream utility in usability evaluation.

 
Target Groups and Submission:

Software developers, usability researchers and practitioners, user 
interface designers, and students/academics of HCI (maximum: 25 participants).

To participate in the workshop, a position paper of 2-4 pages in SIGCHI 
format (www.chi2008.org/chi2008pubsformat.doc) must be submitted. Papers 
could address one of the following topics:

* Case studies assessing the impact of usability evaluation feedback on 
system redesign in terms of quantitative measures and/or qualitative data;
* Experience reports illustrating which kinds of usability evaluation 
feedback stakeholders find particularly useful or utterly useless;
* Technical reports on applying pre-defined (or refined) coding 
constructs (i.e., Cockton & MAUSE, 2007) to a selected problem set or 
problem sets (see "Previous Work" above);
* Theoretical frameworks for analyzing different aspects of downstream 
utility, such as the psychology of developers.

Submissions in Word or PDF should be sent to Effie Law ([log in to unmask]).

 
Workflow:

This one-day workshop will consist of the following activities:

(1)  9:00 --  9:15   Welcome and Introduction
(2)  9:15 -- 11:00   Presentation of Position Papers
(3) 11:00 -- 11:30   Coffee break
(4) 11:30 -- 12:30   Panel: The Good, the Bad, and the Utterly Useless 
                     Usability Evaluation Feedback
(5) 12:30 -- 13:30   Lunch
(6) 13:30 -- 15:30   Practical exercises in small groups: applying 
                     pre-defined coding constructs of downstream utility 
                     to problem sets (cf. Cockton & MAUSE, 2007)
(7) 15:30 -- 16:00   Coffee break
(8) 16:00 -- 17:00   Discussion of the results of the practical exercises
(9) 17:00 -- 17:30   Plenary: group reporting

 
Important Dates:

1st September -- Call for Papers issued
1st October -- Paper submission deadline
8th October -- Author notification
22nd October -- Revised papers due

Program Committee:

* Gilbert Cockton, University of Sunderland, UK
* Asbjørn Følstad, SINTEF, Norway
* Kasper Hornbæk, University of Copenhagen, Denmark
* Ebba Hvannberg, University of Iceland, Iceland
* Effie Law, ETH Zürich/Univ. Leicester, Switzerland/UK
* Marta Lárusdóttir, Reykjavik University, Iceland
* Mie Nørgaard, University of Copenhagen, Denmark
* Philippe Palanque, IRIT Lab, Université Paul Sabatier, France
* Tobias Uldall-Espersen, University of Copenhagen, Denmark
* Jan Stage, Aalborg University, Denmark
 

References

1. Cockton, G. (2005). "I can't get no iteration". Interfaces, 63, 4.
2. Cockton, G. & MAUSE (2007). COST294-MAUSE workshop on coding 
constructs definitions and coding problem sets, 7th June 2007, Salzburg, 
Austria (Accessible online at MAUSE Wiki, http://www.cost294.org)
3. Hartson, H.R., Andre, T.S., & Williges, R.C. (2003). Criteria for 
evaluating usability evaluation methods. International Journal of 
Human-Computer Interaction, 15(1), 145-181.
4. John, B.E., & Marks, S.J. (1997). Tracking the effectiveness of 
usability evaluation methods. Behaviour & Information Technology, 
16(4/5), 188-202.
5. Law, E. L-C. (2006). Evaluating the downstream utility of user tests 
and examining the developer effect: A case study. International Journal 
of Human-Computer Interaction, 21(2), 147-172.
6. Wixon, D.R. (2003). Evaluating usability methods: Why the current 
literature fails the practitioner. Interactions, 10(4), 29-34.


