CHI-WEB Archives

ACM SIGCHI WWW Human Factors (Open Discussion)

CHI-WEB@LISTSERV.ACM.ORG

Subject:
From: Todd Warfel <[log in to unmask]>
Reply To: Todd Warfel <[log in to unmask]>
Date: Sun, 11 Jun 2006 15:34:27 -0400
Content-Type: text/plain
Parts/Attachments: text/plain (102 lines)
On Jun 8, 2006, at 11:01 AM, Ryan West wrote:

> If you're interested primarily in well-defined performance metrics
> (for baselining, for example), it makes no difference whether the
> study is administered by a flesh-and-blood facilitator or a program.

Only if you define performance strictly in quantitative terms, not if
you're also qualifying performance with qualitative metrics. More and
more, we're finding that qualitative data are more reliable, or tell us
more about performance and experience, than quantitative data alone.
We're also finding more and more participants who don't complete tasks
because the experience is bad, or who think they've completed a task
but actually haven't, because the experience doesn't have a true
"finished" indicator.

> Test setting does appear to make a difference, however. We found
> that a group tested remotely had faster completion times and was
> more likely to give up on a task - but was not less successful or
> less satisfied. We tend to believe people are a bit more cautious
> and deliberate in a usability lab, which accounts for the setting
> differences.
>
> Also, "usability issues" are a bit more nebulous than performance
> metrics. We found a much richer set of issues in lab testing with
> a facilitator than when analyzing written comments after unattended
> testing. We developed a means that allowed participants to identify
> the root cause of problems when they failed a task, which was very
> effective, but they still documented fewer issues.

I've read this study, and I think there were a few issues with the way
the testing was done. If I understand the report correctly, the lab
testing had the moderator placed in the other room, behind the mirror,
not in the room with the participant. So, basically, the participant
was on their own in the room with a test script, or a description of
the task to be tested.

This concerns me a bit, as one of the key elements in usability
testing is the dialogue between the moderator and the participant. For
instance, when we run usability tests, the script is more of an open
guideline: we have tasks we need to test, but how each task is framed
depends on the conversation between the moderator and the participant.

For example, last year we did testing for a hosted service provider
in the financial industry. Each conversation with a participant
started off with one of the "deals" they had been working on, and we
used that information to frame the tasks around that specific deal.
So, for each participant, the test was contextually specific to one of
their actual projects, which makes it much closer to the real world.

How would that work with the model you used in your testing?

Also, in the study you provided a "note" area for the unattended
tests. It's pretty common knowledge in our field that self-rating is
unreliable. So you're still not able to address the issue of a
participant thinking they've completed the task but not actually
completing it, and you're not able to find out why they didn't
complete the task if you're not there observing. While the note-taking
area does provide some value by letting unattended participants give
feedback, I'd expect you'd run into issues like these:
* Requiring written feedback adds time and effort, which eventually
erodes its value: you get less rich feedback as participants just want
to get it done ("Ugh, I have to write out my response again! This is
like an essay test in university.")
* The note area occupies a section at the top of the browser window,
taking space away from the application, which could negatively affect
how the application is used (do participants miss items that fall
below the fold, items that would be above the fold in the real world,
where that extra screen real estate isn't taken away?)

Each of these things changes the real environment into something
else, so you're no longer testing in a true environment. That might
not matter for basic things like signing into an application, but for
more involved transactions and processes, you can see why this method
wouldn't be recommended.

Cheers!

Todd R. Warfel
Partner, Design & Usability Specialist
Messagefirst | design and usability consulting
--------------------------------------
Contact Info
Voice:    (607) 339-9640
Email:    [log in to unmask]
AIM:       [log in to unmask]
Blog:      http://toddwarfel.com
--------------------------------------
In theory, theory and practice are the same.
In practice, they are not.


