CHI-WEB Archives

ACM SIGCHI WWW Human Factors (Open Discussion)

CHI-WEB@LISTSERV.ACM.ORG

From: Ryan West <[log in to unmask]>
Reply-To: Ryan West <[log in to unmask]>
Sender: "ACM SIGCHI WWW Human Factors (Open Discussion)" <[log in to unmask]>
Date: Thu, 8 Jun 2006 08:01:28 -0700
Subject: Re: remote unattended (AKA automated) usability testing software
Hi All -
 
I think most of us agree that the appropriateness of the method depends on the goals of the study.
 
At CHI this year, we presented data from an empirical comparison of lab-based summative testing and automated (unattended) summative testing, in which we found no significant differences in performance metrics between the two data collection methods.
 
ACM Digital Library link (I'll also send the PDF on request):
http://portal.acm.org/citation.cfm?id=1124867&coll=ACM&dl=ACM&CFID=77861397&CFTOKEN=17228172
 
If you're interested primarily in well-defined performance metrics (for baselining, for example), it makes no difference whether the study is administered by a flesh-and-blood facilitator or by a program.
 
Test setting does appear to make a difference, however. We found that participants tested remotely had faster completion times and were more likely to give up on a task, but were not less successful or less satisfied. We tend to believe people are a bit more cautious and deliberate in a usability lab, which accounts for the setting differences.
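
(As an aside, purely for illustration and not from our study: a completion-time comparison like the one above is often checked with a simple two-sample test. The sketch below uses Python with SciPy and Welch's t-test; the numbers and the choice of test are illustrative assumptions, not figures from the paper.)

    # Hypothetical sketch: compare task completion times (seconds) between
    # a lab-tested group and a remotely/unattended-tested group.
    from scipy import stats

    lab_times    = [182, 240, 205, 310, 198, 260, 225, 275]   # invented data
    remote_times = [150, 190, 172, 230, 164, 210, 185, 205]   # invented data

    # Welch's t-test (does not assume equal variances between groups)
    t_stat, p_value = stats.ttest_ind(remote_times, lab_times, equal_var=False)
    print("t = %.2f, p = %.3f" % (t_stat, p_value))
    # A small p-value suggests completion times differ by setting;
    # a large one is consistent with "no significant difference".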
 
Also, "usability issues" are a bit more nebulous than performance metrics. We found a much richer set of issues in lab testing with a facilitator than when analyzing written comments after unattended testing. We developed a means for participants to identify the root cause of their problems when they failed a task, which was very effective, but they still documented fewer issues.
 
So, if you're interested primarily in uncovering qualitative usability issues (however defined), there is still no substitute for formative lab testing and iteration.
 
Thanks
- Ryan
 
Ryan West  |  User Research  |  SAS Institute  |  [log in to unmask] 

 



> Date: Mon, 5 Jun 2006 23:24:34 -0400
> From: [log in to unmask]
> Subject: Re: remote unattended (AKA automated) usability testing software
> To: [log in to unmask]
>
> I have a huge problem with this. Having run more usability tests than
> I can remember, one of the most useful, if not critical, data points
> is the dialogue between the moderator and the participant. That's
> something that simply gets lost in unattended remote testing.
>
> While something like http://tapefailure.com could be useful for doing
> remote testing, the loss of direct observation and dialogue is too
> important to our research. And the very nature of the relationship
> between the moderator and the participant can weigh heavily on the
> type and amount of feedback you receive during testing.
>
> During our training of a client several weeks ago, the gentleman who
> leads their UX practice brought up a good point. He was talking about
> the difference in the responses he noticed when using his title vs.
> not using his title during research. He noticed the same difference
> when his dress was dramatically different from the people he was
> researching. There was a significant difference in the responses he
> received during research. We've found the same thing. If I stand up
> in front of a group of people in a suit and tie vs. a t-shirt and
> jeans to do training, it's two completely different dynamics that
> have two different outcomes. I actually changed my presentation style
> a few times during the training to illustrate the point w/o telling
> the participants. Afterwards, the participants definitely noticed
> the difference.
>
> If you're after numbers, then sure, do remote testing. But the point
> of our (my company's) research isn't numbers, it's good solid data
> for making informed, goal-driven design decisions. I might use weblog
> analysis to help guide some research questions, but I'm not relying
> on that for my decisions.
>
> Cheers!
>
> Todd R. Warfel
> Partner, Design & Usability Specialist
> Messagefirst | design and usability consulting
> --------------------------------------
> Contact Info
> Voice:    (607) 339-9640
> Email:    [log in to unmask]
> AIM:       [log in to unmask]
> Blog:      http://toddwarfel.com
> --------------------------------------
> In theory, theory and practice are the same.
> In practice, they are not.
>
> On Jun 5, 2006, at 3:42 PM, Kay Corry Aubrey wrote:
>
> > The Fidelity team ran two sets of the same test: one in the lab and
> > the other using remote unattended techniques. While remote
> > unattended testing misses the information you get from watching a live
> > user, it offered other advantages: you can run many more users so
> > you get a wider diversity of responses, people's comments tended to
> > be very rich and objective, and you can collect other measures, such
> > as clickstream and time on task, more easily.
> >
> > Ron Perkins of Design Perspectives in Newburyport, MA has also
> > written extensively on remote unattended testing. I don't have
> > quick access to his papers, but you could probably find his work in
> > the ACM Digital Library.
    --------------------------------------------------------------
    Tip of the Day: Quote only what you need from earlier postings
     CHI-WEB: www.sigchi.org/web POSTINGS: mailto:[log in to unmask]
              MODERATORS: mailto:[log in to unmask]
       SUBSCRIPTION CHANGES & FAQ:  www.sigchi.org/web/faq.html
    --------------------------------------------------------------
