I am all for researching important questions empirically.  I just don't see how testing one person here and one there can answer an important research question.

To me, usability testing has two aspects:

(1) Aiding designers through iterative design and testing with 5-6 people at each iteration.  This is a form of feedback.

(2) Hypothesis testing, e.g. "Interface A is better than interface B" or "all people like breadcrumbs."  In this case, you are trying to make inferences from the sample results to the population, and inferential statistics are in order.  In other words, you need to tell me the probability that your study results could have occurred by chance.
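
To make that concrete, here is a minimal sketch (in Python) of the kind of inferential test I mean.  The task-time numbers and the choice of a two-sample t-test are mine, invented purely for illustration:

    # Minimal sketch: did Interface A really beat Interface B, or could
    # the observed difference be chance?  The times below are invented.
    from scipy import stats

    times_a = [42, 55, 38, 61, 47, 52, 44, 58, 49, 51]  # seconds, Interface A
    times_b = [65, 59, 72, 68, 61, 75, 63, 70, 66, 69]  # seconds, Interface B

    # Two-sample t-test: how likely is a difference this large by chance alone?
    t_stat, p_value = stats.ttest_ind(times_a, times_b)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
    # A small p (say, < .05) means the difference is unlikely to be
    # due to sampling chance alone.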

In the second case, the study sample has to be much bigger, and I am wondering how many sessions one would have to run to reach that larger sample.
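
A standard power calculation gives a rough answer to "how big is big enough."  Sketch below, again in Python; the effect size (Cohen's d = 0.5, a "medium" effect), alpha, and power are conventional placeholder values, not figures from any actual study:

    # Minimal power-analysis sketch: participants per group needed to
    # detect a medium effect (d = 0.5) at alpha = .05 with 80% power.
    from statsmodels.stats.power import TTestIndPower

    n_per_group = TTestIndPower().solve_power(effect_size=0.5,
                                              alpha=0.05,
                                              power=0.8,
                                              alternative='two-sided')
    print(f"Participants needed per interface: {n_per_group:.0f}")
    # Comes out to roughly 64 per group, an order of magnitude more than
    # the 5-6 users per iteration in case (1).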

Why not just invite participants to the library and run a study that interests you on your own, instead of involving a client who is paying you for specific research that may not be of interest to you?

Alex


Derek L Olson <[log in to unmask]> wrote:

Alex Genov said:
> Interfaces are invariably related to tasks (or jobs).  Tasks, on their
> part, are related to target user groups.  So you cannot recruit a
> specific user group for a specific task and a specific interface and
> test other random interfaces with it.

This technique certainly isn't for everyone--and maybe isn't for most.

No question that it would be dubious to--in the midst of a usability
test evaluating cockpit ergonomics using 22-year-old military pilots
as subjects--throw in a glaucoma website interface (with and without
breadcrumbs) as an add-on.

In a lot of the usability testing our company does, however, we are  
after a pretty wide swath of the "Internet User" audience... In this  
scenario, it would not be off-base to add in some (non-random)  
interfaces that present widely-used website interface elements like  
breadcrumbs, etc.

Now, as for the "absolute usability" stuff... this again is not going  
to work for everyone. My aircraft carrier comparison isn't one I  
expect to use in my lifetime--and not just because it fails Alex's  
validity standards...  : )

I did spend a few seconds Googling the "absolute usability" stuff I
mentioned, and as near as I can tell, this is a pretty good synopsis
of the "Usability Magnitude Estimation" technique I had come across
previously:
http://echouser.com/

Curious to hear people's thoughts on how this might be intelligently  
applied to Hal's original idea of distributed usability testing.

-D

Derek Olson

Foraker Design
5277 Manhattan Circle
Suite 210
Boulder, CO 80303
(303) 449-0202
www.foraker.com
