CHI-WEB Archives

ACM SIGCHI WWW Human Factors (Open Discussion)

CHI-WEB@LISTSERV.ACM.ORG

Sender: "ACM SIGCHI WWW Human Factors (Open Discussion)" <[log in to unmask]>
Subject:
From: Scott Berkun <[log in to unmask]>
Date: Wed, 5 Apr 2000 22:41:43 -0700
X-To: "Lundell, Jay" <[log in to unmask]>
Reply-To: Scott Berkun <[log in to unmask]>
Parts/Attachments: text/plain (73 lines)

> From: Lundell, Jay [mailto:[log in to unmask]]
> ...
> Does anyone have any data, arguments, etc. to address the ease of
> learning/ease of use issue? What can I tell people that will convince them
> that a difficult to understand UI will be a difficult to use UI?
>

The "once they learn how to use it, it will be easy for them." is a
potential trap. Someone could offer the slide rule as the innovative new way
to compute numbers using that motto. The fact that something is innovative
and initially hard to use does not say anything about whether you can learn
to be proficient with it. Innovation doesn't prove usability - it doesn't
prove anything. I've heard the same kinds of comments you have and I've
tried a couple of things over the years that helped, but nothing profound.

Since you don't have time for long studies, you could try to measure the
learning costs directly (how much time does it take for users to reach a
certain level of proficiency? how much instruction do they need?). I've
done two-part studies where the user gets two sessions with the new UI,
either over the course of two days or an hour at a time with some sort of
break. You can then measure the rate of improvement across the sessions
and have numbers for comparison. This isn't ideal, since it's only two
sessions, but it's something you can work with.
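
If it helps to make "rate of improvement" concrete, here is a minimal sketch
(Python, with entirely made-up task times for one hypothetical participant)
of the kind of comparison I mean:

    # Hypothetical per-task completion times (minutes) for one participant,
    # recorded in session 1 and session 2 of the two-part study.
    session1 = [12.0, 9.5, 15.0, 11.0]   # first session
    session2 = [8.0, 7.0, 10.5, 8.5]     # same tasks, second session

    mean1 = sum(session1) / len(session1)
    mean2 = sum(session2) / len(session2)

    # Improvement between sessions, as a fraction of session 1 time.
    improvement = (mean1 - mean2) / mean1
    print("Mean time, session 1: %.1f min" % mean1)
    print("Mean time, session 2: %.1f min" % mean2)
    print("Improvement across sessions: %.0f%%" % (improvement * 100))

The same shape of calculation works for task success rates or error counts;
the point is just to end up with a number you can put in front of the team.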

If you need stronger evidence of how steep the learning curve is, you could
augment this with upfront training, or training during the break between
sessions. This doesn't replace long-term studies, but you can come back and
say "we gave 10 participants an hour with the UI, then an hour of training,
and then another hour with the UI. Only 15% of users could complete half the
tasks in the second hour with our product", or whatever the outcome turns out
to be. At a minimum, this will set you up for a discussion with your team
about what an acceptable learning curve is, and what the goals should be (10
hours before users can complete 80% of core tasks? 20 hours? etc.).

My experience has been that (American) users' expectations for required time
investments are incredibly small. On IE we spent time with Netscape users
trying to understand what we had to do to get them to switch - it's
incredibly hard - not because of any great love for the product so much as
because they did not see the value in learning something new to replace
something that was already working for them ("I don't care if it's easier to
use, I'd have to get used to something new and that's always hard - it
outweighs the potential value"). Kayla's story about Dreamweaver seems on
the money. We assume people love technology as much as we do - they don't.
Learning is hard and most people don't enjoy it. Part of why developers have
the "they'll just learn" attitude is that it's their own natural attitude,
the one that led to their success at programming, and they are projecting it
onto their users.

The slide rule could offer some benefit to me, but is that benefit worth the
amount of time it would take to get it? My fantasy is that if we were smart
enough to provide consistent measures for this kind of information, we could
put it right on the box or website, certified by the UPA or some other
organization - "In a study of 45 users of varying experience, Spiffo
newsreader provided 10% faster reading of CHI-WEB after 30 hours of initial
usage". Is a 10% improvement worth 30 hours of my time? I think the potential
user's internal perception of this sort of equation explains why some
products fail and others succeed.
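
To make that internal equation concrete, here is a back-of-the-envelope
break-even calculation for the imaginary Spiffo claim - every number in it
is invented for illustration:

    # Hypothetical cost/benefit for the made-up Spiffo claim:
    # a 10% reading speedup after 30 hours of initial usage.
    learning_cost_hours = 30.0      # upfront time investment
    speedup = 0.10                  # fraction of reading time saved
    reading_hours_per_week = 5.0    # how much I read CHI-WEB (a guess)

    saved_per_week = reading_hours_per_week * speedup       # 0.5 hours/week
    weeks_to_break_even = learning_cost_hours / saved_per_week
    print("Break-even after %.0f weeks" % weeks_to_break_even)  # 60 weeks

With those invented numbers the payoff is over a year away, which is exactly
the kind of rough, instant calculation people make before deciding not to
bother.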

My last thought is about the crossover between this kind of testing and
documentation. The training materials you use in the two-part study could
come from your user assistance or documentation team, so you could also look
at the study as a way to measure the training/documentation quality. There
are always key concepts and bits of knowledge that accelerate the learning
curve, and help/documentation is often shipped without any analysis of its
impact. (A related book is The Nurnberg Funnel: Designing Minimalist
Instruction for Practical Computer Skill by John Carroll - an excellent book
about minimal documentation, including, I think, some measures for analyzing
documentation samples.)

hope it helps,

-Scott
