CHI-WEB Archives

ACM SIGCHI WWW Human Factors (Open Discussion)

CHI-WEB@LISTSERV.ACM.ORG

From: Justizin <[log in to unmask]>
Date: Wed, 7 Mar 2007 15:28:47 -0600

Howdy all..

This is my first post to CHI-WEB, which I just joined, so I'd like to
wave hello to everyone.

I should probably also preface my message by saying that I'm not
specifically familiar with the idea of a usability scorecard, but
context clues suggest it may resemble other forms of tracking used in
software engineering, which I am highly familiar with.  If I'm
completely off base in this assumption, please excuse my ramblings. :)

On 3/7/07, Scott Berkun <[log in to unmask]> wrote:
> Usability scorecards cut both ways - it can make your work more visible, but
> will also define metrics for you to be evaluated on. Is number of studies
> per month or subjects run the best way to measure your value? Probably not -
> those are just the easiest to measure, and that's the trap.

I can definitely relate to your "corporate refugee" mentality here,
but I feel it's dubious to suggest that any form of tracking will
undoubtedly fall into the wrong hands.  I could go on for ages about
what you should do if your manager can't communicate, or if you are a
manager who can't communicate, but here's a far more valuable point:

  If your manager can communicate, or you are a manager who can
communicate, it should be possible to build a team that defines its
own success structure in such a way that you *can* track what you are
doing and benefit from that ongoing, reflexive process.

Surely you've heard of the SkunkWorks? ;)

> So lets back up: what problem are you trying to solve? What alternatives to
> solving that problem are you considering besides scorecards? (If scorecards
> are the only thing you're considering, something's wrong. You wouldn't
> design a UI that way, right?)

I definitely agree that problem-solving is the best way to approach
something like this.  Is tracking good?  I dunno.  What are you
tracking?  Why? etc..

Let's get it all out on the table!

> > We also have a desire to create a more robust knowledge base of our
> usability data
>
> I've never seen a usability database that was worth the effort to create it.
> Reports are rarely read,  certainly not old ones, and no programmer or
> manager is going to spend time querying a database when easier alternatives
> (asking usability engineers or making things up) exist.

Again, I'm going to pull the self-employment card here and say that
while some things are futile because of the people you are working
with, not everything is futile.  It sounds like Chris and his folks
are setting out on an expedition of change, though again you may be
right that they've jumped to the conclusion that a "usability
scorecard", something I'm not entirely familiar with, is the solution
to .. possibly, everything being all wrong?  You've got to start
somewhere, right?  So let's give him that.

> The best thing I've seen to solve this problem is to write summary reports
> every quarter. If you've done 10 studies on site navigation, at the end of
> the quarter someone writes a high level summary of what was learned across
> all those studies, focused on recommendations (not data) and lessons learned
> that will be of greatest value to non-usability engineers (and pick summary
> topics with this in mind).  Individual studies are often myopic and without
> summary reports the real lessons learned never get documented.
>
> Do a short presentation, open to all, on what was learned - those
> sessions/summaries will get read and often have more impact than the studies
> themselves.
>

I agree strongly in principle with this idea.  It's very important to
get things out in the open periodically and to say, hey, we've got a
problem with the way we do something and it needs to change.  But I
have also seen that sometimes the person doing this, even if it was
me, is not as involved as they could be in the day-to-day "what are we
doing right / wrong" battle.

It's very important to be on the front lines, and there are front-line
tools, usability scorecards possibly being one of them.  My experience
in engineering and customer service has taught me that having a rich
history of everything that happens with a system, a project, a client,
etc.. can be invaluable.

So, again, without knowing much about the idea of a usability
scorecard, I can say as an experienced software engineer, project
manager, and team leader that having information about what has been
done on a group of projects, to reflect on later, can be useful.

I know it can be frustrating to expose information which may be used
against you, and it should be obvious by now that I've found it useful
to work for myself so that I don't entirely have to answer to anyone
on this sort of ground, but the fact is that I do.  I have to answer
to clients and to members of F/OSS communities, and I try to make sure
those are the right sort of people to work with.

When you're working with the right people, and you can lower your
guard a bit, talking about what does and does not work is good for the
project, and that's good for everyone.

With the growing Intertwingularity of information and the trend toward
a ubiquitous web, as sir timbl calls it, keeping track of information
at one end of a project may help people at the other end save its
funding.  Maybe your usability work is great, and someone's
engineering projects are failing, and they come to the outstanding
conclusion one day that the difference between their successful
projects and their unsuccessful ones is that the successful ones had a
lot of usability testing.  In that specific case, you may *really*
want them to have detailed information on how that went.  Possibly
they could make the connection that the presence of the usability
testing process had a strong influence on buy-in when changes in plans
had to be made.  It's difficult to get a high-level view of things
when everyone is hiding their notes under their mattress at night.

As you say, it cuts both ways.  I suppose my only disagreement is that
the people involved in an endeavor can push the blade in one direction
or another.  If people are pushing it towards you, it's definitely not
time to start sharing metrics of your performance voluntarily, but if
you find yourself in the midst of people looking to incite change,
it's no time to push it towards them.

And, again, as you suggest, if there is a problem that some form of
tracking like usability scorecards might solve, it should be
approached by considering the alternatives as well.

Cheers! ;)

-- 
Justin Alan Ryan
Director, Interaction Architecture
Auxilium Group, inc.: Gnudyne(tm), Qutang Networks(tm)
http://www.gnudyne.com/ | +1-415-738-7513

"You don't lead by pointing and telling people some place to go. You
lead by going to that place and making a case." -Ken Kesey

    --------------------------------------------------------------
    Tip of the Day: Suspend your subscription if using auto replies
     CHI-WEB: www.sigchi.org/web POSTINGS: mailto:[log in to unmask]
              MODERATORS: mailto:[log in to unmask]
       SUBSCRIPTION CHANGES & FAQ:  www.sigchi.org/web/faq.html
    --------------------------------------------------------------
