CHI-WEB Archives

ACM SIGCHI WWW Human Factors (Open Discussion)


Mon, 11 Oct 2004 19:51:06 +0100
Amy Hogan <[log in to unmask]>
I had two replies to the original post.  Many thanks to Clay and Josephine for
their insights and comments...

> Does anyone know of an easy/short computer-based task that will make a group
> of users be self-reflexive about the strategies they are employing?
> Specifically, how can you make users think about the navigation strategies
> they use and report back either during or immediately after the task?  Also,
> does anyone know of two or three different navigation interfaces that will
> help in this process?

I am actually going through this exact exercise at my organization.
Technology decisions we have made have opened up a number of new
navigation options and interaction models to us, and we are now
trying to narrow them down to the best candidates.
The following are the initial contenders we were
looking at. I placed an asterisk next to the interaction/navigation
models we are prototyping and testing.

  *Desktop-like -- A launcher interface with a workspace that allows
placement of shortcuts to application modules and objects, including
lists.

  Application Suite -- Individual applications that would be
"installed" to the user's system and would be very atomic.

  Power App -- A super-flexible interface that would cater largely to
super-users. It allows users to query for the object(s) they want to
work with, then perform any or all actions they have permission to
perform on those objects.

  *Tabbed Interface -- A tabbed web-like navigation which would narrow
the scope of the application context to a specific application
business module.

  Control Panel (Iconic) -- An interface akin to the OSX Control
Panel. Selecting an icon will set the application context and open the
selected business module in the Panel.

  MDI -- A standard MDI interface within a window. Toolbars would
expose functions for the focused sub-window.

  *Eclipse Perspectives -- A modular interface that allows the
stitching of application panels into workspaces. Out of the box, a
core set of workspaces will be defined to meet the needs of specific
modules. Users can create their own workspaces that can be saved and
shared. A simple list is used to select the current workspace.
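The Perspectives notion described above can be sketched roughly as
follows. This is a minimal illustration only; all names are
hypothetical and this is not Eclipse's actual API.

```python
# Hypothetical sketch of the "perspective"/workspace idea: named
# workspaces stitch panels together, a core set ships as defaults,
# users can save their own, and a simple lookup selects the current
# workspace. Names are illustrative, not Eclipse's real API.
from dataclasses import dataclass


@dataclass
class Workspace:
    name: str
    panels: list  # application panels stitched into this workspace


class WorkspaceManager:
    def __init__(self, defaults):
        # Out of the box, a core set of workspaces is predefined.
        self.workspaces = {w.name: w for w in defaults}
        self.current = defaults[0] if defaults else None

    def select(self, name):
        # A simple list/lookup sets the current workspace.
        self.current = self.workspaces[name]
        return self.current

    def save(self, workspace):
        # Users can create and save their own workspaces.
        self.workspaces[workspace.name] = workspace


mgr = WorkspaceManager([Workspace("Orders", ["order-list", "order-detail"])])
mgr.save(Workspace("My view", ["order-list", "reports"]))  # user-defined
mgr.select("My view")
```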

As we move forward, I think that either the Desktop or the Tabbed
interface is going to prevail when it comes to ease of use and
understandability. It is likely we will inherit the Eclipse
Perspectives workspace notion and adapt it to fit into whichever
navigation model we choose.
I feel pretty strongly that simply asking users for feedback will
only yield information at the users' level of knowledge.

You are more likely to get data you can tabulate and rely on from
formal usability-testing methodologies. They require a little more
preparatory work, but in the long run your results will be far more
reproducible, consistent, and reliable.

Hope that is helpful!

-Clay Newton

If I am understanding your requirements correctly, I would recommend
Keith Instone's navigation stress test.

I once evaluated a comparison of advanced search pages that employed
three different behavior models. The group sponsoring the project
gave me three prototypes. I randomized which prototype each user saw
first and asked users to conduct the same search on all three. I had
a fairly large sample (for the business world, at any rate) of over
15 users, as I recall. I evaluated time on task only for each user's
first experience; from the other two interfaces, I gathered only
qualitative data, such as user preference.
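A study like this can be counterbalanced so that the first-exposure
time-on-task data is spread evenly across prototypes. A rough sketch,
assuming three hypothetical prototypes and 15 participants (the names
and counts are illustrative, not from the original study):

```python
# Counterbalancing presentation order for a within-subjects comparison:
# cycle through all orderings of the prototypes so each one appears
# first roughly equally often, since only the first exposure yields a
# clean time-on-task measurement.
import itertools
import random

PROTOTYPES = ["A", "B", "C"]  # hypothetical prototype labels


def assign_orders(n_participants):
    """Return one presentation order per participant, cycling through
    all 3! = 6 orderings of the prototypes."""
    orders = list(itertools.permutations(PROTOTYPES))
    random.shuffle(orders)  # randomize which orderings get the extras
    return [list(orders[i % len(orders)]) for i in range(n_participants)]


orders = assign_orders(15)
# Only the first prototype in each participant's order contributes a
# time-on-task measurement; the later two yield qualitative data only.
first_exposures = [order[0] for order in orders]
```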

My goal was to make a recommendation regarding the best search experience.
By comparing the quantitative and the qualitative data, I could declare one
of them a clear winner.

Here's the best part: these prototypes did not produce results. I
could measure users' response to the search screen without it being
muddied by disappointment with the system's results -- which weren't
the goal anyway. I needed to understand which behavior model made the
most sense to users.
Perhaps you might be able to employ a similar experiment utilizing various
forms of navigation.  Although I did not have the luxury of a control group,
that might be valuable as well.

Josephine Scott