Thanks to all the kind people who answered my question about the usefulness
of using a synchronous communication system such as chat or instant
messaging, as compared to face-to-face meetings, for collecting data for
experimental purposes.

In summary, the opinion was that there is a potential for loss of
information when using chat/IM/phone interviews, since body language is
hidden. So the importance of visual communication to a given study will have
a direct impact on how useful the collected data ends up being.

My belief is that this shouldn't have too much of an impact on the data in
my case, as I am trying to identify which features (and in what order)
experts use when identifying an object. However, if I do try chat/IM, I
will definitely also do some face-to-face interviewing in order to compare
the two sources!

-------------------------------------------
Answers to my email:
You might want to check what studies have been done with libraries that are
using chat reference systems now; that *might* fit in with what you want.
Blake Carver
------------------------
Media Richness: Continuum of Psychological Interactiveness
mail (the old-fashioned kind)
E-mail
Bulletin Board
Fax
Voice mail
E-chat
Audio annotation of files
Store-and-forward compressed video
Telephone Call
Live Board with point-to-point audio
Point-to-point video conference <56Kbps
Point-to-point video conference >112Kbps
Face-to-face meeting
The most effective is the face-to-face meeting and the least effective is
old-fashioned mail.
Hope this helps.
Regards
Gopal T V
------------------------
Regarding IM, I suggest (though maybe you already know it)
Nardi & Whittaker's paper (2000), "Interaction and Outeraction:
Instant Messaging in Action."
Hoping it could be useful,
federico
------------------------
We recently conducted a completely remote study. We recruited volunteers
in-house via e-mail, and the study was all online. Volunteers were contacted
by phone to make sure they matched the criteria, and then continued contact
was made via e-mail.
The study was done in-house for a preliminary beta test of a Web based
application. The volunteer pool was approximately 5,000 employees, and we
used about 30 participants for this phase.
The online study consisted of three parts: a demographic questionnaire, a
"journal" and a satisfaction survey. Demographic information was used
mainly to interpret individual results, since the pool of volunteers was
artificial (in-house).
Without getting into too much detail, users conducted tasks according to
instructions (delivered hard copy and available online) on the Web app, and
entered their feedback in the journal. All study results were dumped into
an Access db.
Because our testing efforts will eventually expand across multiple locations
and hundreds of employees, and we have limited resources and staff to
conduct the testing, we chose this method despite its limitations. We will
conduct further one-on-one and more traditional UX testing down the road.
We found this method to work extremely well to (a) find preliminary
usability problems and (b) get great user feedback.
Obviously, this method expedited the data collection. The test was designed
with all kinds of dropdowns and choices to guide users, which also created
nicely filterable data for us.
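[Editorial aside: a minimal sketch of the kind of structured journal data
described above, assuming hypothetical field names (participant_id, task_id,
outcome, comment) and using Python's sqlite3 in place of the Access database
mentioned; the constrained dropdown values are what make the results easy to
filter.]

    # Illustrative only: the field names and values are assumptions, not the
    # study's actual schema; sqlite3 stands in for the Access db mentioned.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE journal (
            participant_id TEXT,
            task_id        TEXT,
            outcome        TEXT,  -- value chosen from a dropdown
            comment        TEXT   -- free-text feedback
        )
    """)
    conn.executemany(
        "INSERT INTO journal VALUES (?, ?, ?, ?)",
        [
            ("p01", "task-1", "completed", "Clear instructions."),
            ("p02", "task-1", "abandoned", "Could not find the search box."),
            ("p03", "task-2", "completed", "Slow, but it worked."),
        ],
    )

    # Because the dropdown values are constrained, filtering is trivial.
    for row in conn.execute(
        "SELECT participant_id, comment FROM journal WHERE outcome = ?",
        ("abandoned",),
    ):
        print(row)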
------------------------
Why not just do a couple test runs and see if it works?
The biggest factor probably will be how comfortable your experts are with
chat. I've done a number of "conference calls" via IM and it can work well
with people who are used to IM. Even those who aren't generally do well,
but may get confused about who's typing (due to IM lag-time issues), etc.
The other thing I've noticed is that the UI of chat tends to encourage
short thoughts -- mainly because the typing window is small and not
resizeable.
So you might want to combine IM with questions via email, either before or
after. I've done interviews this way, and it has the advantage that people
can think about their answers, so oftentimes they're more thoughtful (or at
least more coherent). Depending on how structured the interview is, you
might want to try emailing questions first, then, after you get the reply,
using IM for a follow-up Q&A, and, if necessary, a final round of email
questions to follow up on the IM answers.
The final factor is how well your content can be described strictly
verbally. This cuts two ways: 1) You might have content where drawings or
pointing at things helps (for example, try describing a spiral without using
your hands). 2) IM and email lose body language, etc. These may or may not
be important; they're just things to consider.
George Olsen
------------------------
Your question implies that the words themselves are the only valuable
capture from a live user interview. However, I've always found that this is
rarely the case. The non-words, including pauses/searching for the right
term, inflection in the tone, pacing of speech, eye and body movements, etc.
all provide *visual* or non-word audio cues that would fail to be captured
in a chat environment, but which provide substantial additional context and
meaning over and above the words themselves.
You didn't say what kind of information you wanted to explore in your
interviews. Let's assume you're doing work modelling, and you're interested
in developing a task sequence model. When you interview the users (whether
live or via chat) the words capture *only* the literal task descriptions.
It's those other things mentioned above that cue you in to whether the user
thinks the tasks described are stupid or meaningless or frustrating or
effective or appropriate.
An additional interview technique that's really valuable and which the chat
environment would totally negate is observing the user *doing* the tasks at
the same time they describe how to do the tasl to you. The
differences/deviations between their words describing a task and their
actions in performing the task again provide the key insights and take-aways
that enable you to create your own subsequent "desired" sequence model which
offers an improvement or value add over the user's current sequence.
IMHO doing interviews via chat would completely negate any opportunity you
would have to discover any kind of meaningful insights as to what's good or
bad in the present practice, and eliminate your chance to identify places
where your model or design could improve upon or offer value-added
changes/deviations to current practice which offer opportunities for true
innovation.
Besides, unless you're doing some kind of market research on something like
feature preferences, there's no real advantage to cranking through a large
number of different users. Seems to me I remember a great article entitled
something like "Is 8 really enough?" which I believe is from Karen
Goldblatz's group, which said basically that n = 15 or 20 was plenty. You
should be able to do 20 onsite contextual interviews in two to three weeks
with a single team of 2 observers, depending on travel time.
------------------------
There is actually a large literature on the differences between
f2f and chat (and other media of communication). It lives
partly in the communications literature (look for Journal
of Communication, J. of Computer-Mediated Communication, etc.)
and partly in organisational behavior literature, and somewhat
in the HCI/CSCW and linguistics research lit. Ron Rice has an online
link collection,
http://www.scils.rutgers.edu/~rrice/ricelink.htm#COMPUTER-MEDIATED
COMMUNICATION, as does John December:
http://www.december.com/cmc/info/ .
My dissertation includes an entire chapter reviewing the different
studies of CMC (specifically for a study of chat). On interlibrary loan,
it's "The MUD Register," from Stanford in 1995. The book version is
available from Amazon, called "Conversation and Community:
Chat in a Virtual World." (The book has a much shorter lit review.)
If you want to collect data using chat, be aware: fewer words,
less detail, differences due to facility in typing, lack of body
language/tone/etc can make interpretation difficult. Nonetheless, for
simple content, it can be ok. For a truly well-rounded data collection,
I would supplement with f2f collection as well.
Lynn Cherny

-------------------------------------------
The original question was:
I am aware that there has been a lot of research comparing chat systems
with face-to-face meetings in a working environment, but I have never seen
any research that looked at the efficiency of chat or instant messaging for
collecting data for experimental purposes.

I am presently preparing an experiment where I aim to collect verbal data
from experts. If I used chat/IM, this would give me access to a lot more
distributed experts, and it would have the advantage over phone interviews
of not needing transcriptions.

But before I start, I want to make sure that there is no major problem with
this approach. (I am also considering doing the comparison between chat and
face-to-face if no one else has done it before).

If anyone is aware of research that compares chat/IM with face-to-face for
collecting verbal data from participants, please answer me directly, and I
will summarise for the group.


---------------------
Sylvie Noel, Ph.D. (psychologie), M. Sc. (ergonomie)
Chercheuse scientifique - Research Scientist
Centre de recherches sur les communications - Communications Research Centre
Industrie Canada - Industry Canada
(613) 990-4675
