Usability testing: more than 5 is counterproductive?
Subject: Usability testing: more than 5 is counterproductive?
From: "Hart, Geoff" <Geoff-H -at- MTL -dot- FERIC -dot- CA>
To: "TECHWR-L" <techwr-l -at- lists -dot- raycomm -dot- com>
Date: Thu, 15 Mar 2001 10:59:04 -0500
Steve Hudsen reported: <<According to Jacob Nielson, a usability study with
more than 5 subjects is counterproductive.>>
You're actually misquoting Jakob. What he actually says is that you can
perform extremely useful usability testing with very few testers, and
repeating these small-scale tests iteratively can provide very good results,
much faster than with a formal full-scale usability test. That's
particularly true if you don't have the time or budget for the larger tests,
and it's certainly true that the larger tests can be unproductive or
economically indefensible if poorly designed.
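For context, the argument rests on the problem-discovery model Nielsen and
Landauer published: the share of problems found by n testers is roughly
1 - (1 - L)^n, where L is the average proportion of problems a single tester
uncovers (Nielsen typically cites a rate of about 31%). A minimal Python
sketch, assuming that commonly cited rate:

    # Problem-discovery model (Nielsen & Landauer). The 0.31 per-user
    # discovery rate is the figure Nielsen commonly cites, not a constant.
    def problems_found(n_users, discovery_rate=0.31):
        """Expected fraction of usability problems found by n_users testers."""
        return 1 - (1 - discovery_rate) ** n_users

    for n in (1, 3, 5, 10, 15):
        print(f"{n:2d} testers: {problems_found(n):.0%} of problems")
    # Five testers already turn up roughly 85% of the problems, which is
    # why several small, iterative tests beat one large test.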
<<Personally, I find this flies in the face of accepted statistical
analysis.>>
Yes and no. Statisticians will certainly tell you that increasing your
sample size increases the robustness of the test, but they'll also point out
that the Student's t-test and other well-accepted tests of significance were
designed specifically for small sample sizes. Moreover, it's well known that
"stratifying" your sample to focus on a single, distinct group, lets you use
much smaller sample sizes than if you were to take a shotgun approach and
test the entire population of users.
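To see why small samples aren't automatically hopeless, here's a quick
sketch of a two-sample Student's t-test on five observations per group;
the timing data are invented purely for illustration:

    # Hypothetical task-completion times (seconds) for two designs,
    # five testers each. Invented numbers, purely illustrative.
    from scipy import stats

    design_a = [48, 52, 55, 61, 49]
    design_b = [63, 70, 58, 66, 72]

    t_stat, p_value = stats.ttest_ind(design_a, design_b)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
    # Even with n = 5 per group, the difference here is statistically
    # defensible, provided each group is reasonably homogeneous,
    # which is exactly what stratifying your sample buys you.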
That's a quibble, though. The flaw in Nielsen's approach is that if you
apply it sloppily (always a risk if you're a rushed and overworked
techwhirler) or fail to repeat it iteratively until you stop turning up
problems, the approach is likely to leave important usability problems
undiscovered. A classic example: Nielsen and colleagues recently published
an article in PC Magazine on how they did one of these quick-and-dirty
usability evaluations on the www.ideas.com site and made some major
improvements. I agree that the improvements were major, yet I still found
the resulting site difficult to use. In particular, it seems to make it
difficult or impossible to perform some tasks that I'd consider common and
thus in need of an easy way to perform; for example, I created an idea for
sale in the "targeted audience" category then changed my mind, and found no
way to reassign it to the "open audience" category. (Those may not be the
actual terms; I'm working from memory here. There may also be a way to do
this, but I didn't find it. I confess to being in a hurry and willing to be
corrected on this.)
--Geoff Hart, FERIC, Pointe-Claire, Quebec
geoff-h -at- mtl -dot- feric -dot- ca
"User's advocate" online monthly at
www.raycomm.com/techwhirl/usersadvocate.html
"The most likely way for the world to be destroyed, most experts agree, is
by accident. That's where we come in; we're computer professionals. We cause
accidents."-- Nathaniel Borenstein