Subject: Re: determine success
From: "Caroline Briggs (Pachaud) (Exchange)" <carolinp -at- EXCHANGE -dot- MICROSOFT -dot- COM>
Date: Tue, 22 Dec 1998 14:23:05 -0800
When we do doc usability testing, we ask subjects to perform a set of tasks
using the documentation. Typically we do this at a couple of points: pre-beta
2, for example (while there's still time for course correction), and pre-final
beta (as a sanity check).

One of the main things you're testing is the completeness and usefulness of
the navigation, the index, and the table of contents. Obviously, if your user
group can't find information that you know is there, you've identified a clear
problem! The other thing this tends to expose is audience miscalculation: if
the information is there but is buried in terms your user group can't
understand, that's another type of problem. It's actually a pretty good way to
test your assertions. Think you have too much conceptual information? Too
little? Need roadmaps? Your user group will be more than happy to tell you.

One other thing: we found that using program managers as usability subjects
worked really well. They had enough familiarity with the products that they
weren't totally lost, but not enough hands-on time with the interface to make
the test unfair.
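
(Not part of the original post, just a minimal sketch of how one might tally
the outcomes of such a session. The task descriptions and the navigation-aid
categories here are hypothetical, not anything TECHWR-L or the poster
prescribes.)

    # Illustrative only: tally how subjects found (or failed to find)
    # information during a doc usability session.
    from dataclasses import dataclass, field

    @dataclass
    class Task:
        description: str
        # One entry per subject: which aid got them there, or "failed".
        found_via: list = field(default_factory=list)

        def success_rate(self) -> float:
            attempts = len(self.found_via)
            successes = sum(1 for v in self.found_via if v != "failed")
            return successes / attempts if attempts else 0.0

    tasks = [
        Task("Locate installation prerequisites"),
        Task("Find the error-code reference"),
    ]

    # Record three subjects' results for each task.
    tasks[0].found_via += ["index", "TOC", "failed"]
    tasks[1].found_via += ["failed", "failed", "index"]

    for t in tasks:
        print(f"{t.description}: {t.success_rate():.0%} "
              f"success ({t.found_via})")

A low success rate on a task whose answer is in the book points at the
navigation problem described above; a high rate of "failed" entries across
subjects for terminology-heavy tasks points at the audience miscalculation.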