Subject: Re: Measuring effectiveness/quality of docs
From: Dick Margulis <ampersandvirgule -at- WORLDNET -dot- ATT -dot- NET>
Date: Fri, 13 Feb 1998 16:16:50 -0500
Beth Kane wrote:
>
> Does anyone have any ideas about how we can "measure" the
> effectiveness/quality of our information development work efforts? For
> example, are there ways we can quantify the number of errors in docs,
> customer usability/satisfaction, etc.?
>
That's a multilevel question with a multilevel answer. Bear with me.
When you ask how Suzie ranks against the field with respect to the
number of spelling errors she makes, that is a verification question.
Verification has to do with measuring the degree to which the product
meets our internally defined requirements (okay, I'm simplifying; don't
hit me).
By doing a careful audit of Suzie's work, we can verify that she meets
whatever standard we have set for spelling errors. With sufficiently
clever analysis we can also estimate the cost to the organization for
each such error, based, perhaps, on the cost of detecting and correcting
it.
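To put rough numbers on that, a back-of-the-envelope calculation might
look like the following Python sketch. Every figure in it is invented
for illustration; plug in your own audit counts and cost estimates.

    # Hypothetical audit figures -- substitute your own counts and costs.
    words_audited = 50000     # words in the sample of Suzie's docs
    spelling_errors = 12      # errors found in the audit
    cost_per_error = 35.00    # assumed cost to detect and correct one error

    errors_per_1000_words = spelling_errors / (words_audited / 1000.0)
    total_cost = spelling_errors * cost_per_error

    print("Errors per 1,000 words: %.2f" % errors_per_1000_words)
    print("Estimated cost of those errors: $%.2f" % total_cost)
    # Compare errors_per_1000_words against whatever standard you set.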
None of that, though, has anything to do with the effectiveness of
Suzie's writing. Effectiveness is not a verification question; it is a
validation question.
Validation has to do with measuring the degree to which the product
meets the customer's needs.
So we might look at metrics such as the number and length of calls to
the help desk per copy sold. Or we might look at hours of software
testing versus hours of software development, or at the number of error
conditions detected during testing as a function of lines of code.
(These would have to do with evaluating the effectiveness of the
documentation, given an unvarying staff of developers and testers. If
we were asking whether the new programmer could find his way out of a
paper bag, that would be a verification question.)
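Again purely for illustration, the calls-per-copy metric across two
releases might be computed like this (the figures are made up):

    # Hypothetical figures -- replace with your own help-desk logs and
    # sales reports.
    releases = [
        ("1.0, old manual", 840, 12000),   # (release, calls, copies sold)
        ("1.1, new manual", 310, 9500),
    ]

    for name, calls, copies in releases:
        rate = calls / float(copies)
        print("Release %s: %.3f help-desk calls per copy sold" % (name, rate))

A falling rate across releases, with everything else held steady, is
(weak) evidence that the documentation is doing its job.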
The situation is similar in marketing communication. We can verify that
the brochure is properly printed, contains no errors of fact, and
conforms to our style. We can validate how effective it is by comparing
sales in areas where we distribute the brochure against areas where we
don't.
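With invented numbers again, that comparison boils down to a simple
lift calculation:

    # Hypothetical sales figures for two matched regions; the arithmetic,
    # not the numbers, is the point.
    sales_with_brochure = 125000.0    # test region
    sales_without_brochure = 98000.0  # control region

    lift = (sales_with_brochure - sales_without_brochure) / sales_without_brochure
    print("Apparent sales lift from the brochure: %.1f%%" % (lift * 100))
    # Caveat: the regions must be genuinely comparable, or the "lift"
    # measures the regions, not the brochure.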
Different verification and validation tools apply in different
situations. A typical tool is a survey form/problem report form at the
back of a manual, inviting the reader to express an opinion as to the
utility and effectiveness of the document. It would take some time to
develop enough data this way to be able to compare one document's
effectiveness against another, and there are many statistical pitfalls
that need to be avoided. But such a form can be used to validate the
effectiveness of your editing/error-detection methods, too.
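One of those pitfalls is reading too much into a handful of returned
forms. A quick sketch, with invented numbers, using a Wilson score
interval shows how wide the uncertainty really is:

    import math

    # Hypothetical returns: 18 of 25 respondents rated the manual "useful."
    successes, n = 18, 25
    z = 1.96  # 95% confidence

    p = successes / float(n)
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4.0 * n * n))

    print("Observed: %.0f%% rated the manual useful" % (p * 100))
    print("95%% confidence interval: %.0f%% to %.0f%%"
          % ((center - margin) * 100, (center + margin) * 100))

With only 25 responses, the true proportion could plausibly lie
anywhere from about 52% to 86%, which is why it takes a while to
accumulate data you can trust.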
Anyway, that's a start at answering your question. Hope it helps.