Subject: Re: Value added
From: Ben Kovitz <apteryx -at- CHISP -dot- NET>
Date: Thu, 4 Feb 1999 13:02:47 -0700
Kersten Richter wrote:
>Hello, I'm currently learning about the value added by technical documents
>in one of my classes. My professor told us that this value should always be
>measurable. My question is--how do you measure the value added by a
>technical document which is being used as a "show piece"? For example, a
>web page designed to show off? How do you measure the client satisfaction?
Good question.
Methinks your professor might be a dogmatist. There's lots of stuff that's
plenty real but that we don't know how to measure.
A web page designed to show off is actually advertising, and might or might
not be a "technical document." But even a user's manual serves public
relations, in that if it helps users get useful results with the software,
the company gets good word of mouth, sales of new products to that
customer, sales of upgrades, etc. Giving the manual a polished look
certainly costs something, in terms of the time and equipment needed to
produce it and the difference in salary between tech writers who can
create a polished look and those who can't.
Theoretically, we should be able to measure the value of public relations
by seeing whether sales increase or decrease and by how much. But in
reality, sales are a combined effect of billions of causal factors, most of
which are not in our control. There's simply no way to isolate the effect
of the user's manual on your next year's revenue from the effects of the
other few billion causal factors.
I saw an interesting TV program last week about the frogs that say
"Bud-Weis-Er." At the end of one of their Superbowl commercials, a lizard
asks rhetorically, "How is this supposed to sell beer?" The TV pundit had
the answer. He said that before the frog ads came out, Anheiser-Busch's
dominance in the market had led to a feeling that they were a kind of
faceless, corporate monolith. People choose their drinks on the basis of
much more than just the taste, of course. It's sort of a self-image thing,
associating yourself with the culture that surrounds the drink.
The frogs replaced the stodgy old Clydesdale image, making the company seem
human and friendly. You could drink a Bud without feeling like you were
part of that corporate-cog, bottom-line-is-everything mentality.
The value of this change is probably enormous, but anyone who thinks they
can empirically measure it with any degree of accuracy is either a
dogmatist or a charlatan. And while image is typically less important in
software, it still has effects (you think Microsoft would be getting
harassed by the government if they had Apple's public image?), and the
manual often projects an image almost as strongly as the product. User's
manuals for cars have hardly any effect on the company's image. But how do
you measure the difference between these effects on image? It's obvious
that there's a big difference, but I don't know any "image units" in terms
of which to measure it, nor any way to trace its effects to sales,
lawsuits, and hostile legislation.
One thing that you can measure, if you've got the resources, is how much
time you save your users. You can do this by getting two groups of users
and giving the manual to only one group, then measuring how long it takes
each group to do various things.
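(If you actually run that experiment, the arithmetic at the end is
trivial. Here's a minimal sketch in Python; every timing below is
invented purely for illustration:)

    # Hypothetical task timings, in hours, from the two groups:
    # one group got the manual, the other didn't.
    with_manual = [0.5, 0.7, 0.6, 0.4, 0.8]
    without_manual = [1.2, 1.5, 0.9, 1.3, 1.1]

    mean_with = sum(with_manual) / len(with_manual)
    mean_without = sum(without_manual) / len(without_manual)

    # Average hours the manual saves per task.
    print("Hours saved per task:", mean_without - mean_with)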
But of course, if someone were to make a decision about whether to include
or cut the manual by seeing if this expression is greater than zero:
(user's hourly salary * hours saved by manual) - (cost of writing manual)
he'd be committing the fallacy of attending only to what we know how to
measure, merely because we know how to measure it. A huge (and maybe even
partly measurable) effect of a good manual is to reduce calls to the tech
support department. The manual might save time but anger the users or
condescend to them, perhaps increasing turnover or making them shop
around for other
products. With the better manual, they might find better *uses* for the
software, which the above experiment wouldn't catch. (This is *not*
unusual.) Every manual that you make sets a precedent for quality, and is
often imitated by newbie tech writers--whether the techniques in it are
good or bad. So a good manual not only helps users, it helps future
manuals (and the users of those manuals). A better manual might lead the
users to come up with suggestions that they wouldn't have otherwise, which
could greatly improve the product on the next release. How much is that
worth? How do you measure these things? For that matter, a worse manual
might generate more suggestions for improving the product.
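(Spelled out, with made-up figures, the naive test looks like this; note
that nothing in it accounts for any of the effects above:)

    # The break-even expression from above, with invented numbers.
    hourly_salary = 30.0      # user's hourly salary, in dollars
    hours_saved = 800.0       # total hours the manual saves its users
    cost_of_manual = 20000.0  # cost of writing the manual

    value_added = hourly_salary * hours_saved - cost_of_manual
    print("Keep the manual?", value_added > 0)
    # Support calls, goodwill, better uses of the software, and
    # precedents for future manuals appear nowhere in this number.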
Another problem with performing that experiment is that it might well cost
more to perform the experiment than the manual gains in saved time.
And finally, I don't buy the idea that all good and bad is measurable in
dollars and cents. Suppose that you're at a telephone company, writing a
manual for your fellow employees (i.e. a completely internal product). A
manual written in the usual bureaucratese, of the sort that gives you a
headache to decode, might be understandable, but it makes people feel like
they're working a tedious job where it doesn't make any difference what
happens. A manual written on the assumption that the reader should
exercise common sense and judgement contributes to people's enjoying their
jobs and their lives. That probably does influence the bottom line in ways
that we can't accurately trace, but I think that even if it doesn't, doing
crummy work wastes your own life as well as the lives of the people who
have to read (or throw away) the documents you make. Anthropologists know
a lot more about "value added" than economists.
To sum up: every little thing we do counts, but we usually can't know
exactly how much it counts or even all the ways in which it counts.
By the way, I'd be interested to hear how your professor answers the sorts
of points that Tim Altom and I have made.