TechWhirl (TECHWR-L) is a resource for technical writing and technical communications professionals of all experience levels and in all industries to share their experiences and acquire information.
For two decades, technical communicators have turned to TechWhirl to ask and answer questions about the always-changing world of technical communications, such as tools, skills, career paths, methodologies, and emerging industries. The TechWhirl Archives and magazine, created for, by and about technical writers, offer a wealth of knowledge to everyone with an interest in any aspect of technical communications.
Quality control processes evolve to solve quality problems. Formalizing
the solution is just one way to ensure that the job gets done, and often
not the best one, because it can lead to organizational paralysis and (as
Andrew Plato notes) can stifle the freedom and creativity necessary to
produce truly good results. For example, some of the
researchers I work with rarely present correct final calculations in the
reports I edit, most often because they import detailed spreadsheet data
into the reports (which present only summary data) and forget to redo the
calculations to check for rounding errors. We developed a formal "technical
check" method, which lists the kinds of things internal reviewers should
look for (such as numbers), but our reviewers are human, and not all of them
can be bothered to follow the checklist. (The long-term solution would be to
develop a spreadsheet that automatically rounds the results of calculations
to the correct number of significant figures, thereby minimizing the
opportunity for errors to arise in the first place.)
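That long-term fix is straightforward to sketch. Here is a minimal Python illustration of rounding to a chosen number of significant figures (the function name and default are my own, not from any actual spreadsheet; in a real spreadsheet the same idea would be a ROUND/LOG10 formula):

```python
import math

def round_to_sig_figs(value, sig_figs=3):
    """Round a number to the given count of significant figures."""
    if value == 0:
        return 0.0
    # Find the exponent of the leading digit, then scale so that
    # rounding to an integer keeps exactly sig_figs digits.
    exponent = math.floor(math.log10(abs(value)))
    factor = 10 ** (sig_figs - 1 - exponent)
    return round(value * factor) / factor

# e.g. round_to_sig_figs(123.456, 3) -> 123.0
```

Applying this at the point where summary data are imported into the report would prevent the rounding discrepancies from arising at all, rather than relying on reviewers to catch them.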
Were we one of those organisations that slavishly follow an atherosclerotic
quality checklist, someone would proceed to abuse said reviewers until they
got with the program or went off in search of easier work. We aren't, so
nobody does (though occasional snide comments remind people to check their
numbers). As a result, small math errors sometimes try to slip through the
cracks. They don't make it into print because I now routinely check their
math as part of my substantive edits. The research director provides a
reality check on the numbers before he approves a report for publication.
Thus, our process ensures that we have at least two checks (me and the
research director), plus nominally a third (the internal review); it
doesn't force us to blindly follow the sequence of events listed in the
"official" policy (e.g., sometimes I do the check after internal review,
sometimes before, depending on when my editing gives the most bang for the
buck for the review process as a whole).
So the key is to have a process that _identifies_ what checks and balances
are necessary, but to provide enough freedom that the spirit of the policy
(and not its letter) is honored.
"Technical writing... requires understanding the audience, understanding
what activities the user wants to accomplish, and translating the often
idiosyncratic and unplanned design into something that appears to make
sense."--Donald Norman, The Invisible Computer