TechWhirl (TECHWR-L) is a resource for technical writing and technical communications professionals of all experience levels and in all industries to share their experiences and acquire information.
For two decades, technical communicators have turned to TechWhirl to ask and answer questions about the always-changing world of technical communications, such as tools, skills, career paths, methodologies, and emerging industries. The TechWhirl Archives and magazine, created for, by and about technical writers, offer a wealth of knowledge to everyone with an interest in any aspect of technical communications.
Re: Techwhirling ain't a science? Maybe it should be!
Subject: Re: Techwhirling ain't a science? Maybe it should be!
From: John Jeuken <John -dot- Jeuken -at- asml -dot- nl>
To: TECHWR-L <techwr-l -at- lists -dot- raycomm -dot- com>
Date: Thu, 28 Oct 1999 09:28:57 +0200
Geoff has brought up an interesting point about the similarities
between techcomm and the scientific method.
<snip>
> The assertion that technical communication isn't a science
> leads me to wonder why not. After all, the definition of
> scientific inquiry is as follows:
> 1. Based on an existing body of knowledge, form a
> hypothesis.
> 2. Test that hypothesis under known, (semi-?) controlled
> conditions.
> 3. Revise that hypothesis if necessary based on the results.
> 4. Repeat as needed ("replication", "independent
> confirmation", and "iteration").
If anything, technical communication can be regarded as a social
science. It's a people thing: without the people, there is no
communication in technical communication. And when you're dealing
with people, there's no such thing as an exact science.
One point that is usually overlooked when defining
scientific knowledge is that "independent confirmation"
doesn't really validate that knowledge as much as "independent
refutation" does. The fact that you can confirm a hypothesis
isn't as important (or informative) as the fact that you can't refute it.
The moment a hypothesis is refuted (proven wrong), it is invalid.
This is what makes techcomm contentious: 'standards' are constantly
being refuted. For instance, if studies show that the MAJORITY of the
research population finds all caps slower to read, that means there
is a MINORITY that at least doesn't care, or could even find them
easier to read. So what would writing for that minority mean?
(Try telling your boss/client s/he's part of a minority ;-)
Academic researchers often focus only on small sections of our potential
audience and are always applying the law of averages.
This is why your own, specific audience analysis should validate what you do.
> 1. Based on audience analysis, determine which of several
> "best practices" and "standards" should apply to our particular
> audience.
> 2. Create a document based on the hypotheses in 1 and
> perform usability tests under a variety of (semi-?) controlled
> conditions.
> 3. Revise the document if necessary based on the results.
> 4. Repeat as needed ("replication", "independent
> confirmation", and "iteration").
>
> So maybe the problem isn't that technical writing isn't a
> science, but rather that we're not applying the scientific
> methodology we already know we should use?
<snip>
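For what it's worth, Geoff's four steps can be sketched as a simple
test-and-revise loop. This is only a toy illustration: the "practices"
and the usability_test() stub are invented here, and in real life step 2
means putting the document in front of actual readers, not a function call.

```python
# Hypothetical sketch of the four-step procedure quoted above.
# usability_test() is a stand-in for real testing with real users.

def usability_test(practices, audience):
    """Stub: return the practices that 'failed' for this audience."""
    return [p for p in practices if p not in audience["works_for_them"]]

def iterate_document(audience, candidate_practices, max_rounds=4):
    # Step 1: audience analysis picks the starting set of best practices.
    practices = list(candidate_practices)
    for _ in range(max_rounds):                           # Step 4: repeat as needed
        failures = usability_test(practices, audience)    # Step 2: test the draft
        if not failures:
            break
        # Step 3: revise based on the results (drop the refuted practices).
        practices = [p for p in practices if p not in failures]
    return practices

# Invented example audience and practices:
audience = {"works_for_them": {"task-based headings", "numbered steps"}}
result = iterate_document(
    audience,
    ["task-based headings", "ALL CAPS warnings", "numbered steps"])
# result keeps only the practices the tests failed to refute
```

The point of the sketch is the shape of the loop: you never "prove" a
practice right, you just keep the ones your audience hasn't refuted yet.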
I would say: take the above four-step procedure that Geoff has defined
as your 'best practice' and walk (don't run) with it.