TechWhirl (TECHWR-L) is a resource for technical writing and technical communications professionals of all experience levels and in all industries to share their experiences and acquire information.
For two decades, technical communicators have turned to TechWhirl to ask and answer questions about the always-changing world of technical communications, such as tools, skills, career paths, methodologies, and emerging industries. The TechWhirl Archives and magazine, created for, by and about technical writers, offer a wealth of knowledge to everyone with an interest in any aspect of technical communications.
<<Regulatory requirement. If we write in our product spec that the user
manual for homecare ventilatory-assist equipment will be written at a
7th grade level, the FDA then requires us to live up to that (a bit
like an ISO requirement). >>
Don't know anything about FDA regulations, but with that caveat: The
simple solution, then, is not to mention the grade level at all in your
spec--that's patronizing to those who will be reading this. ("Oh great.
They dumbed it down so _I_ can understand.") Instead, aim for
simplicity, and hire a human to ensure that it's as simple as you
think. Even better: state in your specs that you will ask a few members
of your target audience to review your documentation (a reality check)
rather than relying on a synthetic metric to do the work for you. Then,
rather than having to tell the FDA "we used a software tool", you can
say "we tested it with real customers and revised it until we were
sure it worked."
Far, far more effective. And if you're in the U.S., you'll have much
lower risk of a liability lawsuit because you used real people to test
your documents. Isn't that why the FDA insists on three phases of
clinical trials before releasing a drug on us?
<<I'm not so sure about that. The basics of the Flesch-Kincaid scale
are syllables per word and words per sentence. Those criteria are
certainly valid, even if one tries to beat the system by feeding it
nonsense. In other words: The scale can't read, but that doesn't mean
it's not a useful mathematical equation!>>
You're not sure because you didn't try my test, and because you haven't
read the research. <g> Tests such as Flesch are used because there's
this neurotic compulsion in a certain group within the educational
community to create metrics. Metrics are great because they don't
require any thought, and because you don't have to demonstrate that
they're meaningful. They're numbers, so they _must_ be meaningful.
Sadly, that's not always true.
Think of it this way: There are two components to judging whether text
is simple to understand and communicates effectively. First, there are
purely mechanical measures: all else being equal, it's true that
shorter, less convoluted sentences with shorter and more familiar words
are easier to read.
Unfortunately, "all else" is never equal, which is where the second
part comes in. No current software can judge the quality (simplicity,
correctness, consistency, clarity, and ability to meet the audience's
needs) of the semantic content. A sentence that easily passes the
Flesch test can fail on each and every one of these, and that's hardly
a desirable outcome. The semantic content is far more important than
the purely mechanical aspects of the text, which is why I proposed that
you try randomizing the words: It's always easier to read long, complex
sentences that are well written than short, simple stretches of
gibberish.
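The randomization test is easy to run yourself. Here's a minimal Python sketch, assuming the standard Flesch Reading Ease formula (206.835 - 1.015 x words-per-sentence - 84.6 x syllables-per-word) and a naive vowel-group syllable counter; real readability tools use more careful syllable rules, but the point survives: because the formula only counts words, sentences, and syllables, a sentence and its shuffled-into-gibberish twin score identically.

```python
import random
import re

def count_syllables(word):
    # Naive heuristic: count groups of consecutive vowels (min. 1).
    # Real implementations use dictionaries or fuller phonetic rules.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    # Treat each run of '.', '!', or '?' as a sentence boundary.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

# A readable sentence versus the same words in random order: the
# formula can't tell them apart, because it never reads for meaning.
original = "The patient should check the tubing for leaks every morning."
shuffled = original.rstrip(".").split()
random.shuffle(shuffled)
gibberish = " ".join(shuffled) + "."

print(flesch_reading_ease(original))
print(flesch_reading_ease(gibberish))  # identical score
```

The example sentence and the word-shuffling step are mine, chosen only to illustrate the test; any text run through the same shuffle will produce the same result.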
--Geoff Hart ghart -at- videotron -dot- ca
(try geoffhart -at- mac -dot- com if you don't get a reply)