> Meanwhile, measures like the Flesch-Kincaid (rather better than Gunning)
> are of some value when applied to general material (such as fiction and
> newspapers -- if anyone can tell the difference nowadays), but I really
> question their value for our kind of writing.
>
> Of course, the original posting wasn't asking "Is this any use?" but
> rather "How can I do it?" Still, if it's no use, why bother?
===
Michael -
You're right -- the readability tests we use were not designed for
complex, technical docs.
You ask, "Still, if it's no use, why bother?"
I've found that in hardware/software companies (the area I'm most
familiar with), having a "quick-and-dirty" assessment -- even if it
isn't precisely accurate -- is better than having no idea whatsoever of
the readability of the docs. Without any assessment, we tend to end
up with "Programs written by engineers...for engineers!"
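For anyone who wants to try that kind of quick-and-dirty check on their own
docs, here is a minimal Python sketch of the Flesch-Kincaid Grade Level
formula (0.39 * words-per-sentence + 11.8 * syllables-per-word - 15.59).
The syllable counter is a crude vowel-group heuristic I've assumed for
illustration, not anything from this thread, so treat the score as a
ballpark figure -- which is about all these measures promise anyway.

    import re

    def count_syllables(word):
        # Rough approximation: count runs of vowels, dropping a trailing
        # silent 'e'. Good enough for a quick-and-dirty readability check.
        word = word.lower()
        if word.endswith("e") and not word.endswith("le"):
            word = word[:-1]
        groups = re.findall(r"[aeiouy]+", word)
        return max(1, len(groups))

    def fk_grade(text):
        # Flesch-Kincaid Grade Level:
        # 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        words = re.findall(r"[A-Za-z']+", text)
        if not sentences or not words:
            return 0.0
        syllables = sum(count_syllables(w) for w in words)
        return (0.39 * len(words) / len(sentences)
                + 11.8 * syllables / len(words)
                - 15.59)

    if __name__ == "__main__":
        sample = ("Click Save to store your changes. "
                  "The configuration file is rewritten automatically.")
        print(round(fk_grade(sample), 1))

Run it over a chapter's worth of text rather than a sentence or two; the
formula is only meaningful as an average, and short technical fragments
(UI strings, command syntax) will skew it badly.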