Subject: Re: What Might a Writing Test Be?
From: Chris Hamilton <chamilton -at- GR -dot- COM>
Date: Fri, 27 Mar 1998 08:10:03 -0600
My concern about a test like this is that it's not what it purports to be: an
accurate model of the process that goes into writing good technical material.
It's a model of a subset of the technical writing process. But it doesn't
test some very important things: getting information from SMEs (and getting
along with them, too), writing material from scratch, handling reviews
without taking them personally, coping with late changes to the product,
etc., etc., etc.
I realize that there are other ways to test these things, but putting a
numeric grade on one piece of the pie tends to make that piece more
important. Your gut feeling about how someone handles a part of the job you
consider important can be trumped by the numbers sitting right there in
black and white. And those numbers, while masquerading as objective
measures, are really quite subjective.
I think there's some benefit to testing under certain circumstances, and
this approach might be very much worthwhile in some of them, but I'd tend
to stay away from something this elaborate.
Also, what do you mean by courtesy? I don't understand the application of
courtesy in technical writing.
Kelli Bond wrote:
> I agree with Tracy that proofreading tests are easier to score than
> writing tests. However, a defensible writing test can be set up and
> administered using a holistic scoring system (say, 0 to 4) tied to
> specific criteria:
>
> --clarity (0 to 4)
> --conciseness (0 to 4)
> --correctness (0 to 4)
> --content/completeness (0 to 4)
> --courtesy (0 to 4)
>
> The applicant is placed in a room with some source documents and
> computer equipment. He or she receives five to six hours to produce up
> to five pages of a user's guide. For the sake of time, I'd provide an
> existing document requiring moderate to heavy revisions plus three to
> five new, small sections of about one or two paragraphs each. Markups
> should be in the form of editorial comments/suggestions in the margin.
> Leave all mechanical and usage errors unmarked so that the writer can
> correct them on his or her own during the test.
>
> Scores under each category are defined on a grading sheet that the
> applicant doesn't see but that is distributed to scorers within your
> department. For example, "correctness" and "conciseness" might read:
>
> CORRECTNESS
>
> 4 pts. Three or fewer errors in the finished document
>
> 3 pts. Four to six errors...
>
> 2 pts. Seven to nine errors...
>
> 1 pt. Ten to twelve errors...
>
> 0 pts. Thirteen or more errors...
>
> CONCISENESS
>
> 4 pts. Three or fewer instances of passive voice,
> expletives, etc. (add your own bugaboos)
>
> 3 pts. Four to six instances...
>
> 2 pts. Seven to nine instances...
>
> 1 pt. Ten to twelve instances...
>
> 0 pts. Thirteen or more instances...
>
> For "completeness," list the pieces of information that need to be
> included in the finished product, count them up, and construct your
> scoring scale from there.
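>
> To make the banding mechanical, here's a rough sketch in Python; the
> function name and structure are mine, just mirroring the three-wide
> bands in the scales above:
>
>     def band_score(count):
>         """Map a raw error/instance count to a 0-4 band score,
>         following the three-wide bands in the scales above."""
>         if count <= 3:
>             return 4
>         elif count <= 6:
>             return 3
>         elif count <= 9:
>             return 2
>         elif count <= 12:
>             return 1
>         else:
>             return 0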
>
> You and another person should grade each test (which has been assigned a
> code number by your department support staffer or HR). All errors,
> including missing information, need to be carefully documented.
>
> If you were to use the criteria stated at the beginning of this post,
> the test would be worth 20 possible points. An average score of 14
> (70%), 16 (80%), or 18 (90%) could be your minimum standard for whatever
> the next step for the applicant might be--say, a second interview or
> hire/no hire.
>
> The scores assigned should be within two points of each other. If they
> aren't, get a third party to grade.
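>
> In code, the averaging and agreement check might look like this rough
> sketch (again, the names are mine, not a prescribed tool):
>
>     def combined_score(scorer_a, scorer_b, max_gap=2):
>         """Each argument maps criterion -> 0-4 score. Returns the
>         average of the two 20-point totals, or None when the totals
>         differ by more than max_gap (send to a third grader)."""
>         total_a = sum(scorer_a.values())
>         total_b = sum(scorer_b.values())
>         if abs(total_a - total_b) > max_gap:
>             return None  # disagreement: bring in a third grader
>         return (total_a + total_b) / 2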
>
> My information is based on procedures that many schools, colleges, and
> universities--and organizations such as the Educational Testing
> Service--use for standardized essay tests. Hope this is of some help!
>
> Take care,
> Kelli Bond
> Principal Consultant
> KBA/DesignWrite (Orange County, California)
>
--
Chris Hamilton
chamilton -at- gr -dot- com
-------------------------
My views, not my employer's.