Subject: Summary of Test Plan Ideas
From: "Lois Patterson" <lois -at- dowco -dot- com>
To: "TECHWR-L" <techwr-l -at- lists -dot- raycomm -dot- com>
Date: Wed, 11 Jul 2001 15:02:42 -0700
I have written the first draft of a test plan for the documentation and
online help. I think the term "test plan" is likely too formal for what I
wrote up. I essentially produced an itemized list like that described by
Bill Swallow below, with a few more questions about the content and
structure. Neither the online help nor the print documentation will "break"
the system, so that was not an issue. However, the QA department will be
testing the help. I wanted to make sure the testers had an idea of the
structure, content, and formatting issues that should be considered, in
addition to their strong focus on technical accuracy.
Here is a summary of the responses I received, with the author's name in
parentheses below each excerpt.
Lois Patterson
==================================================================
I was tasked with this very thing last summer, and I started with my QA
department. Between their expertise in writing test plans/scripts and my
knowledge of what I wanted tested in the docs and help, we were able to come
up with some good plans.
(Jennifer Delmerico)
==================================================================
The purpose of testing is to break the system. I don't see how print
documentation can break the system. There are ways to make help break the
system. In those cases, you test the context calls for all resources, not
just resources with topics. And, you test all application invocations of the
help. Use a testing tool like SQA_TeamTest. A QA person should be able to
write scripts for these tests. If you have any other functionality in your
help like JavaScript rollovers and such, that should be tested as well.
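One way to script the context-call check is to cross-check every context ID
the application can invoke against every topic ID the help system defines.
Here is a minimal sketch in Python, assuming both lists can be exported to
plain text files; the file names are hypothetical.

# Minimal sketch: cross-check application context IDs against help topics.
# "app_context_ids.txt" and "help_topic_ids.txt" are hypothetical exports,
# one ID per line.

def read_ids(path):
    """Read one ID per line, ignoring blank lines and # comments."""
    with open(path) as f:
        return {line.strip() for line in f
                if line.strip() and not line.startswith("#")}

app_ids = read_ids("app_context_ids.txt")    # every ID the app can call
help_ids = read_ids("help_topic_ids.txt")    # every ID the help defines

# Context calls with no topic behind them -- these can "break" the help.
print("Context IDs with no topic:", sorted(app_ids - help_ids) or "none")
# Topics no screen ever calls -- orphans worth reviewing.
print("Topics never invoked:", sorted(help_ids - app_ids) or "none")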
But test plans should reference the requirements. Did you write a
requirements document for your help? And one for the print manuals?
For print manuals, a review will reveal if the manuals have the content
required by the requirements specified in the doc plan.
If the purpose of the testing is usability, then get the book on Usability
in the Wiley TW series. The discussion in that book is thorough.
PDF: same as above plus bookmarks, find, search, links, doc info, attached
search indexes, consistent view and size when launched (across doc set)
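A few of those PDF checks can be automated. Here is a rough sketch,
assuming a Python PDF library such as pypdf is available ("user_guide.pdf"
is a hypothetical file name); it only confirms that bookmarks and document
info survived the build, so links, search behavior, and the opening view
still need a manual pass.

# Rough sketch: automated spot checks on one PDF in the doc set.
# Assumes the pypdf library; the file name is hypothetical.
from pypdf import PdfReader

reader = PdfReader("user_guide.pdf")

# Bookmarks: an empty outline usually means bookmarks were lost in the build.
print("Pages:", len(reader.pages))
print("Bookmark entries:", len(reader.outline))

# Document info: title and author should be consistent across the doc set.
info = reader.metadata
print("Title:", info.title if info else "(no document info)")
print("Author:", info.author if info else "(no document info)")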
Help: pretty much same as PDF where appropriate, plus any additional bells
and whistles you choose to incorporate. If context-sensitive, make sure all
topics are launched from the application, and all topics are accessible from
within the help file itself.
(Bill Swallow)
==================================================================
What kind of testing are you writing about?
If it is User Acceptance testing, I'll usually start with the
Functional Specifications documentation, and create the test
document to check that the system actually does, from the user's
standpoint, all the things it was supposed to do.
BTW...there is a web-based forum where they discuss all types
of testing: http://www.qaforums.com/
It is very active.
(John Posada)
===================================================================
The following is the process I usually use:
1. Does this document make sense? Do I understand the document a day later
when I have already forgotten the details of the conversation I had with the
engineer?
2. What are the essential steps and concepts the user must understand? Are
they in the document?
3. When the document is tested against the software, is the document
accurate? Has the user interface changed? Does a feature work differently in
the background, not noticeable from the user interface?
4. How does the document flow? Are simple concepts presented first? Are
there examples? Do the examples in the documentation make sense? Are they
explained well or are they confusing?
5. Is the document wordy? For example, are there a lot of "that you"
constructions? The word "that" can usually be removed. (See the sketch after
this list for a quick way to flag these mechanically.)
6. Is the document mostly written in present tense? In some instances, the
document may have the past or future tense.
7. Does the document reference the user in second person unless we are
talking about the user's users? Whew, the end of that sentence sounds
confusing.
8. Any typos? Spelling or grammatical errors? Are steps misnumbered or
misreferenced?
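For items 5 and 8, a few of the purely mechanical checks can be scripted
before the human pass. Here is a minimal sketch in Python, assuming the
draft can be exported as plain text; the file name and the patterns it
flags are only examples, not a complete style check.

# Minimal sketch: flag wordy "that you" constructions and doubled words.
# "draft.txt" is a hypothetical plain-text export of the document.
import re

with open("draft.txt") as f:
    for num, line in enumerate(f, start=1):
        # Item 5: "that you" constructions where "that" can usually be cut.
        if re.search(r"\bthat you\b", line, re.IGNORECASE):
            print(f"line {num}: consider cutting 'that': {line.strip()}")
        # Item 8: doubled words ("the the") that spell checkers often miss.
        doubled = re.search(r"\b(\w+)\s+\1\b", line, re.IGNORECASE)
        if doubled:
            print(f"line {num}: doubled word '{doubled.group(1)}'")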
(Christina Rothwell)
============================================================================
I am regularly buried within this same process, so apologies if this tells
you things you already know -- I learned the hard way.
1. Know what the scope of the test needs to be. Are you just testing
functions for "works"/"fails" -- or are you testing at the control level for
display, navigation, trigger and data elements?
2. Know your audience. Does your tester know how to get _to_ the first step
of the sequence you are running them through? (I've often had to write
system functionality tests for user interfaces without knowing who the
tester was -- I had a project manager once who ran a test that included
printing out a screenshot here and there to verify the display, and he
didn't know how to do that. We had to re-run those parts of the test
entirely to get the screenshots.)
3. If you have to cross-reference test steps/cases/scenarios to other
documentation (e.g., trace to design or requirements docs for regulatory
purposes), I'd recommend nested numbering sequences to make everything
immediately findable. (For example, step 12 of case 8 within scenario 2 of
function group 3 of the application is uniquely identified as 3.2.8.12,
which can be mapped through a traceability matrix to requirement 4.5, "An
electronic signature will be required to remove product for sampling," and
to Design Specification 2.4.1, the section on "Product Sampling"; a sketch
of such a matrix follows this list.) This will prevent you from
spontaneously combusting after the fifth revision of an application in a
complex, regulated system.
4. State any assumptions made about the environment or test data. As far as
an auditor is concerned, if it isn't written down, it never happened.
Stating what is being tested in a summary statement before each section can
save many headaches. Such statements, if one-liners or
consistently-structured short phrases, can be used as the index/contents
entries for each test case (in Word they can become part of the TOC so the
user can better identify what the parts of the test are evaluating, and in
an HTML format, they can serve as hotlinks to the independent test cases).
5. It's easier to align individual steps in a table.
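The traceability matrix mentioned in point 3 does not need special tooling
to get started. Below is a minimal sketch in Python that reuses the IDs
from that example (everything else is hypothetical): it maps each test step
to the requirement and design-spec items it verifies, then flags any
requirement no step covers.

# Minimal sketch of a traceability matrix: nested test-step IDs mapped to
# the requirement and design-spec items they verify. The IDs reuse the
# example in point 3 above; the rest is hypothetical.
trace_matrix = {
    "3.2.8.12": ["REQ 4.5", "DS 2.4.1"],   # electronic signature for sampling
    "3.2.8.13": ["REQ 4.5"],
}

# Everything the test set is supposed to cover.
requirements = ["REQ 4.5", "REQ 4.6", "DS 2.4.1"]

# Invert the matrix so each requirement lists the test steps that cover it.
coverage = {req: [] for req in requirements}
for step, refs in trace_matrix.items():
    for ref in refs:
        coverage.setdefault(ref, []).append(step)

for req, steps in coverage.items():
    print(f"{req}: {', '.join(steps) if steps else 'NOT COVERED'}")

Regenerating a report like this after each revision is much less painful
than maintaining the matrix by hand.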
(Shauna Iannone)
=============
Here is my original post:
Does anyone have some good tips for writing comprehensive, relatively formal
test plans for documentation (including both online and print)?
I will compile all information received in a summary post.