Subject: Re: Advice on software testing?
From: "Peter Neilson" <neilson -at- windstream -dot- net>
To: techwr-l -at- lists -dot- techwr-l -dot- com
Date: Wed, 10 Aug 2011 13:14:46 -0400

Not really true. I may see a bug, but understand that it results from a
bad design decision that I had not previously noticed. "You show
'completed' in yellow and 'important' in gold. Are you aware that those
who have only 8-bit color (and that fortunately includes two of the test
team) see those colors as the same? Can we be certain that none of the
customers have 8-bit color? Is it too late to change the colors?" [And
I've not even gotten to colorblindness and how to design for it.]
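To make that concrete, here is a minimal sketch (mine, not from the thread; the palette and values are illustrative assumptions, not any real display's) of how two distinct colors can collapse into one palette entry on a limited-color display:

PALETTE = {  # a small fixed palette, loosely modeled on the 16-color VGA set
    "black":  (0, 0, 0),
    "red":    (255, 0, 0),
    "green":  (0, 255, 0),
    "blue":   (0, 0, 255),
    "yellow": (255, 255, 0),
    "white":  (255, 255, 255),
}

def nearest(rgb):
    """Name of the palette entry closest to rgb, by squared Euclidean distance."""
    return min(PALETTE, key=lambda name:
               sum((a - b) ** 2 for a, b in zip(rgb, PALETTE[name])))

print(nearest((255, 255, 0)))  # yellow -> "yellow"
print(nearest((255, 215, 0)))  # gold has no entry, so it also maps to "yellow"

On such a display, "completed" and "important" are literally indistinguishable, which is exactly the bug being reported.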
The "obvious" response (which I have seen in a similar situation) is "Get
the testers some equipment that's more modern," while the correct action
is to expand the design requirements to capture an inventory of actual
customer equipment or else to require the customers comply with the new
standard (explained in the _documentation_).
So if I report a bug, it is perfectly valid for me to state any of these
observations, and indeed several others (see the sketch after the list):
(1) Fails the test, as described [here].
(2) Nominally passes the test, but fails perceived usability [and why].
(3) Passes the test, but the standards are set incorrectly, leading to a
failure of a later test.
(4) The test is broken, failing to show failure.
(5) Everything passes, but so marginally that I'm worried.
(6) The product's code appears to contain paths that are either untestable
or unreachable.
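None of the following structure is from the thread; as a purely hypothetical sketch (the names are invented), a test report could carry these six outcome categories explicitly, so each finding records what kind of problem it is rather than a bare pass/fail:

from dataclasses import dataclass
from enum import Enum, auto

class Outcome(Enum):
    """The six reporting categories listed above."""
    FAILS_TEST = auto()        # (1) fails the test, as described
    FAILS_USABILITY = auto()   # (2) nominal pass, perceived usability failure
    BAD_STANDARD = auto()      # (3) passes, but the standard itself is wrong
    BROKEN_TEST = auto()       # (4) the test cannot show failure
    MARGINAL_PASS = auto()     # (5) passes, but only marginally
    UNREACHABLE_PATH = auto()  # (6) untestable or unreachable code paths

@dataclass
class Finding:
    test_id: str
    outcome: Outcome
    detail: str             # the "[here]" / "[and why]" explanation
    blocking: bool = False  # the tester flags it; others make the ship call

def potential_blockers(findings):
    """Collect what matters most for the ship decision: potentially blocking bugs."""
    return [f for f in findings if f.blocking]

The blocking flag reflects the next point: the tester identifies potentially blocking bugs as such, but the ship/don't-ship decision stays with others.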
With sufficiently detailed reporting, the tester can still leave the
ship/don't-ship decision to others unless asked. "Number of bugs" is
normally an insufficient criterion; "number of blocking bugs" usually is a
sufficient one, and the tester may properly be required to identify
potentially blocking bugs as such.
Another testing problem that can arise is a bug newly discovered in a
section of the product that was not changed. That can generate a lot of
finger-pointing: "Why didn't you find it before?" "What part of the test
did you change?" "Did we sneak in the new version of the xx library by
mistake?"
Battles between testers and developers are common. "It's NOT a bug. Read
the design specs." "Yes it is. It fails to work with Internet Explorer."
"Then that's a bug in IE, not the product." "But the customers use it with
IE!"
On Wed, 10 Aug 2011 12:07:41 -0400, Dan Goldstein
<DGoldstein -at- riverainmedical -dot- com> wrote:

> Not sure what "position" a tester can have on a bug, other than, "The
> bug exists, and here are the ramifications."
>
> The tester's report should be read by those deciding whether or not the
> product can be shipped. But the tester (and the report) can remain
> neutral on the decision itself.