Subject: Re: Use Cases
From: SIANNON -at- VISUS -dot- JNJ -dot- com
To: "TECHWR-L" <techwr-l -at- lists -dot- raycomm -dot- com>
Date: Tue, 23 Oct 2001 8:48:57
I notice a difference in the definitions being presented for "use
cases". This is the first job I've held where the term was used, so I
am intrigued by the different definitions being applied.
Sarah Lathrop says:
"Use cases also serve as a starting point for test cases and for end-user
documentation. A good set of use cases for software would include all the
scenarios that can happen depending on the user request and the conditions
under which the request is made. By writing use cases as part of the
requirements gathering process, decisions on how the system will handle all
the contingencies are thought out in advance and not left to the
programmers to decide on the fly or to miss entirely."
Where I am, use cases *are* the test cases -- they are not necessarily a
formal part of the requirements-gathering process, because the apps we are
working with often change existing manual processes, and so the customer
can't tell what *can* happen until we give them some idea of what they have
to work with. They say what is needed, we figure out how to do it, and then
build use cases from there, in parallel with the functionality itself,
tweaking them throughout the development stage until testing time.
Granted, this is also a project that has looped through its lifecycle
and mutated in response to changing/expanding customer needs so many
times that we've had to treat everything as a moving target. So basically,
the programmers *do* decide these things on the fly, since they are the
designers as well. Which brings us to the next point.
Angela Kea poses an *excellent* question:
"Do the programmers/designers create FSPs?" ...and follows it with an
excellent description of these specifications.
She underlines a very good point: we usually work from the assumption that
there is a certain level of personal documentation on the part of the
programmers. The problem is that not all programmers detail their specs,
and even when you can get them to comment their code, the comments are
often too cryptic or incomplete to be useful. Management often assumes it
is part of the programmer's job to keep on top of that level of detail, and
that generating a separate document to "restate" what the code is doing at
that level of detail is a waste of time/paper/man-hours/etc. When presented
with the "hit by a bus/wins the lottery" scenario, such management may
respond that the programmers are adults, and shouldn't be treated like
children and second-guessed at their jobs (understandable argument)--and
besides, in such an event someone can always look at the code itself and
figure things out. This last comment is a myth, in my opinion, since many,
many hours can be wasted in such an event, some things (e.g., environmental
variables, influences and interdependencies) may *not* be decipherable from
the code, and preserving the "code silo" development style brings its own
host of problems that can far outweigh the "inconvenience" of more thorough
design documentation.
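To give a concrete (if invented) example of the comment problem: a
comment that merely restates the code tells a maintainer nothing, while
a comment that records the environmental dependencies captures exactly
what *can't* be recovered from the code. A minimal sketch in Python
(the APP_TIMEOUT variable and the surrounding details are made up):

   import os

   # Too cryptic to be useful: the comment just restates the code.
   def get_timeout():
       return int(os.environ.get("APP_TIMEOUT", "30"))  # get timeout

   # More useful: the comment records an interdependency that is *not*
   # decipherable from the code itself.
   def get_timeout_documented():
       # APP_TIMEOUT is set by the batch loader's wrapper script; the
       # upstream gateway drops sessions that idle past 120 seconds,
       # so values above 120 buy you nothing.
       return int(os.environ.get("APP_TIMEOUT", "30"))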
Basically, we are assuming a consistent development methodology that is
religiously adhered to. If you work in a place that has one, you are
pretty lucky. If you don't, you are more likely to face the question
at hand: the role and definition of use cases in *your* development
process, and what scope they entail.
In a direct response to the original poster, I gave the following
assessment, coming from my own perspective. Take it or leave it, as you
will:
"I can sympathize, since I'm dealing with a similar situation here. Part
of the problem is what the individual perspectives are on *what* is being
tested. What the Software Quality Engineer (read: QA master) thinks we're
testing is the functionality of a user interface down to a control level,
including constraints and stress/failure testing. What the developer
thinks we're supposed to be testing is that the app. meets the
requirements set out for it, and that any control-level testing, or
stress/failure/constraint testing is part of debug, and the responsibility
of the developer. This is a major difference of scope.
The step tables in the use cases I pull together into our System
Functionality tests consist very much of "Click this button; confirm
this is the result" kinds of steps, because the person *running* the
test may never have seen the app. before -- we do that deliberately,
because those who are too close to the app. will often not notice
little things that can be a problem for a user who isn't already
well-versed in it. Recently, I *have* tried to separate them
into "functionality" and "stress/failure testing" sections at the request
of the developers, but that's hard to do without just making extra work for
oneself.
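For illustration, a hypothetical fragment of such a step table (the
control names are made up, along the lines of the widget-color example
below):

   Step  Action                                 Expected result
   ----  -------------------------------------  -------------------------------
   1     Click the "New Product" button.        The New Product entry form
                                                opens.
   2     Click in the "Widget Color" textbox    Nothing happens; the textbox
         before checking "Color Options".       is disabled (greyed out).
   3     Check the "Color Options" checkbox.    The "Widget Color" textbox is
                                                enabled.
   4     Click in the "Widget Color" textbox    The two characters appear in
         and type two characters.               the textbox.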
What might help is to clarify with the individuals involved the what
and how questions: what is being tested, and how does the use case
prove that it is functioning correctly?
An extended example: What is being tested is that the user can enter data
about a new product into the database (that is the requirement). In order
for that requirement to be fulfilled, all textboxes need to be enabled for
data entry when they are applicable to the product being entered. One of
the things the app. does happens to be the enabling and disabling of
specific textboxes in response to other actions on the screen (the textbox
for "widget color" is only enabled on an item with color options, for
example--"enforced permitted sequencing of steps" is how the FDA refers to
it). How the test addresses this is to have the user actually click in the
relevant textbox and type in a couple characters to ensure that the thing
is actually enabled. The developer may argue that if the background color
changes, that will prove the textbox is enabled, because it's part of the
same block of code that re-enables the box. The problem with following
that logic is that only the programmer knows he didn't accidentally copy
and paste half the block of code, or forget to reset a variable elsewhere
so that part didn't trigger right. The textbox could change color while
still being locked from data entry. The developer may believe that's his
problem, to be resolved in his debugging of the code. However, if that were
the case, why do we have formal testing at all? Programmers don't always
see quality testing as the CYA it is for them--they'd like to believe that
all their code will come out gold, and that they'll have thought of
everything -- but in the end they are only human, and so we test."
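To make the copy-and-paste failure mode concrete, here is a minimal
sketch (Python, with a made-up TextBox class standing in for the real
control) of how the visual cue and the actual enabled state can part
company:

   class TextBox:
       # Made-up stand-in for the real UI control.
       def __init__(self):
           self.enabled = False        # locked from data entry
           self.background = "grey"    # visual cue for "disabled"

       def enable(self):
           self.enabled = True
           self.background = "white"

   def on_color_options_checked(box):
       # Buggy copy-and-paste: only the color line was carried over,
       # so the visual cue updates but the control stays locked.
       box.background = "white"
       # box.enabled = True   # <-- the line that got lost

   box = TextBox()
   on_color_options_checked(box)
   print(box.background)   # "white" -- looks enabled to the eye
   print(box.enabled)      # False   -- still locked from data entry

Having the tester actually type in the box catches exactly this case;
trusting the color change would not.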
Shauna Iannone
--------------------------------------------
A human being should be able to change a diaper, plan an invasion, butcher
a hog, conn a ship, design a building, write a sonnet, balance accounts,
build a wall, set a bone, comfort the dying, take orders, give orders,
cooperate, act alone, solve equations, analyze a new problem, pitch manure,
program a computer, cook a tasty meal, fight efficiently, die gallantly.
Specialization is for insects.
-- Lazarus Long