IEEE Software Test Documentation, a summary
IEEE standard 829-1998 covers test plans in section 4, test designs
in section 5, test cases in section 6, test logs in section 9, test
incident reports in section 10, test summary reports in section 11,
and, in its remaining sections, other material that I have decided
not to summarise.
Beware. This is only a summary. Anyone interested in claiming
conformance to the standard should read the standard. I advise people
interested in test documentation to read the standard, including the
extensive examples, first. You may well decide that the full paper
trail is too much paper for your needs, and choose to use the ideas
in the standard selectively. The key point is to have test cases
organised coherently, to do testing, to log what happens, and
to think about the outcomes.
A test plan answers the questions
- WHAT is to be tested,
- HOW it is to be tested,
- WHO is to do the testing,
- WHAT resources they will need,
- WHEN they will do it, and
- WHAT can go wrong.
A test plan has the following parts, in this order.
- Test plan identifier. A unique label so you can refer
to that document.
- Introduction. Outlines what is to be tested. The top level
test plan should point to related documents such as project plan,
quality assurance plan, configuration management plan, standards.
Lower-level plans should point to their parents. I suggest using
hypertext links to link test plans in temporal order, and to point to
any relevant material.
- Test items. What is to be tested? Be explicit about version.
Say how to get the test items into the test environment. Point to
whatever documentation of the test items exists. Point to any "incident
reports".
- Features to be tested. Say which features and combinations of
features are to be tested. You need not cover all the features of one
test item in one test plan.
- Features not to be tested. If you don't cover all the features
of a test item, you should say which ones you left out and why.
- Approach. Describe what is to be done in enough detail that
people can figure out how long it will take and what resources it will
require. What tools will you need? How thorough will testing have to be?
How can you tell how thorough it was? What might get in the way?
- Item pass/fail criteria. How will you know whether a test
item has passed its tests?
- Suspension criteria and resumption requirements. When is it ok
to stop this test for a while? What will you have to do when you start
again?
- Test deliverables. What documents should the testing process
deliver? Logs, reports, test input and output data, the things described
in this summary and a few more. You decide what you need.
- Testing tasks. What must be done to set up the test? What
must be done to perform the test? What has to be done in what order?
- Test environment needs. What must the test environment look
like? What would it be nice to have? Tools? People? Building space?
Bandwidth? How will these needs be met?
- Responsibilities. Who does what?
- Staff and training needs. How many people with what skills will
you need? If there aren't enough people with the required skills, how are
they going to get them?
- Schedule. Define milestones, estimate times, book resources.
- Risks and contingencies. What are you assuming that could go
wrong? What contingency plans do you have?
- Approvals. Which people must approve the plan? Get their
signatures.
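If you keep the plan under version control anyway, it can help to
hold the same headings in machine-readable form, so that a script can
warn you about parts you have not written yet. The sketch below is
one way to do that in Python; the field names mirror the parts listed
above and are my own choice, not anything the standard prescribes.

    from dataclasses import dataclass, fields

    @dataclass
    class TestPlan:
        # Field names mirror the parts of this summary, in the same order.
        identifier: str
        introduction: str = ""
        test_items: str = ""
        features_to_be_tested: str = ""
        features_not_to_be_tested: str = ""
        approach: str = ""
        item_pass_fail_criteria: str = ""
        suspension_and_resumption: str = ""
        test_deliverables: str = ""
        testing_tasks: str = ""
        test_environment_needs: str = ""
        responsibilities: str = ""
        staff_and_training_needs: str = ""
        schedule: str = ""
        risks_and_contingencies: str = ""
        approvals: str = ""

    def missing_parts(plan):
        """Names of the parts that are still empty."""
        return [f.name for f in fields(plan) if not getattr(plan, f.name)]

    draft = TestPlan(identifier="TP-01",
                     introduction="Acceptance tests for the report module.")
    print("Still to write:", ", ".join(missing_parts(draft)))

Whether you go this far is a matter of taste; the point is only that
a missing part should be noticed, not silently skipped.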
A test design spells out what features are to be tested and how they
are to be tested. It includes the following parts, in this order:
- Test design specification identifier. A unique label so you
can refer to that document. Point to the test plan.
- Features to be tested. Point to the requirements for each
feature or combination of features to be tested. Mention features that
will be used but not tested.
- Approach refinements. Spell out how the test is to be done.
What techniques? How will results be analysed? What setup will be needed
for test cases?
- Test identification. Point to the test cases, with short
descriptions. (Some test cases might be part of more than one design or
plan.)
- Feature pass/fail criteria. Spell out how you will tell whether
a feature has passed its tests.
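Because the test identification part is just pointers, it is easy to
keep the feature-to-test-case mapping as data and have a script
complain about features that no test case covers yet. A minimal
sketch, with a layout I have assumed for illustration:

    # Which test cases cover which features of this design.
    # The dictionary layout is my assumption, not the standard's.
    design = {
        "identifier": "TD-LOGIN-01",
        "features": {
            "login with a valid password": ["TC-LOGIN-001", "TC-LOGIN-002"],
            "login with an expired password": ["TC-LOGIN-003"],
            "lock-out after repeated failures": [],   # not yet covered
        },
    }

    for feature, cases in design["features"].items():
        if not cases:
            print(design["identifier"], "has no test cases for:", feature)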
A test case is a single test you can run. The document has the
following parts, in this order:
- Test case identifier. A unique label so you
can refer to that document. Point to the test plan/design.
- Test items. List the items and features you will check.
Point to their documentation.
- Input specifications. Describe all the information passed to
the test item for this test. [Either point to files, or include the
information in such a way that it can be automatically extracted, as
in the sketch after this list.]
- Output specifications. Describe all the behaviours required,
including non-functional requirements like time, memory use, network traffic.
Provide exact values if you can. [See previous note. See
cosc345/pcfpcmp.d/]
- Test environment needs. What hardware, software, and other
stuff do you need?
- Special procedural requirements. Any special setup, user
interaction, or tear-down actions?
- Inter-case dependencies. What other test cases must be done
first? Point to them. Why must they be done first?
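Here is one way to follow the bracketed advice above: hold the input
and output specifications as data, so that a small runner can extract
them and compare expected behaviour with actual behaviour. The record
layout and the discount routine standing in for the test item are
both made up for illustration.

    # A test case whose input and output specifications can be
    # extracted automatically.  Layout and test item are illustrative.
    test_case = {
        "identifier": "TC-PRICE-007",
        "test_items": ["pricing module, version 1.4"],
        "inputs": {"list_price_cents": 20000, "customer_type": "student"},
        "expected_outputs": {"price_cents": 16000},
    }

    def discount(list_price_cents, customer_type):
        """Stand-in for the real test item: 20% student discount."""
        if customer_type == "student":
            return {"price_cents": list_price_cents * 80 // 100}
        return {"price_cents": list_price_cents}

    actual = discount(**test_case["inputs"])
    verdict = "pass" if actual == test_case["expected_outputs"] else "fail"
    print(test_case["identifier"], verdict)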
The test procedure specification and the test item transmittal
report are omitted from this summary.
A test log answers the question "what happened
when testing was done?" [As much as possible, this should be automated.]
A test log includes the following sections in the following order:
- Test log identifier. A unique label so you
can refer to that document. Point to the test case.
- Description. Information common to all the items in the log
goes here. What was tested (with versions)? What was the environment?
Other documents say what was supposed to happen. This says
what did happen.
- Activity and event entries. Beginning/ending timestamps and
name of actor for each activity. Point to the test procedure. Who was
there and what were they doing? What did you see? Where did the results
go? Did the test work? When something surprising happened, what was
going on just before, what was the surprise, and what did you do about it?
Point to incident reports, if any.
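As the bracketed note above says, the log is the document most worth
automating. A minimal sketch of an activity entry writer; the CSV
format, file name, and choice of columns are my own assumptions, not
the standard's.

    import csv
    import getpass
    from datetime import datetime, timezone

    def log_activity(log_path, test_case_id, outcome, notes=""):
        """Append one timestamped activity entry to the test log."""
        with open(log_path, "a", newline="") as log:
            csv.writer(log).writerow([
                datetime.now(timezone.utc).isoformat(timespec="seconds"),
                getpass.getuser(),    # who ran it
                test_case_id,         # points back at the test case
                outcome,              # e.g. "pass", "fail", "blocked"
                notes,                # surprises, pointers to incident reports
            ])

    log_activity("testlog.csv", "TC-PRICE-007", "pass")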
If anything happens that should be looked into further, a test
incident report should be written. It should contain the following
sections in the following order:
- Test incident report identifier. A unique label so you
can refer to that document.
- Summary. Briefly, what happened? Point to the test case and
test log and any other helpful documents.
- Incident description. A detailed description of what happened.
See the standard for a list of topics.
- Impact. What effect will this have on the rest of the testing
process? How important is it?
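Incident reports are easiest to write at the moment the surprise
happens, while the pointers are still fresh. Here is a sketch of a
helper that records the sections above as plain text; the file layout
is my own, not the standard's.

    from datetime import datetime, timezone

    def write_incident(path, incident_id, test_case_id, test_log_id,
                       summary, description, impact):
        """Write a small incident record pointing back at case and log."""
        stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
        with open(path, "w") as report:
            report.write(f"Test incident report identifier: {incident_id}\n")
            report.write(f"Raised: {stamp}\n")
            report.write(f"Test case: {test_case_id}\n")
            report.write(f"Test log: {test_log_id}\n")
            report.write(f"Summary: {summary}\n")
            report.write(f"Incident description: {description}\n")
            report.write(f"Impact: {impact}\n")

    write_incident("IR-042.txt", "IR-042", "TC-PRICE-007", "TL-2024-05-01",
                   "Student discount applied twice",
                   "The discounted price was discounted again at checkout.",
                   "Blocks the remaining pricing tests until resolved.")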
A test summary report doesn't just summarise what happened; it
comments on the significance of what happened. It should contain the
following sections in the following order:
- Test summary report identifier. A unique label so you
can refer to that document.
- Summary. Summarise what was tested and what happened.
Point to all relevant documents.
- Variances. If any test items differed from their specifications,
describe that. If the testing process didn't go as planned, describe that.
Say why things were different.
- Comprehensiveness assessment. How thorough was testing, in
the light of how thorough the test plan said it should be? What wasn't
tested well enough? Why not?
- Summary of results. Which problems have been dealt with?
What problems remain?
- Evaluation. How good are the test items? What's the risk
that they might fail?
- Summary of activities. In outline, what were the main things
that happened? What did they cost (people, resource use, time, money)?
- Approvals. Who has to approve this report? Get their
signatures.
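Much of the summary of results can be tallied mechanically from the
test log, which leaves you free to concentrate on the evaluation. A
sketch that counts outcomes, assuming the log format sketched in the
test log section above:

    import csv
    from collections import Counter

    def tally_outcomes(log_path):
        """Count pass/fail/blocked entries in the CSV test log."""
        with open(log_path, newline="") as log:
            # Column 3 holds the outcome in the log sketch above.
            return Counter(row[3] for row in csv.reader(log) if row)

    print(tally_outcomes("testlog.csv"))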
The End.