Test Management: Test Process Fundamentals

Testing

In order to perform structured tests, a general description of the task as found in most development models is not sufficient. Besides integrating testing into the development process, it is also necessary to provide a detailed test procedure.

The testing task consists of the process phases test planning and control, test analysis and design, test implementation and execution, evaluation of the test exit criteria and reporting, as well as test completion activities. Although the presentation and description of the individual tasks suggest a sequential procedure, they may of course overlap or be performed in parallel.

Test Planning and Control

Planning a comprehensive task such as testing ought to start as soon as possible in the initial stages of software development.

Resource planning

The role and purpose of testing must be defined as well as all the necessary resources, including staff for task execution, estimated time, facilities, and tools.

The associated specifications are to be documented in the test plan. An organizational structure including test management needs to be in place and ought to be adapted, if necessary.

Test management is responsible for the administration of the test process, the test infrastructure, and testware. Regular control is necessary to see if planning and project progress are in line. This may result in the need for updates and adjustments to plans to keep the test process under control. The basis for controlling the test process is either staff reporting or relevant data and its evaluation by appropriate tools.

Since exhaustive testing is impossible, priorities must be set. Depending on the risks involved, different test techniques and test exit criteria must be specified when establishing a test strategy. Critical system components must be intensively tested. However, in the case of less critical components a less comprehensive test may suffice or testing may even be waived. The decision must be very well-founded to achieve the best possible allocation of the tests to the “important” parts of the software system.

Determining the test strategy

Test intensity is determined by the test methods employed and by the intended degree of coverage when executing the test cases. The degree of coverage is one of several criteria for deciding when a test is completed.

Determining test exit criteria

Software projects are often under time pressure, a fact that must be anticipated during planning. Prioritizing tests ensures that the most critical software components are tested first in case not all planned tests can be performed due to time or resource constraints.

Prioritizing tests

Without adequate tools, the test process cannot be carried out effectively. If tools are missing, their selection and procurement must be initiated early in the process.

Tool support

Moreover, parts of the test infrastructure themselves often need to be established, for instance the test environment, in which system components can be executed. They need to be put in place early so that they are available when coding of the test objects is completed.

Test Analysis and Design

The test strategy developed during planning defines the test design techniques to be used. As a first step of test analysis, the test basis needs to be checked to see whether all required documents are detailed and accurate enough to apply the test design techniques specified in the test strategy. The specification of the test object determines its expected behavior. The test designer uses it to derive the prerequisites and requirements of the test cases.

Verification of the test basis

Depending on the analysis results it may be necessary to rework the test basis so that it can serve as a starting point for the test design techniques defined in the test strategy. For example, if a specification is not accurate enough it may need to be improved. Sometimes it is the test strategy itself which may need to be changed, for instance, if it turns out that the selected test design techniques cannot be applied to the test basis.

During test design, test techniques are applied to identify the respective test cases, which are then documented in the test specification. Ultimately, the test project or test schedule determines the timing of the test execution sequence and the assignment of the test cases to the individual testers.

When specifying test cases, logical test cases must be defined first. Once this is done, concrete test cases, i.e., actual input values and expected output values, may be defined.

Logical and concrete test cases

However, this is done during implementation, which is the next step of the fundamental test process.

Logical test cases can be identified based on the specification of the test objects (black box techniques) or based on the program text (white box techniques). Thus, the specification of the test cases may take place at quite different times during the software development process (before, after, or in parallel to coding, depending on the test techniques selected in the test strategy). Test planning and specification activities can and should take place concurrently with earlier development activities, as explicitly pointed out in the W-model or in extreme programming.

Black box and white box techniques
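
As an illustration, consider a hypothetical component with a method isEligible(age) that, according to its assumed specification, accepts ages from 18 to 67. The logical test cases "age below the valid range", "age within the valid range", and "age above the valid range" can later be instantiated with concrete values, for example in a JUnit 5 parameterized test (the component and its specification are assumptions made purely for illustration):

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.params.ParameterizedTest;
    import org.junit.jupiter.params.provider.CsvSource;

    // Hypothetical test object: Eligibility.isEligible(age) is specified to
    // return true only for ages from 18 to 67.
    class EligibilityTest {

        // The logical test cases name the equivalence classes; the concrete
        // values below are chosen later, during test implementation.
        @ParameterizedTest
        @CsvSource({
            "17, false",  // invalid class below the range (boundary value)
            "18, true",   // lower boundary of the valid class
            "67, true",   // upper boundary of the valid class
            "68, false"   // invalid class above the range (boundary value)
        })
        void eligibilityMatchesSpecification(int age, boolean expected) {
            assertEquals(expected, Eligibility.isEligible(age));
        }
    }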

During test case specification, the particular starting situation (precondition) must be described for each test case. Test constraints to be observed must be clearly defined. Prior to test execution, the postcondition must define which results and which behavior are expected.

Test cases comprise more than just test data
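
As a brief sketch of how these elements map onto an executable test (the account component and its operations are assumptions made for illustration): the precondition is established first, then the input is applied under the stated constraint, and the expected result and postcondition are checked by the assertions.

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.junit.jupiter.api.Assertions.assertTrue;

    import org.junit.jupiter.api.Test;

    // Hypothetical test object: a simple account component.
    class WithdrawalTest {

        @Test
        void withdrawalWithinBalanceReducesBalance() {
            // Precondition: an open account with a balance of 100 exists.
            Account account = new Account(100);

            // Input: withdraw 30 (constraint: amount does not exceed the balance).
            account.withdraw(30);

            // Expected result and postcondition: the balance is 70 and the
            // account is still open.
            assertEquals(70, account.balance());
            assertTrue(account.isOpen());
        }
    }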

In order to determine the expected results, a test oracle is queried, which predicts the expected outcome for every test case. In most cases, the specification or the requirements serve as the test oracle from which the expected results for the individual test cases are derived.

Test oracle

Test cases can be distinguished according to two criteria:

Positive and negative test cases

  • Test cases for testing specified results and reactions to be delivered by the test object (including treatment of specified exceptional and failure situations)
  • Test cases for testing the reaction of the test object to invalid or unexpected inputs, or to other conditions for which “exception handling” has not been specified, and which test the test object for robustness
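
Both categories can be sketched as follows (the converter component and its reaction to null input are assumptions made for illustration, not taken from any specification discussed here):

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.junit.jupiter.api.Assertions.assertThrows;

    import org.junit.jupiter.api.Test;

    // Hypothetical test object: a converter whose specification defines the
    // result for valid input but says nothing about null input.
    class ConverterTest {

        @Test
        void specifiedResultForValidInput() {
            // Positive test case: the expected result comes from the specification.
            assertEquals(42, Converter.toNumber("42"));
        }

        @Test
        void robustnessAgainstUnspecifiedInput() {
            // Negative/robustness test case: input for which no exception handling
            // has been specified; the test checks for a controlled failure.
            assertThrows(IllegalArgumentException.class, () -> Converter.toNumber(null));
        }
    }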

The required test infrastructure to run the test object with the specified test cases is to be established in parallel to the other activities so as to prevent delays in the execution of the test cases. At that point the test infrastructure should be set up, integrated, and also tested intensively.

Setting up the infrastructure

Test Implementation and Execution

In this step of the test process, concrete test cases must be derived from the logical test cases and then executed. In order to run the tests, the test infrastructure and the test environment must both be implemented and in place. The individual test runs are to be performed and logged.

The actual tests are to be run observing the priorities that we defined earlier. It is best to group individual test cases into test sequences or test scenarios in order to allow for the tests to be run efficiently and to gain a clear structure of the test cases.

Timing and test case sequence

The required test harness must be installed in the test environment before the test cases can be executed.

At the lower test levels, component and integration testing, it makes sense to run automated rather than manual tests (e.g., using JUnit [URL: JUnit]).
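
A minimal sketch of such automation, using the JUnit Platform Suite API (an assumption; any comparable grouping mechanism will do, and the selected classes refer to the illustrative examples above) to combine individual test classes into one test sequence that a build server can run unattended and repeatably:

    import org.junit.platform.suite.api.SelectClasses;
    import org.junit.platform.suite.api.Suite;

    // Groups individual test classes into one test sequence/scenario so that
    // the whole set can be executed automatically, e.g., in a nightly build.
    @Suite
    @SelectClasses({ EligibilityTest.class, ConverterTest.class, WithdrawalTest.class })
    class ComponentTestScenario {
    }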

During test execution an initial check is done to see if the test object is, in principle, able to start up and run. This is followed by a check of the main functions (“smoke test” or acceptance test during the entry check of the individual test levels).

Checking main function completeness

If failures occur already at this stage, further testing makes little sense.

Test execution must be logged accurately and completely. Based on test protocols, test execution must be traceable and evidence must be provided that the planned test strategy has actually been implemented. The test protocol also contains details concerning which parts were tested when, by whom, to what extent, and with what result.

Tests without a test protocol are useless

For each failure recorded in the test log, a decision needs to be made as to whether its origin lies inside or outside the test object. For instance, the test framework may have been defective or the test case may have been erroneously specified.

Evaluating the test protocols

If a failure exists it needs to be adequately documented and assigned to an incident class.

Based on the incident class the priority for defect removal is to be determined. Successful defect correction needs to be ascertained: has the defect been removed and are we sure that no further failures have occurred?

Correction successful?
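
As an illustrative sketch (the class names and fields below are not taken from any standard or tool), the information recorded for each failure might be structured as follows:

    // Hypothetical incident record: each failure observed during test execution
    // is documented and assigned to an incident class, from which the priority
    // for defect removal is derived.
    enum IncidentClass { BLOCKING, MAJOR, MINOR, COSMETIC }

    record Incident(
            String id,                  // reference to the entry in the test log
            String testCaseId,          // test case that revealed the failure
            IncidentClass incidentClass,
            int removalPriority,        // derived from the incident class
            boolean confirmationPassed  // set after the retest: correction successful?
    ) { }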

The prioritization made earlier ensures that the most important test cases are executed first and that serious failures can be detected and corrected early.

Most important tests come first!

The principle of equal distribution of limited test resources over all test objects of a project is of little use since such an approach leads to equally intensive testing of critical and non-critical program parts.

Test Evaluation and Test Report

It needs to be checked whether the test exit criteria defined in the plan have been met. This check may lead to the conclusion that the test activities can be considered complete, but it may also show that test cases were blocked and that not all planned test cases could be executed. It may also mean that additional test cases are required to meet the criteria.

Test completion reached?

Closer analysis, however, may reveal that the necessary effort to meet all exit criteria is unreasonably high and that further test cases or test runs had best be eliminated. The associated risk needs to be evaluated and taken into account for the decision.

If further tests are necessary, the test process must be resumed, and it must be determined from which step the test activities are to be resumed. If necessary, planning must be revised, as additional resources may be required.

Besides test coverage criteria, additional criteria may be used to determine the end of the test activities.

Test cycles develop as a result of observed failures, their correction, and necessary retesting. Test management must take such correction and test cycles into account in their planning. Otherwise, project delays are the rule. It is rather difficult to calculate the effort needed for the test cycles in advance. Comparative data from earlier, similar projects or from already completed test cycles may help.

Allow for several test cycles

In practice, time and cost often determine the end of testing and lead to the termination of test activities.

Exit criteria in practice: time and cost

Even if testing consumes more budget than planned, testing as a whole still yields savings through the detection of failures and the subsequent correction of software defects. Defects not detected during testing usually cause considerably higher costs when found during operation.

At the end of this activity of the test process, a summary report must be prepared for the decision makers (project manager, test manager, and customer, if necessary) (see also [IEEE 829]).

Test report

Completing the Test Activities

Unfortunately, in practice, the closing phase of the test process is mostly neglected. At this stage, the experiences gained during the test process should be analyzed and made available to other projects. In this connection, the presumed causes of differences between planning and implementation are of particular interest.

Learning from experience

A critical evaluation of the activities performed in the test process, taking into account effort spent and the achieved results, will definitely reveal improvement potential. If these findings are documented and applied to subsequent projects in an understandable manner, continuous process improvement has been achieved.

A further finishing activity is the “conservation” of the testware for future use. During the operational use of software systems, hitherto undetected failures will occur despite all previous testing, or customers will require changes. In both cases this will lead to revised versions of the program and require renewed testing. If testware (test cases, test protocols, test infrastructure, tools, etc.) from development is still available, test effort will be reduced during the maintenance or operational phases of the software.

Testware “conservation”



References
Andreas Spillner, Thomas Rossner, Mario Winter, Tilo Linz: Software Testing Practice: Test Management.