2.3.4 Test Design
How to test and which test cases to use
When designing tests you determine how you are going to test. At the design stage, test conditions are used to create test cases (or sequences of test cases). Here, you will usually use one of the test techniques detailed in Chapter 5.
Test cases can be specified on two “levels”: abstract and concrete (see Case Study: Abstract and concrete test cases below).
Abstract and concrete test cases
An abstract (or “high-level”) test case doesn’t include specific input values or expected results, and is described using logical operators. The advantage of such cases is that they can be used during multiple test cycles and with varying data but can still adequately document the scope of each case. A concrete (or “low-level”) test case uses specific input data and expected result values.
When you begin to design a test, you can use abstract as well as concrete test cases. Because only concrete test cases can be executed on a computer, abstract test cases have to be fleshed out with real input and output values. In order to utilize the advantages of abstract test cases (see above), you can derive abstract test cases from concrete cases too.
Test cases involve more than just test data
Preconditions have to be defined for every test case. A test also requires clear constraints that must be adhered to. Furthermore, you need to establish in advance what results or behavior you expect the test to produce. In addition to output data, results include any changes to global (persistent) data and states, as well as any other reactions to the execution of the test case.
The test oracle
You need to use adequate sources to predict test results. In this context, people talk about the “test oracle” that has to be “consulted” regarding the expected results. For example, specifications can serve as an oracle. There are two basic approaches:
The tester derives the expected output value from the input value based on the test object’s requirements and specifications as defined in the test basis.
If the inverse of a function exists, you can execute the inverse and compare its output with the input value for your original test case. This technique can, for example, be used when testing encryption and decryption algorithms.
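As a minimal sketch of this second approach, the following Python snippet uses the standard library's base64 codec as a stand-in for an encryption/decryption pair (the codec and the function name are our illustrative choices, not the book's):

```python
import base64

def inverse_oracle_holds(message: bytes) -> bool:
    # Apply the function, then its inverse, and compare the result with
    # the original input: a mismatch means at least one of the pair fails.
    return base64.b64decode(base64.b64encode(message)) == message

assert inverse_oracle_holds(b"price=16500")
```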
The following example illustrates the difference between abstract and concrete test cases.
Case Study: Abstract and concrete test cases
A dealership can give its salespeople the option of discounts to apply to the price of a vehicle. For prices below $15,000 there is no discount. For prices up to $20,000, a discount of 5% is appropriate. If the price is below $25,000, a 7% discount is possible. If the price is above $25,000, a discount of 8.5% is to be applied.
The above text enables us to derive the following relationships between price and discount:
Price < 15,000          | Discount = 0%
15,000 ≤ Price ≤ 20,000 | Discount = 5%
20,000 < Price < 25,000 | Discount = 7%
Price ≥ 25,000          | Discount = 8.5%
The text itself obviously offers potential for interpretation. In other words, the text can be misunderstood, whereas the mathematical formulae derived from it are unambiguous.
Based on the formulae, we can define the following test cases (see table 2-2):
Table 2-2 Abstract test cases

| Test case | Price (input)           | Discount (expected result) |
| 1         | Price < 15,000          | 0%                         |
| 2         | 15,000 ≤ Price ≤ 20,000 | 5%                         |
| 3         | 20,000 < Price < 25,000 | 7%                         |
| 4         | Price ≥ 25,000          | 8.5%                       |
In order to execute these tests, the abstract cases have to be converted to concrete cases—i.e., we have to apply specific input values (see table 2-3). Exceptional conditions and boundary cases are not covered here.
Table 2-3 Concrete test cases

| Test case | Price (input) | Discount (expected result) |
| 1         | 14,500        | 0%                         |
| 2         | 16,500        | 5%                         |
| 3         | 24,750        | 7%                         |
| 4         | 31,800        | 8.5%                       |
The values shown here serve only to illustrate the difference between abstract and concrete test cases. We didn’t use a specific test technique to design these tests, and the cases shown aren’t meant to test the discount component exhaustively. For example, there is no test case that covers false input (such as a negative price). You will find more detail on the systematic creation of test cases using a specific test technique in Chapter 5.
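To make the case study tangible, here is a minimal Python sketch of the discount rules, with the concrete prices from table 2-3 reused as test inputs (the function name and the specific values are illustrative assumptions, not part of any original component):

```python
def discount_rate(price: float) -> float:
    """Discount rules as derived from the case study's formulae."""
    if price < 15_000:
        return 0.0
    if price <= 20_000:
        return 0.05
    if price < 25_000:
        return 0.07
    return 0.085  # Price >= 25,000

# Concrete test cases: one representative price per range.
for price, expected in [(14_500, 0.0), (16_500, 0.05),
                        (24_750, 0.07), (31_800, 0.085)]:
    assert discount_rate(price) == expected
```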
In addition to specifying abstract and concrete test cases, test design also includes prioritizing your test cases and providing appropriate testing infrastructure:
Priorities and traceability
Test analysis has already prioritized the test conditions. These same priorities can be used and fine-tuned for your test cases (or sets of test cases). This way, you can assign different priorities to the individual tests within a set that are designed to verify a single test condition. High-priority tests are executed first. The same principle applies to the traceability of test conditions, which can be broken down to cover individual test cases or sets of cases.
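As an illustration, test cases might carry their priority and traceability link in a simple structure like the following Python sketch (the data layout is our assumption, not a prescribed format):

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    id: str
    condition_id: str  # traceability: the test condition this case verifies
    priority: int      # inherited from the condition, then fine-tuned; 1 = highest

suite = [
    TestCase("TC-2", "COND-7", priority=2),
    TestCase("TC-1", "COND-7", priority=1),  # fine-tuned within one condition
]

# High-priority tests are executed first.
execution_order = sorted(suite, key=lambda tc: tc.priority)
```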
Testing infrastructure and environment
The required testing infrastructure has to be evaluated and implemented. Test infrastructure consists of all the organizational elements required for testing. These include the test environment, testing tools, and appropriately equipped workstations. A test environment is required in order to run the test object on a computer and verify the specified test cases (see below). This environment comprises hardware, any necessary simulation equipment and software tools, and other supporting materials. In order to avoid delays while testing, the test infrastructure should be up and running (and tested) before testing begins.
Following on from the test analysis, the test design stage can reveal further defects in the test basis. Likewise, test conditions that were defined at the analysis stage can be fine-tuned at the design stage.
2.3.5 Test Implementation
Is everything ready to begin testing?
The task of test implementation is to complete all necessary preparations so that the test cases can be executed in the next step. Test design and test implementation activities are often combined.
Firm up the testing infrastructure
One of the main tasks during implementation is the creation and integration of all testware, with special attention being paid to the details of the testing infrastructure. The test framework has to be programmed and installed in the test environment. Checking the environment is extremely important, as faults in the testing environment can cause system failures. In order to ensure that there are no delays or obstacles to testing, you also need to check that all additional tools (such as service virtualization, simulators, and other infrastructure elements) are in place and working.
Firm up your test cases
To ensure that your concrete test cases can be utilized without further changes or amendments, all test data have to be correctly transferred to the test environment. Abstract test cases have to be fed with specific test data.
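One common way of feeding an abstract test case with specific data is parameterization. The following sketch uses pytest (our tooling choice, not one the book prescribes); the `discount` module is the hypothetical component from the case study:

```python
import pytest

from discount import discount_rate  # hypothetical module under test

# One abstract test case ("a price in this range yields this rate"),
# instantiated into four concrete, executable cases by the data rows.
@pytest.mark.parametrize("price, expected_rate", [
    (14_500, 0.0),
    (16_500, 0.05),
    (24_750, 0.07),
    (31_800, 0.085),
])
def test_discount_rate(price, expected_rate):
    assert discount_rate(price) == expected_rate
```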
As well as firming up your test cases, you need to define the test procedure itself—i.e., the planned tests have to be put into a logical sequence. This takes place according to the priorities you have defined (see section 6.2).
Test suite, test procedure, test script
To keep a test cycle effective and to retain a logical structure, test cases are grouped into test suites. A test suite consists of multiple tests grouped according to the planned sequence of execution. The sequence has to be planned so that the postconditions of a test case serve as the preconditions for the following test case. This simplifies the test procedure, as it obviates the need for dedicated pre- and postconditions for every test case. A test suite also includes the cleanup activities required once execution is complete.
Automating test procedures using scripts saves a lot of time compared with manual testing.
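Here is a minimal Python sketch of such a scripted suite, in which the postcondition of the first case serves as the precondition of the second (the offer store is a hypothetical stand-in for a real test object):

```python
import unittest

class OfferStore:
    """Minimal in-memory stand-in for the system under test (illustrative)."""
    def __init__(self):
        self.offers = {}
        self.next_id = 1

    def create_offer(self, price):
        offer_id = self.next_id
        self.next_id += 1
        self.offers[offer_id] = price
        return offer_id

    def apply_discount(self, offer_id, rate):
        self.offers[offer_id] *= (1 - rate)

    def get_price(self, offer_id):
        return self.offers[offer_id]

store = OfferStore()

class VehicleSaleSuite(unittest.TestCase):
    """Cases ordered so each postcondition serves as the next precondition."""
    offer_id = None

    def test_1_create_offer(self):
        # Postcondition: an offer exists (the precondition for test_2).
        type(self).offer_id = store.create_offer(price=16_500)
        self.assertIsNotNone(self.offer_id)

    def test_2_apply_discount(self):
        # Relies on the offer left behind by test_1; no dedicated setup.
        store.apply_discount(self.offer_id, rate=0.05)
        self.assertAlmostEqual(store.get_price(self.offer_id), 15_675.0)

if __name__ == "__main__":
    unittest.main()  # unittest runs test methods in alphabetical order
```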
The most efficient way to plan the execution of test cases, test suites, and test procedures is defined in a test execution schedule (see section 6.3.1).
The traceability of your test cases to the requirements and/or test conditions needs to be checked and, if necessary, updated. Test suites and procedures also have to be taken into account when checking traceability.
2.3.6 Test Execution
Completeness check
It makes sense to begin by checking that all the components you want to test and the corresponding testware are available. This involves installing the test object in the test environment and checking that it can be started and run. If this check reveals no obstacles, testing can begin.
Our Tip Check the test object’s main functions
Test execution should begin by checking the test object’s main function (see the “smoke test” section in the Other Techniques side note in section 5.1.6). If this reveals failures or deviations from the expected results, you need to remedy the corresponding defect(s) before continuing with testing.
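A smoke test can be as small as the following sketch, which aborts the test run if the hypothetical discount component's main function misbehaves (names and values are illustrative):

```python
from discount import discount_rate  # hypothetical component from the earlier sketch

def smoke_test() -> None:
    """Abort the whole test run early if the main function is broken."""
    try:
        rate = discount_rate(16_500)
    except Exception as exc:
        raise SystemExit(f"Smoke test failed, stopping test execution: {exc}")
    assert rate == 0.05, "main function returned an unexpected value"

smoke_test()
```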
Tests without logs are worthless
The test execution process—whether manual or automated according to the test execution schedule—has to be precisely and completely logged. You need to log each test result (i.e., pass, fail, blocked) so that it is comprehensible to people who are not directly involved with the test process (for example, the customer). Logs also serve to verify that the overall testing strategy (see section 6.2) has been performed as planned. Logs should show which parts were tested, when, by whom, how intensively, and with what results. If a planned test case or test sequence is left out for any reason, this needs to be logged too.
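A log entry might be captured in a structured, machine- and human-readable form; the following Python sketch shows one possible layout (the field names and file format are our assumptions):

```python
import datetime
import enum
import json

class Verdict(enum.Enum):
    PASS = "pass"
    FAIL = "fail"
    BLOCKED = "blocked"

def log_result(test_id: str, verdict: Verdict, tester: str, details: str = "") -> None:
    """Append one structured entry to the test log (JSON Lines format)."""
    entry = {
        "test_id": test_id,
        "verdict": verdict.value,
        "tester": tester,
        "timestamp": datetime.datetime.now().isoformat(timespec="seconds"),
        "details": details,
    }
    with open("test_log.jsonl", "a") as log:
        log.write(json.dumps(entry) + "\n")

log_result("TC-2", Verdict.FAIL, "j.doe", "expected a 5% discount, got 7%")
```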
The importance of clarity and reproducibility
Alongside the test object (or test item), a whole raft of documents and other information is involved in test execution. These include the test framework, input data, logs, and so on. The data and other information that relate to a test case or test run have to be managed so that the test can easily be repeated later using the same data and constraints. The IDs and/or version numbers of the tools used should be noted and recorded by configuration management (see section 6.5).
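As an illustration, a small script can snapshot the environment and tool versions for later reproduction (which items to record, and pytest as the example tool, are our assumptions):

```python
import json
import platform
import subprocess
import sys

def record_configuration(path: str = "test_run_config.json") -> None:
    """Snapshot environment details so a test run can be reproduced later."""
    result = subprocess.run(["pytest", "--version"],
                            capture_output=True, text=True)
    config = {
        "python": sys.version,
        "platform": platform.platform(),
        # Older pytest versions print their version to stderr.
        "pytest": (result.stdout or result.stderr).strip(),
    }
    with open(path, "w") as f:
        json.dump(config, f, indent=2)

record_configuration()
```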
Is there really a failure?
If there is a difference between the expected and actual result, evaluating the test logs will help you to decide whether this really is due to a failure. If you do discover a failure, make sure you document it before you begin to look for its causes. It may be necessary to specify and execute supplementary test cases. You should also report the defect. See section 6.4.1 for more on test logging, section 6.3.4 for more on the defect report, and section 6.4 for a general discussion of defect management.
Retesting
Once you have corrected a fault, you need to check that the corresponding failure has also been resolved and that no new failures have been caused by the correction process. You may need to specify new test cases to verify the modified code.
Our Tip Check whether a defect is actually due to the test object itself
You will need to check carefully whether a failure really is due to the test object. There is nothing worse for a tester’s reputation than a reported failure whose cause is actually a flawed test case. At the same time, you have to take care not to be over-wary of such cases, and you mustn’t be scared to report potential failures, even if you are not sure of their cause. Both these situations are bad for the project.
Individual fault correction is not practical
Ideally, you would correct faults individually and retest to make sure that your corrections don't unintentionally influence each other. However, this approach is simply not practical in the real world. If a test is executed by an independent tester rather than the developer, individual corrections won't be possible. Reporting every fault to the developer and waiting for each one to be remedied before retesting is not a justifiable effort. The usual approach is to correct faults in batches and then set up a new version of the software for testing.
In addition to logging the differences between the expected and actual results, you need to evaluate coverage and—if necessary—log the run time of the tests. There are specialized tools available for these kinds of tasks (see Chapter 7).
Traceability is important here too
Bi-directional traceability is an important part of all the testing activities we have looked at so far. Here too, you need to check traceability and, if necessary, update the system so that the relationships between the test basis and the test conditions, cases, test runs, and results are up to date. Once a test sequence is completed, you can use its inbuilt traceability to evaluate whether every item in the test basis has been covered by a corresponding test process.
Test objectives fulfilled?
This way, you can check which requirements have passed planned and executed tests and which requirements couldn’t be verified due to failed tests or failures that were registered during testing. You may find that some requirements have not yet been verified because the corresponding tests have not yet been performed. Information of this kind enables you to verify whether the planned coverage criteria have been fulfilled and therefore whether the test can be viewed as successfully completed.
Effective coverage criteria and traceability enable you to document your test results so that they are easily comprehensible for all the project’s stakeholders.
Other common exit criteria are discussed in section 6.3.1.
2.3.7 Test Completion
The right time for test completion
Test completion is the final activity in the testing process and involves collating all the data collected by the completed test activities in order to evaluate the test experience and consolidate the testware and its associated materials. The correct moment for test completion varies according to the development model you use. It can be:
When the system goes live
The end (or discontinuation) of the test project
The completion of an agile project iteration (for example, as part of a retrospective or a review meeting)
The completion of testing activities for a specific test level
The completion of test activities for a maintenance release
At completion time, you also need to make sure that all planned activities have been completed and that the defect reports are complete. Unresolved failures (i.e., deviations from an existing requirement that have not yet been resolved) remain open and are carried over to the next iteration or release. In agile environments, such unresolved cases are classed as new product backlog items for inclusion in the next iteration.
Change requests and modified requirements that stem from the evaluation of test results are handled similarly.
Test summary report
The test summary report (see section 6.3.4) aggregates all your testing activities and results, and includes an overall evaluation of the tests you have performed compared with your predefined exit criteria. The summary report is distributed to all stakeholders.
Archiving testware
Software systems are generally utilized over a long period of time, during which failures will turn up that weren’t discovered during testing. During its lifetime, a system will also be subject to change requests from its users (or customers). Both of these situations mean that the system has to be reprogrammed and the modified code has to be tested anew. A large portion of the testing effort involved in this kind of maintenance can be avoided if the testware you originally used (test cases, logs, infrastructure, tools, and so on) are still available and can be handed over to the maintenance department. This means that the existing testware only have to be adapted rather than set up from scratch when it comes to performing system maintenance. Testware can also be profitably adapted for use in similar projects. For some industries, the law requires proof of testing, and this can only be provided if all the testware are properly archived.
Our Tip Create an image or use a Container
Conserving testware after testing can be extremely laborious, so testers often capture an image of the test environment or use software such as Docker to create a so-called container—an easily transportable and reusable package that contains all the related resources and that can be installed and run as an independent test environment.
Learning from experience
The experience you gather while testing can be analyzed and used in future projects. Deviations between your plans and the activities you actually performed are just as interesting as the search for their causes. You should use your findings to unleash your potential for improvement and make changes to the activities you undertake for future iterations, releases, and projects. These kinds of changes help the overall test process to mature.
2.3.8 Traceability
Traceability between the test basis and test results
When we talk about testing activities, we often mention the importance of traceability. The following sections summarize the traceability between the test basis and test results and detail other advantages of effective traceability.
Traceability is essential to effective test monitoring and control that spans the entire test process. It establishes the relationships between each item in the test basis and the various test results. Traceability helps you to assess the degree of coverage you achieve—for example, checking whether all requirements have been covered by at least one test case. Such traceability data helps you control the overall progress of the test process.
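A coverage check of this kind boils down to a simple set comparison; here is a minimal Python sketch under assumed data shapes (the IDs and structures are illustrative):

```python
def uncovered_items(test_basis, test_cases):
    """Return items of the test basis that no test case traces back to."""
    covered = {item for tc in test_cases for item in tc["traces_to"]}
    return [item for item in test_basis if item not in covered]

requirements = ["REQ-1", "REQ-2", "REQ-3"]
tests = [
    {"id": "TC-1", "traces_to": ["REQ-1"]},
    {"id": "TC-2", "traces_to": ["REQ-1", "REQ-3"]},
]
print(uncovered_items(requirements, tests))  # -> ['REQ-2']
```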
Over and above checking coverage, effective traceability also supports the following aspects:
Traceability helps to assess the impact of changes by analyzing which changes to the requirements affect which test conditions, test runs, test cases, and test items
Higher, abstract level
Traceability enables you to elevate your results to a “higher” abstract level, thus making the testing process more easily comprehensible to non-specialists. Considering only the executed test cases doesn’t provide information on how thorough or broad-based the testing process was. It is only when you include traceability information that the output becomes intelligible to all relevant groups of stakeholders. It also helps to fulfill abstract IT governance criteria.
The same argument is valid for test progress and test summary reports—i.e., effective traceability makes them easier to understand. The status of each item in the test basis should be included in reports. One of the following statements can be made for each individual requirement:

Tests were passed
Tests failed, failures occurred
Planned tests have not yet been executed
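A minimal sketch of how these three statements could be derived from recorded verdicts (the verdict encoding is our illustrative assumption):

```python
def requirement_status(verdicts):
    """Map the verdicts of all tests linked to one requirement to one of
    the three statements; a verdict is 'pass', 'fail', or None (not run)."""
    if "fail" in verdicts:
        return "Tests failed, failures occurred"
    if verdicts and all(v == "pass" for v in verdicts):
        return "Tests were passed"
    return "Planned tests have not yet been executed"

print(requirement_status(["pass", "pass"]))  # -> Tests were passed
print(requirement_status(["pass", None]))    # -> Planned tests have not yet been executed
```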
Traceability also enables us to make technical aspects of testing comprehensible to all stakeholders. Using company objectives as a yardstick, it enables them to evaluate product quality, procedural capacity, and project progress.
Tool-based support
In order to best utilize the advantages of traceability, some companies build dedicated management systems that organize test results in a way that automatically delivers traceability data. There are various test management tools available that inherently implement traceability.