2.3.2 Test Monitoring and Control
Monitoring and control involve continuously comparing the current testing activities with those planned, reporting any discrepancies, and carrying out the activities required to achieve the planned objectives under the changed circumstances. The test plan, too, must be updated to reflect the changed situation.
Are the exit criteria fulfilled?
Test monitoring and control activities are based on the exit criteria for each activity or task. The evaluation of whether the exit criteria for a test at a particular test level have been fulfilled can include:
■ Checking whether the degree of coverage defined in the test plan has been achieved, based on the available test results and logs. If the predefined criteria are fulfilled, the activity can be concluded.
■ The required component or system quality is determined based on test results and logs. If the required quality has been achieved, the test activity can be concluded.
■ If risk evaluation is part of the test plan and you need to prove that you have sufficient risk coverage, this can also be determined using the test results and logs.
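Expressed as code, such an evaluation boils down to comparing measured values against thresholds from the test plan. The following is a minimal Python sketch; the metric names and threshold figures are invented for illustration:

```python
# Minimal sketch of an exit-criteria check; metric names and thresholds
# are invented, not prescribed values.
def exit_criteria_fulfilled(coverage: float,
                            open_critical_defects: int,
                            risk_coverage: float) -> bool:
    return (coverage >= 0.80                # required degree of coverage
            and open_critical_defects == 0  # required product quality
            and risk_coverage >= 0.90)      # required risk coverage

# The arguments would normally be derived from test results and logs:
print(exit_criteria_fulfilled(0.85, 0, 0.95))  # True -> testing can conclude
```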
Perform additional tests or take a risk?
If the required exit criteria have not been fulfilled by the tests you have performed, you need to design and execute additional tests. If this isn’t possible for any reason, you need to clarify the situation and evaluate the ensuing risk.
Progress and completion reports
Stakeholders expect to receive regular test progress reports on current testing progress compared with the overall plan. Alongside any deviation from the original plan, these reports should also contain information regarding any prematurely terminated tests (see above) or non-fulfillment of the planned exit criteria. Test summary reports are to be provided when project milestones are reached.
All test reports should contain details relevant to their recipients and include a progress report as well as test results. Reports should also answer or preempt management questions, such as the (expected) end time, planned vs. actual use of resources, and the amount of testing effort involved.
Progress monitoring can be based on the reports made by team members or on figures and analysis provided by automated tools.
2.3.3 Test Analysis
“What” do we need to test?
Test analysis involves determining what exactly needs to be tested. For this purpose, the test basis is examined to see whether the documents to be used are sufficiently detailed and contain testable features in order to derive test conditions. The degree to which the test conditions need to be checked is determined by measurably defined coverage criteria.
The following documents and data can be used to analyze the test basis and the planned test level:
Analyzing the test basis
■ Requirements that specify the planned functional and non-functional system or component behavior. The requirements specifications include technical requirements, functional requirements, system requirements, user stories, epics, use cases, and similar work products or documentation. For example, if a requirement doesn’t specify the expected result and/or system behavior precisely enough, test cases cannot simply be derived, and the requirement must be reworked.
Analyzing documentation
■ Design or implementation data that gives rise to specific component or system structures. System or software architecture documents, design specifications, call graphs, model diagrams (for example, UML or entity relationship diagrams), interface specifications, and similar materials can be used for the test basis. For example, you will need to analyze how easily interfaces can be addressed (interface openness) and how easily the test object can be divided into smaller sub-units in order to test these separately. You need to consider these aspects at the development stage, and the test object needs to be designed and coded accordingly.
Check the test object
■ You need to investigate the individual components or the system itself, including the code base, database metadata, database queries, and other interfaces. For example, you need to check that the code is well structured and easy to understand, and that the required code coverage (see section 5.1) is easy to achieve and verify.
Consider risk analysis
■ Risk analysis reports that cover functional, non-functional, and structural aspects of the system or its components need to be investigated too. If potential software failures create serious risks, testing needs to be correspondingly thorough. Testing can be performed less formally for software that is not mission-critical.
Potential documentation errors
The cornerstone of the entire testing process is the test basis. If the test basis contains defects, you cannot formulate “correct” test conditions and you won’t be able to draft “proper” test cases. You therefore need to analyze the test basis for defects too. Check whether it contains ambiguities, or whether there are gaps or omissions in the descriptions of functions. You need to check the documentation for inconsistencies, imprecision, contradictions, and repeated or redundant passages. Any defects or discrepancies you find should be corrected immediately.
The discovery and removal of defects from the test basis is extremely important, especially if the documentation hasn’t been reviewed (see section 4.3). Development methodologies such as behavior-driven development (BDD, [URL: BDD]) and acceptance test-driven development (ATDD, [URL: ATDD]) use acceptance criteria and user stories to create test conditions and test cases before coding begins. This approach makes it simpler to identify and remedy defects at a much earlier stage in the development process.
Prioritizing tests
Once the test conditions have been identified and defined, you need to prioritize them. This ensures that the most important and highest-risk test conditions are tested first. In real-world situations, time restrictions often make it impossible to perform all the planned tests.
Traceability is important
At the planning stage, you need to ensure that there is unambiguous bi-directional traceability between the test basis and the results of your testing activities (see above). This traceability has to precisely define which test condition checks which requirement and vice versa.
This is the only way to ensure that you can later determine how thoroughly each requirement needs to be tested and, depending on the test conditions, which test cases to use.
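As a minimal sketch of what bi-directional traceability means in practice, the forward mapping from requirements to test conditions can be inverted mechanically; the IDs below are invented:

```python
# Bi-directional traceability as two mappings kept in sync (IDs invented).
requirement_to_conditions = {
    "REQ-1": ["COND-1", "COND-2"],  # which test conditions check REQ-1
    "REQ-2": ["COND-3"],
}

# The backward direction can be derived mechanically:
condition_to_requirements: dict[str, list[str]] = {}
for req, conditions in requirement_to_conditions.items():
    for cond in conditions:
        condition_to_requirements.setdefault(cond, []).append(req)

print(requirement_to_conditions["REQ-1"])   # forward:  ['COND-1', 'COND-2']
print(condition_to_requirements["COND-3"])  # backward: ['REQ-2']
```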
Choosing test techniques
It is useful to consider which test technique to use (i.e., black-box, white-box, experience-based, see Chapter 5) at the analysis stage. Each technique has its own system for reducing the likelihood of overlooking test conditions and helping to define these precisely. For example, using an equivalence partition (or “equivalence class”) test (see section 5.1.1) ensures that the entire input domain is used for the creation of test cases. This prevents you from forgetting or overlooking negative input in your requirements definitions. You can then define your test conditions to cover negative input data.
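For illustration, the idea can be sketched in a few lines of Python; the input field, its valid range, and the representative values are assumptions, not taken from a specific requirement:

```python
# Illustrative partitions for a numeric input field that, by assumption,
# accepts values from 1 to 100; one representative value per partition.
partitions = {
    "valid: 1..100":         50,
    "invalid: below range":  -5,    # negative input is not overlooked
    "invalid: above range":  150,
    "invalid: not a number": "abc",
}
for name, representative in partitions.items():
    print(f"{name}: test with input {representative!r}")
```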
If you use an experience-based testing technique (see section 5.3), you can use the test conditions defined during analysis as objectives for your test charter. A test charter is a kind of “assignment” that, alongside traditional test objectives, provides potential ideas for additional tests. If the test objectives are traceable back to the test basis, you can evaluate the achieved coverage (for example, for your requirements), even when you are using an experience-based technique.
2.3.4 Test Design
How to test and which test cases to use
When designing tests, you determine how you are going to test. At the design stage, test conditions are used to create test cases (or sequences of test cases). Here, you will usually use one of the test techniques detailed in Chapter 5.
Test cases can be specified on two “levels”: abstract and concrete (see Case Study: Abstract and concrete test cases below).
Abstract and concrete test cases
An abstract (or “high-level”) test case doesn’t include specific input values or expected results, and is described using logical operators. The advantage of such cases is that they can be used during multiple test cycles and with varying data but can still adequately document the scope of each case. A concrete (or “low-level”) test case uses specific input data and expected result values.
When you begin to design a test, you can use abstract as well as concrete test cases. Because only concrete test cases can be executed on a computer, abstract test cases have to be fleshed out with real input and output values. In order to utilize the advantages of abstract test cases (see above), you can derive abstract test cases from concrete cases too.
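In code, the distinction maps naturally onto parameterized tests: the parameterized function plays the role of the abstract test case, while each parameter row is a concrete one. A minimal pytest sketch, with a trivial stand-in test object:

```python
import pytest

def add(x: int, y: int) -> int:
    """Trivial stand-in for the test object."""
    return x + y

# The parameterized function is the abstract ("high-level") test case: input
# and expected result stay logical parameters bound later to varying data.
@pytest.mark.parametrize("x, y, expected", [
    (2, 3, 5),     # each row is one concrete ("low-level") test case
    (-1, 1, 0),
])
def test_add(x, y, expected):
    assert add(x, y) == expected
```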
Test cases involve more than just test data
Preconditions have to be defined for every test case. A test also requires clear constraints that must be adhered to. Furthermore, you need to establish in advance what results or behavior you expect the test to produce. In addition to output data, results include any changes to global (persistent) data and states, as well as any other reactions to the execution of the test case.
The test oracle
You need to use adequate sources to predict test results. In this context, people talk about the “test oracle” that has to be “consulted” regarding the expected results. For example, specifications can serve as an oracle. There are two basic approaches:
■ The tester derives the expected output value from the input value based on the test object’s requirements and specifications as defined in the test basis.
■ If the inverse of a function exists, you can execute the inverse and compare its output with the input value for your original test case. This technique can, for example, be used when testing encryption and decryption algorithms.
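The second approach can be sketched as a round-trip test in which the inverse function serves as the oracle. The “cipher” below is a toy Caesar shift used purely for illustration, not a real encryption algorithm:

```python
# Round-trip sketch: the inverse function acts as the test oracle.
# The "cipher" is a toy Caesar shift, not a real encryption algorithm.
def encrypt(text: str, shift: int = 3) -> str:
    return "".join(chr(ord(c) + shift) for c in text)

def decrypt(text: str, shift: int = 3) -> str:
    return "".join(chr(ord(c) - shift) for c in text)

def test_round_trip():
    original = "test input"
    # No expected ciphertext is needed: decrypt(encrypt(x)) must equal x.
    assert decrypt(encrypt(original)) == original
```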
The following example illustrates the difference between abstract and concrete test cases.
Case Study: Abstract and concrete test cases
A dealership can give its salespeople the option of discounts to apply to the price of a vehicle. For prices below $15,000 there is no discount. For prices up to $20,000, a discount of 5% is appropriate. If the price is below $25,000, a 7% discount is possible. If the price is above $25,000, a discount of 8.5% is to be applied.
The above text enables us to derive the following relationships between price and discount:
price < $15,000                discount = 0%
$15,000 ≤ price ≤ $20,000      discount = 5%
$20,000 < price < $25,000      discount = 7%
price ≥ $25,000                discount = 8.5%
The text itself obviously offers potential for interpretation. In other words, the text can be misunderstood, whereas the mathematical formulae derived from it are unambiguous.
Based on the formulae, we can define the following test cases (see table 2-2):
Table 2-2 Abstract test cases
Test case    Price (input condition)        Expected discount
1            price < $15,000                0%
2            $15,000 ≤ price ≤ $20,000      5%
3            $20,000 < price < $25,000      7%
4            price ≥ $25,000                8.5%
In order to execute these tests, the abstract cases have to be converted to concrete cases—i.e., we have to apply specific input values (see table 2-3). Exceptional conditions and boundary cases are not covered here.
Table 2-3 Concrete test cases
Test case    Price (input value)    Expected discount
1            $14,500                $0.00
2            $16,500                $825.00
3            $24,750                $1,732.50
4            $31,800                $2,703.00
The values shown here serve only to illustrate the difference between abstract and concrete test cases. We didn’t use a specific test technique to design these tests, and the cases shown aren’t meant to test the discount component exhaustively. For example, there is no test case that covers false input (such as a negative price). You will find more detail on the systematic creation of test cases using a specific test technique in Chapter 5.
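As a supplementary sketch, the formulae translate directly into executable code, and the concrete test cases from table 2-3 become runnable checks (the function name is ours, not part of the case study):

```python
# The discount formulae as code; the function name is illustrative.
def calculate_discount(price: float) -> float:
    if price < 15_000:
        rate = 0.0
    elif price <= 20_000:
        rate = 0.05
    elif price < 25_000:
        rate = 0.07
    else:  # price >= 25,000
        rate = 0.085
    return round(price * rate, 2)

# The concrete test cases from table 2-3 become executable checks:
assert calculate_discount(14_500) == 0.00
assert calculate_discount(16_500) == 825.00
assert calculate_discount(24_750) == 1_732.50
assert calculate_discount(31_800) == 2_703.00
```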
In addition to specifying abstract and concrete test cases, test design also includes prioritizing your test cases and providing appropriate testing infrastructure:
Priorities and traceability
■ Test analysis has already prioritized the test conditions. These same priorities can be used and fine-tuned for your test cases (or sets of test cases). This way, you can assign different priorities to the individual tests within a set that are designed to verify a single test condition. High-priority tests are executed first.
The same principle applies to the traceability of test conditions, which can be broken down to cover individual test cases or sets of cases.
Testing infrastructure and environment
■ The required testing infrastructure has to be evaluated and implemented. Test infrastructure consists of all the organizational elements required for testing. These include the test environment, testing tools, and appropriately equipped workstations. A test environment is required in order to run the test object on a computer and verify the specified test cases (see below). This environment comprises hardware, any necessary simulation equipment and software tools, and other supporting materials. In order to avoid delays while testing, the test infrastructure should be up and running (and tested) before testing begins.
Following on from the test analysis, the test design stage can reveal further defects in the test basis. Likewise, test conditions that were defined at the analysis stage can be fine-tuned at the design stage.
2.3.5 Test Implementation
Is everything ready to begin testing?
The task of test implementation is the final preparation of all necessary activities so that the test cases can be executed in the next step. Test design and test implementation activities are often combined.
Firm up the testing infrastructure
One of the main tasks during implementation is the creation and integration of all testware, with special attention being paid to the details of the testing infrastructure. The test framework has to be programmed and installed in the test environment. Checking the environment is extremely important, as faults in the testing environment can cause system failures. In order to ensure that there are no delays or obstacles to testing, you also need to check that all additional tools (such as service virtualization, simulators, and other infrastructure elements) are in place and working.
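Checking that a virtualized service is in place and working might, for example, look like the following sketch, which uses Python’s standard unittest.mock as a stand-in for a dedicated service-virtualization tool:

```python
from unittest.mock import Mock

# Stand-in for a backend that is not yet available in the test environment;
# a dedicated service-virtualization tool would play this role in practice.
payment_service = Mock()
payment_service.authorize.return_value = {"status": "OK"}

# Sanity check of the infrastructure itself, before testing begins:
assert payment_service.authorize(amount=100)["status"] == "OK"
```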
Firm up your test cases
To ensure that your concrete test cases can be utilized without further changes or amendments, all test data have to be correctly transferred to the test environment. Abstract test cases have to be fed with specific test data.
As well as firming up your test cases, you need to define the test procedure itself—i.e., the planned tests have to be put into a logical sequence. This takes place according to the priorities you have defined (see section 6.2).
Test suite, test procedure, test script
To keep a test cycle effective and to retain a logical structure, test cases are grouped into test suites. A test suite consists of multiple tests grouped according to the planned sequence of execution. The sequence has to be planned so that the postconditions of a test case serve as the preconditions for the following test case. This simplifies the test procedure, as it obviates the need for dedicated pre- and postconditions for every test case. A test suite also includes the cleanup activities required once execution is complete.
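With Python’s unittest module, for example, such a suite with a fixed execution sequence could be sketched as follows; the test cases and their names are invented:

```python
import unittest

class VehicleOrderTests(unittest.TestCase):
    # Invented test cases: the postcondition of the first test (an order
    # exists) serves as the precondition of the second (it can be cancelled).
    def test_1_create_order(self):
        pass
    def test_2_cancel_order(self):
        pass

def make_suite() -> unittest.TestSuite:
    suite = unittest.TestSuite()
    suite.addTest(VehicleOrderTests("test_1_create_order"))  # planned sequence
    suite.addTest(VehicleOrderTests("test_2_cancel_order"))
    return suite

if __name__ == "__main__":
    unittest.TextTestRunner().run(make_suite())
    # ... followed by the cleanup activities required after execution.
```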
Automating test procedures using scripts saves a lot of time compared with manual testing.
The most efficient way to plan the execution of test cases, test suites, and test procedures is defined in a test execution schedule (see section 6.3.1).
The traceability of your test cases to the requirements and/or test conditions needs to be checked and, if necessary, updated. Test suites and procedures also have to be taken into account when checking traceability.
2.3.6 Test Execution
Completeness check
It makes sense to begin by checking that all the components you want to test and the corresponding testware are available. This involves installing the test object in the test environment and checking that it can be started and run. If this check reveals no obstacles, testing can begin.
Our Tip
Check the test object’s main functions
■ Test execution should begin by checking the test object’s main function (see the “smoke test” section in the Other Techniques side note in section 5.1.6). If this reveals failures or deviations from the expected results, you need to remedy the corresponding defect(s) before continuing with testing.
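A smoke test of this kind can be as simple as starting the test object and probing its main function; in the following sketch, the application interface is a placeholder:

```python
# Minimal smoke-test sketch; the test object's interface is a placeholder.
class FakeApp:
    """Stand-in for the installed test object."""
    def start(self) -> None:
        pass
    def main_function(self) -> str:
        return "ok"

def smoke_test(app) -> bool:
    """Can the test object be started, and does its main function respond?"""
    try:
        app.start()
        return app.main_function() is not None
    except Exception:
        return False

print(smoke_test(FakeApp()))  # True -> continue with the planned tests
```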
Tests without logs are worthless
The test execution process—whether manual or automated according to the test execution schedule—has to be precisely and completely logged. You need to log each test result (i.e., pass, fail, blocked) so that it is comprehensible to people who are not directly involved with the test process (for example, the customer). Logs also serve to verify that the overall testing strategy (see section 6.2) has been performed as planned. Logs should show which parts were tested, when, by whom, how intensively, and with what results. If a planned test case or test sequence is left out for any reason, this needs to be logged too.
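A minimal structured log entry per executed test case might capture exactly the facts listed above; the field set below is an assumption, not a prescribed format:

```python
import datetime

def log_result(test_id: str, result: str, tester: str) -> dict:
    """Record one test result so that outsiders can follow the test run."""
    assert result in {"pass", "fail", "blocked"}
    return {
        "test_id": test_id,      # which part was tested
        "result": result,
        "tester": tester,        # by whom
        "timestamp": datetime.datetime.now().isoformat(),  # when
    }

print(log_result("TC-3", "pass", "j.doe"))
```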
The importance of clarity and reproducibility
Alongside the test object (or test item), a whole raft of documents and other information are involved in test execution. These include: the test framework, input data, logs, and so on. The data and other information that relate to a test case or test run have to be managed so that the test can easily be repeated later using the same data and constraints. The IDs and/or version numbers of the tools used should be noted and recorded by configuration management (see section 6.5).
Is there really a failure?
If there is a difference between the expected and actual result, evaluating the test logs will help you to decide whether this really is due to a failure. If you do discover a failure, make sure you document it before you begin to look for its causes. It may be necessary to specify and execute supplementary test cases. You should also report the defect. See section 6.4.1 for more on test logging, section 6.3.4 for more on the defect report, and section 6.4 for a general discussion of defect management.
Retesting
Once you have corrected a fault, you need to check that the corresponding failure has also been resolved and that no new failures have been caused by the correction process. You may need to specify new test cases to verify the modified code.
Our Tip
Check whether a defect is actually due to the test object itself
■ You will need to check carefully whether a failure really is due to the test object. There is nothing worse for a tester’s reputation than a reported failure whose cause is actually a flawed test case. At the same time, you have to take care not to be over-wary of such cases, and you mustn’t be scared to report potential failures, even if you are not sure of their cause. Both these situations are bad for the project.
Individual fault correction is not practical
■ Ideally, you will correct faults individually and retest to make sure that your corrections don’t unintentionally influence each other. However, this approach is simply not practical in the real world. If a test is executed by an independent tester rather than the developer, individual corrections won’t be possible. Reporting every fault to the developer and waiting for each to be remedied before retesting involves unjustifiable effort. The usual approach is to correct faults in batches and then set up a new version of the software for testing.
In addition to logging the differences between the expected and actual results, you need to evaluate coverage and—if necessary—log the run time of the tests. There are specialized tools available for these kinds of tasks (see Chapter 7).
Traceability is important here too
Bi-directional traceability is an important part of all the testing activities we have looked at so far. Here too, you need to check traceability and, if necessary, update the system so that the relationships between the test basis and the test conditions, cases, test runs, and results are up to date. Once a test sequence is completed, you can use its inbuilt traceability to evaluate whether every item in the test basis has been covered by a corresponding test process.
Test objectives fulfilled?
This way, you can check which requirements have passed planned and executed tests and which requirements couldn’t be verified due to failed tests or failures that were registered during testing. You may find that some requirements have not yet been verified because the corresponding tests have not yet been performed. Information of this kind enables you to verify whether the planned coverage criteria have been fulfilled and therefore whether the test can be viewed as successfully completed.
Effective coverage criteria and traceability enable you to document your test results so that they are easily comprehensible for all the project’s stakeholders.
Other common exit criteria are discussed in section 6.3.1.