3.2 Iterative and Incremental Development Models
Iterative development
The basic idea behind iterative development is that the development team can use the experience they gain from previous development stages along with real-world and customer feedback from earlier system versions to improve the product in future iterations. Such improvements can take the form of fault corrections or the alteration, extension or addition of specific features. The primary objective of all these scenarios is to improve the product step by step in order to meet customer expectations increasingly accurately.
Incremental development
The idea behind incremental development is to develop a product in preplanned stages, with each completed stage offering a more full-featured version (increment) of the product. Increments can vary greatly in size— for example from changing a simple web page layout to adding a complete new module with additional functionality. The primary objective of incremental development is to minimize time to market—i.e., to release a simple product version (or a simple version of a feature) to provide the customer as quickly as possible with a working version of the product or feature. Further enhancements will then be offered continually depending on the customer’s responses and wishes.
Iterative-incremental development
In practice, the borders between these two methodologies are blurred and they are often referred to together as iterative-incremental development. A defining characteristic of both is that each product release enables you to receive regular, early feedback from the customer and/or end-user. This reduces the risk of developing a system that doesn’t meet the customer’s expectations.
Examples of combined iterative-incremental models are: the spiral model [Boehm 86], Rapid Application Development (RAD) [Martin 91], Rational Unified Process (RUP) [Kruchten 03], and Evolutionary Development [Gilb 05].
Agile software development
All forms of agile software development are iterative-incremental development models. The best-known agile models are: Extreme Programming (XP) [Beck 04], Kanban [URL: Kanban], and Scrum [Beedle 02], [URL: Scrum Guide]. In recent years, Scrum has become the most popular of these and is extremely widespread.
Fig. 3-3 Scrum-based agile development
Testing to the rhythm of the iterations
The pace at which new increments/releases are created varies from model to model. While non-agile iterative-incremental projects tend to foresee releases at intervals of six months to a year, or sometimes even longer, agile models in contrast attempt to reduce the release cycle to a quarterly, monthly, or even weekly rhythm.
Here, testing has to be adapted to fit such short release cycles. For example, this means that every component requires re-usable test cases that can be easily and instantly repeated for each new increment. If this condition is not met, you risk reducing system reliability from increment to increment.
Each increment also requires new test cases that cover any additional functionality, which means the number of test cases you need to maintain and execute (on each release) increases over time. The shorter the release cycle, the harder it becomes to execute all test cases satisfactorily within the allotted release timeframe, even though doing so remains critical. Test automation is therefore an important tool when adapting your testing to agile development.
Continuous Integration and Continuous Deployment
Once you have set up a reliable automated test environment that executes your test cases with sufficient speed, you can use it for every new build. When a component is modified, it is integrated into the previous complete build, followed by a fresh automated test run. Any failures that appear should be fixed in the short term. This way, the project always has a fully integrated and tested system running within its test environment. This approach is called “Continuous Integration” (CI).
This approach can be augmented using “Continuous Deployment” (CD): If the test run (during CI) is fault-free, the tested system is automatically copied to the production environment, installed there, and thus deployed in a ready-to-run state.
Continuous Delivery = Continuous Testing
Combining CI and CD results in a process called “Continuous Delivery”. These techniques can only be successfully applied if you have a largely automated testing environment at your disposal which enables you to perform “continuous testing”.
Continuous testing and other critical agile testing techniques are explained in detail in [Crispin 08] and [Linz 14].
3.3 Software Development in Project and Product Contexts
The requirements for planning and traceability of development and testing vary according to the context. Likewise, the appropriateness of a particular lifecycle model for the development of a specific product also depends on the contexts within which it is developed and used. The following project- and product-based factors play a role in deciding which model to use:
The company’s business priorities, project objectives, and risk profile. For example, if time-to-market is a primary requirement.
The type of product being developed. A small (perhaps department-internal) system has a less demanding development process than a large system designed for multi-year use by a huge customer base, such as our VSR-II case study project. Such large products are often developed using multiple models.
The market conditions and technical environment in which the product is used. For example, a product family developed for use in the Internet of Things (IoT) can consist of multiple types of objects (devices, services, platforms, and so on), each of which is developed using a specific and suitable lifecycle model. Because IoT objects are used for long periods of time in large numbers, it makes sense if their operational usage (distribution, updates, decommissioning, and so on) is mirrored in specific phases or catalogs of tasks within the lifecycle model. This makes developing new versions of such a system particularly challenging.
Identified product risks. For example, the safety aspects involved in designing and implementing a vehicle braking system.
Organizational and cultural aspects. For example, the difficulties generated by communication within international teams can make iterative or agile development more difficult.
Case Study: Mixing development models in the VSR-II project
One of the objectives of the VSR-II project is to make it “as agile as possible”, so the DreamCar module and all the browser-based front-end components and subsystems are developed in an agile Scrum environment. However, because they are safety-critical, the ConnectedCar components are to be developed using the traditional V-model.
Prototyping [URL: Prototyping] is also an option early on in a project and, once the experimental phase is complete, you can switch to an incremental approach for the rest of the project.
Tailoring
A development model can and should be adapted and customized for use within a specific project. This adaptation process is called “tailoring”.
Tailoring can involve combining test levels or certain testing activities and organizing them specifically to suit the project at hand. For example, when integrating off-the-shelf commercial software into a larger system, interoperability tests at the integration testing stage (for example, when integrating with existing infrastructure or systems) can be performed by the customer rather than the supplier, as can acceptance testing (functional and non-functional operational and customer acceptance tests). For more detail, see sections 3.4 and 3.5.
The tailored development model then comprises a view of the required activities, timescales, and objectives that is binding for all project participants. Any detailed planning (schedules, staffing, and infrastructure allocation) can then utilize and build upon the tailored development model.
Attributes of good testing
Regardless of which lifecycle model you choose, your tailoring should support good and effective testing. Your testing approach should include the following attributes:
Testing and its associated activities are included as early as possible in the lifecycle—for example, drafting test cases and setting up the test environment (see the principle of early testing above).
For every development activity, a corresponding test activity is planned and executed.
Test activities are planned and managed specifically to suit the objectives of the test level they belong to.
Test analysis and test design begin within the corresponding development phase.
As soon as work products (requirements, user stories, design documents, code etc.) exist, testers take part in discussions that refine them. Testers should participate early and continuously in this refinement process.
3.4 Testing Levels
A software system is usually composed of a number of subsystems, which in turn are made up of multiple components often referred to as units or modules. The resulting system structure is also called the system’s “software architecture” or simply its “architecture”. Designing an architecture that properly supports the system’s requirements is a critical part of the software development process.
During testing, a system has to be examined and tested on each level of its architecture, from the most elementary component right up to the complete, integrated system. The test activities that relate to a particular level of the architecture are known as a testing “level”, and each testing level is a single instance of the test process.
The following sections detail the differences between the various test levels with regard to their different test objects, test objectives, testing techniques, and responsibilities/roles.
3.4.1 Component Testing
Terminology
Component testing involves systematically checking the lowest-level components in a system’s architecture. Depending on the programming language used to create them, these components have various names, such as “units”, “modules” or (in the case of object-oriented programming) “classes”. The corresponding tests are therefore called “module tests”, “unit tests”, or “class tests”.
Components and component testing
Regardless of which programming language is used, the resulting software building blocks are the “components” and the corresponding tests are called “component tests”.
The test basis
The component-specific requirements and the component’s design (i.e., its specifications) form the test basis. In order to design white-box tests or to evaluate code coverage, you must analyze the component’s source code and use it as an additional test basis. However, to judge whether a component reacts correctly to a test case, you have to refer to the design or requirements documentation.
Test objects
As detailed above, modules, units, or classes are typical test objects. However, things like shell scripts, database scripts, data conversion and migration procedures, database content, and configuration files can all be test objects too.
A component test verifies a component’s internal functionality
A component test typically tests only a single component in isolation from the rest of the system. This isolation serves to exclude external influences during testing: If a test reveals a failure, it is then obviously attributable to the component you are testing. It also simplifies design and automation of the test cases, due to their narrowly focused scope.
A component can itself consist of multiple building blocks. The important aspect is that the component test has to check only the internal functionality of the component in question, not its interaction with components external to it. The latter is the subject of integration testing. Component test objects generally arrive “fresh from the programmer’s hard disk”, making this level of testing very closely allied to development work. Component testers therefore require adequate programming skills to do their job properly.
The following example illustrates the point:
Case Study: Testing the calculate_price class
According to its specifications, the VSR-II DreamCar module calculates a vehicle’s price as follows:
We start with the list price (baseprice) minus the dealer discount (discount). Special edition markup (specialprice) and the price of any additional extras (extraprice) are then added.
If three or more extras not included with the special edition are added (extras), these extras receive a 10% discount. For five extras or more, the discount increases to 15%.
The dealer discount is subtracted from the list price, while the accessory discount is only applied to the extras. The two discounts cannot be applied together.
The resulting price is calculated using the following C++ method:
double calculate_price (double baseprice, double specialprice,
                        double extraprice, int extras,
                        double discount)
{
    double addon_discount;
    double result;

    if (extras >= 3)
        addon_discount = 10;
    else if (extras >= 5)
        addon_discount = 15;
    else
        addon_discount = 0;

    if (discount > addon_discount)
        addon_discount = discount;

    result = baseprice / 100.0 * (100 - discount) + specialprice
             + extraprice / 100.0 * (100 - addon_discount);

    return result;
}
The test environment
In order to test this calculation, the tester uses the corresponding class interface by calling the calculate_price() method and providing it with appropriate test data. The tester then records the component’s reaction to this call—i.e., the value returned by the method call is read and logged.
This piece of code is buggy: the branch that calculates the 15% discount for five or more extras can never be reached. This coding error serves as an example to explain the white-box analysis detailed in Chapter 5.
To do this you need a “test driver”. A test driver is a separate program that makes the required interface call and logs the test object’s reaction (see also Chapter 5).
For the calculate_price() test object, a simple test driver could look like this:
#include <cmath>    // for std::fabs

bool test_calculate_price() {
    double price;
    bool test_ok = true;

    // testcase 01
    price = calculate_price(10000.00, 2000.00, 1000.00, 3, 0);
    test_ok = test_ok && (std::fabs(price - 12900.00) < 0.01);

    // testcase 02
    price = calculate_price(25500.00, 3450.00, 6000.00, 6, 0);
    test_ok = test_ok && (std::fabs(price - 34050.00) < 0.01);

    // testcase ...

    // test result
    return test_ok;
}
The test driver in our example is very simple and could, for example, be extended to log the test data and the results with a timestamp, or to input the test data from an external data table.
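To illustrate one such extension, the test cases could be held in a simple table and processed in a loop. The following is only a sketch of this idea; the structure and names (PriceTestCase, test_calculate_price_table) are invented for illustration and are not part of the VSR-II project:

#include <cmath>     // std::fabs
#include <cstdio>    // std::printf
#include <vector>

// Production code interface under test (declared here so the sketch is self-contained).
double calculate_price(double baseprice, double specialprice,
                       double extraprice, int extras, double discount);

// One row of the (hypothetical) test data table.
struct PriceTestCase {
    const char* id;
    double baseprice, specialprice, extraprice;
    int extras;
    double discount;
    double expected;
};

bool test_calculate_price_table() {
    // The table could just as well be read from an external file.
    std::vector<PriceTestCase> cases = {
        {"testcase 01", 10000.00, 2000.00, 1000.00, 3, 0, 12900.00},
        {"testcase 02", 25500.00, 3450.00, 6000.00, 6, 0, 34050.00},
    };

    bool all_ok = true;
    for (const auto& tc : cases) {
        double price = calculate_price(tc.baseprice, tc.specialprice,
                                       tc.extraprice, tc.extras, tc.discount);
        bool ok = std::fabs(price - tc.expected) < 0.01;
        std::printf("%s: %s (expected %.2f, got %.2f)\n",
                    tc.id, ok ? "PASS" : "FAIL", tc.expected, price);
        all_ok = all_ok && ok;
    }
    return all_ok;
}

The result logging (here via printf) could then easily be extended with a timestamp, as mentioned above.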
Developer tests
To write a test driver you need programming skills. You also have to study and understand the test object’s code (or at least, that of its interface) in order to program a test driver that correctly calls the test object. In other words, you have to master the programming language involved and you need access to appropriate programming tools. This is why component testing is often performed by the component’s developers themselves. Such a test is then often referred to as a “developer test”, even though “component testing” is what is actually meant. The disadvantages of developers testing their own code are discussed in section 2.4.
Testing vs. debugging
Component tests are often confused with debugging. However, debugging involves eliminating defects, while testing involves systematically checking the system for failures (see section 2.1.2).
Our Tip: Use component test frameworks
Using component test frameworks (see [URL: xUnit]) significantly reduces the effort involved in programming test drivers, and creates a consistent component test architecture throughout the project. Using standardized test drivers also makes it easier for other members of the team who aren’t familiar with the individual components or the test environment to perform component tests. These kinds of test drivers can be controlled via a command-line interface and provide mechanisms for handling test data, and for logging and evaluating test results. Because all test data and logs are identically structured, it is possible to evaluate the results across multiple (or all) tested components.
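As a sketch of what this looks like in practice, the two test cases from our hand-written test driver could be expressed with Google Test, one widely used xUnit-style framework for C++ (the framework choice and the test names here are our assumption, not a project requirement):

#include <gtest/gtest.h>

// Production code interface under test (declaration repeated to keep the sketch self-contained).
double calculate_price(double baseprice, double specialprice,
                       double extraprice, int extras, double discount);

// Each TEST() is an independent component test case; the framework discovers,
// executes, and logs them, so no hand-written test driver is needed.
TEST(CalculatePriceTest, ThreeExtrasGetTenPercentDiscount) {
    EXPECT_NEAR(12900.00, calculate_price(10000.00, 2000.00, 1000.00, 3, 0), 0.01);
}

TEST(CalculatePriceTest, SixExtrasGetFifteenPercentDiscount) {
    EXPECT_NEAR(34050.00, calculate_price(25500.00, 3450.00, 6000.00, 6, 0), 0.01);
}

int main(int argc, char** argv) {
    ::testing::InitGoogleTest(&argc, argv);
    return RUN_ALL_TESTS();   // runs all registered tests and reports the results
}

The resulting test executable can be run from the command line, and its uniformly structured output can be collected and evaluated across components, as described above.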
Component test objectives
The component testing level is characterized not only by the type of test objects and the test environment, but also by very specific testing objectives.
Testing functionality
The most important task of a component test is checking that the test object fully and correctly implements the functionality defined in its specifications (such tests are also known as “function tests” or “functional tests”). In this case, functionality equates to the test object’s input/output behavior. In order to check the completeness and correctness of the implementation, the component is subjected to a series of test cases, with each covering a specific combination of input and output data.
Case Study: Testing VSR-II’s price calculations
This kind of testing of input/output data combinations is nicely illustrated by the test cases in the example shown above. Each test case inputs a specific price combined with a specific number of extras. The test case then checks whether the test object calculates the correct total price.
For example, test case #2 checks the “discount for five or more extras”. When test case #2 is executed, the test object outputs an incorrect total price. Test case #2 produces a failure, indicating that the test object does not fulfill its specified requirements for this input data combination.
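Working through the numbers shows why: according to the specification, six extras qualify for the 15% discount, so the expected price is 25500 + 3450 + 6000 * 0.85 = 34050.00. The faulty code, however, never reaches the 15% branch and applies only the 10% discount, returning 25500 + 3450 + 6000 * 0.90 = 34350.00, so the comparison against the expected value fails.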
Typical failures revealed by component testing are faulty calculations or missing (or badly chosen) program paths (for example, overlooked or wrongly interpreted special cases).
Testing for robustness
At run time, a software component has to interact and swap data with multiple neighboring components, and it cannot be guaranteed that the component won’t be accessed and used wrongly (i.e., contrary to its specification). In such cases, the wrongly addressed component should not simply stop working and crash the system, but should instead react “reasonably” and robustly. Testing for robustness is therefore another important aspect of component testing. The process is very similar to that of an ordinary functional test, but serves the component under test with invalid input data instead of valid data. Such test cases are also referred to as “negative tests” and assume that the component will produce suitable exception handling as output. If adequate exception handling is not built in, the component may produce runtime errors, such as division by zero or null pointer access, that cause the system to crash.
Case Study: Negative tests
For the price calculation example we used previously, a negative test would involve testing with negative input values or an incorrect data type (for example, char instead of int):
// testcase 20
price = calculate_price(-1000.00, 0.00, 0.00, 0, 0);
test_ok = test_ok && (ERR_CODE == INVALID_PRICE);
...
// testcase 30
price = calculate_price("abc", 0.00, 0.00, 0, 0);
test_ok = test_ok && (ERR_CODE == INVALID_ARGUMENT);
Various interesting things come to light:
Because the number of possible “bad” input values is virtually limitless, it is much easier to design “negative tests” than it is to design “positive tests”.
The test driver has to be extended in order to evaluate the exception handling produced by the test object.
Exception handling within the test object (setting ERR_CODE in our example) requires additional functionality. In practice, you will often find that half of the source code (or sometimes more) is designed to deal with exceptions. Robustness comes at a price.
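As a sketch of what this additional functionality could look like, the arguments might be validated before any calculation is performed. The error codes and the validation rules shown here are our own assumptions for illustration; the book does not prescribe them:

// Hypothetical error reporting used by the negative tests above.
enum ErrorCode { ERR_NONE, INVALID_PRICE, INVALID_ARGUMENT };
static ErrorCode ERR_CODE = ERR_NONE;

// Could be called at the start of calculate_price(): it sets ERR_CODE and
// tells the caller whether the arguments are usable at all.
bool validate_price_arguments(double baseprice, double specialprice,
                              double extraprice, int extras, double discount)
{
    ERR_CODE = ERR_NONE;
    if (baseprice < 0.0 || specialprice < 0.0 || extraprice < 0.0) {
        ERR_CODE = INVALID_PRICE;       // negative prices are rejected
        return false;
    }
    if (extras < 0 || discount < 0.0 || discount > 100.0) {
        ERR_CODE = INVALID_ARGUMENT;    // implausible counts or discounts
        return false;
    }
    return true;
}

If this check fails, calculate_price() would return immediately with a defined error value, which is exactly the exception-handling behavior that test cases 20 and 30 expect.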
Alongside functionality and robustness, component testing can also be used to check other attributes of a component that influence its quality and that can only be tested (if at all) using a lot of additional effort at higher test levels. Examples are the non-functional attributes “efficiency” and “maintainability”.
Testing for efficiency
The efficiency attribute indicates how economically a component interacts with the available computing resources. This includes aspects such as memory use, processor use, or the time required to execute functions or algorithms. Unlike most other test objectives, the efficiency of a test object can be evaluated precisely using suitable test criteria, such as kilobytes of memory or response times measured in milliseconds. Efficiency testing is rarely performed for all the components in a system. It is usually restricted to components that have certain efficiency requirements defined in the requirements catalog or the component’s specification, for example, when limited hardware resources are available in an embedded system, or when a real-time system has to guarantee predefined response-time limits.
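A minimal sketch of such an efficiency test for our example component is shown below; the measurement approach and the 50-microsecond limit are purely illustrative assumptions, not requirements from the VSR-II project:

#include <chrono>
#include <cstdio>

// Production code interface under test (declaration repeated for self-containment).
double calculate_price(double baseprice, double specialprice,
                       double extraprice, int extras, double discount);

bool test_calculate_price_efficiency() {
    using namespace std::chrono;

    const int runs = 100000;              // average over many calls
    auto start = steady_clock::now();
    for (int i = 0; i < runs; ++i) {
        volatile double p = calculate_price(10000.00, 2000.00, 1000.00, 3, 0);
        (void)p;                           // keep the call from being optimized away
    }
    auto elapsed = duration_cast<microseconds>(steady_clock::now() - start);

    double avg_us = static_cast<double>(elapsed.count()) / runs;
    std::printf("average execution time: %.3f microseconds\n", avg_us);

    // Hypothetical efficiency requirement: each call must finish within 50 microseconds.
    return avg_us < 50.0;
}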
Testing for maintainability
Maintainability incorporates all of the attributes that influence how easy (or difficult) it is to enhance or extend a program. The critical factor here is the amount of effort required for a developer (or team) to get a grasp of the existing program and its context. This is just as valid for a developer who needs to modify a system they programmed years ago as for someone who is taking over code from a colleague.
The main aspects of maintainability that need to be tested are code structure, modularity, code commenting, comprehensibility and up-to-dateness of the documentation, and so on.
Case Study: Code that is difficult to maintain
The sample calculate_price() code contains a number of maintainability issues. For example, there are no code comments at all, and numerical constants are hard-coded rather than declared as named constants. If such a constant needs to be modified, it isn’t clear whether and where else in the system it needs to be changed, forcing the developer to spend considerable effort figuring this out.
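To illustrate the kind of change that would address the hard-coded constants (the names below are our own suggestions, not from the VSR-II code base), the discount rules could be declared once with meaningful names and reused wherever they apply:

// Named constants make the discount rules visible in one place and easy to
// change consistently (names are illustrative only).
const int    EXTRAS_FOR_SMALL_DISCOUNT = 3;    // extras needed for the 10% discount
const int    EXTRAS_FOR_LARGE_DISCOUNT = 5;    // extras needed for the 15% discount
const double SMALL_ADDON_DISCOUNT = 10.0;      // percent
const double LARGE_ADDON_DISCOUNT = 15.0;      // percent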
Attributes like maintainability cannot of course be checked using dynamic tests (see Chapter 5). Instead, you will need to analyze the system’s specifications and its codebase using static tests and review sessions (see section 4.3). However, because you are checking attributes of individual components, this kind of analysis has to be carried out within the context of component testing.
Testing strategies
As already mentioned, component testing is highly development-oriented. The tester usually has access to the source code, which supports the use of white-box testing techniques in component testing. Here, a tester can design test cases using existing knowledge of a component’s internal structure, methods, and variables (see section 5.2).
White-box tests
The availability of the source code is also an advantage during test execution, as you can use appropriate debugging tools (see section 7.1.4) to observe the behavior of variables during testing and see whether the component functions properly or not. A debugger also enables you to manipulate the internal state of a component, so you can deliberately initiate exceptions when you are testing for robustness.
Case Study: Code as test basis
The calculate_price() code includes the following test-worthy statement:
if (discount > addon_discount)
addon_discount = discount;
Additional test cases that fulfill the condition (discount > addon_discount) are simple to derive from the code. But the price calculation specification contains no relevant information, and corresponding functionality is not part of the requirements. A code review can reveal a deficiency like this, enabling you to check whether the code is correct and the specification needs to be changed, or whether the code needs to be modified to fit the specification.
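A sketch of one such additional test case, written in the style of the test driver above, could look like this; the expected value 10800.00 simply records what the current code does, since the specification says nothing about this situation:

// test case derived from the code: the dealer discount (20%) exceeds the
// addon discount (10% for 3 extras), so the code applies 20% to the extras too
price = calculate_price(10000.00, 2000.00, 1000.00, 3, 20);
test_ok = test_ok && (std::fabs(price - 10800.00) < 0.01);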
However, in many real-world situations, component tests are “only” performed as black-box tests—in other words, test cases are not based on the component’s inner structure. Software systems often consist of hundreds or thousands of individual building blocks, so analyzing code is only really practical for selected components.
During integration, individual components are increasingly combined into larger units. These integrated units may already be too large to inspect their code thoroughly. Whether component testing is done on the individual components or on larger units (made up of multiple components) is an important decision that has to be made as part of the integration and test planning process.
Test-first
“Test-first” is the state-of-the-art approach to component testing (and, increasingly, at higher testing levels too). The idea is to design and automate your test cases first, and to program the code that implements the component as a second step. This approach is strongly iterative: you test your code with the test cases you have already designed, and you then extend and improve your product code in small steps, repeating until the code fulfills your tests. This process is referred to as “test-first programming” or “test-driven development” (often abbreviated to TDD—see also [URL: TDD], [Linz 14]). If you derive your test cases systematically using well-founded test design techniques (see Chapter 5), this approach produces even more benefits—for example, negative tests, too, will be drafted before you begin programming, and the team is forced to clarify the intended product behavior for these cases.
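Applied to our price calculation, a test-first sketch (using the same hand-written driver style as above, with calculate_price() assumed to be declared elsewhere) could start with a test for the 15% rule before the corresponding code exists:

// Written BEFORE the corresponding code: five or more extras must yield a
// 15% discount on the extras (expected: 20000 + 2000 * 0.85 = 21700.00).
bool test_fifteen_percent_discount() {
    double price = calculate_price(20000.00, 0.00, 2000.00, 5, 0);
    return std::fabs(price - 21700.00) < 0.01;
}

Run against the faulty implementation shown earlier, this test fails (that code returns 21800.00), and it only passes once the unreachable 15% branch is corrected; this is exactly the short feedback loop that test-driven development relies on.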