2.2 Software Quality
Software testing serves to identify failures (and remedy the defects that cause them) and thus to increase software quality. Test cases should be chosen to mirror the subsequent real-world use the system is designed for. The quality verified by testing should then correspond to the quality users experience in practice.
However, software quality is about more than just correcting the faults found during testing.
2.2.1 Software Quality according to ISO 25010
ISO 25010: Quality in Use and Product Quality models
According to the ISO 25010 standard [ISO 25010], software quality can be classified in two major ways:
The Quality in Use Model, and
The Product Quality Model
The quality in use model comprises the following five characteristics:
Effectiveness
Efficiency
Satisfaction
Freedom from risk
Context coverage
The product quality model comprises eight characteristics:
Functional suitability
Performance efficiency
Compatibility
Usability
Reliability
Security
Maintainability
Portability
Of the two, the product quality model most closely resembles the earlier ISO 9126 standard. Details of the Data Quality Model can be found in the ISO 25012 standard [ISO 25012].
In order to effectively judge the quality of a software system, all of these characteristics and quality criteria need to be considered during testing. The level of quality that each characteristic of the test object is intended to fulfill has to be defined in advance in the quality requirements. The fulfillment of these requirements and their criteria then has to be checked using appropriate tests.
Forty (sub-)characteristics
ISO 25010 breaks down the 13 quality characteristics listed above into a total of 40 further sub-characteristics. It is beyond the scope of this text to go into detail on all 40 sub-characteristics of the quality in use and product quality models. More details are available online at [ISO 25010]. Some of the more important characteristics are summarized below:
Functional suitability/functionality
The functional suitability (or, more simply, functionality) of the product quality model covers all the characteristics involved in describing the planned functionality of the system.
This quality characteristic is divided into three sub-characteristics:
Functional completeness: Does the function set cover all specified tasks and user objectives?
Functional correctness: Does the product/system deliver correct results with the required degree of accuracy?
Functional appropriateness: To what degree do the available functions fulfill the required tasks and specified objectives?
Appropriate tests can be used to check whether specified and implicit requirements are mirrored in the available functionality, thus answering the questions posed above.
Functionality is usually described in terms of specified input/output behavior and/or a specific system reaction to specified input. Tests are designed to demonstrate that each required functionality has been implemented in such a way that the specified input/output behavior or system behavior is complied with.
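Such an input/output specification translates directly into automated checks. The following is a minimal sketch, assuming a hypothetical discount rule; the function name and the rule itself are invented for illustration and do not come from the book:

```python
# Hypothetical specification: orders of 1000 or more earn a 5% discount;
# smaller orders earn none. Both the rule and the function are invented
# here purely to illustrate testing specified input/output behavior.

def calculate_discount(order_value: float) -> float:
    """Return the discount amount for a given order value."""
    if order_value >= 1000:
        return order_value * 0.05
    return 0.0

def test_functional_correctness():
    # Each specified input is paired with its expected output.
    assert round(calculate_discount(1500), 2) == 75.0   # discount applies
    assert round(calculate_discount(999), 2) == 0.0     # just below the threshold
    assert round(calculate_discount(1000), 2) == 50.0   # exactly at the threshold

if __name__ == "__main__":
    test_functional_correctness()
    print("all functional checks passed")
```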
Reliability
The reliability aspect of the product quality model describes a system’s ability to perform at a specific level under specified circumstances for a specified period of time.
This quality characteristic has four sub-characteristics:
Maturity: To what degree does a system, product, or component provide the required reliability under normal operating conditions?
Availability: Is the system, product, or component operational and accessible whenever it is required for use?
Fault tolerance: How well does the system, product, or component function despite the presence of hardware or software faults? (A fault-injection check is sketched after this list.)
Recoverability: How quickly can the system, product, or component recover affected data and re-establish normal service following a failure or crash?
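Fault tolerance in particular lends itself to targeted testing via fault injection. The sketch below is an illustrative assumption, not an example from the book: a hypothetical currency converter whose remote rate service is forced to fail, so the test can check that the fallback path keeps the system working:

```python
# Fault-injection sketch: force a (hypothetical) dependency to fail and
# verify the system degrades gracefully instead of crashing. All names
# and the fallback rate are invented for illustration.
from unittest import mock

def fetch_exchange_rate() -> float:
    """Stand-in for a remote service call that may fail at runtime."""
    raise NotImplementedError("replaced by a real service in production")

def price_in_eur(usd: float) -> float:
    """Convert a price, falling back to a cached rate if the service fails."""
    try:
        rate = fetch_exchange_rate()
    except Exception:
        rate = 0.5  # hypothetical last known (cached) rate
    return usd * rate

def test_fault_tolerance():
    # Inject a service fault and verify the fallback path is taken.
    with mock.patch(__name__ + ".fetch_exchange_rate",
                    side_effect=ConnectionError("service down")):
        assert price_in_eur(100.0) == 50.0

if __name__ == "__main__":
    test_fault_tolerance()
    print("system tolerated the injected fault")
```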
Satisfaction
The satisfaction aspect of the quality in use model addresses the degree to which user needs are fulfilled when the product or system is used under specified circumstances.
This quality characteristic has four sub-characteristics:
Usefulness: How happy is the user with the perceived fulfillment of pragmatic objectives, including the results and consequences of using the system?
Trust: How certain is the user (or other stakeholder) that the product or system will behave as intended?
Pleasure: How much pleasure does the user experience when using the system to fulfill his/her personal requirements?
Comfort: How comfortable does the user find the system, including in terms of physical well-being?
Most of the characteristics of the quality in use model have a strong subjective element and can rarely be observed objectively or evaluated precisely. To test these characteristics you will need to involve multiple users (or user groups) in order to obtain usable test results.
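One pragmatic way to obtain usable results is to collect ratings from several user groups and aggregate them. The following sketch assumes an invented 1-to-5 satisfaction survey; the groups and scores are purely illustrative:

```python
# Aggregate subjective satisfaction ratings (hypothetical 1-5 scale)
# across several user groups; all data here is invented.
from statistics import mean

ratings_by_group = {
    "sales staff":   [4, 5, 3, 4],
    "back office":   [2, 3, 3, 2],
    "field service": [5, 4, 4, 5],
}

for group, ratings in ratings_by_group.items():
    print(f"{group}: mean satisfaction {mean(ratings):.1f} (n={len(ratings)})")

overall = mean(r for ratings in ratings_by_group.values() for r in ratings)
print(f"overall mean satisfaction: {overall:.1f}")
```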
A software system cannot fulfill all quality characteristics to the same degree. Fulfilling one characteristic often means not fulfilling another. A highly efficient software system is not easily portable, as its developers will have designed it to utilize specific attributes of the platform it runs on.
Prioritizing quality characteristics
It is therefore necessary to prioritize these characteristics. The resulting priorities will also act as a guideline for the testing thoroughness for each characteristic.
Case Study: Applying ISO 25010 to VSR-II
The VSR-II testing/QA lead proposes to the project steering committee that the product quality model described in ISO 25010 be used. The committee agrees and asks the testing/QA lead to prepare a concept paper on how to apply the standard in the context of the VSR-II project. The core of the draft is a matrix that illustrates the relevance of each quality attribute to each product component and which interpretations to apply. The initial draft of the matrix looks like this:
Table 2-1 Classifying quality characteristics
These risk classifications are to be interpreted relative to one another and are justified by the testing/QA lead for each quality characteristic, e.g.:
Functional suitability/all modules: Every module serves large numbers of users and processes a lot of data, so functional failures have the potential to produce considerable costs. The requirement is therefore classified as “high” for all modules.
Compatibility/ConnectedCar: There are no requirements, as this module is to be built from scratch.
Usability/FactBook: There are no requirements, as this is a back-end module and the API already exists.
Portability/DreamCar: This characteristic is classified as “low” because the framework in use covers it without the application of additional measures.
Which checks and tests are required will be established during QA/test planning for each module. This top-level classification can be used to establish basic parameters: for example, automated continuous integration tests (see section 3.2) are required for a “high” attribute, a single round of acceptance testing is sufficient for “mid” attributes, and a written design guideline on how the team approaches an issue is sufficient for “low” attributes. The QA/test lead will likely have to go through several rounds of assessment with the teams to reach agreement on these high-level rules.
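To make the idea concrete, here is a small sketch of how such a matrix and the top-level rules could be represented; the entries are invented stand-ins for Table 2-1, not the actual VSR-II matrix:

```python
# (module, quality characteristic) -> classification from the draft matrix.
# The entries echo the examples above but are otherwise invented.
classification = {
    ("DreamCar", "functional suitability"): "high",
    ("DreamCar", "portability"): "low",
    ("ConnectedCar", "compatibility"): None,  # no requirements (new module)
    ("FactBook", "usability"): None,          # back-end module, API exists
}

# Classification level -> minimum required QA measure (the top-level rules).
required_measure = {
    "high": "automated continuous integration tests",
    "mid": "a single round of acceptance testing",
    "low": "a written design guideline",
    None: "no dedicated measure",
}

for (module, characteristic), level in classification.items():
    print(f"{module} / {characteristic}: {required_measure[level]}")
```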
2.2.2 Quality Management and Quality Assurance
QM
Quality management (QM) covers all organizational activities and measures that serve to control quality. Quality management is usually responsible for defining quality policies, quality objectives, quality planning, quality assurance, and quality improvement. This makes quality management a core management activity. Quality management is mandatory in industries such as aviation, automotive, and healthcare.
ISO 9000
The ISO 9000 family of quality management standards [ISO 9000] is widely used, and the ISO/IEC 90003 Software Engineering standard [ISO 90003] stipulates how the ISO 9001 [ISO 9001] general guidelines are applied to computer software.
QA
Quality assurance (QA) usually concentrates on measures that aim to ensure that specified procedures are applied and defined processes are adhered to. The assumption is that if a company adheres to its predefined processes, it will fulfill the required quality characteristics and thus achieve the specified quality levels. Under these circumstances the results will usually show increased quality, which in turn helps to avoid defects in the work products and the corresponding documentation.
QA and testing
The term “quality assurance” is often used when referring to testing processes, or even as a synonym for testing. QA and testing are closely related but are definitely not the same. QA generates confidence that the quality requirements will be fulfilled, and this confidence can be supported by testing. Effective QA also involves analyzing the causes of defects of all kinds, identifying them (testing) and remedying them (debugging). The results are discussed in meetings called “retrospectives” and can be used to improve the processes involved. Testing, in turn, serves to demonstrate that the required quality levels have been achieved.
Testing activity is part of the overall software development and maintenance process and, because QA is about making sure such processes are implemented and executed correctly, QA also supports effective testing. The following section describes the testing process itself in more detail.
2.3 The Testing Process
Development models
Chapter 3 introduces different types of software development lifecycle models (also referred to more simply as “development models”). These are designed to aid structuring, planning, and management of new or continuing software projects. In order to perform well-structured tests, you will usually need more than just a description of the activities that make up the development model. In addition to positioning testing within the overall development process, you will also need a detailed dedicated testing schedule. In other words, the content of the development task called “testing” needs to be broken down into smaller, more manageable steps.
Testing comprises a sequence of individual activities
There are many widely used and proven test activities, and a test process will be made up of these kinds of activities. You need to put together a suitable test process according to the specified (or inherited) project situation. The specific test activities you choose, and how (and when) you implement them will depend on a number of factors, and will generally be based on a company or project-specific testing strategy (see section 6.2). If you ignore certain test activities you will increase the likelihood of the test process failing to reach its objectives.

Fig. 2-3 The testing process
The main activities
A test process will generally comprise the following activities (see figure 2-3):
Test planning
Test monitoring and control
Test analysis
Test design
Test implementation
Test execution
Test completion
Each of these activities comprises multiple individual tasks that produce their own output and vary in nature according to the project at hand.

Fig. 2-4 The test process showing time overlap
Iterative testing
Even if the individual activities involved in the test process are defined in a logical sequence, in practice they can overlap and are sometimes performed concurrently (see figure 2-4). Even within a sequential development model (for example, the “waterfall” model), individual activities or parts of activities may overlap, be combined, run concurrently, or be omitted.
Adapting these activities to fit the system and project context (see below) is usually necessary, regardless of which development model you are using.

Fig. 2-5 An iterative test process
Software is often developed in small iterative steps—for example, agile development methods use continuous build and test iterations. The corresponding test activities therefore need to take place continuously and iteratively (see figure 2-5).
The following sections provide an overview of the individual test steps and their output. Test management is responsible for the monitoring and execution of most of these activities, which are described in detail in Chapter 6 (see sections 6.2 and 6.3).
2.3.1 Test Planning
Structured handling of a task as extensive as testing will not work without a plan. Test planning begins right at the start of a software project. As with any plan, it is necessary to review your test planning regularly and update or adapt it to fit changing situations and project parameters. Test planning is therefore a recurring activity that is carried out, and adjusted where necessary, throughout the entire product lifecycle.
The test plan: planning the content
The main task when planning your testing is the creation of a test plan based on your chosen testing strategy. This test plan defines the test objects, the quality characteristics, and the testing objectives, as well as the testing activities intended to verify them. The test plan also describes the testing techniques, the required resources, and the time needed to perform the corresponding test activities.
Coverage criteria
The point at which you have performed sufficient testing is determined by the planned coverage, which is also part of the test plan. Such criteria are often referred to as “completion criteria” or “exit criteria” or, in agile projects, as the “definition of done”. If coverage criteria are defined for each test level or test type, you can evaluate objectively whether the tests you have performed are sufficient. Coverage criteria can also be used to monitor and control testing, and they indicate when you have reached your testing objectives.
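Objectively defined criteria can be checked mechanically. As a minimal sketch, assuming invented threshold values (a real test plan would supply its own):

```python
# Check exit criteria against the figures from test results and logs.
# The thresholds and example numbers are assumptions for illustration.

def exit_criteria_met(achieved_coverage: float,
                      tests_passed: int,
                      tests_total: int,
                      planned_coverage: float = 0.80,
                      required_pass_rate: float = 0.95) -> bool:
    """Return True if the planned coverage and pass-rate criteria hold."""
    pass_rate = tests_passed / tests_total if tests_total else 0.0
    return (achieved_coverage >= planned_coverage
            and pass_rate >= required_pass_rate)

# Example: 82% coverage and 194 of 200 passed tests fulfill the criteria.
print(exit_criteria_met(achieved_coverage=0.82,
                        tests_passed=194, tests_total=200))  # True
```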
The test plan also contains information about the test basis, which serves as the cornerstone for all your testing considerations. The test plan also needs to include information regarding traceability between the test basis and the results of your test activities. For example, this can help you to determine which changes to the test basis modify which testing activities, thus enabling you to adapt or augment them.
The test plan also defines which tests are to be performed at which test level (see section 3.4). It often makes sense to draft a separate test plan for each test level, and you can use a master test plan to aggregate these into one.
Test planning: scheduling
The test schedule contains a list of all activities, tasks and/or events involved in the test process. This list includes the planned start and end times for every activity. Interdependencies between activities are noted in the test schedule.
The plan also defines deadlines for key activities to support its practical implementation during the project. The test schedule can be part of the test plan.
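Interdependencies of this kind form a directed graph, so a valid execution order can be derived automatically. A minimal sketch using Python's standard library; the activities and dependency edges are invented for illustration:

```python
# Derive a valid ordering of test activities from their dependencies.
from graphlib import TopologicalSorter  # Python 3.9+

depends_on = {
    "test design":         {"test analysis"},
    "test implementation": {"test design"},
    "test execution":      {"test implementation", "build test environment"},
    "test completion":     {"test execution"},
}

print(list(TopologicalSorter(depends_on).static_order()))
# e.g. ['test analysis', 'build test environment', 'test design', ...]
```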
2.3.2 Test Monitoring and Control
Ensuring traceability
Monitoring and control involve continuously comparing current testing activities with the planned activities, reporting any discrepancies, and performing the activities required to achieve the planned objectives under the changed circumstances. Updates to the plan must also reflect the changed situation.
Are the exit criteria fulfilled?
Test monitoring and control activities are based on the exit criteria for each activity or task. The evaluation of whether the exit criteria for a test at a particular test level have been fulfilled can include:
Checking whether the degree of coverage defined in the test plan has been achieved, based on the available test results and logs. If the predefined criteria are fulfilled, the activity can be concluded.
Assessing whether the required component or system quality has been reached, based on test results and logs. If the required quality has been achieved, the test activity can be concluded.
If risk evaluation is part of the test plan and you need to prove sufficient risk coverage, this too can be determined using the test results and logs.
Perform additional tests or take a risk?
If the required exit criteria have not been fulfilled by the tests you have performed, you need to design and execute additional tests. If this isn’t possible for any reason, you need to clarify the situation and evaluate the ensuing risk.
Progress and completion reports
Stakeholders expect to receive regular reports on current testing progress compared with the overall plan. Alongside any deviation from the original plan, these reports should also contain information on prematurely terminated tests (see above) or non-fulfillment of the planned exit criteria. Test summary reports are to be provided when project milestones are reached.
All test reports should contain details relevant to their recipients and include a progress report as well as test results. Reports should also answer or preempt management questions, such as the (expected) end time, planned vs. actual use of resources, and the amount of testing effort involved.
Progress monitoring can be based on the reports made by team members or on figures and analysis provided by automated tools.
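The figures behind such reports can be computed simply. A hedged sketch with invented numbers, using a deliberately naive linear projection of the end date:

```python
# Compute progress figures of the kind a reporting tool might provide.
# All numbers are invented; the linear projection is deliberately naive.
from datetime import date, timedelta

executed, planned = 120, 300              # test cases executed vs. planned
start, today = date(2024, 3, 1), date(2024, 3, 15)

progress = executed / planned             # fraction of planned tests done
rate = executed / (today - start).days    # test cases executed per day
remaining_days = (planned - executed) / rate
expected_end = today + timedelta(days=round(remaining_days))

print(f"progress: {progress:.0%}, expected end: {expected_end}")
# progress: 40%, expected end: 2024-04-05
```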
2.3.3 Test Analysis
“What” do we need to test?
Test analysis involves determining what exactly needs to be tested. For this purpose, the test basis is examined to see whether the documents to be used are sufficiently detailed and contain testable features from which test conditions can be derived. The degree to which the test conditions need to be checked is determined by measurably defined coverage criteria.
The following documents and data can be used to analyze the test basis and the planned test level:
Analyzing the test basis
Requirements that specify the planned functional and non-functional system or component behavior. The requirements specifications include technical requirements, functional requirements, system requirements, user stories, epics, use cases, and similar work products or documentation. For example, if a requirement doesn’t specify the expected result and/or system behavior precisely enough, test cases cannot simply be derived from it; the requirement must first be reworked.
Analyzing documentation
Design or implementation data that gives rise to specific component or system structures. System or software architecture documents, design specifications, call graphs, model diagrams (for example, UML or entity-relationship diagrams), interface specifications, and similar materials can be used for the test basis. For example, you will need to analyze how easily interfaces can be addressed (interface openness) and how easily the test object can be divided into smaller sub-units in order to test these separately. You need to consider these aspects at the development stage, and the test object needs to be designed and coded accordingly.
Check the test object
You need to investigate the individual components or the system itself, including the code base, database metadata, database queries, and other interfaces. For example, you need to check that the code is well structured and easy to understand, and that the required code coverage (see section 5.1) is easy to achieve and verify.
Consider risk analysis
Risk analysis reports that cover functional, non-functional, and structural aspects of the system or its components need to be investigated too. If potential software failures create serious risks, testing needs to be correspondingly thorough. Testing can be performed less formally for software that is not mission-critical.
Potential documentation errors
The cornerstone of the entire testing process is the test basis. If the test basis contains defects, you cannot formulate “correct” test conditions and you won’t be able to draft “proper” test cases. You therefore need to analyze the test basis for defects too. Check whether it contains ambiguities, or whether there are gaps or omissions in the descriptions of functions. You also need to check the documentation for inconsistencies, imprecision, contradictions, and repeated and/or redundant passages. Any defects or discrepancies you find should be corrected immediately.
The discovery and removal of defects from the test basis is extremely important, especially if the documentation hasn’t been reviewed (see section 4.3). Development methodologies such as behavior-driven development (BDD, [URL: BDD]) and acceptance test-driven development (ATDD, [URL: ATDD]) use acceptance criteria and user stories to create test conditions and test cases before coding begins. This approach makes it simpler to identify and remedy defects at a much earlier stage in the development process.
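In BDD/ATDD, an acceptance criterion becomes an executable test before the code exists. The sketch below invents a user story and a minimal ShoppingCart so the example runs; dedicated tools such as behave or pytest-bdd would formalize the Given/When/Then structure:

```python
# User story (invented): "As a shopper, I can remove an item from my cart."
# Acceptance criterion: removing the only item leaves the cart empty.
# The ShoppingCart class is a minimal stand-in so the sketch is runnable.

class ShoppingCart:
    def __init__(self):
        self.items = []

    def add(self, item: str) -> None:
        self.items.append(item)

    def remove(self, item: str) -> None:
        self.items.remove(item)

def test_removing_only_item_empties_cart():
    # Given a cart containing exactly one item
    cart = ShoppingCart()
    cart.add("book")
    # When the shopper removes that item
    cart.remove("book")
    # Then the cart is empty
    assert cart.items == []

if __name__ == "__main__":
    test_removing_only_item_empties_cart()
    print("acceptance criterion satisfied")
```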
Prioritizing tests
Once the test conditions have been identified and defined, you need to prioritize them. This ensures that the most important and highest-risk test conditions are tested first. In real-world situations, time restrictions often make it impossible to perform all the planned tests.
Traceability is important
At the planning stage, you need to ensure that there is unambiguous bi-directional traceability between the test basis and the results of your testing activities (see above). This traceability has to precisely define which test condition checks which requirement and vice versa.
This is the only way to ensure that you can later determine how thoroughly each requirement needs to be tested and, depending on the test conditions, which test cases are used to do so.
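In its simplest form, such traceability is a mapping from requirements to test conditions plus its derived inverse. A minimal sketch with invented IDs:

```python
# Bi-directional traceability sketch: requirements -> test conditions,
# and the derived inverse. All IDs are invented for illustration.
req_to_tests = {
    "REQ-1": ["TC-1", "TC-2"],
    "REQ-2": ["TC-3"],
    "REQ-3": [],               # gap: requirement without a test condition
}

# Derive the inverse direction: which requirements does each test check?
test_to_reqs: dict[str, list[str]] = {}
for req, tests in req_to_tests.items():
    for tc in tests:
        test_to_reqs.setdefault(tc, []).append(req)

uncovered = [req for req, tests in req_to_tests.items() if not tests]
print("uncovered requirements:", uncovered)           # ['REQ-3']
print("TC-1 traces back to:", test_to_reqs["TC-1"])   # ['REQ-1']
```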
Choosing test techniques
It is useful to consider which test technique to use (i.e., black-box, white-box, experience-based, see Chapter 5) at the analysis stage. Each technique has its own system for reducing the likelihood of overlooking test conditions and helping to define these precisely. For example, using an equivalence partition (or “equivalence class”) test (see section 5.1.1) ensures that the entire input domain is used for the creation of test cases. This prevents you from forgetting or overlooking negative input in your requirements definitions. You can then define your test conditions to cover negative input data.
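As a minimal sketch of this idea, assume a hypothetical input field that accepts ages from 18 to 65: the input domain splits into one valid and two invalid (negative-test) partitions, and one representative value per partition suffices:

```python
# Equivalence partitioning sketch for a hypothetical age field (18-65).
# One representative per partition covers the whole input domain,
# including the negative (invalid) partitions.

def is_valid_age(age: int) -> bool:
    return 18 <= age <= 65

partitions = {
    "invalid: below range": (10, False),
    "valid: within range":  (40, True),
    "invalid: above range": (70, False),
}

for name, (representative, expected) in partitions.items():
    assert is_valid_age(representative) == expected, name
print("all three partitions covered, including negative input")
```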
If you use an experience-based testing technique (see section 5.3), you can use the test conditions defined during analysis as objectives for your test charter. A test charter is a kind of assignment that, alongside traditional test objectives, provides potential ideas for additional tests. If the test objectives are traceable back to the test basis, you can evaluate the achieved coverage (for example, of your requirements), even when you are using an experience-based technique.