2.3.9 The Influence of Context on the Test Process

The context of the testing process

The following are some of the factors that influence the testing process within an organization (see also section 6.2.5):

 The choice and use of test activities depend on the software development lifecycle model you use. Most agile models don’t include detailed directions for testing and its associated activities.

 Depending on the chosen and implemented system architecture, the system may be divided into subsystems. This affects the test levels (component, integration, system, and acceptance testing; see section 3.4) and the corresponding techniques used to derive or select test cases (see Test Techniques in Chapter 5).

 The test type also influences the testing process. A test type is a group of test activities designed to test a component or a system for interrelated quality characteristics. A test type is often focused on a single quality characteristic—for example, load testing (see section 3.5).

 In the case of significant product or project risk, the test process needs to be designed to address the maximum number of risks through test cases, and thus minimize them (see section 6.2.4).

 The context in which a software system is used also has an effect on the testing process. For example, software used exclusively for company-internal vacation planning will be subject to less thorough testing than a custom system built to control a client’s industrial facility.

 Company guidelines and best practices can also have an effect on the testing process and the rules that govern it. The advantage here is that the factors influencing the process do not have to be discussed and redefined for each project. Required internal and external standards have the same effect.

Operational limitations

Operational limitations also affect the structure and implementation of test processes. Such limitations include:

 Budget and other resources (for example, specialist staff) that are usually in short supply for testing purposes

 Deadline shifts that leave less time than planned for testing

 The complexity of the system to be tested. Complex systems are not easy to test! The test process must be designed to reflect the complexity of the system.

 Contractual and regulatory requirements. These can have a direct effect on the scope of the testing process, and therefore on that of the individual activities involved.

2.4 The Effects of Human Psychology on Testing

Errare humanum est

Everyone makes mistakes, but nobody likes to admit it! The main objective of software testing is to reveal deviations from the specifications and/or the customer’s requirements. All failures found must be communicated to the developers. The following sections address how to deal with the psychological issues that can result.

Software development is often seen as a constructive activity while testing the product and checking the documentation are more often seen as destructive. This view often results in the people involved in a project having very different attitudes to their work. However, this assumption is not at all justifiable as, according to [Myers 12, p. 13, Software testing principle #10]: “Testing is an extremely creative and intellectually challenging task.” Furthermore, testing plays a significant role in the success of a project and the quality of the resulting product (see section 2.1).

Disclosing errors

Failures found must be communicated to the author of the faulty document or source code. Management, too, needs to know which (or at least how many) defects have been revealed by testing. The way this is communicated can be beneficial to the relationships among analysts, product owners, designers, developers, and testers, but it can also have a negative effect on these important lines of communication. Admitting (or proving) errors is not an easy thing to do and requires a great deal of sensitivity.

Discovering failures can be construed as criticism of a product or its author, and can occur as a result of static or dynamic tests. Examples of such situations are:

 A review meeting in which a requirements document is discussed

 A meeting at which user stories are fleshed out and fine-tuned

 A dynamic test execution session

Always take confirmation bias into account

“Confirmation bias” is a psychological term that describes the tendency to search for, interpret, favor, and recall information in a way that confirms or strengthens your prior personal beliefs or hypotheses. This tendency plays a significant role in the communication of software faults. Software developers generally assume that their code is fault-free, and confirmation bias makes it difficult for them to admit that their work might be flawed. Other cognitive biases, too, can make it tricky for those involved to understand or accept the results of software tests.

Recognized defects are a positive thing

Another thoroughly human trait is the tendency to “shoot the messenger” who brings bad news. Test results are often seen as bad news, even though the opposite is often true: recognized defects can be remedied and the quality of the test object improved. Software failures that are out in the open are actually a positive thing!

Soft skills required

In order to avoid (or at least reduce) the potential for this kind of conflict, testers and test managers need well-developed soft skills. Only then can everyone involved effectively discuss faults, failures, test results, test progress, and risk. Mutual respect always creates a positive working environment and engenders good relationships between all the people involved in the project.

Examples of positive communication:

 Arguments or simply “getting loud” are never good for cooperation in the workplace. To make sure work progresses amicably, it often helps to remind everyone of their mutual objectives and the high product quality they are aiming for.

 As already mentioned, you can never repeat enough that discovering a defect and rectifying it is a positive thing. Other positive aspects are:

 Error recognition can help developers to improve their work and therefore their results too. A lot of developers are not aware that test results can help them hone their own skills.

 Discovered and rectified faults are best framed to management as a means to save time and money, and as a way to reduce the risk of poor product quality.

Keep documentation neutral and factual

 Documentation style also plays a role in communication. Test results and other findings need to be documented neutrally and should stick to the facts. If a document is flawed, don’t criticize its author. Always write objective, fact-based defect reports and review findings (a minimal sketch of such a report follows this list).

 Always consider the other person’s point of view. This makes it easier to understand why someone perhaps reacts negatively to something you wrote or said.

 Misunderstandings in general and talking at cross-purposes never engender constructive communication. If you find yourself in such a situation (or expect one to arise), ask how your words were understood and confirm what the other person actually said. This works in both directions.
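
As an illustration of what neutral, fact-based documentation can look like in practice, here is a minimal sketch of a defect report expressed as a Python data structure. The fields, identifiers, and values are purely hypothetical and are not taken from this book or from any particular defect-tracking tool; the point is that every entry describes observed behavior and omits any judgment of the author.

# A hypothetical, fact-based defect report: it records what was observed,
# under which conditions, and what the tester expected based on the specification.
defect_report = {
    "id": "DR-0042",  # illustrative identifier
    "summary": "Order total ignores the configured VAT rate",
    "test_object": "pricing component, build 1.4.2",
    "steps_to_reproduce": [
        "Configure a VAT rate of 19%",
        "Add one item priced at 100.00 to the cart",
        "Open the order summary",
    ],
    "expected_result": "Total of 119.00, as described in requirement REQ-PRICE-7",
    "actual_result": "Total of 100.00 is displayed",
    "severity": "major",
    # Deliberately absent: speculation about the cause and any remark
    # about the developer who wrote the code.
}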

Section 2.1.2 lists typical testing objectives. Clearly defined objectives also have a distinct psychological effect. People often like to align their behavior to clearly defined objectives. As well as testing objectives, you can orient yourself toward team objectives, management objectives, or other stakeholders’ objectives. The important thing is that testers pursue and stick to these objectives as impartially as possible.

2.4.1 How Testers and Developers Think

Different mindsets

Because they pursue very different objectives, developers and testers usually think quite differently. Developers design and produce a product, while testers verify and validate the product with a view to discovering faults and failures. It is possible to improve product quality by combining these two different mindsets.

Assumptions and preferred decision-making techniques are reflected in the way most people think. Alongside curiosity, a tester’s mindset will usually include a healthy level of professional pessimism combined with a critical view of the test object. Interest in detail and the will to communicate positively with everyone involved are other characteristics that are useful in testers. Testers develop their soft skills through years of hands-on experience.

A developer’s main interest is in designing and implementing a solution to a problem, so he won’t be keen on considering which aspects of the solution are flawed or buggy. Although some aspects of a developer’s mindset are similar to those of a tester, the confirmation bias described above always makes it more difficult for developers to recognize errors in their own work (see also section 6.1.1).

Side Note

Can a developer change his mindset and test his own software? There is no simple answer to this question, and someone who both develops and tests a product requires a rare degree of critical distance from their own work. After all, who likes to investigate and confirm their own mistakes? Generally speaking, developers prefer to find as few faults as possible in their own code.

Developer tests

The greatest weakness of “developer tests” (see section 3.4.1) is that every developer who tests his own code will always view his work with too much optimism. It is highly likely that he will overlook meaningful test cases or, because he is more interested in programming than testing, he won’t test thoroughly enough.

Blind to your own mistakes

If a developer misunderstands the assignment and designs a program with a fundamental flaw, an appropriate test case simply won’t occur to him and he won’t find the flaw. One way to reduce this common “blindness to your own mistakes” is to work in pairs and get each developer to test the other’s work (see section 6.1, model #1).

On the other hand, inside knowledge of your own test object can be an advantage, as you don’t need to spend time getting to know the code. It is up to management to decide when the advantage of knowing your own work outweighs the disadvantages caused by blindness to your own mistakes. This decision is based on the importance of the test object and the level of risk involved if it fails.

In order to design meaningful test cases, a tester has to learn about the test object, which takes time. On the other hand, a tester has specialist skills that a developer would have to learn in order to test effectively—a learning process for which there is usually no time to spare.

Developers require testing skills, testers require development skills.

It is a great aid to cooperation and understanding between developers and testers if both acquire some knowledge of the other’s skills. Developers should always know some testing basics, and testers should always have some development experience.

2.5 Summary

 Testing terminology is only loosely defined and similar terms are often used to mean different things—a cause of frequent misunderstandings. This is why consistent use of terminology is an important part of the Certified Tester course. The glossary at the end of this book provides an overview of all the most important terms.

 Testing takes up a large proportion of a project’s development resources. Precisely how much testing effort is required depends on the type of project at hand.

 The testing process, starting with planning and preparation steps, needs to begin as early as possible in order to generate the maximum testing benefits within the project.

 Always follow the seven basic testing principles.

Side Note

 Testing is an important part of overall quality assurance in the context of software development. Appropriate quality models and characteristics are defined by the international ISO 25010 standard [ISO 25010].

 It is important to recognize and observe the connections and the boundaries between testing, quality assurance, and quality management.

 The testing process comprises test planning, monitoring and control, analysis, design, implementation, execution, and completion. These activities can overlap and can be performed sequentially or in parallel. The overall testing process has to be adapted to fit the project at hand.

 Bidirectional traceability between the results of the individual testing activities ensures that you can make meaningful statements about the results of the testing process and make a reasonable estimate of how much effort will be involved in making changes to the system. Traceability is also critical to effective test monitoring and control.

 All of the many factors that influence testing within an organization have to be considered.

 People make mistakes but don’t usually like to admit it! This is why psychological issues play a significant role in the overall testing process.

 The mindsets of testers and developers are very different, but both can benefit by learning from one another.

3 Testing Throughout the Software Development Lifecycle

This chapter offers a brief introduction to common lifecycle models used in software development projects, and explains the role testing plays in each. It discusses the differences between various test levels and test types, and explains where and how these are applied within the development process.

Most software development projects are planned and executed along the lines of a software development lifecycle model that is chosen in advance. Such models are also referred to as software development process models or, more concisely, development models.

Such a model divides a project into separate sections, phases, or iterations and arranges the resulting tasks and activities in a corresponding logical order (see [URL: SW-Dev-Process]). Additionally, the model usually describes the roles that each task is assigned to and which of the project’s participants is responsible for each task. The development methods to be used in the individual phases are often described in detail too.

Every development model has its own concepts regarding testing, and these can vary widely in meaning and scope. The following sections detail popular development models from a tester’s point of view.

Types of lifecycle models

The two basic types of development model in use today are sequential and iterative/incremental. The following sections include discussion of both types.

3.1 Sequential Development Models

As the name suggests, a sequential development model arranges the activities involved in the development process in a linear fashion. The assumption here is that development of the product and its feature set is finished when all the phases of the development model have been completed. This model does not envisage overlaps between phases or product iterations. The planned delivery date for projects run this way can lie months—or even years—in the future.

3.1.1 The Waterfall Model

An early model was the so-called “waterfall model” [Royce 70]. It is impressively simple and, in the past, enjoyed a high degree of popularity. Each development phase can only begin once the previous phase has been completed, hence the model’s name. However, the model does allow feedback loops between neighboring phases, which may require changes to be made in a previous phase. Figure 3-1 shows the phases incorporated in Royce’s original model:


Fig. 3-1 The waterfall model according to Royce

The major shortcoming of this model is that it bundles testing as a single activity at the end of the project. Testing only takes place once all other development activities have been completed, and is thus seen as a kind of “final check” akin to inspecting goods that leave a factory. In this case, testing is not seen as an activity that takes place throughout the development process.

3.1.2 The V-Model

The V-model is an extension of the waterfall model (see [Boehm 79], [ISO/IEC 12207]). The advent of this model made a huge and lasting difference to the way testing is viewed within the development process. Every tester and every developer should know the V-model and understand how it integrates the testing process. Even if a project is based on a different development model, the principles illustrated here can still be applied.

The basic idea is that development and testing are corresponding activities of equal value. In the diagram, they are illustrated by the two branches of the “V”:


Fig. 3-2 The V-model

The left-hand branch represents the steps that are required to design and develop the system with increasing detail, up to the point at which it is actually coded.

The constructional activities in the left-hand branch correspond to the activities outlined in the waterfall model:

 Definition of requirements: This is where the customer and end-user requirements are collected, specified, and approved. The purpose and proposed features of the system are now stipulated.

 Functional design: The requirements are mapped to specific features and dialog flows.

 Technical design: The functional design is mapped to a technical design that includes definition of the required interfaces to the outside world, and divides the system into easily manageable components that can be developed independently (i.e., the system architecture is drafted).

 Component specification: The task, behavior, internal construction, and interfaces to other components are defined for each component.

 Programming: Each specified component is programmed (i.e., implemented as a module, unit, class, etc.) using a specific programming language.

Because it is easiest to identify defects at the level of abstraction on which they occur, each of the steps in the left-hand branch is given a corresponding testing step in the right-hand branch. The right-hand branch therefore represents an integration and testing flow during which system components are successively put together (i.e., integrated) to build increasingly large subsystems, which are then tested to ensure that they fulfill their proposed functions. The integration and testing process ends with acceptance testing for the complete system.

 Component tests ensure that each individual component fulfills its specified requirements.

 Integration tests ensure that groups of components interact as specified by the technical design.

 System tests ensure that the system as a whole functions according to its specified requirements.

 The acceptance test checks that the system as a whole adheres to the contractually agreed customer and end-user criteria.

These test steps represent a lot more than just a chronological order. Each test level checks the product (and/or specific work products) at a different level of abstraction and follows different testing objectives.

This is why the various test levels involve different testing techniques, different testing tools, and specialized personnel. Section 3.4 presents more details regarding each of these test levels.
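
To make the component level more concrete, the following is a minimal sketch of a component (unit) test written with Python’s standard unittest module. The function under test, its specification, and the values are invented purely for illustration and do not come from this book; at higher test levels the same behavior would be exercised through progressively larger assemblies (integrated components, the complete system, the accepted product) rather than through a single function.

import unittest


def gross_price(net_price, vat_rate):
    """Hypothetical component under test: adds VAT to a net price."""
    return round(net_price * (1 + vat_rate), 2)


class GrossPriceComponentTest(unittest.TestCase):
    # A component test checks a single unit in isolation against its specification.
    def test_standard_vat_is_added(self):
        self.assertEqual(gross_price(100.00, 0.19), 119.00)

    def test_zero_rate_leaves_price_unchanged(self):
        self.assertEqual(gross_price(100.00, 0.0), 100.00)


if __name__ == "__main__":
    unittest.main()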

In general, each test level can include verification and validation tests:

Did we build the system right?

 Verification involves checking that the test object fulfills its specifications completely and correctly. In other words, the test object (i.e., the output of the corresponding development phase) is checked to see whether it was “correctly” developed according to its specifications (the input for the corresponding phase).

Did we build the right system?

 Validation involves checking that the test object is actually usable within its intended context. In other words, the tester checks whether the test object actually solves the problem assigned to it and whether it is suited to its intended use.

Practically speaking, every test includes both aspects, although the validation share increases with each level of abstraction. Component tests are largely focused on verification, whereas an acceptance test is mainly about validation.
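
To illustrate the distinction, here is a small, hedged sketch that reuses the hypothetical gross_price function from the component test example above; the requirement, the formula, and the invoice scenario are all invented for illustration.

def gross_price(net_price, vat_rate):
    # Same hypothetical component as in the earlier sketch.
    return round(net_price * (1 + vat_rate), 2)


# Verification ("Did we build the system right?"): the output is compared
# against the specified formula, here an assumed requirement that
# gross = net * (1 + VAT rate).
assert gross_price(100.00, 0.19) == 119.00

# Validation ("Did we build the right system?"): a user-level check asks
# whether the result is usable in its intended context, here an invented
# invoice line that the customer's bookkeeping workflow expects.
invoice_line = f"Total incl. VAT: {gross_price(100.00, 0.19):.2f} EUR"
assert invoice_line == "Total incl. VAT: 119.00 EUR"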

The V-model’s hallmarks

To summarize, the most important characteristics of the V-model are:

 Development and test activities take place separately (indicated by the left and right-hand branches) but are equally important to the success of the project.

 The model’s “V” shape helps to visualize the verification/validation aspects of testing.

 It differentiates between collaborative test levels, whereby each level tests against its corresponding development level.

The principle of early testing

The V-model creates the impression that testing begins late in the development process, following implementation. This is wrong! The test levels in the right-hand branch of the model represent the distinct phases of test execution. Test preparation (planning, analysis, and design) must begin within the corresponding development step in the left-hand branch.
