STBox stands for a Software Testing Method Based on Computer Task Group, Incorporated's experience. STBox is a proprietary methodology and the first workflow-driven testing methodology; it adopts the most important international testing standards (e.g. BS 7925, IEEE 829, and ISO 9126, as used by ISTQB/ISEB). It was created in 2006 by Sven Sambaer, Alec Puype, and Steven Mertens. It was launched in Belgium in 2006, and during 2007 it was introduced in the UK, Luxembourg, Germany, and the US.
Overview
STBox addresses the three basic dimensions of testing:
- Process: This aspect includes the work breakdown structure of test activities (test phases and their place in the software development process).
- People: This aspect recognizes the organizational components of testing (test organization structure and models, testing roles, tasks, and responsibilities).
- Technology: This aspect covers test infrastructure (test environments, test data, office environments, test tools).
These three dimensions can be visualized as a cube, or box, to clearly indicate that they are interrelated: testing activities prescribed by the testing method are only possible if the supporting organizational structure and technology infrastructure are in place. STBox is based on standards used by ISEB and ISTQB in their certification programs (including BS 7925-1, BS 7925-2, IEEE 829-1998, and ISO 9126). These standards have gained international recognition and are regarded as best practice. However, they only cover certain aspects of software testing, such as terminology and test documentation. STBox can be considered the ‘glue’ filling in the gaps between the different international standards. One of the major added values of STBox is that it offers a workflow for the various testing activities and covers all aspects of software testing (process, people, and infrastructure).
STBox: Process
The V-Model, Test Levels, and Test Types
The V-Model
STBox is based on the V-Model. The V-Model considers testing a process that runs in parallel with analysis and development, rather than a separate stage at the end of the project. The classic graphical representation of the V-Model relates the various test levels to the waterfall life cycle for software development (user requirements, system requirements, global design, detailed design, implementation). The software development phases appear on the left, with the corresponding test levels on the right.
The figure shows the V-Model and the responsibilities designated in STBox, based on international standards. Every organization can use its own version of the V-Model, based on its own terminology and development life cycle. The principles and philosophy, however, remain the same. Based on requirements or design, one should plan and prepare the corresponding test level. This means writing test plans for the various test levels and documenting what tests should be executed later to verify that the system satisfies the specified requirements or design. In general:
- Acceptance tests verify the user requirements
- System and system integration tests verify the system requirements
- Component integration tests verify the global design documents
- Component tests verify the requirements specified in detailed design documents
Black Box and White Box Tests
In white box testing, the test design is based on the internal structure of the component or system. In black box testing, it is based on specifications. Typically, the higher test levels are black-box-based, while lower test levels take into account the internal structure of the components to test.
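To make the distinction concrete, here is a minimal sketch in Python (not part of STBox itself; the `discount_rate` function and its rules are invented for illustration). The black-box test is derived purely from the specification, while the white-box test is designed around the branches of the implementation.

```python
# Hypothetical component under test. The specification says: orders of
# 100 or more get a 10% discount; loyalty members always get at least 5%.
def discount_rate(amount: float, loyalty_member: bool) -> float:
    if amount >= 100:
        return 0.10
    if loyalty_member:
        return 0.05
    return 0.0

# Black-box test: derived from the specification only.
def test_black_box_spec_examples():
    assert discount_rate(150, False) == 0.10  # spec: large order
    assert discount_rate(50, True) == 0.05    # spec: loyalty member
    assert discount_rate(50, False) == 0.0    # spec: no discount applies

# White-box test: designed from the internal structure, exercising every
# branch, including the ordering (the first branch shadows the second).
def test_white_box_branch_coverage():
    assert discount_rate(100, True) == 0.10    # both conditions true
    assert discount_rate(99.99, True) == 0.05  # only the loyalty branch
    assert discount_rate(0, False) == 0.0      # fall-through

if __name__ == "__main__":
    test_black_box_spec_examples()
    test_white_box_branch_coverage()
    print("all tests passed")
```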
Test Levels vs. Test Types
The right branch of the V-Model consists of the different test levels. Test planning, test build, and test execution activities are repeated at each test level. The different test levels are characterized by different goals, test bases, schedules, test environments, staff involved, etc. A standard STBox implementation assumes five test levels: component testing, component integration testing, system testing, system integration testing, and acceptance testing. For each test level, tests can focus on several quality characteristics. This means that various test type(s) may be relevant to a particular test level.
Regression Testing
In BS 7925-1, regression testing is defined as “retesting a previously tested program following modification to ensure that faults have not been introduced or uncovered as a result of the changes made.” This means that regression testing focuses on determining whether features that were working before a change to the system still work. Some organizations or methodologies refer to this kind of testing as ‘business as usual’ or ‘non-regression’ testing. Some test professionals consider regression testing a test level, while others consider it a test type. According to STBox, regression testing is neither:
- It is not a test level, because regression testing activities occur at every test level, and even after deployment.
- It is not a test type, because it does not focus on one particular quality characteristic (or group of characteristics). In fact, most regression tests combine different test types (e.g. functional testing and performance testing).
The STBox Process
Basics of the STBox Process
Test Activities
In addition to defining the relevant test levels and test types for a software development project, the various test activities that should be executed are also defined. There are seven categories of test activities:
- Strategy and planning activities for an entire program or project
- Strategy and planning activities for a specific part (an iteration, test level, or test type)
- Test and test infrastructure build activities
- Test execution activities
- Closure activities
- Management and coordination activities
- Quality management activities
Test Phases
The STBox test process is divided into seven phases, each of which includes a group of activities with assigned roles, responsibilities, and deliverables. The STBox methodology is graphically represented as a box, where the outer elements are the generic project phases and the inner elements are specific phases for each iteration, test level, or test type.
A test project always starts with the Test Project Preparation phase. The objective is to determine the scope of the test assignment and develop the high-level test strategy and a test project plan.
In the Detailed Test Planning phase, an individual test plan is composed for each iteration, test level, or test type specifying the detailed approach, the features to test (FTT) tree, and planning for each of them. The Test Build phase includes all activities that deliver parts of the test model (completed FTT tree, test procedures, test cases, test scripts, checklists, and test execution schedules). At the end of this phase, the test infrastructure and test object are installed.
The tests prepared in the previous phase are executed in the Test Execution phase. The results are analyzed and reported. This phase runs as a repeating three-step cycle: defects are reported, defects are fixed, and retests are performed. As a result of the execution phase, a test summary report will give advice on the open risks associated with moving on to the next test level or deploying the system into production.
Finally, during the Test Project Closure phase, the entire test process is evaluated, the test deliverables are consolidated, and the test team is released from their assignment.
Test Management is a set of continuous test project activities that ensure that the test process is managed professionally: follow-up on the test team and test plans, status reporting, defect management, issue and scope management, and timely and cost-effective production of high-quality test deliverables. The purpose of Quality Management is to manage, steer, and improve the quality of the deliverables and the project execution. An important aspect of quality management is reviewing the test basis.
Main Process Overview
One of the major added values of STBox is that it offers a workflow for the different testing activities.
Detailed Process Flow
The main workflow consists of 23 activities. For each activity, STBox provides a detailed overview of:
- The steps in this activity
- Input sources required
- Output delivered
- The roles involved in this activity (who is accountable, responsible, consulted, and/or informed)
Phase 1: Test Project Preparation
A test project always starts with the Test Project Preparation phase. The objective is to determine the scope of the test assignment and develop a high-level test strategy and test project plan. The test project preparation phase is about getting a good overview of the project and starting to organize testing activities. In this phase, all the practical aspects (such as scope, strategy, approach, schedule, budget and infrastructure) are discussed, decided upon, and compiled in the project test plan.
Activity 1: Determine Test Scope
The test manager starts the test project by gathering as much information as possible about the project, processes, organization, culture, and infrastructure. This can be done by gathering relevant documentation (system documentation, project documentation, process documentation, and organizational documentation) and/or by conducting meetings with key stakeholders. The test manager should agree with the key stakeholders on the project’s scope and objectives, the test object, test basis, test types, and test levels.
Activity 2: Determine High-Level Test Strategy
It is impossible to test everything, so every project involves some measure of risk. The test manager and project manager have to make decisions about what to test and what not to test (scope/test coverage), and what to test more or less thoroughly (test depth). Therefore, it is crucial to develop a test strategy to ensure that you are correctly focusing the test effort and doing the best possible job with limited resources.
A test strategy begins with an overview of the product risks, features to test (FTTs), and the priority of each FTT. The test strategy enables you to make objective decisions about the distribution of the test effort and the risks taken by reducing test coverage or test depth. A risk-based test strategy enables the test manager to make the project stakeholders aware of risks. STBox advises a risk- and requirements-based test (RRBT) strategy and follow-up, based on the following process:
- Identify high-level project risks, product risks, and requirements
- Analyze risks and requirements and compose list of FTTs
- Define risk response
- Create test strategy matrix
- Assign test techniques
The first three steps are part of an iterative process that is executed once in an early phase for the whole project, resulting in the high-level test strategy, and again later in more detail for each iteration, test level, or test type. Throughout this process, previously defined risks may disappear and new risks may arise. Therefore, continuous risk monitoring is necessary.
Test Strategy Matrix
The main deliverable of this activity is the test strategy matrix (a minimal sketch follows the list below). The test strategy matrix contains:
- The FTTs and the test types
- Their priorities
- The corresponding test levels
- An overview of which FTTs and test types should be covered at which test levels
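As an illustration only (STBox prescribes the matrix's content, not a storage format), the sketch below keeps a strategy matrix as plain Python data; the FTT names, priorities, and test levels are invented.

```python
# Hypothetical test strategy matrix: each entry records an FTT or test
# type, its priority, and the test levels at which it should be covered.
strategy_matrix = [
    ("Order entry - functional",  "high",   ["system", "acceptance"]),
    ("Order entry - performance", "medium", ["system integration"]),
    ("Invoicing - functional",    "high",   ["component", "system"]),
    ("Online help - usability",   "low",    ["acceptance"]),
]

def covered_at(level: str) -> list:
    """Which FTTs/test types does a given test level cover?"""
    return [ftt for ftt, _prio, levels in strategy_matrix if level in levels]

print(covered_at("system"))
# ['Order entry - functional', 'Invoicing - functional']
```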
Activity 3: Plan the Test Project
In this activity, the test manager creates a project test plan to answer the questions why, what, who, when, where, how, and by what means. In other words, the test plan covers the purpose, scope, organization, schedule, infrastructure, approach, techniques, and tools of the test process. When all remarks and comments from the project test plan reviewers and approvers have been processed, the test manager organizes the official test project kick-off meeting.
Project Test Plan
IEEE 829-1998 suggests the following chapters be included in the test plan:
- Test plan identifier
- Introduction
- Test items
- Features to be tested
- Features not to be tested
- Approach
- Item pass/fail criteria
- Suspension criteria and resumption requirements
- Test deliverables
- Testing tasks
- Environmental needs
- Responsibilities
- Staffing and training needs
- Schedule
- Risks and contingencies
- Approvals
Phase 2: Detailed Test Planning
In the Detailed Test Planning phase, an individual test plan is composed for each iteration, test level, or test type. These detailed test plans specify the detailed approach, FTT tree, and planning for every iteration, test level, or test type that must be addressed.
Activity 4: Determine Detailed Test Strategy
For each test level, test type, and iteration that has been agreed upon, a detailed test strategy must be established. The aim of this activity is to specify in detail what features should be tested and how thoroughly to test them at each test level, test type, and iteration.
FTT Tree
The detailed test strategy is documented in a diagram called an ‘FTT tree.’ During this activity, the test manager only creates the ‘trunk’ of the FTT tree. As the project continues, the tree will be refined and branches will be added. The FTT tree provides a framework for documenting and managing the progress of the test execution schedule, test procedures, and/or test scripts. You can set up FTT trees using spreadsheet software (such as Microsoft Excel) or a professional test management tool.
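A minimal sketch of an FTT tree as nested Python dictionaries (the feature names are invented); branches are refined until the leaves are directly testable items, the final FTTs.

```python
# Hypothetical FTT tree as nested dictionaries. Branches with children
# are refined further; empty dicts are testable leaves (final FTTs).
ftt_tree = {
    "Order management": {
        "Create order": {"Mandatory fields": {}, "Price calculation": {}},
        "Cancel order": {},
    },
    "Reporting": {"Monthly sales report": {}},
}

def final_ftts(tree, path=()):
    """Yield the full path of every leaf, i.e. every final FTT."""
    for name, subtree in tree.items():
        if subtree:
            yield from final_ftts(subtree, path + (name,))
        else:
            yield " / ".join(path + (name,))

for ftt in final_ftts(ftt_tree):
    print(ftt)
```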
Activity 5: Plan Iteration, Test Level, or Test Type
This activity identifies which test levels, test types, or iterations will require separate detailed test plans. The test manager may organize a kick-off meeting for these iterations, test levels, or test types.
Detailed Test Plan
A detailed test plan is an extension of the project test plan. It contains the same data as the project test plan created in phase 1, but is limited to information and agreements that are specific to the iteration, test level, or test type under consideration. A detailed test plan also contains more concrete information, such as a work breakdown structure, detailed milestones, and the actual names of the test resources who will participate. A detailed test plan is not required for every iteration, test level, or test type. It depends on the size, scope, complexity, and criticality of the project whether separate detailed test plans are advisable.
Phase 3: Test Build
The Test Build phase includes all activities that deliver parts of the test model (completed FTT tree, test procedures, test cases, test scripts, checklists, and test execution schedules). At the end of this phase, the test infrastructure and test object are installed. In accordance with the V-Model, the test build phase ensures that all test activities required before test execution can begin are conducted.
Activity 6: Design Tests
The first step in this activity is to identify the high-level test cases for each FTT in the FTT tree. These high-level test cases are derived from the test basis by identifying situations to test. After this step, the high-level test cases are assembled and consolidated into test procedures (test scenarios) for the FTTs. If required, the high-level test cases may be translated into low-level test cases. The test procedure may be automated with a test script. Instead of describing many different checks within one test procedure, it is also possible to design a test checklist that can be used for various FTTs. These checklists are often applied to quality characteristics such as usability, installability, maintainability, and portability.
FTT Tree
Every branch of the FTT tree is assigned to a tester who is responsible for further breakdown of the FTT tree. The tester stops dividing the branches when the item at the lowest level is testable. These testable items are the final FTTs.
Test Cases
Test cases describe the different situations to test for each FTT. Each test case contains a well-considered combination of:
- Input data
- Processing of that input (action)
- Expected outcome
Test specification techniques are used to derive the test cases from the test basis in a reproducible and unambiguous way. The difference between high-level and low-level test cases is the level of specificity. High-level test cases describe the input and the expected outcome on abstract levels, while in low-level test cases, these abstractions are replaced by concrete values.
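A small illustrative sketch of the same test case at both levels; the fields follow the input/action/expected-outcome triple above, and all concrete values are invented.

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    input_data: str  # the input (abstract in a high-level case)
    action: str      # the processing applied to that input
    expected: str    # the expected outcome

# High-level test case: input and expected outcome stay abstract.
high_level = TestCase(
    input_data="an order whose total exceeds the customer's credit limit",
    action="submit the order",
    expected="the order is rejected with a credit-limit message",
)

# Low-level test case: the abstractions are replaced by concrete values.
low_level = TestCase(
    input_data="customer 4711 (credit limit 1000.00), order total 1250.00",
    action="submit the order via the order-entry screen",
    expected="error message CRD-001 'credit limit exceeded' is shown",
)
```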
Test Checklist
The test checklist is a list of control actions that cannot be verified by executing test cases, or for which it is more efficient to have one checklist instead of repeating the same control action in every single test procedure.
Test Procedure
A test procedure is a document that provides detailed instructions for executing one or more test cases (BS 7925-1). It consists of a logical sequence of test cases, a start situation, concrete actions, and controls. The initial test data required for test execution are also included in the test procedure. The following concepts should always be included in a test procedure:
- Test procedure information
- Test procedure preconditions
- Test procedure steps
- Test procedure pass/fail criteria
- Test cases (included within the test procedure or in a separate document)
Test Scripts
Test scripts are test procedures that have been automated. This is frequently done for regression testing purposes. Test scripts, like test procedures, provide detailed instructions for executing one or more test cases (BS 7925-1). The only difference is that the instructions are now given to a test tool that automatically executes the test scripts.
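A minimal sketch of a test script in Python, assuming a pytest-style runner; the `login` function is an invented stand-in for driving the real application.

```python
# Invented stand-in for the system under test; a real script would drive
# the application through a GUI- or API-automation library instead.
def login(user: str, password: str) -> bool:
    return user == "alice" and password == "secret"

# The automated test procedure: concrete actions and checks that a tool
# (here, a pytest-style runner) executes without human intervention.
def test_login_accepts_valid_credentials():
    assert login("alice", "secret") is True

def test_login_rejects_wrong_password():
    assert login("alice", "wrong") is False
```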
Activity 7: Organize Test Execution
For each test cycle, the test procedures, test scripts, and checklists are arranged in sequential order for test execution purposes. The outcome of this step is the test execution schedule.
Test Execution Schedule
A test execution schedule is created for each test cycle and consists of the logical and chronological order of test procedures, pre-conditions (including initial data setup and test environment setup), and post-conditions for test execution purposes. Start-up and wrap-up activities must be included.
Activity 8: Set Up Test Infrastructure
Parallel to the test design activities, the test infrastructure must be prepared. This infrastructure must be ready before test execution can begin. This generally involves configuring hardware, system software, test tools, required external applications, and necessary test data. The purpose of this activity is to ensure that test execution will not be hindered by problems with infrastructure elements such as the test environment, test tools, or office environment.
Activity 9: Install Test Object
The test object is the component, system, or specific version of the system to be tested. As the final step in the test build phase, the test object is installed into the test environment, right before test execution begins.
Phase 4: Test Execution
The tests that were prepared in the previous phase are now executed in the Test Execution phase, and the results are analyzed and reported. This phase includes a repeating three-step cycle: defects are reported, defects are fixed, and retests are performed. As a result of test execution, a test summary report is created which provides guidance on the open risks associated with moving on to the next test level or deploying the system into production. In this phase, the testing activities often occur on the critical path to delivery of the software. Test execution is the process of executing all (or a selected number of) test procedures according to the test execution schedule.
Activity 10: Verify Test Infrastructure and Test Object
In this activity, a preliminary test is performed to check whether the test infrastructure is stable enough to perform the tests planned for an iteration, test level, or test type. This step is often called the ‘pre-test’ (a.k.a. ‘smoke test,’ ‘intake test,’ or ‘sanity check’).
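A minimal sketch of such a pre-test in Python; the host names and ports are invented, and a real intake test would typically also verify the installed version of the test object and the availability of test data.

```python
import socket

# Invented infrastructure checks; extend with whatever the test level
# actually depends on (tools, interfaces, data loads, etc.).
CHECKS = [
    ("application server", "app.test.example.com", 443),
    ("database server", "db.test.example.com", 5432),
]

def smoke_test(timeout: float = 3.0) -> bool:
    ok = True
    for name, host, port in CHECKS:
        try:
            socket.create_connection((host, port), timeout=timeout).close()
            print(f"PASS {name}")
        except OSError as exc:
            print(f"FAIL {name}: {exc}")
            ok = False
    return ok

if __name__ == "__main__":
    raise SystemExit(0 if smoke_test() else 1)
```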
Activity 11: Execute Tests
The aim of this activity is to carry out test procedures, test cases, and checklists that were defined during the test build phase according to the sequence defined in the test execution schedule. The outcomes of these tests are compared against the anticipated results, discrepancies are analyzed, results are recorded, and, if necessary, defects are logged.
Test Results
The results of the test cases and test procedures (pass or fail) can be recorded right in the document created during the test build phase, in separate copies of that document for each iteration or test cycle, or in a test management tool, which is most efficient.
Defects
In STBox, the term ‘defect’ conforms to the IEEE 1044-1993 definition of ‘anomaly’: “any condition that deviates from expectations based on requirement specifications, design documents, user documents, standards, etc., or from someone’s perceptions or experiences.”
Anomalies may be found during review, testing, analysis, compilation, or use of software products or applicable documentation (as well as in other situations). In other words, whenever something is encountered in the software or documentation that does not meet expectations (whether those expectations have been clearly specified or not), it’s a defect. Defects can be categorized as:
- Defects in the software
- Defects in the test infrastructure
- Defects in the documentation
- Defects in the test specification
- Defects in test execution
- Enhancements
IEEE 1044-1993 and 1044.1-1995 describe the generic defect tracking process. According to these standards, every defect should go through the following steps: recognition, investigation, action, and disposition. An important characteristic of a defect is its status. By assigning a defect a status, you can enforce a defect life cycle and assign responsibilities.
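A small sketch of a status-driven defect life cycle in Python; the status names are invented examples, since IEEE 1044-1993 prescribes only the four generic steps.

```python
# Invented status names; the transitions loosely map onto the IEEE 1044
# steps of recognition, investigation, action, and disposition.
ALLOWED_TRANSITIONS = {
    "new":      {"open", "rejected"},   # recognition
    "open":     {"fixed", "deferred"},  # investigation / action
    "fixed":    {"retest"},             # action complete, plan the retest
    "retest":   {"closed", "open"},     # disposition, or reopen on failure
    "deferred": {"open"},
    "rejected": set(),
    "closed":   set(),
}

class Defect:
    def __init__(self, summary: str):
        self.summary = summary
        self.status = "new"

    def move_to(self, new_status: str) -> None:
        # Enforce the life cycle: only allowed transitions are accepted.
        if new_status not in ALLOWED_TRANSITIONS[self.status]:
            raise ValueError(f"{self.status!r} -> {new_status!r} not allowed")
        self.status = new_status

d = Defect("Total is wrong on the invoice report")
for step in ("open", "fixed", "retest", "closed"):
    d.move_to(step)
print(d.status)  # closed
```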
Activity 12: Summarize Test Execution
The purpose of this activity is to summarize the results of the testing activities and to provide an evaluation based on these results.
Test Summary Report
The test summary report is based on the test status reports. It provides guidance regarding the quality of the test object and calculates global and specific risks for the following iteration or test level (or for implementation in production). A test summary report should be written for each detailed test plan.
Phase 5: Test Project Closure
One of the most frequently neglected phases in a test process is Test Project Closure. During this phase, the entire test process is evaluated, the test deliverables are compiled, and the test team is released from their assignment.
Activity 13: Consolidate Test Deliverables
All deliverables produced during the test process (the testware) are now gathered, updated, and archived. A distinction is made between reusable and non-reusable deliverables. For maintenance purposes, this material is handed over to representatives of the relevant department(s).
Activity 14: Evaluate Test Project
The test status and summary reports are consolidated and reviewed. In this activity, a final evaluation of the test product (test object) and the entire test process is conducted. The purpose is to use this information for process improvements for future projects.
Test Evaluation Report
Unlike the test summary report created during the test execution phase, a test evaluation report considers the entire testing project (all iterations, test levels, and test execution cycles). The test evaluation report assesses not only the test object, but also the test process itself. Information regarding the effectiveness and the efficiency of the test process can be extracted from the test status reports.
Activity 15: Release Test Staff from Test Assignment
At the end of the test project, a formal or informal closure is conducted. In this closure, all test staff are released from the test assignment.
Phase 6: Test Management
Test Management is a set of continuous test project activities that ensure that the test process is managed professionally. These activities include follow-up on the test team and test plans, status reporting, defect management, issue and scope management, and timely and cost-effective production of high-quality test deliverables.
Activity 16: Staff and Manage Test Team
Once the test team members have been selected, they receive information regarding the project, the organization, and the approach that will be followed. In this activity, the tasks and responsibilities for the various test team members are assigned and explained. Test training is planned as required. The team members’ technical capabilities, commitment, and motivation are continuously monitored and evaluated.
Activity 17: Monitor and Adjust Test Plans
The three main aspects of STBox (people, process, and technology) are dynamic in nature. During the life cycle of any project, various internal and external events occur that affect the agreements made in the project test plan or detailed test plan. The goal of this activity is to monitor whether test activities are conducted according to the plan. If they aren’t, the test manager re-assesses the test activities and makes adjustments. He or she may also document these adjustments in the consolidated test plan.
Activity 18: Follow Up and Report Status
The goals of this activity are to provide insight into the quality of the test object and the progress of the test process and to make adjustments if required. Test status information (based on metrics or non-quantitative data) is compiled into a test status report.
Test Status Report
Test status reporting aims to provide regular feedback on the quality of the test object and the progress of the test process. It also helps to identify risks, suggest mitigating actions, and recommend other action points. The following aspects must be checked against the plan:
- Defect tracking
- Resource tracking
- Lead time tracking
- Work product tracking
- Test infrastructure
For each of these aspects, consider:
- What was initially planned
- What was actually achieved
- Variance
- Reasons for variance
- To be completed
- Trends/evolution (analysis of historical data)
- Risks and constraints
- Mitigation measures
- Action points
- Open items
Issues and scope changes that affect the testing process can also be reported in the test status report.
Activity 19: Manage Defects
In addition to tracking the status of defects and planning retests, defect management information is monitored and evaluated, and adjustments are made as required. If appropriate, defect meetings are organized.
Activity 20: Manage Issues and Changes
In a project, changes in scope, planning, and resource availability happen constantly. The impact of these issues and changes must be monitored and evaluated, and appropriate mitigating actions must be taken as required.
Activity 21: Facilitate Delivery
This activity includes the packaging and presentation of the test deliverables. It is important that there be no ambiguity as to the acceptability of what has been delivered. The sign-off signifies transfer of ownership of and responsibility for the delivered product or system.
Phase 7: Quality Management
The purpose of Quality Management is to manage, guide, and improve the quality of the deliverables and the project execution. Reviewing the test basis is an important part of quality management. Quality management is about performing quality activities, as outlined in the project test plan, to ensure that the deliverables are complete. Quality control is the sum of all activities needed to ensure the quality of the products created. Quality management activities include:
- Verifying that appropriate quality control activities and methods are applied
- Ensuring that the latest standards are complied with
- Ensuring that all defined and agreed-upon project procedures are followed
- Reporting serious quality problems and escalating if required
Activity 22: Review Test Basis
The earlier in the software development life cycle a defect is found, the easier and cheaper it is to correct. For this reason, verification and validation activities must be performed before any deliverable is considered complete. Thus, it is important to test (review) documents as part of the test process. The main purpose of this review is to find defects in the test basis early in the development life cycle, in accordance with the V-Model. The test basis is evaluated to locate discrepancies and inconsistencies and to recommend improvements. This is not limited to a review of how ‘testable’ the test basis is; it also looks at contents, compliance with rules and standards, and consistency with related documents. The IEEE standard on software reviews (IEEE 1028-1997) mentions three types of formal reviews:
- Walkthroughs
- Technical or peer reviews
- Inspections
Defects
As mentioned previously, the term ‘defect’ refers to anomalies. Anomalies may be found during test execution, but they are also often found during the review of documents.
Review Report
The results of a walkthrough, peer review, or inspection can be compiled into a review report.
Activity 23: Review Test Deliverables and Organize Approval
The same reviewing techniques used to review the test basis may also be used to review test deliverables. The deliverables should be evaluated for correct contents, compliance with rules and standards, and consistency with related documents. The organization of formal or informal approval of the test deliverables is part of this activity. Test deliverables that should be subjected to this type of review include:
- Test plan
- Test strategy matrix
- FTT tree
- Test procedure
- Test execution schedule
- Test summary report
- Test status report
- Test evaluation report
- Test infrastructure checklist
- Inspection log form
- Risk checklist
STBox: People
Test Roles and Responsibilities
STBox recognizes the following testing roles:
- Test manager (test planning, follow-up, coordination of test team activities)
- Tester (test design and execution)
- Test support (owner of test policy, test improvement project, automation framework, etc.):
  - Methodology
  - Test automation
  - Infrastructure
  - Business support
- Test tool specialist
- Quality manager (responsible for the organization of review activities)
One person may fulfill more than one role. However, different skills are required for the various roles.
Test Organizations
It is impossible to define one ideal test organization structure that will fit every company’s needs. How the test organization fits into the company depends not only on the objectives and measurable facts but also on corporate culture, the skills and experience of the people involved, product risks, politics, and other factors. The most widely-used test organizational models are described here. (This list is not exhaustive: other organizational models are possible, including a combination of models.) The five most commonly used models are:
- Independent test organization
- Test competence center
- Function-based test organization
- Role-based test organization
- Outsourced testing
Independent Test Organization
An independent test organization is a team that focuses primarily on testing. Instead of establishing a testing approach one project at a time, the independent test team is a permanent structure within the organization. It may be responsible for testing one or more products.
Test Competence Center
A test competence center is essentially a hybrid of two models: the independent test team and the function-based test organization. A test competence center manages a pool of independent test profiles and establishes (and enforces) the organization’s test policy. The test competence center provides testing services to the rest of the organization. Test professionals from the test competence center are assigned to specific projects or applications throughout the organization, where they report to the responsible manager. The test competence center team may include the following profiles:
- Methodological support
- Test coordination experts
- Test tool specialists
- Testers
Function-Based Test Organization
In a function-based test organization, dedicated test professionals are responsible for the testing activities within a team, department, or project. These testers report to a manager within their team or department.
Role-Based Test Organization
In a role-based test organization, there are no dedicated test profiles or functions. In fact, there are no full-time testers in the organization. For each project, testing responsibilities are divided up among those responsible for business requirements, global design, detailed design, and software development. Testing roles and activities are simply assigned to the most appropriate person on the project team.
Outsourced Testing
In an outsourced model, some or all of the testing responsibilities and activities are assigned to another company. It is the responsibility of the client organization to define exactly what testing activities should be conducted by the outsourcing company, what deliverables should be produced, and what level of quality should be achieved. It is possible to make clear agreements with an outsourcing company as to the quality of the testing process without specifying the desired quality of the test object (an option that is particularly useful if development and analysis are not outsourced as well). Possible reasons to outsource testing activities may include:
- Lack of test resources
- Lack of correct test environments
- Lack of expertise
- Focus on core business
- Independent quality judgment
- Certification
- Reduction of throughput time
- Reduction of costs
- Temporary buy-in of strong or very specific test expertise
There are many terms associated with outsourcing:
- Offshoring
- Insourcing
- Cosourcing
- Process outsourcing
- Strategic sourcing
Selecting an Organizational Model
It is impossible to establish a single universal organization structure for testing. There are, however, concrete guidelines for determining which organizational model is the best fit for a particular situation. These guidelines weigh the following factors:
- Test levels
- Size of projects
- Size of the organization
- Size and complexity of the test object
- Culture of the organization
- Importance of maintenance work and regression testing
RACI Matrix
STBox uses the RACI principle to clearly define responsibilities in the test process.
The RACI matrix is an organizational representation method for identifying the various stakeholders for tasks and activities (a minimal sketch follows the list below):
- Responsible (R): The role that actually performs an activity; in other words, the ‘author’ of the activity (literally, in the case of a written deliverable). For each activity, at least one person must be designated as responsible. The responsible role performs the activity, and the accountable role must ensure that the activity is performed.
- Accountable (A): The person who is accountable for the progress of the activity. There should always be exactly one person accountable for each activity. The accountable person delegates the activity to the responsible role(s).
- Consulted (C): Those who must be consulted before or during an activity. More than one role may need to be consulted.
- Informed (I): Stakeholders who must be informed during or after an activity. More than one role may need to be informed.
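The sketch below illustrates the RACI principle as plain Python data; the activities and role assignments are invented, and the checks encode the "at least one R, exactly one A" rules described above.

```python
# Invented activities and role assignments. "AR" marks a role that is
# both accountable and responsible for the same activity.
raci = {
    "Determine test scope": {"test manager": "AR", "project manager": "C"},
    "Design tests":         {"test manager": "A", "tester": "R"},
    "Execute tests":        {"test manager": "A", "tester": "R",
                             "quality manager": "I"},
}

for activity, cells in raci.items():
    letters = "".join(cells.values())
    # Exactly one Accountable and at least one Responsible per activity.
    assert letters.count("A") == 1, f"{activity}: need exactly one A"
    assert "R" in letters, f"{activity}: need at least one R"
print("RACI matrix is well-formed")
```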
STBox: Technology
Test Environments
During the execution phase of the project, a suitable test environment is needed for the execution of dynamic tests. A suitable test environment consists of:
- Hardware and hardware configurations (PC, client–server, mainframe, etc.)
- Operating system and system software
- Network facilities
- Interfaces with other installed applications, stubs, and drivers
- Test data
- Procedures
The setup of the test environment should reflect the aims of the test level or test type in question. Environmental needs might be very different from one test level/type to another; for example, component testing requires a completely different environment from acceptance testing.
General Requirements
To ensure valid and representative tests, the test environment must meet the following criteria:
- The test environment should be stable and manageable
- For each test level, test type, series of tests, or even test case, the test environment should be as representative as possible
- Configuration management procedures should be in place to deal with changes in the test infrastructure (the test manager owns and manages the test environment)
- The test environment setup must accommodate changing environmental requirements
- Executing tests often changes the status of test data or involves ‘unsafe’ actions, so a system for backing up and restoring data must be in place
- Test activities should not be constrained by poor performance; often, a relatively small investment in hardware and software can bring considerable efficiency gains to test execution
- For some types of tests, it must be easy to manipulate data or operating system parameters
- Several physical instances of a test environment may need to be established to allow simultaneous test execution activities
Most applications will be installed on machines that also host other applications. This raises important questions for testing:
- Do cohabiting applications share common files?
- Is there competition for resources between the applications?
Give careful consideration to whether to install other applications on the test environment. For system testing, it is best practice not to have any interfering applications (those outside the scope of the system integration test) installed. For acceptance-level testing, of course, the aim is to simulate the real-life environment as closely as possible.
Specific Requirements
In general, different test environments are required for the various test levels and test types, because different levels of representation and flexibility are required. There are concrete guidelines and considerations regarding:
- Test environments for each test level
- Test environments for each test type
- Ownership of test environments
Test Data
Test data is an important, but often underestimated, part of the test infrastructure. Different types and sources of data exist, and decisions must be made regarding the ownership and maintenance of test databases, the test data strategy to use for each test level, etc. There is also the strategic decision regarding which test data will be available at the start and which test data must be created during execution. This decision has major impact on the creation and execution of the test procedure.
Types of Data
The following types of data may be used in testing:
- Environmental data
- Setup data
- Fixed-input data (input data available before the start of the test)
- Consumable-input data (data for which input is required during execution of the tests)
In a test data strategy, a choice must be made as to which data must be available before test execution can begin and what data will have to be inserted (manually or automatically) during execution. This decision has a major impact on the efficiency of test execution.
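A minimal sketch of this strategic split in Python; fixed-input data is loaded before the run, consumable-input data is created during execution. The record layouts are invented.

```python
import itertools

# Fixed-input data: available before the test run starts, e.g. loaded
# once into the test database from a prepared set (records invented).
FIXED_CUSTOMERS = [
    {"id": 1, "name": "Acme NV", "credit_limit": 1000.0},
    {"id": 2, "name": "Globex BV", "credit_limit": 250.0},
]

# Consumable-input data: created during execution, one per test, so
# that repeated runs never collide on unique keys.
_order_ids = itertools.count(1)

def new_order(customer_id: int, total: float) -> dict:
    return {"order_id": next(_order_ids),
            "customer": customer_id,
            "total": total}

print(new_order(1, 99.50))  # {'order_id': 1, 'customer': 1, 'total': 99.5}
```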
Sources of Data
There are many possible sources for data:
- Copy of production data
- Manually-created test data
- Captured test data
- Generated test data:
  - Random
  - Specified
Test Data Strategy
The test data strategy concerns the choice of what test data will be used. This decision depends on the test level and test type. Two major choices are available:
- Artificial test database
- Copy of (part of) production
Office Environment
A well-arranged office environment can positively motivate the entire test team. It improves communication and efficiency of the execution of test tasks in general. The office environment consists of:
- Offices, desks, chairs
- PCs (including all required software)
- Network access
- Printers, telephones, fax
- Meeting rooms
The use of a dedicated test lab is recommended in order to be able to concentrate on testing activities. This is particularly important during acceptance testing and/or within role-based test organizations. Otherwise, there is a high risk that testers will be distracted by their operational and other duties. Some test types may have particular requirements, such as a usability lab, external mobile locations, portability (foreign language PCs and keyboards, etc.), crisis room, etc.
Test Tools
Test tools are available to support a wide range of test activities across all test levels. Test tools can improve the efficiency and reliability of testing activities by automating repetitive tasks or enabling otherwise impossible tests. In STBox, the term ‘test tool’ is used generically to refer to any kind of software support for test activities. Most of the types of tools described here are available in various forms and price ranges. A test tool might come in the form of a commercial package, an open source application, a ‘home brew’ program developed in-house, or a combination of these. Testers might use anything from an Excel spreadsheet listing defect details to a large commercial suite of test tools. Most of the following discussion refers to commercial or open source software. Organizations with very specific requirements may decide to work with custom-built software.
These are some examples of the types of test tools:
- A commercial tool such as QuickTest Professional or TestPartner
- An open source tool such as NUnitForms
- A framework built around Ruby which uses the Watir library to automate tests of web applications
- A custom-built application that interfaces with an application by sending specifically formatted messages
Benefits
Some of the benefits that can be achieved by using test tools include:
- Fewer manual repetitive activities and greater consistency, repeatability, and reliability of those actions. This can be achieved through activities such as automating regression tests, performing smoke tests, creating test data, checking coding standards, etc.
- Increased project intelligence, easier access to information about testing, and more and/or faster insight into the status of the test process. A test tool can help calculate values such as defect resolution metrics, coverage metrics, and response times.
- Support for otherwise virtually impossible tasks such as large-scale performance tests, security tests, etc.
- Decreased time and labor consumption. Investing time in automating test activities makes these activities less labor-intensive. For example, generating a weekly status report in a test management tool might take five minutes, while collating the data manually might take two hours. Reducing repetitive tasks has the added benefit of improving the testing staff’s motivation.
- Greater test coverage achieved through, for example, easily repeatable automated regression tests, testing with different sets of data, or testing on different configurations.
- Increased flexibility for executing and re-executing tests. Testing can also be done at times when human testers are generally not available.
- Increased job satisfaction among testers. Being released from cumbersome, repetitive actions improves testers’ job satisfaction and allows them to focus more on advanced testing issues. The use of test tools also gives technically-oriented testers useful job skills and interesting career possibilities.
Risks
While potential benefits can be very high, there are also some risks to using test tools. Most of these, however, can be overcome by establishing a solid implementation plan. Risks of using test tools include the following:
- Setting unrealistic expectations for the tool: In most cases, implementing a tool is a long-term project. Benefits may not be visible in the short term.
- Underestimating the implementation effort: The cost of licenses is only part of the total cost for implementing test tools. The cost of a test tool implementation must also include training, configuration, customization, support, and often a certain amount of external expertise to get the process up and running.
- Disregarding what effect test tools might have on the existing test organization and test process: Test tools present a number of choices and decisions for an organization. For example, the skill sets required for automated testing might be very different from the standard skills needed for a tester. In addition, the organization will have to determine when the automated test suite will be run, what will be done with the test results, and how to divide up duties among test tool specialists and business-oriented testers (for example, sometimes a test tool specialist looks at test results first to see whether a fail is due to a test script problem or an application problem; that way, business-oriented testers only have to get involved in the latter case).
- Disregarding the continuous effort test tools require: The implementation of test tools does not end after the tool has been installed, customized, and configured. Test tools require continuous administration, and the test assets generated by the tool require continuous maintenance.
Types of Tools
There is a wide variety of test tools available. However, if we look at their characteristics and functionalities, we can classify them into major groups, each with its own subtypes:
- Tool support for management of testing and tests
- Tool support for test preparation
- Tool support for test execution and logging
- Static analysis tools
- Tool support for dynamic testing
- Performance testing tools
Tool Selection, Implementation, and Framework
Test Tool Selection
A distinction may be made between the strategic choice to introduce a new tool into the organization and the strategic choice to use the tool for a particular project or release. These two decisions do not usually coincide, although in some cases they do. Before selecting a tool, it is important to investigate whether implementing a tool makes sense at all. In some organizations the possible benefits will never outweigh the required investment, no matter what tool is selected. This decision has to be made for each tool type; for example, while almost any organization can benefit from tool support for defect tracking, not every organization will be able to gain from test automation tools. This feasibility study can be part of a test process assessment, where the maturity of the organization and testing process are investigated and opportunities for an improved process supported by tools are identified.
When an organization does decide to use a test tool, the next decision is which tool to use. This can be done using a proven method for technology selection, joint technology selection (JTS). In a tool selection process, the following activities are typically performed:
- Definition of test tool requirements
- Definition of objective decision criteria
- Market study
- Elaboration and evaluation of request for information (RFI)/request for proposal (RFP)
- Proof of concept/pilot project
A proof of concept is a good way to validate whether the selected tool is in fact usable within the environment of the organization. The nature of the proof of concept will vary depending on the kind of tool under consideration. For a test automation tool, a proof of concept will typically evaluate whether the tool is able to perform actions on the user interface of the application or whether additional customizations to the tool are necessary to achieve this. The goal is to let the tool vendor prove that the stated requirements can be met with the tool within the technical environment of the specific organization. The proof of concept might take the form of a small-scale pilot project.
The appropriateness of using test tools for a particular project is determined when the test strategy is developed. Customizations or additions to the chosen test tool’s standard implementation may be required for a specific project. For example, in a project where development is outsourced, certain differences may arise in the way a test management or defect tracking tool is used. If so, mitigation strategies will need to be specified.
Tool Implementation and Framework
Before a tool can be implemented, it needs to fit into an organization’s existing processes and practices. This will most likely necessitate adjustments or additions to existing processes and maybe even to the organization. Furthermore, to allow for a structured and maintainable implementation of the tool, technical standards and guidelines must be developed for using, managing, storing, and maintaining the tool and the test assets; this may include deciding on naming conventions for files and tests, creating libraries, and defining the modularity of test suites. (International Software Testing Qualifications Board, Certified Tester, Foundation Level Syllabus, Version 2005, p. 62)
Obviously, the nature of these technical standards and guidelines will vary depending on the tool type being implemented. For certain types of tools, especially test automation tools, the aspects mentioned above must be supplemented by a technical framework built around the testing tool. This framework will also vary from one test tool type to another, for example:
- For a test management tool, this may consist of customizations that allow the import/export of spreadsheet-based test procedures or that generate complex reporting.
- For test automation tools, where a solid framework is absolutely critical, the framework developed might consist of code libraries that improve object recognition, reporting, error recovery, checkpoints, etc. Often, a layer is built on top of the test automation tool that allows non-technical users to document tests while hiding the test tool’s technical complexity from these users. The nature of such a framework is subject to the implementation plan, but is likely to evolve over time.
Implementing test automation tools is often a long-term investment. Test automation tools require a specific set of skills, particularly technical development expertise. It is not advisable to implement test automation in an organization that has no structured test process, because the test automation effort will then lack direction (for example, in terms of which tests to automate, how to maintain the test assets, etc.). The test automation tool framework often follows one of the following approaches or a combination of the two:
- A data-driven approach separates the test inputs (data), usually into a spreadsheet, and uses a generic script that can read the test data and perform the same test on different data. Testers who are not familiar with the scripting language can enter test data for these predefined scripts. (A minimal sketch follows this list.)
- In a keyword-driven approach, the spreadsheet contains keywords describing the actions to be taken (also called ‘action words’) and test data. Testers (even those unfamiliar with the scripting language) can define tests using keywords, which can then be tailored to the application being tested. (See the second sketch below.) (International Software Testing Qualifications Board, Certified Tester, Foundation Level Syllabus, Version 2005, p. 61)
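First, a minimal sketch of the data-driven approach in Python; the spreadsheet is inlined as CSV to keep the example self-contained, and the column names and checked function are invented.

```python
import csv
import io

# The 'spreadsheet' is inlined as CSV for a self-contained example.
SHEET = """amount,loyalty_member,expected_rate
150,no,0.10
50,yes,0.05
50,no,0.0
"""

# Invented function under test.
def discount_rate(amount: float, loyalty_member: bool) -> float:
    if amount >= 100:
        return 0.10
    return 0.05 if loyalty_member else 0.0

# One generic script performs the same test on every data row.
for row in csv.DictReader(io.StringIO(SHEET)):
    actual = discount_rate(float(row["amount"]), row["loyalty_member"] == "yes")
    verdict = "PASS" if actual == float(row["expected_rate"]) else "FAIL"
    print(verdict, dict(row))
```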
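Second, a minimal sketch of the keyword-driven approach; the keywords and the stubbed application actions are invented.

```python
# Testers write rows of (keyword, data); test tool specialists implement
# the keywords. All screen, field, and button names are invented.
test_rows = [
    ("open_screen", "order entry"),
    ("enter_field", ("customer", "Acme NV")),
    ("press_button", "submit"),
    ("check_message", "order accepted"),
]

def open_screen(name):       print(f"opening screen {name!r}")
def enter_field(args):       print(f"typing {args[1]!r} into field {args[0]!r}")
def press_button(name):      print(f"pressing button {name!r}")
def check_message(expected): print(f"verifying message {expected!r}")

KEYWORDS = {
    "open_screen": open_screen,
    "enter_field": enter_field,
    "press_button": press_button,
    "check_message": check_message,
}

# The interpreter: dispatch every row to its keyword implementation.
for keyword, data in test_rows:
    KEYWORDS[keyword](data)
```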
Test automation frameworks typically address the following quality attributes for test automation:
- Reliability: It is critical to be able to trust the results of automated testing. If an automated test delivers false negatives (the automated test reports a failure although there is no real functional problem), there may be a serious effect on the accuracy and usefulness of the tests, as well as decreased trust in the testing process. False positives (the system actually fails even though the automated test reports a ‘pass’ result) are more probable but have less impact.
- Robustness: This refers to how well a test (or set of tests) can recover from unexpected events in the system being tested. For example, what happens if you schedule an evening run of 200 tests and the second test encounters a problem (such as a popup indicating a virtual memory problem or a lost connection to a mainframe)? You would probably want this particular test to fail, but you would still want to execute the remaining 198 tests after recovering from the error (dismiss error, return to initial condition of test, and perform other cleanup activities). With elaborate exception handling, an automated test tool can be set up to handle almost any situation, but of course the more bulletproof it is, the more expensive it will be. If it is crucial that your test suite can run completely independently, then you will need to invest in robustness. This will typically include procedures for verifying that initial conditions are met before test execution (rather than assuming that this is always the case), procedures for cleaning up after an unexpected error, etc. (A minimal sketch of such a recovering runner follows after this list.)
- Maintainability: How easily can you adjust a set of automated tests in response to changes in the system being tested? Will a change in login procedure, for example, have a big impact, or can this be handled smoothly? Typical techniques for increasing maintainability include documenting the code, breaking the code into modular units, making use of function libraries, and using initialization files (database connection parameters, URLs, etc.). Maintainability also implies storing GUI object definitions in a structured way, separate from the logic in the scripts. Test automation frameworks set up this way can greatly reduce the use of recording: while a test automation suite set up to use recording only might use 10,000 lines of code for 100 tests, a test automation suite set up in a more maintainable way may be limited to 2,000 lines of code. Instead of being overwhelmed with an insurmountable amount of code, you get well-structured code that is relatively easy to maintain in case of changes to the application.
- Test Data: This is the subject of a separate chapter. Since one of the main advantages of automation is that tests can be easily repeated, specific attention should be paid to the availability of test data at the beginning of test execution.
- Reporting: When working with commercial test automation tools, elaborate reporting is usually a given. During test execution, the automated test should generate a clear log of which steps were executed with which specific data. If the tests generate certain output data (such as order numbers) that can facilitate follow-up after test execution, it is also useful to report this in a specific format. Reporting standards that allow easy follow-up of executed tests should be agreed upon in the framework.
Also, it can be helpful to differentiate between technical reporting (statistics generated by the test automation tool) and high-level reporting (pass/fail, data used, data output) tailored to a more business-oriented audience.
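A minimal sketch of the robustness idea referenced above: a runner that isolates each test, records unexpected errors, and restores the initial condition before continuing. The tests and the recovery step are invented.

```python
# Invented tests and recovery step.
def restore_initial_condition():
    pass  # e.g. dismiss dialogs, log back in, reset test data

def run_schedule(tests):
    results = {}
    for test in tests:
        try:
            test()
            results[test.__name__] = "pass"
        except AssertionError:
            results[test.__name__] = "fail"      # a genuine test failure
        except Exception as exc:                 # unexpected event
            results[test.__name__] = f"error: {exc}"
            restore_initial_condition()          # recover, then continue
    return results

def test_ok():      assert True
def test_broken():  raise ConnectionError("lost connection to mainframe")
def test_also_ok(): assert 1 + 1 == 2

print(run_schedule([test_ok, test_broken, test_also_ok]))
```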
Decision to Automate Tests
The decision to automate a test is based on a number of factors. In most real-life situations, the decision is made after a system has gone through a number of releases and it has become clear which tests should and should not be automated. In this case, tests are automated for existing functionality. It is possible to use automated tests for new functionality, but only with very repeatable tests (such as testing various data combinations, configurations, etc.). Either way, it is not advisable to start automating tests on any functionality that has not yet achieved a certain level of stability and maturity. The regression test suite is a prime candidate for automation. The main advantage of automating regression tests is that a full test run can be executed each time with limited effort. It can also be useful to automate tests that require a lot of time-consuming actions, because it costs less for a tool to wait 15 minutes for an action to complete than for a human tester. However, not everything can or should be automated:
- Some tests always require manual intervention (e.g. checking an invoice on paper)
- Some tests require a high level of human intelligence, perception, and intuition to assess whether they have run successfully
- Some tests are very complex to automate (perform a cost/benefit analysis to compare the effort to automate versus the effort to execute the test manually; experienced test automation specialists report that it takes between two and ten times as much effort to automate the average test case as it does to design it manually)
- Some tests do not have predictable results
- For some tests, there is a high probability that they will undergo a lot of changes in upcoming releases
- In some situations, the test tool is not capable of interacting with the necessary attributes of the tested feature
These parameters could lead to the decision to automate only part of the test or not to automate at all. For example, in a particular case it might be fairly straightforward to automate data entry but quite complex to check the results. In this case, data entry might be automated while the results are checked by manual test techniques.