In the beginning there was the computer and all of the programs ran there. The source code simply appeared, was compiled, and executed. Nobody was really sure how the software came to be, except that there were the requirements and they had to be met. In this world, a separate group of people would evaluate the condition of the system by executing the test plan. If it passed, then the blessing was given; if not, then there was rejection and a return to the analysts and programmers.
Around 35 years ago, this began to change. New methodologies, and tools to support them, began to appear. Among the first was Source Control: the ability to track changes over time, compare versions, and even revert to a previous state. It became possible to correlate the "work to be done" with actual and specific changes to the code. Instead of massive changes that took years to implement, shorter cycles became common. And yet, the test plan and the processes around it tended to remain largely unaffected.
Fortunately, this has been changing over the past decade. As organizations move from "Gantt Chart" type schedules (which never were accurate) to "Backlog" driven development, there are many opportunities to make the formal testing cycles significantly more effective.
The remainder of this post will use terminology from Microsoft Team Foundation Server [TFS] and Team Services [VSTS] along with Scrum [such as Product Backlog Item], but only because some set of terms needed to be chosen. The concepts are universal and independent of platform, language, development tools, et al.
We will start with a quick review of how work is planned using backlog based approaches.
Initially the "big items" are identified. These may take an extended period to fully achieve, are commonly important to upper management, and some may not be started until significantly after they have been identified. These items are referred to as "Epics".
Each epic can be refined down to a number of smaller elements. At this level, each element is an increment in functionality that is of interest to the consumer [customer, business user]. The degree of refinement varies with how stable and well understood each epic is: typically epics of higher priority (and therefore to be focused on sooner) are handled in a much more detailed manner than those that are further out. These items are referred to as "Features".
Each feature can be refined down to even smaller increments such that each provides an atomic increment of value. As an example, an epic of “Establish e-Commerce” may have a feature of “Shopping Cart on Web-Site”. Individual items could then be “Add Item to Cart”, “Remove Item from Cart”, “Change Quantity of Item in Cart”, “View Catalog Page for Item in Cart”. During exercises that focus on this type of work breakdown, it is not uncommon to identify upwards of 20 distinct items within the feature. These items are referred to as Product Backlog Items [PBI].
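The Epic → Feature → PBI breakdown above can be sketched as a simple containment hierarchy. This is purely illustrative; the class and field names are hypothetical and not part of any TFS/VSTS API:

```python
from dataclasses import dataclass, field

# Illustrative model of the backlog hierarchy; names are hypothetical,
# not a real TFS/VSTS object model.
@dataclass
class ProductBacklogItem:
    title: str

@dataclass
class Feature:
    title: str
    items: list = field(default_factory=list)  # the PBIs within this feature

@dataclass
class Epic:
    title: str
    features: list = field(default_factory=list)

# The e-Commerce example from the text:
cart = Feature("Shopping Cart on Web-Site", items=[
    ProductBacklogItem("Add Item to Cart"),
    ProductBacklogItem("Remove Item from Cart"),
    ProductBacklogItem("Change Quantity of Item in Cart"),
    ProductBacklogItem("View Catalog Page for Item in Cart"),
])
ecommerce = Epic("Establish e-Commerce", features=[cart])
```

In a real breakdown exercise the `items` list for a feature would often grow to 20 or more PBIs, each still small enough to deliver an atomic increment of value.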
We can now return to looking at testing. Each of the Product Backlog Items can (and should) be associated with one or more Test Cases as part of the Acceptance Criteria. The PBI is not "done" until it has been demonstrated that all of the Test Cases pass. A first pass (general description) of each Test Case is created at the same time as the PBI itself; additional Test Cases may be added throughout the cycle as the PBI is further refined. At some point during development, the Test Case is likely to contain specific actions and expected results which can be performed manually or automated.
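The "done only when all Test Cases pass" rule can be expressed directly. Again a sketch with made-up names, not any real tracking-tool API:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a PBI is "done" only when every associated
# Test Case (its Acceptance Criteria) has passed.
@dataclass
class TestCase:
    description: str
    passed: bool = False

@dataclass
class ProductBacklogItem:
    title: str
    test_cases: list = field(default_factory=list)

    def is_done(self) -> bool:
        # A PBI with no Test Cases is never "done": acceptance
        # criteria must exist before they can be demonstrated.
        return bool(self.test_cases) and all(tc.passed for tc in self.test_cases)

pbi = ProductBacklogItem("Add Item to Cart", test_cases=[
    TestCase("Item appears in cart after add"),
    TestCase("Cart total reflects the added item"),
])
print(pbi.is_done())   # False: nothing demonstrated yet
pbi.test_cases[0].passed = True
pbi.test_cases[1].passed = True
print(pbi.is_done())   # True: all Acceptance Criteria pass
```

Note that `test_cases` can keep growing as the PBI is refined; each addition simply moves the PBI back to "not done" until the new case also passes.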
If the set of Test Cases is complete, then *any* code which passes all of the test cases is acceptable [passing, working, deployable, releasable] code. If it is determined that there is something wrong with the code from a functional perspective, that is indicative of a missing (or incomplete) Test Case.
As more work is defined and completed, the number of Test Cases grows, and reasons to group them for organizational purposes begin to appear; these groupings are Test Suites. Suites may be nested (a Suite can contain one or more Suites).
For each key Release, a Test Plan is created which comprises the relevant Suites. Typically this is a clone of the Test Plan used for the previous key Release, with the addition of new Suites and Test Cases [but also with the potential for removal of things that no longer apply]. When all of the items in the Test Plan pass, the system has been validated.
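The clone-then-adjust flow for a new release's Test Plan might look like this. A hedged sketch, assuming a deep copy so the original plan is untouched; the names are invented for illustration:

```python
import copy

# Hypothetical sketch of cloning a Test Plan for the next release,
# then adding new Suites and removing ones that no longer apply.
class TestPlan:
    def __init__(self, release, suites):
        self.release = release
        self.suites = list(suites)

    def clone_for(self, new_release):
        # Deep copy so edits to the clone never disturb the prior plan.
        clone = copy.deepcopy(self)
        clone.release = new_release
        return clone

v1 = TestPlan("Release 1.0", ["Shopping Cart", "Catalog"])
v2 = v1.clone_for("Release 2.0")
v2.suites.append("Wish List")   # suites for newly delivered features
v2.suites.remove("Catalog")     # functionality that no longer applies
print(v1.suites)  # ['Shopping Cart', 'Catalog'] -- prior plan unchanged
print(v2.suites)  # ['Shopping Cart', 'Wish List']
```

Keeping the prior release's plan intact matters: it remains the validation record for what shipped, while the clone evolves with the new release.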
When this approach is taken, the Test Plan has been transformed into a dynamic artifact which is active for the entire duration of a release cycle. Developers working on code [because they are delivering a PBI – the only real reason code should be written] know exactly what the tests (Acceptance Criteria) for their work are, at the time they are working on it. With rich automation, the Test Plan can be executed nightly against the prior day’s work to mitigate risks and provide an updated view of progress.