Hierarchical Testing – What are you Testing? and Why?


In the previous post we started to identify different dimensions for testing. Here we will focus on the Domain Axis. There are many different domains, and they tend to group into the following categories:

Functional Testing

Every developer has done at least some functional testing; hopefully, most have adopted some form of formal or even semi-formal practice in this area. Approaches such as Test Driven Design [TDD] focus on Functional Testing at the “Unit Test” level [which is the low end of our Integration Dimension]. System Level testing is sure to occur, even if it is implemented as “let the users test in production”.

What is often missing is any type of incremental testing between the two endpoints. With [successful] TDD one can presume that Class A works as expected and that Class B does the same. When it comes time to write Class C [which will call A and B], low-level [Unit] testing is likely to mock both of them! Thus there is no test that validates that the real implementations of these classes work well together. This situation repeats itself as integration levels increase, until we arrive at full system testing and begin to discover the flaws.

A more robust approach is to have a set of tests at each level of integration that validates the combination and interaction of the individual elements.
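
As a minimal sketch of this (using JUnit 5, with hypothetical classes PriceLookup, TaxCalculator, and InvoiceTotaler standing in for A, B, and C), an integration-level test simply wires the real collaborators together instead of mocks:

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class PriceLookup {                       // "Class A"
    double priceOf(String sku) { return "WIDGET".equals(sku) ? 10.00 : 0.0; }
}

class TaxCalculator {                     // "Class B"
    double taxOn(double amount) { return amount * 0.08; }
}

class InvoiceTotaler {                    // "Class C" - calls A and B
    private final PriceLookup prices;
    private final TaxCalculator taxes;
    InvoiceTotaler(PriceLookup prices, TaxCalculator taxes) {
        this.prices = prices;
        this.taxes = taxes;
    }
    double totalFor(String sku) {
        double price = prices.priceOf(sku);
        return price + taxes.taxOn(price);
    }
}

class InvoiceTotalerIntegrationTest {
    // Unit tests for InvoiceTotaler would typically mock PriceLookup and TaxCalculator;
    // this test deliberately uses the real implementations so that a change in either
    // collaborator that breaks the combination is caught here, not at full system test.
    @Test
    void totalUsesRealPriceAndRealTax() {
        InvoiceTotaler totaler = new InvoiceTotaler(new PriceLookup(), new TaxCalculator());
        assertEquals(10.80, totaler.totalFor("WIDGET"), 0.001);
    }
}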

Software Design Testing

This testing domain is much less common than Functional Testing, but it is seeing increased adoption in many environments. The focus of tests in this area is to facilitate “The Best Architectures and Designs Emerge…” as per the Agile Manifesto. Items in this area can range from the very simple [do not allow code which generates warnings into Source Control] to the extremely complex.

The approach of “Red-Green-Refactor” is very effective. Unfortunately, in common usage the existing tests merely verify that the Refactor phase did not break anything; they do nothing to assess the quality of the refactoring itself!

This is where Design Rule Checking [DRC] type tests can provide significant value. Consider a situation where Dependency Inversion [DI] is in place (without using any type of IL rewriting); the goal is that there should be no direct calls to “new” for the relevant objects. One approach would be to prevent such access [internal classes, private constructors, etc.]. But let's add another challenge: you are consuming a library (which you cannot change) that exposes not only proper DI mechanisms but also the concrete implementations in a directly constructible form.

Both of these situations can be detected via DRC tests and trigger a failure. This prevents hard references from sneaking into the code and allows such conditions to be caught right at the time of implementation.
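
As a rough, deliberately naive sketch (the source path, the ConcreteWidget type, and the CompositionRoot exception are all hypothetical), a DRC-style test could scan the source tree for direct construction of a type that should only arrive via DI; a production-grade check would inspect the compiled classes or use a dedicated tool such as ArchUnit:

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

class DirectConstructionRuleTest {

    @Test
    void noDirectConstructionOfConcreteWidget() throws IOException {
        try (Stream<Path> sources = Files.walk(Paths.get("src/main/java"))) {
            List<String> violations = sources
                .filter(p -> p.toString().endsWith(".java"))
                .filter(p -> !p.toString().contains("CompositionRoot")) // the one allowed place
                .filter(DirectConstructionRuleTest::containsDirectNew)
                .map(Path::toString)
                .collect(Collectors.toList());

            assertTrue(violations.isEmpty(),
                "Direct 'new ConcreteWidget(...)' found in: " + violations);
        }
    }

    private static boolean containsDirectNew(Path file) {
        try {
            // Plain text matching is crude (comments and strings will also match);
            // it is only meant to show the shape of a DRC test.
            return Files.readString(file).contains("new ConcreteWidget(");
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}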

Performance Testing

Performance testing is something that is often deferred to the later parts of the schedule. In many cases little, if any, attention is paid until System Integration and Scaling testing is being performed. As a result, performance related problems tend to have a very high cost of remediation. Considering performance from the beginning provides significant mitigation.

One trap that many teams fall into is a concentration on “Big-O” complexity as an indicator of performance; it is not! Rather, it is an indicator of scalability. Consider a comparison between two implementations. The first has a measure of O(n * log n) and the second O(n^2). The gut reaction is that the first is “better”. But what if the per-element time of the first is 10 ms while that of the second is 1 µs? The crossover point for total execution time is at approximately 46.5K items. If the list size is known to be smaller than this, then the implementation with the “worse” Big-O will actually be more performant!
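
The arithmetic behind that crossover is easy to sanity-check. The sketch below assumes the log is taken as base 10 (the assumption that yields a figure near 46.5K) and simply searches for the first size at which the O(n^2) total overtakes the O(n * log n) total:

public class BigOCrossover {
    public static void main(String[] args) {
        double costNLogN = 10_000.0; // 10 ms per element, expressed in microseconds
        double costN2    = 1.0;      // 1 us per element

        long n = 2;
        // Advance until the O(n^2) total is no longer cheaper than the O(n log n) total.
        while (costN2 * (double) n * n < costNLogN * n * Math.log10(n)) {
            n++;
        }
        System.out.printf("Crossover at roughly n = %,d elements%n", n);
        // Below this size the "worse" O(n^2) implementation actually finishes sooner.
    }
}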

In many applications, Responsiveness rather than pure performance is the real goal. One technique is to set a default performance spec of 200 ms for any user-initiated action until there is some type of user-visible response. This value is chosen because (most) people are incapable of perceiving/ranking delays shorter than this. Improving a response time from 125 ms to 75 ms will not be noticed. However, as times pass the 250 ms mark, the delay becomes noticeable. Remember this is a default time, and is intended to “raise an alert”. It may be appropriate to increase the time for a given use-case, to provide some type of intermediate feedback, or to address the actual performance.
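
A default like this can be turned into an automated check. The sketch below times a hypothetical user-initiated action (loadCustomerSummary is a stand-in, not a real API) against the 200 ms budget using JUnit 5:

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertTrue;

class ResponsivenessTest {

    private static final long DEFAULT_BUDGET_MS = 200; // default spec; adjust per use-case if justified

    @Test
    void customerSummaryRespondsWithinBudget() {
        long start = System.nanoTime();
        loadCustomerSummary();              // hypothetical user-initiated action under test
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        // A failure here is an alert to investigate: relax the budget, add intermediate
        // feedback (spinner/progress), or address the underlying performance.
        assertTrue(elapsedMs <= DEFAULT_BUDGET_MS,
            "Took " + elapsedMs + " ms, budget is " + DEFAULT_BUDGET_MS + " ms");
    }

    private void loadCustomerSummary() {
        // Placeholder for the real action, up to the point of a user-visible response.
    }
}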

User Interface / Experience Testing

UI and UX testing covers a very large scope, and is something we will discuss in a future post. For now, the key point is not to confuse “System Level Testing by means of Driving and Monitoring the UI” with testing that is actually focused on the interface or the experience itself.

As a quick example, a certain large company has a copyright on a specific “blue” (actually a pixel pattern of different RGB values). To protect their copyright, it is important that all uses of “blue” in their banners and other elements are represented in this way. Tests that specifically examine the rendered material looking for “other blues” are an example of true UI testing.
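
A sketch of such a test might read a rendered screenshot and flag blue-ish pixels that are not in the approved palette. The screenshot path, the palette values, and the "looks blue" heuristic below are all placeholder assumptions; a real test would capture the image from the actual rendering pipeline:

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertTrue;

import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import java.util.Set;

class BrandBlueTest {

    // Approved pixel values making up the brand "blue" pattern (hypothetical values).
    private static final Set<Integer> APPROVED_BLUES = Set.of(0xFF1A4F9C, 0xFF215BB0);

    @Test
    void bannerUsesOnlyApprovedBlues() throws IOException {
        BufferedImage banner = ImageIO.read(new File("build/screenshots/banner.png"));
        int offBrandPixels = 0;

        for (int y = 0; y < banner.getHeight(); y++) {
            for (int x = 0; x < banner.getWidth(); x++) {
                int argb = banner.getRGB(x, y);
                if (looksBlue(argb) && !APPROVED_BLUES.contains(argb)) {
                    offBrandPixels++;
                }
            }
        }
        assertTrue(offBrandPixels == 0, offBrandPixels + " off-brand blue pixels found");
    }

    // Crude heuristic: the blue channel clearly dominates red and green.
    private static boolean looksBlue(int argb) {
        int r = (argb >> 16) & 0xFF, g = (argb >> 8) & 0xFF, b = argb & 0xFF;
        return b > 128 && b > r + 40 && b > g + 40;
    }
}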

Conclusion

Although this post barely scratches the surface of the different Domains that testing can focus on, it hopefully makes clear that a comprehensive approach to testing involves much more than Functional Tests at the Unit and System levels. Diversifying testing across Domains mitigates risk and reduces cost by surfacing issues in the code/system under test at a much earlier point in the implementation cycle. Additionally, having these tests automated allows them to be run frequently (at least daily).
