Quality Software – We know we want it! But what is it????


During the heyday of computer conferences in the early 1990s I was invited to fill an empty chair during a panel discussion on the trade floor. The other members of the panel were all "big names" and, to be honest, I did not believe I was qualified to sit on the same dais as them. Since they really did not want an empty chair, I agreed to sit there, and planned on being basically a silent participant.

The topic was "Achieving Software Quality", and the first 10 minutes of discussion were all about "What is Software Quality?". Aspects were presented, and then counter-arguments showed how a program could excel in that regard and still not be a quality product. Rather than making progress, the talk was going around in circles.

I had a thought. What if, instead of determining what quality was, one was to focus on what indicated a lack of quality? I considered this for a moment, had a flash of insight, and came up with a single word answer: Surprise!

Yes, the answer was "Surprise" as the universal indicator of a quality problem. Surprise, the code does not function as intended. Surprise, the code will not be delivered on time. Surprise, the code is unmaintainable. The list is nearly endless. If there are no surprises, then all expectations have been met, and that is a workable definition of quality.

Nearly 25 years later, I remain convinced that this is true. Of course, one still has to determine ways of avoiding surprises – and that will forever be a challenge. This leads us to examine expectations more deeply.

It is tempting to want (or even expect) "perfection", but the harsh reality is that nothing is perfect. Even something as simple as "Cut a board to 1 meter in length" can never be done exactly. Getting within 1 cm is pretty easy, within 1 mm is going to be challenging, and within 1 µm (0.00003937 inches) is not possible with conventional methods.

Instead, one should consider what is "good enough" as a proper expectation; and getting a good answer requires careful consideration of the use-cases and impacts of imperfections. Which ones are immaterial (zero potential impact), which are negligible (the impact can be safely ignored), and where the transition lies between those which are acceptable and those which are not.
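To make this concrete, the board-cutting example above can be sketched as a small classification function. The thresholds and category names here are illustrative assumptions for that example, not values from the post:

```python
def classify_deviation(measured_m: float, target_m: float = 1.0) -> str:
    """Map the absolute deviation from a target length (in meters)
    to an impact level. Thresholds are illustrative assumptions."""
    deviation = abs(measured_m - target_m)
    if deviation < 1e-6:      # under 1 µm: effectively perfect
        return "immaterial"
    if deviation < 1e-3:      # under 1 mm: impact can be safely ignored
        return "negligible"
    if deviation < 1e-2:      # under 1 cm: imperfect but acceptable
        return "acceptable"
    return "unacceptable"

print(classify_deviation(1.0004))   # 0.4 mm off the target length
```

The point of the sketch is that "good enough" is a graded judgment, not a single number: each band is chosen from the use-case, not from the measuring tool.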

Most developers will probably agree conceptually with the above, but at the same time will not have done the work to objectively determine what deviations from perfection their code actually has, or the probability that such a deviation will later cause a surprise and thus a software quality issue.

Decades of experience working with teams in various vertical markets (with drastically different needs in terms of "quality" and risk tolerance to failure) has shown that the following two elements are critical to addressing this issue:

1) Measurements: Off-the-cuff, subjective views are statistically very poor indicators. This may be the result of bias originating from many different sources. With objective, empirical, and relevant measurements, proper correlation can be achieved.

2) Standards: Raw numbers are not sufficient. There must be something they can be compared against. In most cases this will not be a “pass/fail” Boolean condition, but rather a mapping to a risk/impact level. Once the transformation to this domain has been achieved, limits can be set that are appropriate for the current environment.
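The second point, mapping a raw number to a risk/impact level rather than a pass/fail verdict, might be sketched as follows. The choice of cyclomatic complexity as the metric, and the band boundaries, are illustrative assumptions, not thresholds from the post:

```python
# A standard expressed as risk bands instead of a single pass/fail limit.
# Metric choice (cyclomatic complexity) and boundaries are assumptions.
RISK_BANDS = [
    (10, "low"),        # complexity up to 10
    (20, "moderate"),   # complexity 11-20
    (50, "high"),       # complexity 21-50
]

def risk_level(complexity: int) -> str:
    """Map a measured cyclomatic complexity to a risk level."""
    for upper_bound, level in RISK_BANDS:
        if complexity <= upper_bound:
            return level
    return "very high"

print(risk_level(14))
```

Once measurements live in this risk domain, each organization can set its own cut-off ("anything above moderate must be reviewed") appropriate to its environment, without arguing about the raw numbers themselves.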

I hope this information will trigger thought about how software quality is considered in your organization. The material here only scratches the surface, and future posts will dive deeper into some of the focused items related to software design, implementation, and testing.
