Tuesday, March 5, 2019

What Does Testability Mean To Me?: #30daysoftesting

I've decided that I need to get back into the habit of writing stuff here, just for me and anyone else who might happen to want to read it. Any habit, once started, is easy to keep rolling. It's just as easy to stop a habit and never pick it up again for vague reasons. I haven't posted anything since New Year's Eve, and that's just wrong. What better way to get back into the swing of things than a 30 Days of Testing Challenge?

This time around the challenge is "Thirty Days of Testability". As you might notice, today is the fifth, so I'm going to be doing some catch-up over the next couple of days. Apologies ahead of time if you see a lot of these coming across in a short time frame :).

So let's get started, shall we?

Day 1: Define what you believe testability is. Share your definitions on The Club.

First, let's go with a more standard definition and see if I agree with it or can add to it. Wikipedia has an entry that describes overall testability as:

- The logical property that is variously described as contingency, defeasibility, or falsifiability, which means that counterexamples to the hypothesis are logically possible.
- The practical feasibility of observing a reproducible series of such counterexamples if they do exist.

In short, a hypothesis is testable if there is some real hope of deciding whether it is true or false of real experience. Upon this property of its constituent hypotheses rests the ability to decide whether a theory can be supported or falsified by the data of actual experience. If hypotheses are tested, initial results may also be labeled inconclusive.

OK, so that's looking at a hypothesis and determining if it can be tested. Seems a little overboard for software, huh? Not necessarily. In fact, I think it's a great place to start. What is the goal of any test that we want to perform? We want to determine if something can be confirmed or refuted. Thus, we need to create conditions under which a hypothesis can be either confirmed or refuted. If we cannot do either, then either our hypothesis is wrong or the method we are using to examine that hypothesis isn't going to work for us. For me, software testability falls into this category.

One aspect that I think is important is to look at ways we can determine whether something can be observed and verified. At times, that may simply come down to our interactions and our observations. Let's take something like the color contrast on a page. I can subjectively say that light gray text over a dark gray background doesn't provide a significant amount of color contrast. Can I make that declaration? Sure. I can look at the page and say "it doesn't have a high enough contrast." That is a subjective declaration based on observation and opinion. Is it testable? As I've stated it, no, not really. All I have done is make a personal observation and declare an opinion. It may sway other opinions, but it's not really a test in the classic sense.

What's missing?

Data.

What kind of data?

A way to determine the actual contrast level of the background versus the text.

Can we do that?

Yes, we can, at least for a web page, because we have a way to reference the values of the specific colors. Since colors can be represented as hexadecimal or RGB numerical values, we can make an objective observation about the difference between any two colors. By comparing the values of the dark gray background and the light gray text, we can determine what level of contrast exists between the two.

Whether we use a program that computes the comparison for us or an application that prints out a color contrast report, what we end up with is an objective value that we can share and compare with others.
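
To make that concrete, here's a minimal sketch in Python of that computation, using the WCAG 2.x relative luminance and contrast ratio formulas. The two gray values are hypothetical stand-ins for the colors in our example:

    def relative_luminance(hex_color):
        """WCAG 2.x relative luminance for a hex color like "#aabbcc"."""
        hex_color = hex_color.lstrip("#")
        channels = [int(hex_color[i:i + 2], 16) / 255 for i in (0, 2, 4)]
        # Linearize each sRGB channel before weighting
        linear = [c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
                  for c in channels]
        r, g, b = linear
        return 0.2126 * r + 0.7152 * g + 0.0722 * b

    def contrast_ratio(color_a, color_b):
        """Contrast ratio between two colors; always 1:1 or greater."""
        lighter = max(relative_luminance(color_a), relative_luminance(color_b))
        darker = min(relative_luminance(color_a), relative_luminance(color_b))
        return (lighter + 0.05) / (darker + 0.05)

    # Hypothetical light gray text on a dark gray background
    print("Contrast ratio: %.2f:1" % contrast_ratio("#aaaaaa", "#555555"))

For these two particular grays, the ratio works out to roughly 3.2:1. That's an objective number rather than a gut feeling.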

"These two colors look too close together"... not a testable hypothesis.

"These two colors have a contrast ratio of 2.5:1 and we are looking for a contrast ratio of 4.5:1 at the minimum" ... that's testable.

In short, for something to be testable, we need to be able to objectively examine an aspect of our software (or our hypothesis), perform a legitimate experiment that gathers actual data, and then present that data to confirm or refute our hypothesis.

So what do you think? Too simplistic? Am I overreaching? Do you have a different way of looking at it? If so, leave a comment below :). 

1 comment:

Ty Andal said...

Your breakdown makes perfect sense to me. I also look for something objective to test against when performing validations. Unfortunately, that perspective isn't shared by everyone I have worked with. Sometimes I question how something works, and when I dig for answers, people become defensive or state that's just how it's supposed to work. I'd love to ask this same question of those who don't truly question or want to find out how something works prior to validation.