Wednesday, August 21, 2013

Testers, Don't Repeat Yourself: 99 Ways Workshop #71

The Software Testing Club recently put out an eBook called "99 Things You Can Do to Become a Better Tester". Some of the suggestions are really general and vague. Some are remarkably specific.


My goal for the next few weeks is to take the "99 Things" book, see if I can put my own personal spin on each of the suggestions, and make a personal workshop out of each one.


Suggestion #71: Don't repeat yourself. I learned this from "The Pragmatic Programmer" although it means something different for testers: don't repeat the same actions, don't follow the same path, the same order. Break your habits. - Philippe Antras

For those who have read The Pragmatic Programmer or are familiar with the "Red - Green - Refactor" cycle, this notion of "Don't Repeat Yourself" should be familiar. It stems from the idea that we should reuse a single authoritative source, rather than copying and pasting the same code into multiple places.

Some duplication is unavoidable; we will never get rid of every repeated statement. Ideally, though, we put the details of what we want to do into a library or a function, and then, by including that library, calling that function, or invoking that method, we can do what we need to with a minimum of duplication.
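To make that concrete, here is a minimal sketch in Python of what that refactoring looks like in test code. Everything in it (the Session class, login_as helper, and user names) is hypothetical, just to illustrate the shape of the change:

```python
# A minimal sketch of DRY in test code. The login steps that used to
# be pasted into every test now live in one helper, so a change to
# the login flow is made in exactly one place.
# (Session, login_as, and the user names are all hypothetical.)

class Session:
    """Stand-in for whatever drives the application under test."""
    def __init__(self):
        self.user = None

    def login(self, username, password):
        self.user = username

def login_as(session, username, password="s3cret"):
    # Shared helper: every test calls this instead of copy/pasting
    # the same login steps.
    session.login(username, password)

def test_alice_can_log_in():
    session = Session()
    login_as(session, "alice")
    assert session.user == "alice"

def test_bob_can_log_in():
    session = Session()
    login_as(session, "bob")
    assert session.user == "bob"

test_alice_can_log_in()
test_bob_can_log_in()
print("both tests passed")
```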

That makes sense from a development perspective, but what about for software testers? Can we also leverage the "Don't Repeat Yourself" philosophy? Don't we want to repeat certain steps many times for certain tests? Sure, there will be tests that we want to run in a tight loop and repeat many times. That's not what Philippe is talking about here. What he's saying is that, in our actions, our processes, and our testing methodology, we should aim for variety and variability. If we perform the same steps for the same tests the same way every time we run them, we will very likely miss some interesting aspects of our product and of the way we interact with it.

Workshop #71: Add some randomness to your day. Take your test plans and see where you duplicate effort, or where you have a rigidly ordered sequence of steps. See if you can uncouple the steps and run them in a different order, or completely at random if possible. See what happens when you take several test cases and swap their order. Modify workflows so that you don't keep covering the same territory the same way.

Back in Workshop #6, I made a suggestion to "add some disorder" to your day. That idea was to shake up the very reason you were doing things: if you were using scripted test cases, toss them for a day and do something totally different. This workshop is not asking for that. Instead, we want to examine the testing that we are doing, especially if we have scripted test cases. If we don't, then let's consider how we approach exploratory testing, the way we take notes, or anything else that we do. Our goal is to see how we do it, whether we are duplicating effort, and whether our tests overlap more than they need to.

Let's take a hypothetical example. I want to test a spreadsheet application, and for me it's important that we test every function and calculation option available. On the surface, OK, we can do that: we create a few spreadsheet documents, we populate some fields, and then we build a variety of rows and columns, possibly across many sheets in a workbook, into which we place these calculations. Once we've done that, with each new build we get, we open the sheet and examine the data. If we see what we expect, then all is good.

Is it?

If we put this together, sure, we will be exercising the calculation functions. That's good, and we want to do that, but is it enough?

In the simplest arrangement, A1 has a value, A2 has a value, A3 sets up a calculation or formula, and A4 displays the result. That's a good first step. It is a good preliminary check, and I can see the value of having that file. Using self-referencing data so that we can see, at a glance, whether the formula is working as we expect is also great, but there's so much more we could be doing.
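We could even generate that preliminary-check file programmatically. Here's a small sketch, assuming the openpyxl library is available (any spreadsheet library would do, and the file name is hypothetical):

```python
# Build the simple A1..A4 layout described above with openpyxl
# (an assumption -- any spreadsheet library would do).
from openpyxl import Workbook

wb = Workbook()
ws = wb.active
ws["A1"] = 6           # first input value
ws["A2"] = 7           # second input value
ws["A3"] = "=A1*A2"    # the calculation under test
ws["A4"] = 42          # the result we expect, for an at-a-glance check
wb.save("formula_check.xlsx")  # hypothetical file name
```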

What if we wanted to see what would happen if we combined calculations? 
What happens if we choose to create new values from the resulting data of the formulas we are creating in column D?
Are we sure we will get a correct answer? 
How many steps can we chain these calculations before we see some sort of data error?
Would we see one at all?
What would happen if we were to take results from a calculation in one place and feed them to a calculation in another place that requires a different format? 
Would we get an error? 
If so, would we understand it?
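To put the "how many steps before a data error" question in concrete terms, here is a small pure-Python sketch that chains a floating-point calculation on its own output, the way one column might feed the next, and compares each step against an exact decimal reference:

```python
# A rough sketch: chain a calculation on its own output, the way
# column D might feed column E, and watch for accumulated error.
from decimal import Decimal

running = 0.0              # binary floating point, like most cells
exact = Decimal("0")       # exact reference value
for step in range(1, 1001):
    running += 0.1
    exact += Decimal("0.1")
    if Decimal(repr(running)) != exact:
        print(f"drift first visible at step {step}: {running!r} vs {exact}")
        break
```

Run it and the drift shows up after only a few steps: the binary value is 0.30000000000000004 where the exact answer is 0.3. A spreadsheet formula chain can accumulate the same kind of error.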


In this example, randomizing the inputs themselves would have little benefit (some formulas simply require the right inputs to work), but we could randomize plenty of other things: combinations of calculations, which pages we open, the order we run our tests in, any number of things we take for granted. The goal here is to see where we duplicate effort, as well as where we run things the same way each time. If we can "loosen up" and do things in a different order, then we should try that. If we have automated tests, we should run them in a randomized order, just to see if we can. Do we have dependencies between tests? If we do, a randomized run order will point them out.
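Here's a minimal sketch of that kind of randomized run, in plain Python with hypothetical test functions (pytest users could reach for a plugin such as pytest-randomly instead). The deliberately hidden dependency means some shuffled orders will fail, which is exactly the signal we're after:

```python
import random

# Hypothetical tests with a deliberately hidden order dependency:
# test_update only passes if test_create has already run.
state = {"created": False}

def test_create():
    state["created"] = True

def test_update():
    assert state["created"], "hidden dependency: create must run first"

def test_delete():
    state["created"] = False

tests = [test_create, test_update, test_delete]

# Log the seed so a failing order can be replayed exactly.
seed = random.randrange(1_000_000)
random.seed(seed)
random.shuffle(tests)
print(f"shuffle seed: {seed}, order: {[t.__name__ for t in tests]}")

for test in tests:
    test()  # an AssertionError here exposes the order dependency
```

Logging the seed matters: a randomized failure we can't reproduce is just noise, while one we can replay is a bug report.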


Bottom Line:


Sometimes the best "break through" comes with a "break with". If we do things just because we have always done them, perhaps we should reconsider why. If our steps are very repetitive, or always run in the same order, change things up and see what happens.
