Saturday, October 8, 2011

BOOK CLUB: How to Reduce the Cost of Software Testing (10/21)

For almost a year now, those who follow this blog have heard me talk about *THE BOOK*: when it would be ready, when it would be available, and who worked on it. This book is special in that it is an anthology. Each essay can be read by itself, or in the context of the rest of the book. As a contributor, I think it's a great title and a timely one. The point is, I'm already excited about the book, and I'm excited about the premise and the way it all came together. But outside of all that... what does the book say?

Over the next few weeks, I hope I'll be able to answer that, and to do so I'm going back to the BOOK CLUB format I used last year for "How We Test Software at Microsoft". Note, I'm not going to do a full, in-depth synopsis of each chapter (hey, that's what the book is for ;) ), but I will give my thoughts as they relate to each chapter and area. Each chapter will be given its own space and entry.

We are now into Section 2, which is subtitled "What Should We Do?". As you might guess, the book's focus changes here. We spend less time on the real but sometimes hard-to-pin-down notions of cost, value, economics, opportunity, and time. We have defined the problem. Now we are talking about what we can do about it. This part covers Chapter 9.

Chapter 9: Postpone Costs to Next Release by Jeroen Rosink

So how many of us have been on projects where it seems the feature set or the deliverables are really overloaded or far-reaching? Is there anything wrong with being ambitious? Of course not. Is there anything wrong with trying to outdo the competition? Again, no, but ask yourself: is everything that is being promised really as important as people are making it out to be? Do we really need to deliver every single one of the features that are listed right now? In many of the projects I have worked on, the answer has proven to be "no" more often than "yes". My guess is you've felt the same way. What if we culled the load a little? What if we really focused on the features that actually mattered? What if we could come to a consensus on what those features were, deliver those during our window, and then allow some of the other issues/challenges/opportunities to be addressed in the next release window?

Some might say "ah, but that's being defeatist!" Is it? We all know that we do this anyway; there are almost always issues left on the table when a product releases, but we tend to leave them there only after pulling our hair out trying to code/test/recode/retest, ultimately giving up because if we keep at it, we really will miss our window of opportunity. That happens all the time, and that process is very expensive. Jeroen is suggesting instead that we take a more proactive approach: rather than reaching the "leave things on the table" state in desperation at the end, we should focus on determining what is really important first, and try to avoid that frustrating final phase altogether if we can.

Jeroen describes the often challenging tug-of-war between postponing features and issues, and demanding that solutions be found to known issues. Often the challenge of solving a problem "right now" leads to false starts, dead ends, and unsatisfactory answers, where putting more time and research into a solution can yield better results. There is no question that some products are born in a cauldron of intense pressure, the time limit acting as the crucible that brings to the fore a make-or-break idea that at its heart is pure genius. However, the success rate of that approach is far lower than you might imagine. What usually happens is that we get a rushed solution that is missing a lot in its implementation, and the odds of large-scale problems surfacing in the field go way up. Taking the time to adequately study a situation, look at potential permutations, and code a solution that answers them does take a significant chunk of time, but working at that measured pace may well prove more effective, both in the quality of the solution and in the cost to produce it.

Structuring testing in phases can help identify critical areas and the time needed to test adequately and effectively. Activities such as collecting documentation, defining test cases, creating test data, and executing tests all fall into these phases. The most common (and most often repeated) phases are specification, preparation, and execution.
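Just to make that concrete, here's a rough sketch (my own illustration, not something from the chapter; the phase names are Jeroen's, the activities and hours are invented) of how grouping activities into phases lets you rough out the time testing will need:

```python
# A sketch of grouping test activities into phases and totaling
# the time each phase needs. Activities and hours are made up.

test_phases = {
    "specification": ["collect documentation", "define test cases"],
    "preparation":   ["create test data", "set up environments"],
    "execution":     ["run tests", "log and retest issues"],
}

# Hypothetical effort estimates, in hours, per activity.
effort_hours = {
    "collect documentation": 8,
    "define test cases": 24,
    "create test data": 16,
    "set up environments": 8,
    "run tests": 40,
    "log and retest issues": 24,
}

for phase, activities in test_phases.items():
    total = sum(effort_hours[a] for a in activities)
    print(f"{phase}: {total} hours")
```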

The primary goal of any software delivery is business value. How often does the business claim that everything is important and that everything must be delivered? Under those conditions, everything would merit being tested to the same depth. It's true that some mission-critical processes may need extended or extraordinary testing. In truth, if all of the features were honestly evaluated, we'd see that they fall under different levels of the MoSCoW principle ("Must have", "Should have", "Could have", "Would like to have"). When presented with the "everything is important" argument, we need to answer back: "OK, are you willing to open us up to the risks of treating everything at the same level of importance?"

Product risks are those directly related to a particular product under development. Project risks are those related to the resources and/or the costs of a particular project. MoSCoW helps classify these risks, giving a realistic overview from which decisions can be made. Late delivery of functionality needed to meet requirements creates later-stage project risks. Once a risk assessment is performed, a test strategy can be defined; not everything needs to be tested to the same extent. Issues discovered along the way can then inform the development or re-evaluation of the testing strategy over time.
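As a sketch of how that might look in practice (again, my own illustration with invented features, not anything prescribed by the chapter), a MoSCoW rating can directly drive how deeply each feature gets tested, and flag the candidates for postponing:

```python
# A sketch of letting a MoSCoW rating decide how deeply a feature
# gets tested, and which features are candidates for the next release.
# The feature names and depth choices are invented for illustration.

TEST_DEPTH = {
    "must":   "full regression plus exploratory sessions",
    "should": "targeted functional tests",
    "could":  "smoke test only",
    "would":  "defer testing; candidate to postpone",
}

features = [
    ("login", "must"),
    ("report export", "should"),
    ("custom themes", "could"),
    ("social sharing", "would"),
]

for name, rating in features:
    print(f"{name} ({rating} have): {TEST_DEPTH[rating]}")

# Anything rated "would" (and sometimes "could") is a candidate to
# postpone, freeing test time for the must-haves.
postpone = [name for name, rating in features if rating == "would"]
print("Postpone to next release:", postpone)
```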

It's important to remember that even simple changes can have a big impact on a project, so more complex changes will have even more of one. An important question to always ask is "What impact will there be if we discover an issue later in the project?" When issues are discovered, we need to consider their technical impact versus their business value. Does delivering with known issues produce a net positive business gain? Is it worth the risk to try to fix it?

The value of an issue changes during the life of a project. Early on, all issues feel equally important. Later on, issues need to be examined in context to determine how valuable fixing each one at that moment really is. Is it important enough to potentially delay the release? Costs definitely increase toward the end of a project, especially if a resource or piece of functionality is a must-have and it's not fit to be delivered close to the end of the testing cycle. By weighing the costs per test phase, it may make sense to postpone part of the solution to a later release.
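To put some rough numbers on that (entirely invented, just to show the arithmetic), compare what a late fix costs when all the downstream phases have to be repeated under pressure, against what the same work costs when it folds into the next release's normal phases:

```python
# Invented numbers, purely to show the arithmetic of fix-now vs. postpone.

# Hours to absorb a fix near the end of this release: everything
# downstream must be repeated under schedule pressure.
fix_now = {
    "respecify tests": 16,
    "rebuild test data": 12,
    "re-execute regression": 60,
    "expected cost of schedule slip": 40,
}

# Hours if the issue is postponed and handled at the start of the
# next release, where it folds into the normal phases.
postpone = {
    "document known issue": 4,
    "next-release specification": 8,
    "next-release execution": 30,
}

print("Fix now:  ", sum(fix_now.values()), "hours")
print("Postpone: ", sum(postpone.values()), "hours")
```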
