Thursday, September 29, 2011

BOOK CLUB: How to Reduce the Cost of Software Testing (2/21)

For almost a year now, those who follow this blog have heard me talk about *THE BOOK*: when it will be ready, when it will be available, and who worked on it. This book is special in that it is an anthology. Each essay can be read by itself, or it can be read in the context of the rest of the book. As a contributor, I think it's a great title and a timely one. The point is, I'm already excited about the book, and I'm excited about the premise and the way it all came together. But outside of all that... what does the book say?

Over the next few weeks, I hope I'll be able to answer that, and to do so I'm going back to the BOOK CLUB format I used last year for "How We Test Software at Microsoft". Note, I'm not going to do a full in-depth synopsis of each chapter (hey, that's what the book is for ;) ), but I will give my thoughts as they relate to each chapter and area. Each individual chapter will be given its own space and entry. Today's entry deals with Chapter 1.

Chapter 1: Is This the Right Question? by Matt Heusser

There's no question that the simplest and easiest way to limit the costs of testing is simply to not do it. Problem solved. Only it's not solved, because believe it or not, problems will still exist. So Matt asks us up front, when we talk about reducing costs, are we really talking about cost reduction... or are we really asking "How can we increase the VALUE of software testing?"

There are lots of ways to cut costs, not just in software testing but in every part of the organization. Benefits are expensive; so are salaries. Cut those and you save *lots* of money... of course, you are also likely to lose your best people, too. So that's a false economy. We could break down work into very simple, repeatable steps. This has the benefit of wringing the best "value" out of the costs involved. Once again, though, it's a false economy, for two reasons, one of which I think Matt touches on, though I'll add my own take. First, there is no way in software to make a true "factory or millwork" comparison. Software is not a widget. What I mean is that we do not create a single component the way a factory makes a cylinder casing for an engine. So breaking everything down into simple components won't work in this fashion, because there are so many permutations and variables that it's impossible to cover them all. Second, to borrow from Seth Godin's "Linchpin", if you could structure all of the work in this manner, then anyone could do it, and anyone could be plugged in and pulled out. It's a race to the bottom. To put it more succinctly, it could be a factory job... but do you really want that?

So the short answer is, we are not asking the right question if we are just asking "how do we reduce the costs of software testing?" It's an important question. It's just not the only question. Value has to be considered. The biggest problem with "value" is that it's really fuzzy. It's very subjective, and there's no magic number that says "OK, now that's VALUE!" Think of the things that generate "value" in an organization. The classic example is training. Is it valuable? How can you tell? How much is enough? At what point do diminishing returns set in? Is there such a thing as too much training? We would instinctively say "well, of course there isn't", but how can you truly quantify that?


In the software testing world, we look at "test cases" as a solidly quantifiable metric. The more test cases you have, the better your testing will be, right? Not so fast. I could tell you that my automated test routine has 1,000 test cases. Wow, 1,000 cases. That's a lot. But do those 1,000 test cases actually mean I am doing better testing just by having them? Of course not; you don't have any context into why or what I'm testing. That's why my saying that 995 out of 1,000 test cases passed sounds great, until I tell you that one of the failures relates to the fact that the app can't send emails. That can be a catastrophic failure if you're testing a CRM system, but you won't know that from my just quoting you a number.
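To make that concrete, here's a minimal sketch in Python (the test names, counts, and severities are entirely made up) of how a raw pass rate can look great while burying the one failure that actually matters:

    from collections import Counter

    # Hypothetical results: (name, passed, severity) -- all invented for illustration.
    results = [("check_%d" % i, True, "minor") for i in range(995)]
    results += [
        ("send_email", False, "critical"),   # the CRM can't send mail at all
        ("tooltip_text", False, "minor"),
        ("sort_order", False, "minor"),
        ("date_format", False, "minor"),
        ("help_link", False, "minor"),
    ]

    pass_rate = sum(1 for _, ok, _ in results if ok) / len(results)
    failures_by_severity = Counter(sev for _, ok, sev in results if not ok)

    print("Pass rate: %.1f%%" % (100 * pass_rate))              # 99.5% -- looks wonderful
    print("Failures by severity:", dict(failures_by_severity))  # the single 'critical' entry is the real story

The 99.5% number is the one that gets quoted in the status meeting; the severity breakdown is the one that actually tells you whether you can ship.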

So how can we use these value ideas to help steer the conversation when those in the driver's seat are all about controlling costs?

The key is that we need to be able to do a number of things at important times. Writing and using tests as examples of the requirements, to help make sure the requirements are clear, is the first step. Finding the most important issues early and quickly is also important. So is giving good and timely information to the development and management teams so that important decisions can be made.
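As a small illustration of that first point, here's what "tests as examples of the requirements" might look like in practice; the discount rule and the function name are invented for the sake of the sketch, they're not from the book:

    def discount_for(order_total):
        """Hypothetical requirement: orders of $100 or more get a 10% discount."""
        return round(order_total * 0.10, 2) if order_total >= 100 else 0.0

    # Each test doubles as a concrete, unambiguous example of the requirement.
    def test_no_discount_below_threshold():
        assert discount_for(99.99) == 0.0

    def test_discount_at_threshold():
        assert discount_for(100.00) == 10.00

    def test_discount_above_threshold():
        assert discount_for(250.00) == 25.00

    if __name__ == "__main__":
        test_no_discount_below_threshold()
        test_discount_at_threshold()
        test_discount_above_threshold()
        print("All requirement examples hold.")

The point isn't the arithmetic; it's that the boundary at $100 is now spelled out as examples everyone can read, instead of hiding in a vague sentence in a spec.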

A few days ago, a developer I worked with at a previous company wrote to me and mentioned something I told him while I was testing with him a few years back. He had asked me why I was able to get so many bug reports early in the process. I told him that one of my "principal weapons" in the battle of software testing came from James Whittaker (who may have taken it from somewhere else, I really don't know), and that I found it to be one of the most valuable first salvos on an application: look for every error message in the code, and do what you can to make each of those error messages appear at least once. For those familiar with the book "How to Break Software", you will recognize this as "Attack #1". The message I got back from this developer was that that tip alone, while he was working on his most recent project, helped eliminate about 50% of the bugs the project would otherwise have had by that point. I thought it was cool of him to write me and tell me about that. Point being, that's a simple "big bang for your buck" early testing strategy and technique that you can use starting right now :).
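If you want a quick-and-dirty starting point for that error-message hunt, here's a rough sketch. It assumes, simplistically, that errors are raised with string literals in Python source; your codebase and your tooling will almost certainly differ:

    import re
    from pathlib import Path

    # Naive pattern: string literal passed directly to a raise statement.
    ERROR_PATTERN = re.compile(r'raise\s+\w+\(\s*["\'](.+?)["\']')

    def error_message_checklist(source_root):
        """Collect error-message string literals found under source_root."""
        checklist = []
        for path in Path(source_root).rglob("*.py"):
            for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
                match = ERROR_PATTERN.search(line)
                if match:
                    checklist.append((str(path), lineno, match.group(1)))
        return checklist

    if __name__ == "__main__":
        # Print a checklist: each message becomes a target to trigger at least once.
        for path, lineno, message in error_message_checklist("."):
            print(f"[ ] {message}  ({path}:{lineno})")

However you harvest them, the output is the useful part: a checklist of every error message, each one a prompt to ask "what input or condition makes this appear?"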


The ability to provide good information so that the development or executive team can make a well-informed decision is really the #1 thing that testers provide, at least in my opinion. If we have any chance of really making an impact, and a dramatic one, that's where testers can make the greatest substantive changes and add tremendous value. From test reports to meeting status updates to ship/no-ship decisions, the tester has a unique role and responsibility. To borrow from Jon Bach, testers have more kinship with journalists than with any other profession. Therefore, the "story" or narrative of the project and its fitness is one of the key deliverables of the test team. How well does your team tell its story? The story? Do you approach your testing with the intensity of a beat reporter? If not, you may want to consider it.

Finally, to increase the value and reduce the costs, one of the best ways to help that process is to eliminate waste wherever possible. There are areas that are beyond our control (status meetings, email, etc. may be a mandatory part of the jobs we do), but there are ways to get more bang for the buck in what we do. One great way is to approach testing from a session-based model. Instead of saying "I tested this functionality", show that you have completed "x" number of testing sessions (of focused time) associated with a key piece of functionality... and tell your story.
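As a purely hypothetical illustration (the charter areas, times, and notes are all made up), the session log that backs up that story doesn't have to be fancy:

    from collections import defaultdict

    # Hypothetical session log: (charter area, minutes of focused testing, one-line note).
    sessions = [
        ("Email delivery", 90, "Forced SMTP failures; found 2 blocking bugs"),
        ("Email delivery", 60, "Retested after fix; attachments still failing"),
        ("Report exports", 90, "CSV/PDF exports across locales; no issues"),
        ("Login & roles",  45, "Error-message sweep; 1 misleading message"),
    ]

    by_area = defaultdict(lambda: {"sessions": 0, "minutes": 0})
    for area, minutes, _ in sessions:
        by_area[area]["sessions"] += 1
        by_area[area]["minutes"] += minutes

    # "I tested email" becomes "2 focused sessions, 150 minutes, here's what we saw."
    for area, totals in by_area.items():
        print(f"{area}: {totals['sessions']} sessions, {totals['minutes']} min")

The tally gives management something countable, and the notes behind each session give you the narrative to go with it.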

Next installment will cover chapter 2.

1 comment:

Jokin Aspiazu said...

Hi Michael, at my company we just received this book and I asked to be the first to give it a read, so I'll try to keep up with your reviews as I go along reading it.

The first thing I've noticed is how fresh this text is; this is a 2011 testing book! I truly believe that it will stand the test of time and become a classic in the testing field.

About chapter one, I liked the story about Tom DeMarco. His guidelines on metrics have been quite a lighthouse in the way our company developed its internal measurement software... so reading about his recent article was some kind of 'yeah, I knew it!' moment. Now we'll have to see whether we should stop measuring the time we spend instead of the things we actually do.

The rest of the chapter is also fine, as it provokes lots of thoughts in the reader's mind about things that could be done.

Now for chapter 2.
Salute!