Over the next few weeks, I hope I'll be able to answer that, and to do so I'm going back to the BOOK CLUB format I used last year for "How We Test Software at Microsoft". Note, I'm not going to do a full synopsis of each chapter in depth (hey, that's what the book is for ;) ), but I will give my thoughts as relates to each chapter and area. Each individual chapter will be given its own space and entry. Today's entry deals with Chapter 6.
Chapter 6: An Analysis of Costs in Software Testing by Michael Bolton
First off, this is a big chapter, and it covers a lot of specific details and items that can affect the cost of testing. To fully cover all of them would take an exceptionally long post, so this is very much a skim of the ideas. For the full explanation of all the factors, I of course recommend reading the chapter directly :).
Michael starts off the chapter by asking some simple questions… who are you? Why are you reading this book? Where you are in the process has some bearing on what you can do and which factors you can contribute to or influence. There are many factors that contribute to the cost of testing, many of which are outside of the testing realm (some are development related, others management related, but all have an influence).
Michael uses an example from a job he worked some years ago, describing how the organization incurred great costs for its testing efforts, ranging from how they hired new testers every iteration to how they utilized those testers once they were there. Recruitment, training, and staffing of individuals is an expense, and it's one that goes up the more frequently you do it. In many cases testers are not trained, or they're not trained to think outside of "run the scripts we tell you to run, make notes about what you see, and issue regular status reports". We've all been there at one point or another (this actually describes much of what I did in the 90s). Scripted testing is supposed to make for a complete process, one in which all the steps are laid out and testers are then interchangeable. Perhaps, but it also doesn't reward much in the way of exploration. In short, the scripted approach itself is terribly expensive and wasteful. It stifles innovation, and it puts testers in the role of doing the same things over and over.
Many of the testing documents we get to help guide us, if we actually get any, are often developed early in the process. They are usually incomplete, frequently missing broad areas, and they leave testers needing to ask a lot of questions (questions that the documents were supposed to answer in the first place). To this day I often find myself in this situation, where stories are submitted and I try to figure out "OK, but how do I test this?"
There are specific domains that we all approach in development and in testing, and they are not all the same or interchangeable. To be effective, a tester has to become a subject expert many times over, even if that subject expertise covers only a small functional area. I remember well working with a legal software package that dealt with the changing and convoluted world of immigration law. As I was talking to a customer about a particular problem, he commented that I could hang up my own shingle and charge for immigration law related services; I certainly knew enough to be competitive with many attorneys and paralegals currently practicing. That knowledge takes time to develop; you can't just drop in a random tester and expect them to be effective.
Many moving parts fit together, and the ability of testers to know where they fit, how they fit, and how they can be used to complement each other (or work against each other) is an important skill, and it's one that is often not understood by managers. Breaking up the team and adding different testers doesn't give us an automatic win in manpower, because those testers have a lot to learn about the system before they can be effective. That's not to say a good tester can't be effective fairly quickly, but there is always a ramp-up time, and that ramp-up can vary from days to weeks to months in some instances.
Exploratory testing techniques would certainly be helpful here, but this is an approach that is sadly not well understood in many places. I've had to explain a few times what exploratory testing is. A few organizations get it, many others need to be convinced that it is effective, and there are those that want no part of it.
Very often, we as testers may, by direct involvement or through casual examination, discover an area that might be a corner case, but might also be a fundamental design issue. There is often resistance to these areas, usually with the dismissive "no user would ever do that" (oh, how often I have heard that (LOL!) ). Testing often opens up new avenues of understanding in the application, and that testing can provide many new details, as well as new directions that the developers may need to consider. Those new directions require time to explore and implement, and that adds to the costs as well.
Again, this is just skimming the surface, but the main takeaway is that there are many factors that come into play and help determine where our costs come from. We may have issues with just a few of them, or we may have issues with all of them. At some point in an organization's life, all of the factors that Michael describes will need to be addressed to truly get the costs under control, or at least to understand what those costs are.
Michael's story and the challenges he faced ring all too true to me. Over the years I've had similar challenges, and many of them trace back to decisions that seemed very rational at the time and had a fair amount of common sense to them, but when seen in the broader view of costs and waste, those decisions looked less intelligent and more like vestiges of doing business the way business had always been done. Making a shift away from that requires that organizational leaders understand all of the costs associated with testing, and realize that many of those costs are not specifically owned by the testers. Many of them are organizational costs and have long-standing reasons for being there. Still, waste is waste, and the first step to dealing with a problem is identifying that you have a problem.
Also, any change we make in one area will likely cause shifts in other areas. They may be beneficial, they may be destabilizing, but make no mistake, they will cause changes to occur, and once set in motion, those changes will touch on many aspects of the way that software is tested and developed. The role we play in that destabilization will come into focus, for good or ill :). Any changes made will also take time to come to fruition, and there will likely be much weeping, wailing, and gnashing of teeth along the way. If, however, we are aware of the factors that play into the costs and waste, we can at least address them. We won't be able to change all of them at once, but much like a pebble dropped into a pond, even one small change can have ripple effects that yield larger changes over time.