Wow, two book reviews in one day? Well, this one's been waiting in the wings for a while. I've wanted to finish it, but reality kept getting in the way. It's also a fairly lengthy and detailed book, so it took me some time to get through it and consider its ideas.
Testing Dirty Systems is the latest book by Randy Rice and William Perry. The two previously collaborated on "Surviving the Top Ten Challenges of Software Testing", a fun book whose main messages were fairly quick to absorb. Testing Dirty Systems is a much bigger work and covers a lot more ground. The title alone might raise some eyebrows. Perhaps a better title would be "Testing Real World Systems That Actually Exist and Are in Production Today", but that's not nearly as snappy. Besides, those of us who have spent any time in the testing trenches already know what a "dirty system" is. The trick is finding out how the average tester actually works with these systems and brings value to the testing effort when they can't realistically have a truly clean test environment. Does this book help with that?
The book starts out by defining what Randy and William consider a dirty system and explaining that dirty systems fall on a continuum. They provide a checklist to help you determine where your project falls. I've been on projects that range everywhere from Spotty to Soiled (I've yet to be on a truly Clean project, but thankfully I've never been on one so bad it would be deemed Filthy). The benefit of this exercise is that you get a clear idea of your limitations and issues even before you start testing.
While dirty systems are a challenge, Randy and William make the case that these systems can be tested. They can even be tested in ways we are familiar with, but they need some extra "filtering", to borrow from the coffee metaphor they use in the book :).
Testing Dirty Systems uses a six-point process to help guide the tester through this environment. The steps are as follows:
Step 1: Diagnostic Analysis: Get a bearing on the system and find out just how dirty it is; use this information as you devise your test strategy.
Step 2: Test Planning: A two-pronged black box and white box approach to testing: observing behavior and evaluating the software using heuristics to determine whether the system works as described, in concert with an active white box approach to see whether the code really does what the requirements say it does.
Step 3: Perform the Tests: Fairly obvious on its face, but this section goes into great depth about the various considerations to work through when testing a production system. Exploratory and session-based test techniques, from both black box and white box approaches, would be effective in this space.
Step 4: Interim and Final Analysis: Bug reporting, of course, but there's more to it than that. Code coverage, test case coverage, and requirements coverage all fall into this area. It's important that more than just defect finding and defect fixing happens here. This is a clear way to indicate whether or not the system is responding in a way that makes sense based on the known requirements, and to help clarify those requirements based on real world behavior.
Step 5: Test Reporting: Randy and William recommend two reporting systems, one that focuses on the fitness of each component and another for cumulative reporting. The details included in the test reports are meant to help direct system maintenance rather than to inform new development (though they certainly can). Also specific to this step is the idea of communicating issues as effectively as possible (I can recommend a great class called "Bug Advocacy" that does exactly this, but that's outside the scope of this review ;) ). To borrow from "Testing Computer Software", the true mark of effective testing is not the number of bugs that get reported, but the number of bugs that get fixed.
Step 6: Using Test Results to Clean the Dirty System: This is a compelling idea: the goal of testing a dirty system, beyond finding defects, is to help put it into a state where it is less dirty than when we started. Over time the system can be "cleaned" as we provide more details and fill in the areas that are prone to fail.
This book is fairly heavy on process, and some may complain that it advocates the "old ways" of documenting everything and doing everything up front. Actually, it doesn't. Randy has often been quoted as saying there should be "just enough process" and "just enough documentation" to make sure the team can be effective. The underlying point of the book is that many testers are going into, or are already working in, an environment that is sub-optimal in the way it was developed or maintained. The processes are not meant to be a cookbook for all possible scenarios, but rather a series of techniques to help the reader get into the mindset of a detective and work through the challenges each phase presents. Each section also includes a case study to help the reader see the concepts behind each of the six steps applied and explained.
This is a dense book, and it covers a lot of ways to perform testing in environments that may be sorely lacking in documentation, requirements, or understood expectations (hey, that sounds like a lot of projects I've worked on). Will it be a panacea for the problems we all face? No, but it certainly gives structure to a lot of things we take for granted. Will you need to do everything listed in these chapters to effectively test a dirty system? Again, probably not, especially since every project will have its own eccentricities in this regard. The goal is to get the reader into the right mindset and to develop skills and tools that will work for their particular issues. That's the key takeaway of this book: it helps the reader develop skills in areas they may not currently be familiar with, and come to grips with systems that might otherwise stop them in their tracks. If you are starting with a clean slate, or at a company that does a great job of documenting its development approach, requirements, and standards, this book may not be as helpful, but you may still pick up some new ideas. For those working in less than optimal environments, there really is a lot to consider here.