Today's book review is a bit retro. James Whittaker has written a trio of “How to Break” books. This one is the first, published in 2003, and for me was a great shot in the arm to look at some different approaches to testing. First and foremost, this is not a theoretical book. It’s a practical book filled with “how to apply this stuff in the real world”. Whittaker designed the book around the concept of “waging war” on software, and the book has sections that describe “attacks” that can be applied to a software application. Why attacks? Because Whittaker felt it would make testing more fun... and he's right! IMO, the attack metaphor does make testing more fun. If you listen to any software quality podcasts and you hear the term “Whittaker Attacks”, this is the book that contains and describes them. So does James Whittaker offer a “battle plan” that’s all that? Let’s take a look.
In How to Break Software, James makes the case for creating a “Fault Model”, which helps us determine what approach we want to take when testing an application. The fault model comes from the ways that both the user and the underlying system interact with an application. The basic idea is that:
- A human user launches the application and supplies it with input.
- The application requests memory and resources from the kernel.
- The application establishes connections to things like databases, libraries, etc.
- The application opens and closes files on the system and accesses peripheral devices.
Once testers understand where these interactions happen and how to exercise them, they can then “go and explore”. The recommended approach to testing is to “wage war” and “attack” the software.
Each attack mentioned in the book is presented and structured the same way. The attack is named first, and then Whittaker explains when to apply the attack, what software fault model makes the attack successful, how to determine whether the attack exposes failures, and how to actually conduct the attack. Each example shows a real-world application under test and how the bug was triggered. This takes the idea of testing software out of the theory books and gives readers a direct, hands-on method for trying it out for themselves.
Talking about each of the attacks would take up way more room than a review would realistically allow, but I have listed them below so that you can think about them and how you might use them in your own tests.
User Interface Attacks
Attack 1: Apply inputs that force all error messages to appear.
Attack 2: Apply inputs that force the software to establish default values.
Attack 3: Explore allowable character sets and data types.
Attack 4: Overflow input buffers.
Attack 5: Find inputs that may interact and test combinations of their values.
Attack 6: Repeat the same input or series of inputs numerous times.
Attack 7: Force different outputs to be generated for each input.
Attack 8: Force invalid outputs to be generated.
Attack 9: Force properties of an output to change.
Attack 10: Force the screen to refresh.
Attack 11: Apply inputs using a variety of initial conditions.
Attack 12: Force a data structure to store too many or too few values.
Attack 13: Investigate alternative ways to modify internal data constraints.
Attack 14: Experiment with invalid operand and operator combinations.
Attack 15: Force a function to call itself recursively.
Attack 16: Force computation results to be too large or too small.
Attack 17: Find features that share data or interact poorly.
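To make a few of these concrete, here is a minimal sketch of how some of the user interface attacks translate into actual test inputs. The `validate_username` function and its rules are invented stand-ins for an application under test; nothing here comes from the book itself.

```python
# Hypothetical system under test: accepts 1-16 alphanumeric characters.
def validate_username(name: str) -> str:
    if not name:
        raise ValueError("username required")
    if len(name) > 16:
        raise ValueError("username too long")
    if not name.isalnum():
        raise ValueError("invalid characters")
    return name

def run_ui_attacks():
    results = {}
    # Attack 1 / 3 / 4: force every error message, probe the character
    # set, and overflow the input buffer.
    for label, bad in [("empty", ""), ("too_long", "x" * 1000), ("bad_chars", "a;b--")]:
        try:
            validate_username(bad)
            results[label] = "no error raised (possible bug)"
        except ValueError as e:
            results[label] = str(e)
    # Attack 6: repeat the same input many times, watching for state leakage.
    for _ in range(100):
        assert validate_username("alice") == "alice"
    return results

print(run_ui_attacks())
```

The point is not the toy function but the pattern: each attack names a class of inputs, and turning an attack into a test is usually just a matter of enumerating a few representatives of that class.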
System Interface Attacks
Attack 1: Fill the file system to its capacity.
Attack 2: Force the media to be busy or unavailable.
Attack 3: Damage the media.
Attack 4: Assign an invalid file name.
Attack 5: Vary file access permissions.
Attack 6: Vary or corrupt file contents.
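As a rough illustration (the book conducts these attacks with Holodeck, which this sketch does not use), a couple of the system interface attacks can be simulated with nothing but the standard library. The `save_report` function is a made-up stand-in for the application under test.

```python
import os
import tempfile

def save_report(path: str, text: str) -> bool:
    """Hypothetical system under test: returns False rather than crashing on I/O errors."""
    try:
        with open(path, "w") as f:
            f.write(text)
        return True
    except (OSError, ValueError):
        return False

def run_system_attacks():
    results = {}
    # Attack 2: force the media to be unavailable (parent directory does not exist).
    missing = os.path.join(tempfile.gettempdir(), "no_such_dir_1f3a", "report.txt")
    results["unavailable"] = save_report(missing, "data")
    # Attack 4: assign an invalid file name (embedded null byte).
    results["invalid_name"] = save_report("report\0.txt", "data")
    return results

print(run_system_attacks())
```

A robust program should report both failures gracefully (here, by returning `False`); an application that crashes or corrupts data under these conditions has failed the attack.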
In addition to talking about the techniques, the book comes with a CD that includes some tools (Canned Heat and Holodeck). Many of the examples in the book (especially the system interface attacks) rely on these tools. However, the ideas behind the attacks can be transferred to other tools. Likewise, while many of the examples shown are for MS Windows software, and the tools are Windows-based, don’t focus too much on the specific tools; use the attacks as ideas and a mindset to test with.
The biggest value of these attacks is that they give testers a fairly simple framework to use when looking at any program or application. Programs accept input, display output, interact with the file system, and access system resources and peripherals. Knowing just these details and nothing else about the program gives the tester plenty of areas to explore.
This is a great introduction to some practical testing techniques, and it can help build skills and ideas for testers both old and new. The information is basic and presented in a way that is accessible to beginners and veterans alike. New testers will appreciate the book's quick “get into it and get effective” aspect. Seasoned testers will appreciate the methodology and a few new tricks they might not have considered.
What needs to be made clear is that this is not a book that will give the reader all they need to go forth and conquer. Frankly, no one book will do that in any discipline. Also, while Whittaker’s attacks are a great model, they will not cover 100% of your testing, so relying on them too heavily and neglecting other aspects of testing will leave the tester with “blind spots” that still need to be overcome. What Whittaker has done with How to Break Software is give the reader some food for thought, and a way to use the attacks described to broaden how they think about testing.