THE BOOK*. When will it be ready, when will it be available, and who worked on it? This book is special in that it is an anthology. Each essay can be read by itself, or in the context of the rest of the book. As a contributor, I think it's a great title and a timely one. The point is, I'm already excited about the book, its premise, and the way it all came together. But beyond all that... what does the book say?
Over the next few weeks, I hope to answer that, and to do so I'm going back to the BOOK CLUB format I used last year for "How We Test Software at Microsoft". Note, I'm not going to do a full in-depth synopsis of each chapter (hey, that's what the book is for ;) ), but I will give my thoughts as they relate to each chapter and area. Each individual chapter will be given its own space and entry.
We are now into Section 2, which is sub-titled "What Should We Do?". As you might guess, the book's topic mix changes here. We're moving away from the real but sometimes hard-to-pin-down notions of cost, value, economics, opportunity and time. We have defined the problem. Now we are talking about what we can do about it. This entry covers Chapter 11.
Chapter 11: You Can't Waste Money on a Defect That Isn't There by Petteri Lyytinen
Ideas such as technical debt and the pressure to get to market and release the product faster are considerations we always have to contend with. In some ways, these can fall anywhere on the spectrum from benign to truly dangerous. The amount of time available and the proximity to release tend to determine where on that spectrum your organization or project falls. Make no mistake, though: technical debt, and the chasing of ways to cut corners as the release gets closer, happens. Focusing on immediate needs often causes the technical debt to grow, not shrink. While it's possible to enhance testing as a standalone process, why should the testers have all the fun? Petteri suggests that developers can contribute to decreasing the cost of software testing as well, by focusing on techniques like Test Driven Development, Continuous Integration, and Lean Software Development principles.
Here's a heretical thought. Want to reduce the cost of software testing? Raise the skill of your developers. More to the point, encourage your developers to develop software from the approach that Uncle Bob Martin refers to as "the Software Craftsmanship movement". Central to that is the idea of Test Driven Development, and its cyclical process of developing software with tests in mind first: have the test fail first, then code to get the test to pass, then refactor and repeat the process.
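As a minimal sketch of that red-green-refactor cycle, here's what one pass might look like using Python's unittest (the `slugify` function and its behavior are my own hypothetical example, not anything from the chapter):

```python
import unittest

# Step 1 (red): write the tests first, before any implementation
# exists, and watch them fail.
class TestSlugify(unittest.TestCase):
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_strips_surrounding_whitespace(self):
        self.assertEqual(slugify("  Trim Me  "), "trim-me")

# Step 2 (green): write just enough code to make the tests pass.
def slugify(title):
    return "-".join(title.strip().lower().split())

# Step 3 (refactor): clean up the code, re-running the tests after
# every change to confirm the behavior is preserved, then repeat the
# whole cycle for the next small piece of functionality.
# Run with: python -m unittest
```

The discipline is in the order: the failing test exists before the code it exercises, so every line of production code is written to satisfy a test.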
It should be noted that TDD does not resolve all of the issues; there's still plenty to test. What TDD does do, however, is take many of the solidly boneheaded issues out of the picture. My work with SideReel is an example. Yes, there are occasional issues that I find, or details that may not be implemented exactly the way they should be, but I rarely come across a truly bone-headed omission or a really, truly "broken" implementation. The developers have mostly resolved those issues through actual TDD processes, so I can vouch for them being effective :).
Continuous Integration (CI) is the process where all newly committed code gets immediately built and deployed to a server, and where new features and functionality are immediately tested. Along with TDD, this helps developers commit small- or large-scale changes and quickly see how their changes affect the rest of the environment and application. Coupled with TDD unit tests and a smoke test run from the testers' side, testers are freed up to focus on exploring the new changes and seeing whether the changes have additional issues or are solid enough to be deployed. Another benefit of CI is that developers can quickly see where changes "broke the build" and can back them out, make fixes, and then resubmit/retest. This helps with the ever-present issue of finding issues late in the game. While that will never be completely eradicated, the odds of finding a problem that has never been tested or examined in conjunction with other components go way down. Still, even with these enhancements, testers must never get complacent and think their work is all done. It's not. As E.W. Dijkstra noted: "Program testing can be used to show the presence of bugs, but never to show their absence".
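To make the idea concrete, here's a hedged sketch of the kind of post-deploy smoke check a CI server might run after every commit. The URLs, check names, and the `run_smoke_tests` helper are hypothetical illustrations of the pattern, not anything prescribed by the chapter:

```python
# A minimal post-build smoke test: after CI deploys the new build,
# hit a few key pages and fail the build if any of them is broken.
import urllib.request

SMOKE_CHECKS = [
    ("home page responds",  "http://localhost:8000/"),
    ("login page responds", "http://localhost:8000/login"),
]

def run_smoke_tests(checks, opener=urllib.request.urlopen):
    """Return a list of (check name, reason) failures; empty means OK.

    `opener` is injectable so the checks can be exercised without a
    live server (e.g. with a stub in a unit test).
    """
    failures = []
    for name, url in checks:
        try:
            with opener(url, timeout=5) as resp:
                if resp.status >= 400:
                    failures.append((name, f"HTTP {resp.status}"))
        except OSError as exc:
            failures.append((name, str(exc)))
    return failures
```

A CI job would run this right after deployment and mark the build broken if the returned list is non-empty, so a bad commit is caught minutes after it lands rather than weeks later.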
The biggest benefit of using processes like TDD, CI and automated smoke tests is the hope, and the goal, of eliminating needless wasted time. As the development team's time and skills grow, downtime because of issues diminishes. It also helps diminish the inevitable downtime between "bug discovery" and "re-test" with an updated module. Petteri suggests having the team sit closely together so that, when issues are discovered, entering them in a defect tracking system is not the bottleneck to a fix being made; rather, lean over and say "hey, developer person, check out the issue I just found here!" While tracking issues is not in and of itself a bad thing, it can be if the workflow depends on alerting each member of the next step by changing states in the issue tracking system. Direct verbal updates are much faster. If immediate personal interaction is not possible, use Instant Messaging as the next best thing.
It's common to think that just having automated test scripts will solve all of the testing team's problems. They can help, up to a point, but as more and more tests roll in and need to be run, the quick and dirty smoke test often grows into a more extensive set of feature tests, and its completion time grows longer and longer (why yes, I have experience with this :) ). It's impossible to test everything. Even simple programs can have millions of paths through them, and testing all of them would be physically impossible, even with automation running day and night. Thus the goal is not comprehensive testing, but targeted and smart automated testing. Combinatorics (pairwise testing being a popular version of this, and a term known to many testers) can help trim down the number of cases so that the tester can focus on the ones that give the most coverage in the fewest steps.
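As an illustration of how pairwise testing shrinks a test matrix, here's a naive greedy covering sketch in Python. This is my own toy version of the idea (real pairwise tools use more sophisticated algorithms, and the configuration parameters below are hypothetical):

```python
from itertools import combinations, product

def pairwise_cases(params):
    """Greedily pick full test cases until every value pair of every
    two parameters appears in at least one case (naive sketch)."""
    keys = list(params)
    # Every (param_i, value_i, param_j, value_j) pair that must be covered.
    uncovered = {(i, va, j, vb)
                 for (i, a), (j, b) in combinations(enumerate(keys), 2)
                 for va, vb in product(params[a], params[b])}
    cases = []
    while uncovered:
        # Choose the full case that covers the most still-uncovered pairs.
        best, best_hits = None, set()
        for case in product(*(params[k] for k in keys)):
            hits = {(i, vi, j, vj) for (i, vi, j, vj) in uncovered
                    if case[i] == vi and case[j] == vj}
            if len(hits) > len(best_hits):
                best, best_hits = case, hits
        cases.append(best)
        uncovered -= best_hits
    return cases

# Hypothetical configuration matrix: 2 * 2 * 2 = 8 exhaustive cases.
configs = {"browser": ["Firefox", "IE"],
           "os":      ["Windows", "Mac"],
           "db":      ["MySQL", "Postgres"]}
```

Running `pairwise_cases(configs)` covers every value pair in fewer cases than the 8 exhaustive combinations, and the savings grow dramatically as parameters and values are added.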
A disadvantage of up-front test case development is that we just plain don't know what the test cases are going to be. We can guess, but it takes time to develop everything and get the true picture of the requirements and the coded features, and reworking those test cases later is a pain. Instead, Petteri describes a process called Iterative Test Development (ITD). When reading a user story, a use case, or part of a technical spec, write down a few brief lines about what needs to be tested. As developers start coding features, flesh out each test case in a simple format. When the feature is finished, fill in the precise details for the test cases and start testing against the full requirements as soon as they are ready.
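One way those ITD stages could look in practice, if the test cases live in code, is a test class that evolves from skipped one-line placeholders into fully specified tests. This is my own hypothetical rendering of the idea (the password-reset feature and helper functions are invented for illustration):

```python
import unittest

class TestPasswordReset(unittest.TestCase):
    """ITD-style evolution of a test class.

    Stage 1: while reading the user story, capture brief one-line
             reminders as skipped placeholders.
    Stage 2: as developers code the feature, flesh each one out.
    Stage 3: when the feature is done, fill in exact inputs and
             expected results and remove the skip.
    """

    @unittest.skip("stage 1 placeholder: valid email gets a reset link")
    def test_valid_email_receives_reset_link(self):
        pass

    @unittest.skip("stage 1 placeholder: unknown email gives generic message")
    def test_unknown_email_gives_generic_message(self):
        pass

    # Stage 3 example: fully specified once the feature exists.
    def test_token_expires_after_use(self):
        token = issue_reset_token("user@example.com")
        self.assertTrue(redeem(token))
        self.assertFalse(redeem(token))  # second use must fail

# Hypothetical minimal implementation so the stage-3 test can run.
_valid_tokens = set()

def issue_reset_token(email):
    token = f"token-for-{email}"
    _valid_tokens.add(token)
    return token

def redeem(token):
    if token in _valid_tokens:
        _valid_tokens.remove(token)
        return True
    return False
```

The placeholders cost almost nothing to write while the requirements are still fluid, and nothing detailed has to be reworked when the implementation inevitably shifts.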
These examples (TDD, CI, and ITD) all point to the same goal and focus: they are meant to help make sure that the craft of developing software is first and foremost a sound one. ITD is the tester's step to help those development steps come into focus quicker, and to make sure that, as the developers focus on the craft of software development and the processes that bring testing into their sphere, we likewise develop our tests as the code is being developed, so that we do not waste time creating test cases that do not address the real code in question. Ultimately, this all comes around to Petteri's initial point... you can't waste money on a defect that isn't there. Rather than focus on finding bugs, let's focus on preventing them in the first place.