As many of you may be aware, the Pacific Northwest Software Quality Conference is underway (and sadly, I cannot attend due to the condition of my leg, but I've talked enough about that in this blog, so I'm not going to rehash that again). One thing that's been very nice is to see the ongoing Twitter stream from talks I would have liked to attend. One of the tweets was very interesting. I'm paraphrasing, but basically it said:
The two most unfair questions in all of software development and testing:
1. How did you let that bug get through (usually directed to testers)?
2. How did you let that bug get in (usually directed to developers)?
Generally, the second question is asked in conjunction with the first, yet it's the first question that is asked far more often. Tester, why didn't you find that bug? There's an expectation that testing, when it is applied, will save development from the mistakes they have made. Testers may discover issues, and very often we do, lots of them, but it's often unfair to expect that we will find all of the bugs, or to put us in the position where, if bugs are discovered later, it is our fault that they got out there. I can appreciate some obvious omissions, and I've been guilty of those in the past. It's the less defined areas where many of us are not trained to even look. Let's face it, there are far fewer testers out there who are familiar with white box and gray box methods than there are black box testers, and even the black box testers out there could learn a lot about how to maximize their effectiveness beyond what they are currently doing (yep, me too).
Still, why don't we as often turn the tables and ask the developers "Hey, how could you let that bug get in there?!" That elicits a very different reaction, often a defensive one, but it shouldn't. It's just as valid a question as the first. Sure, we may have missed that bug in our testing, but we wouldn't have had to worry about missing it if you hadn't put it there in the first place. Ouch! But it's true. Interestingly enough, I've seen less of this defensiveness from organizations that actually practice test driven development. I think there are a few reasons for this, the primary one being that developers who actively use test driven development understand two things. First, they understand the process necessary to create tests and verify that they are passing. Second, they realize that there is no way, even with their extensive unit tests, that they will cover every possible situation. In my own experiences with TDD organizations, the "bone-headed" mistakes are few and far between. They happen, but not very often. What's also interesting is how much less defensive they are when issues are reported, and if issues don't meet the requirements, they are the first to say "OK, reject that story and I'll take another look at it".
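For those less familiar with the rhythm I'm describing, here's a minimal sketch of the test driven style using Python's built-in unittest module. The `slugify` function and its behavior are entirely hypothetical, invented just for illustration; the point is the order of operations: in TDD, the tests below would be written first, run to confirm they fail, and only then would the implementation be filled in to make them pass.

```python
import unittest

# In TDD this function starts out empty (or absent); the tests below are
# written first and fail, then the implementation is added to satisfy them.
def slugify(title):
    """Lowercase a title and join its words with hyphens."""
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    def test_basic_title(self):
        self.assertEqual(slugify("Two Unfair Questions"),
                         "two-unfair-questions")

    def test_extra_whitespace_is_collapsed(self):
        self.assertEqual(slugify("  Hello   World "), "hello-world")

if __name__ == "__main__":
    unittest.main()
```

Even a tiny suite like this won't cover every possible situation, which is exactly the second point above, but it does force the developer to state, up front, what "working" means.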
Those questions are indeed unfair, but life is unfair, and it's going to ask the questions anyway. As testers, we need to accept the fact that we are viewed as the last tackle on the field, right or wrong. Developers, you also share the responsibility to make sure that the issues we find are not there in the first place, so I encourage embracing options like test driven development and continuous integration. Will they catch all of the problems? Probably not, but I'm willing to bet that the overall issue count will still drop significantly, and that will help the testers focus on those truly elusive issues. With a little skill and a little luck, maybe we can both hear those two unfair questions a little less often.