Friday, May 14, 2010

What Does Q.A. Mean To Me?

In the testing world, this may be the most bandied-about question I have heard discussed, debated, and argued. Since I purport to have a blog dedicated to talking about testing, it's only fair that I go on the record with my thoughts on it.


First and foremost, Quality Assurance is a nebulous description for testers, and in many ways it is not a helpful one. I am opposed to the idea of a “Quality Assurance Team” that is separate from development (put down the pitchforks, people, lemme’ ‘splain!). Quality Assurance is an empty promise; we cannot “assure” quality. All we can do is find issues with a product and call its quality into question. That’s it. We cannot magically bake quality into a product. We cannot wave a magic wand and exorcise bugs from a program. We can point out to developers the issues that we find when we test.


Quality Assurance is not just my team’s job. Rather, it has to be the mission of the entire company, and a dedication from all of us to spend the time and energy to make sure there are as few issues as possible in a product before it is released. Testers provide an indication of how well the company is achieving that goal. Rather than a gate (or my favorite overused and abused metaphor, the “bug shield”), we are more closely aligned with the function of a gauge. Instead of treating software as buggy input that drops into QA as though QA were a function, where we magically cleanse the code and bug-free software comes out the other side, we can tell the story of what we have seen and give the company and the development team information that says “here is where we are”. The tester tells a story and provides information about the state of the application. From there, the developers can decide what they want to do with that information (to use a GPS as an example, they can stop, turn around, and make changes before continuing forward, or they can just keep moving forward).


Regardless of my personal feelings as to what my role is and how I would like to see myself in that role, the truth is, whether I like it or not, most other people in an organization do look at the QA tester or the QA team as “the last tackle on the field”. In my current environment, yes, that is the case, and it requires me to be very strategic and creative. While I may not be the one who put a problem in, I will certainly catch a fair share of the heat if a customer discovers the problem. Thus I have to embrace the fact that, whether or not I like or appreciate the “bug shield” metaphor, it’s the role that others see me playing, and I cannot just abandon it.


So what can we do? What is our mission, our real value to the organization? What’s the bottom line of what we offer? In general, my answer to this is that “I save the company money”. Every bug that I find, whether it is major or minor, has a hand in determining whether a customer stays a customer, talks about our product in good or bad terms, or purchases another seat for their company rather than “making do” for the time being. It can be tricky to measure, and it’s not as hard and fast as a sale vs. no sale, but it does help to make clear what we as an organization provide (and in this case the “we” means me; remember, I’m a lone gun at the moment, but I have hopes that may change at some point). How about you? Where do you see yourself in the Q.A. picture?

2 comments:

Markus Gärtner said...

To some extent I agree with you, Michael. Testing is not quality assurance, but maybe quality assistance, as Michael Bolton once pointed out (and I think it goes as far back as Kaner).

Where I disagree is with the idea that testing can decide whether or not a customer stays a customer. Consider two projects: 1) the developers are encouraged to "throw code over the wall" without even running a basic smoke test (start the application and see if it starts to melt), thereby producing lots of crappy, buggy code; 2) the developers are encouraged to use pair programming, TDD, etc., to the extent that there are next to zero bugs in the code once it reaches you.

Now these are two extremes, but the degree of testing needed varies widely between the two projects if I want to deliver at the point where the customer is satisfied. So there is still a management decision to be made: "this much testing is enough for us."

Testers provide the biggest part of the feedback for making that decision, but testing does not make the decision, and neither do testers. It's the business, the manager, who decides whether or not to ship.

So on the first project, testers might be able to save more money for the company than on the second, though I would call in testers on both projects.

Michael Larsen said...

Thanks for the comment, Markus.

I may have oversold the "keeping customers as customers" part; really, I have little influence over what causes a company to make its purchasing decisions, but each show-stopper bug I find that a customer doesn't helps the odds that we will not lose customers because we "let a big one get through".

I'm with you on the wish that testing were done at more levels of the process and not just by me. In my current company, we are transitioning to the second model that you have described.

I'm absolutely with you on the feedback part. I'm a provider of "synthesized and considered information". What is done with that information, of course, is another matter entirely :).

--MKL