Saturday, February 9, 2013

The Unseen World of Testing

I'm modifying the source material a bit here, as this was a recent experience for someone I know, and they shared their story with me. Some might feel it puts people on the spot, but I think there are some good points to make. Therefore, with their permission, I'm sharing (and paraphrasing) their comments.

Scenario: A C-level executive is looking through the work being done and asks why it's taking so long for QA to get through what seem to be, in their estimation, some simple enhancements.


"This has been an issue since the dawn of QA time - someone from upper management who is not tuned into the day-to-day operations of the QA team starts wondering why something that seems so simple isn't getting done fast enough."

"Based on my experience, QA time gets extended for one or more of three reasons:

 - The code delivered is buggier than expected, so QA spends time on fix/verify iterations or working with a DE who didn't really understand how the feature was supposed to work;

 - The feature was poorly specified, so QA spends time "testing the design into" the feature;

 - QA gets diverted to other activities not related to testing the feature.

"The core issue is that, when one of these things happens, there is little or no visibility to that fact for anyone outside of the Q.A. team. All the exec sees is a card for something that they think is "simple" that is not moving."


The challenge I see here is two-fold. First, how does one effectively communicate all of the "ilities" that often go into a story but don't make their way into the actual text or record? Some places track bugs in a separate system, while others prefer to include everything inside the story itself. Is there a way to really show what we do without people having to open the stories and read them in depth?

Second, what can we do, short of tracking time religiously, to communicate the amount of set-up, tear-down, checking, debugging, and otherwise futzing with stuff that is critical and necessary, yet gets absorbed without anyone really realizing what we are doing? In a perfect world, we would be totally engaged, exploring wild edge cases and working through issues at a brisk clip. The reality, of course, is that a lot of our time is spent doing stuff other than testing.

I explained to my friend that software testing is a lot like journalism: for many, all they see is the story in the paper or the magazine. If it's a brief few paragraphs, some may wonder why we don't get more stories from a particular journalist. What they don't see is all the traveling, research, finding sources, gaining trust, and chasing down details necessary to actually get the story. We face a similar situation in testing in that a lot of what we do is "ephemeral". There isn't a hard, noticeable deliverable. With the exception of automated testing, where there is code being written, most of the testing activities we do are not as tangible, so it's difficult to communicate exactly what we are doing. While we can document more and put more of our information where people can see it, often the net result is that we end up spending even more of our time doing things that are not testing.

As stated, it's an old problem, so I'm asking my testing friends: how do you make it so that others in your organization can more readily see what you are doing, without adding to the bureaucratic overhead?

2 comments:

Tony Bruce said...

Hi Michael, have you had a chance to check out 'Thinking visually in software testing'?

http://www.youtube.com/watch?v=K4hvAbN2QbE

Jeff Lucas said...

Michael - I have found the real key is communication. On a small team, speaking up about impacts, schedules, and resource requirements makes everyone aware of all the factors that go into testing. It has been on larger teams, where a "test lead" or "test manager" is speaking for a group of testers at formal meetings, that I have found major disconnects between other team members and the testers.

On one project involving 40+ developers and 6 testers, we alleviated that somewhat by regularly luring unsuspecting team members into conversations. A notice that there are cookies available in the test lab and an informal "coffee cup ambush" in the developers' office every once in a while go a long way toward keeping up communication.

I am a firm believer in keeping team sizes small for just that reason, although I realize it is difficult for many organizations.