Monday, August 19, 2013

Everyone Within the Project Team is Responsible for Quality: 99 Ways Workshop #67

"Programmers and Testers, tear down this wall!"
The Software Testing Club recently put out an eBook called "99 Things You Can Do to Become a Better Tester". Some of them are really general and vague. Some of them are remarkably specific.

My goal for the next few weeks is to take the "99 Things" book and see if I can put my own personal spin on each of them, and make a personal workshop out of each of the suggestions. 

Suggestion #67: Everyone within the project team is responsible for quality - Steven Cross


For those who have come into testing in just the past few years, you may have missed this whole experience. For those with longer memories, like me, this may be all too familiar.

When I first started working in a testing organization, there were multiple groups and disciplines. There were people who wrote system code. There were people who wrote microcode for on-board communications. There were people who wrote management software that was installed on workstations. In all cases, there was a separate testing group that reported through a different chain up to the VP of Engineering. The purpose of this was to make sure that "test" and "programming" would not taint one another. The testers had their own labs, their own grouping in cubicles, and other ways that we were "close, but not too close" to the programmers.

The phenomenon of "throwing code over the wall" was well known, and traditional development practices at the time dictated that separation. Even with this, there was an understanding that there needed to be a way to, if not remove that wall, at least make it lower and easier to cross. Automation factored heavily in this process, with a huge investment of programming time and energy going in to help exercise a lot of the code at the functional and integration levels. Still, there was always this sense that it was "us against them", "testers vs. developers", and a healthy (and sometimes not healthy) level of competition and adversarial communication. 

If you missed this era, breathe a sigh of relief. If you are still living through it, get out if you can. If that's not an option, then perhaps a little bit of Don Quixote is in order.


Workshop #67: Take a small project or feature that will be coming up, and emphasize the testability of the feature. Focus on testing as early as possible. Ask questions to make sure that the testing scope and the necessary structure are in place to do effective testing. Review at the end of the process (release of the feature) and see how much rework was needed as compared to a traditional approach.

There is something cool that happens when testers get involved up front, really have a say in what goes on while the features are being developed (even back in the requirements stages), and follow through as a story progresses. Help your testers, or a subset of them, come up to speed as much as possible with the infrastructure that is being used. Let them learn how to configure it, change options, make hacks, and do any number of things beyond just saying "OK, the scripts set up the environment, I'm ready to test".

This initial investment will prove to be hugely valuable. The more they know about the underlying guts of the systems, the easier it will be to ask questions very early in the process of new feature development.
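
To make that concrete, here is a minimal sketch in Python of a tester looking directly at the environment configuration instead of taking the setup scripts on faith. The file name (test_env/config.yml) and the keys in it are hypothetical placeholders, not anything from a specific project.

    # A tester-side look at the environment configuration, rather than treating
    # "the scripts set up the environment" as a black box.
    # The config file name and its keys are illustrative only.
    import yaml  # PyYAML

    with open("test_env/config.yml") as handle:
        config = yaml.safe_load(handle)

    # Knowing which services and options are actually enabled makes it easier
    # to ask pointed questions when a new feature depends on one of them.
    for service, settings in (config or {}).get("services", {}).items():
        enabled = settings.get("enabled", False)
        version = settings.get("version", "unknown")
        print(f"{service}: enabled={enabled}, version={version}")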


- Start with the requirements phase, and look at the user acceptance criteria. In a loose manner, make sure that every User Acceptance Criteria item can be mapped to a test that you can verbalize. If you cannot verbalize a test based on the information provided, ask what is happening and get clarification so that you can. It need not be in vivid detail, with every parameter set, but you should be able to say "Based on this criterion, (A,B,C,D,E) needs to be visible or in place for this to pass acceptance". Ask how to make sure that (A,B,C,D,E) can be confirmed. Often it can be as simple as creating a switch option, so that a command can be run and a single piece of information is returned for verification (a minimal sketch of this kind of verification switch appears after this list).

- See if you can spend some time pairing with the programmer while they are working out the implementation. Ask questions about the approach, and how to confirm the features as they are being designed. Walk through any "testing hooks", and make sure you understand how to use them. If you have an automation testing framework, make sure that the hooks can be used with that system.

- Walk through early configuration and setup of the features with the programmer, and state areas that you feel make sense, are unclear, or are causing you to get stuck. Often, there are setup steps that a programmer takes for granted, but aren't documented. This helps make sure that those "implicit steps" are exposed and defined directly.

- Work with automation programmers (or do it yourself if that's your forte) to see if the API or the parameters of the feature can be effectively tested. Look at the unit tests that the programmers have made and see if you understand what is being run. Do not copy the unit tests, but they can give you key ideas as to how to devise effective and robust automated tests. Modify the feature and integration tests so that this new feature can fail the tests as well as pass them (yes, verify that the test can fail first, then confirm that the appropriate criteria can be put in place to make sure the test will pass; see the fail-first sketch after this list).

- Iterate through the story as carefully as you can. Give specific feedback on acceptance criteria that need to be spelled out, don't match what was originally agreed to, or just plain look wrong.

- Run through a stacked deployment process if the option is open to you. Use a developer's machine first, then a demo box, then a staging box, and finally deploy to a production-level machine. Observe at each stage whether something doesn't behave the way you expect it to, or whether services that were not available in the earlier stages have an effect on the feature (a parameterized smoke check along these lines appears after this list).

- During retrospectives, mention key aspects that helped along the way and key enhancements to testability that might be leveraged in other stories. See if, indeed, the rework for that story was less than in other stories. If so, see if more stories could be worked in this manner. 
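
To make the "switch option" idea from the acceptance criteria step above a bit more concrete, here is a minimal sketch. The command name (report_tool), the --show-limit flag, and the criterion itself are invented for illustration; the point is that one acceptance criterion maps to one check a tester can run and verbalize.

    # Hypothetical mapping of one acceptance criterion to one verifiable check.
    # Criterion (invented for this example): "Reports are capped at 500 records."
    # Assumes the programmers added a testability switch, report_tool --show-limit,
    # that prints the configured cap and nothing else.
    import subprocess

    def test_report_cap_matches_acceptance_criterion():
        result = subprocess.run(
            ["report_tool", "--show-limit"],
            capture_output=True,
            text=True,
            check=True,
        )
        assert result.stdout.strip() == "500"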
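
The "verify that the test can fail first" point can look like this in practice. The reports module and export_csv function are invented for the example: run the test before the feature exists and it should fail, then run it again once the feature lands and it should pass without the test itself changing.

    # A fail-first check for a hypothetical feature.
    from reports import export_csv  # hypothetical module under test

    def test_export_includes_header_row():
        output = export_csv([{"name": "Ada", "score": 92}])
        # Before export_csv exists (or emits a header), this assertion fails.
        assert output.splitlines()[0] == "name,score"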
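
For the stacked deployment walk-through, one way to watch the same behavior at every stage is a parameterized smoke check. The environment URLs below are placeholders for a hypothetical health endpoint, not real hosts.

    # The same smoke check run against each stage of a stacked deployment, so
    # stage-specific differences (missing services, different configuration)
    # show up as failures tied to a particular environment.
    import urllib.request

    import pytest

    ENVIRONMENTS = [
        ("dev", "http://dev.example.test/health"),
        ("demo", "http://demo.example.test/health"),
        ("staging", "http://staging.example.test/health"),
        ("production", "http://www.example.test/health"),
    ]

    @pytest.mark.parametrize("stage,url", ENVIRONMENTS)
    def test_health_endpoint_per_stage(stage, url):
        with urllib.request.urlopen(url, timeout=10) as response:
            assert response.status == 200, f"{stage} returned {response.status}"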

Bottom Line:

Get everyone involved in the quality of the product as early as possible, and keep the focus on the enhancements that can improve quality at all levels:

  • requirements
  • unit tests 
  • initial deployments
  • rework and retesting
  • staged deployments on hardware closer to the customer
  • dog-fooding the application in our own organization


All of these steps can bring us closer to getting everyone on the team to focus on the quality of the code, from the earliest inklings of a new feature to the final deployment to our customers. It may take time to break down barriers in some organizations, so start slow: champion something internal first, or a feature that's a nice-to-have but perhaps lower on the "ultra-critical" curve. If it's successful, the powers that be will likely give you more leeway to try it again, in more places and with more people.
