Sunday, October 16, 2011

BOOK CLUB: How to Reduce the Cost of Software Testing (18/21)

For almost a year now, those who follow this blog have heard me talk about *THE BOOK*: when it would be ready, when it would be available, and who worked on it. This book is special in that it is an anthology. Each essay can be read by itself, or in the context of the rest of the book. As a contributor, I think it's a great title and a timely one. The point is, I'm already excited about the book, about its premise, and about the way it all came together. But outside of all that... what does the book say?

Over the next few weeks, I hope I'll be able to answer that, and to do so I'm going back to the BOOK CLUB format I used last year for "How We Test Software at Microsoft". Note, I'm not going to do a full, in-depth synopsis of each chapter (hey, that's what the book is for ;) ), but I will give my thoughts as they relate to each chapter and area. Each chapter will be given its own space and entry.

Afterword

The editors of the book give us a charge here, by way of describing a game manual that ends with a glossary. The point made is that the last entry isn't under Z; it's under Y. In fact, it's the word "You".

You: Have reached the end of this manual. We've taken you as far as we can; the rest is up to you.
Now get out there and play!

This is the official end of the book, except that it isn't. Over the next few days I'll be covering Appendix material that likewise deserves its own entries (see below), and of course, the conversation continues with the writers of the book. Most of us are very active on our own blogs, on Twitter, and in other mediums such as the Software Testing or SW-IMPROVE discussion lists -- and on LinkedIn. If you try the ideas in this book, please let us know what you think. The important thing is that it's now time to put the ideas to work, so get out there and test!

Appendix A: Immediate Strategies to Reduce Test Cost by Matthew Heusser

So the book is finished, yet you may be thinking to yourself, "OK, that's all great, but I need stuff I can do *right now* that will provide immediate payback. Do you have any suggestions for that?"

Matt offers the following:

While testing less is an easy cost cut, it also introduces risk. The best way to reduce cost immediately is to reduce or eliminate waste. The following strategies (25 of them, in fact) may help with that, and may provide additional benefits along the way:

1) Cut your documentation to a minimum
First you have to write it, then maintain it. Plus, it only has value when people read it.
Instead of a comprehensive breakdown, focus on removing details that are not important to the testing effort. Keep what someone would need to know if you had to hand the work off and leave town for a week. Is there enough there that the tester can read it and do the job? If so, aim for just enough documentation to do the job, but not so much that it becomes a time drag.

2) Make the cost of changing the documentation cheap
Perhaps use a shared doc on a network drive, put it in a wiki, or hey, why not just keep it on a whiteboard?

3) Never be blocked
If a tester feels they are blocked or they can't do anything, they are generating waste. Instead, ask the following:

(a) Can I interview customers for test ideas?
(b) Can I pair with the developers to learn the system?
(c) Can I pair with another tester on a different piece of functionality in the same project?

4) Eliminate Multi-Project-ing
Focus on the task at hand, and if you are blocked on it, see #3. Taking on side projects and having to frequently context-switch eats up a lot of time. Focus on one area until you are done, then move on to something else if necessary.

5) Automate entirely redundant processes
While automation is sold as a panacea for all testing troubles, we know it's not. There's much more a real, thinking brain can do at interesting points in the software than automation can ever do. However, there are a lot of steps that are horribly repetitive and don't add any real value to the testing knowledge (set-up, navigation to key places, etc.). Absolutely automate these steps if you can.
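
As a rough sketch of what this can look like in practice, here is a pytest fixture (Python) that performs the always-identical setup once per test, so the test body contains only the step worth a human's attention. The base URL and endpoints are hypothetical stand-ins for whatever your application actually exposes.

```python
# Sketch: push repetitive setup (login, navigation) into a pytest
# fixture. BASE_URL and the endpoints are hypothetical placeholders.
import pytest
import requests

BASE_URL = "https://app.example.com/api"  # placeholder app under test

@pytest.fixture
def session():
    """Do the boring, always-identical setup: open a session and log in."""
    s = requests.Session()
    s.post(f"{BASE_URL}/login", data={"user": "tester", "pass": "secret"})
    yield s
    s.close()  # teardown runs even if the test fails

def test_profile_update(session):
    # Only the interesting step remains in the test body.
    resp = session.put(f"{BASE_URL}/profile", json={"name": "New Name"})
    assert resp.status_code == 200
```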

6) Start with Soap Opera tests
Instead of testing individual factors one at a time, start out by testing the largest group of factors possible, and if you see errors, whittle the test down, rather than testing each factor in isolation from the start (which will take much longer).
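
A toy illustration of the idea, with a made-up apply_discount rule standing in for real business logic: the first test piles several factors into one dramatic scenario, and the single-factor test only gets written if the big one fails.

```python
# Toy stand-in for a business rule, just to make the sketch runnable.
def apply_discount(price, coupon=None, loyalty_tier=None):
    if coupon == "HALFOFF":
        price *= 0.5
    if loyalty_tier == "gold":
        price *= 0.9
    return round(price, 2)

def test_soap_opera_checkout():
    # One scenario exercising several factors at once.
    assert apply_discount(20.00, coupon="HALFOFF", loyalty_tier="gold") == 9.00

def test_coupon_alone():
    # Written only after the soap opera test fails, to isolate the factor.
    assert apply_discount(20.00, coupon="HALFOFF") == 10.00
```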

7) Start with quick attacks
"Quick Attacks"are all about overwhelming the software with invalid input, too much input, and input that is out of range. If the software handles the obvious exception conditions well, it is likely in good shape. If the developers left holes and errors in the exception conditions, they likely left holes and exceptions in the main business logic, too. Quick attacks allows the tester to find bugs early, learn business rules, and perform a quick assessment of the software -- all at the same time!

8) Test Early
If you are used to working with software where an entire build is shipped to you to test a new feature, see if there is a way to get access to an environment where the feature is added, so you can test it without having to wait for the entire build. The sooner you can test a feature in isolation, the sooner you can provide meaningful feedback to the developer.

9) Develop a regression-test cadence, and keep it light
The days of shipping software once a year are disappearing fast. Most products are moving to incremental updates over the web, or to much more frequent updates if it's a web property entirely.

Regression testing, therefore, needs to be modified so that we don't have to run a year's worth of test changes in a week or less. While there is a chance that any change could cause a problem anywhere else in the code, this is not universally true. By compartmentalizing tests that actually have a relation to other functionality, the regression tests can be made more efficient and focused on the areas where testing will actually yield results, which leads to...

10) Test the things that actually yield bugs
If you have regression tests you run for every release that never seem to find bugs, find a way to limit how many of them run. Maybe run them every third release, or one third of them per release, or just enough of them to catch a major defect if one were introduced in that code.
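
One cheap way to implement "one third of them per release" is a deterministic rotation, sketched below; the test names and release counter are placeholders. A stable hash (rather than Python's salted built-in hash()) guarantees every stale test still runs once every three releases.

```python
# Sketch: rotate stale regression tests so each runs every 3rd release.
import zlib

def runs_this_release(test_name: str, release: int, buckets: int = 3) -> bool:
    # zlib.crc32 is stable across runs, unlike the built-in hash().
    return zlib.crc32(test_name.encode()) % buckets == release % buckets

stale_tests = ["test_legacy_report", "test_old_export", "test_archive_view"]
release = 42  # placeholder: your build or release counter
print([t for t in stale_tests if runs_this_release(t, release)])
```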

11) Elaborate - and communicate - a test strategy or triage strategy
List features by how critical they are, then go down the feature list in order for a "first pass". If you complete that "first pass", go into more depth. Make sure everyone knows how the decision was made and what the decision was - and that the decision-makers feel like part of it.

12) Decrease your time spent reproducing the problem
If your team is spending a lot of time on bug reproduction, look for ways to lower it. Have the server store a log of every action. Find a tool that records exactly what the tester is doing and makes a screen-cast that can be played back.
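
For the "log of every action" idea, a minimal sketch in Python: a decorator that records each call, its arguments, and its outcome, so a failure can be traced step by step. The handler name is illustrative.

```python
# Sketch: record every action so failures can be replayed step by step.
import functools
import logging

logging.basicConfig(filename="actions.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")

def logged(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        logging.info("CALL %s args=%r kwargs=%r", fn.__name__, args, kwargs)
        try:
            result = fn(*args, **kwargs)
            logging.info("OK   %s -> %r", fn.__name__, result)
            return result
        except Exception:
            logging.exception("FAIL %s", fn.__name__)
            raise
    return wrapper

@logged
def add_to_cart(user_id, item_id, quantity):
    ...  # the real handler would go here
```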

13) Do a low-cost beta program or pre-release
Can you segment your users and send some to a managed beta? If so, you could release early and engage the users in helping to test. You'll need some infrastructure (which users get which builds), and on web-based projects, some sort of production monitoring.
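
The "which users get which builds" bookkeeping can start very small. As a sketch under simple assumptions: hash each user id into a bucket so the same user always lands on the same channel; the 10% beta share is an arbitrary example value.

```python
# Sketch: stable user-to-channel assignment for a managed beta.
import hashlib

def channel_for(user_id: str, beta_percent: int = 10) -> str:
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % 100  # deterministic 0-99 bucket per user
    return "beta" if bucket < beta_percent else "stable"

print(channel_for("alice@example.com"))  # same answer on every call
```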

14) Recognize and eliminate project wankery
Getting better requirements, having more time to test, doing architecture and code reviews - those can all be good things. They can waste a lot of time, too. Experiment with these practices, but view them as experiments. After you've tried them once or twice (or if they are currently "mandated"), ask: will there be fewer defects down the line? Do we add value to the project? Do we decrease overall cost by doing these things? If the answer is no, change the format, or drop them.

15) Stop fighting to manage the project
Instead of arguing over whether the software is ready to release, make the defects visible to everyone and let senior management decide if it should be shipped. Imagine saying something like this:
"We're going to stop fighting you over issues a, b, and c. We yield to a, b, and c. We're going to focus on testing. If a, b, or c fail, don't complain to me. The decision is yours." Consider how liberating that might be.

16) If you're going to fight, win
If you do give in on some issues, you may want to pick just a few battles. If you pick those battles, fight to win. Otherwise, you're just wasting time.

17) Walk around and listen
It's a good bet that there's a lot of other stuff going on that may look like work, but may just be busy-work leading to little value. How can you tell if that's happening? Walk around and listen. See what the testers are actually doing. They may think they are focusing on something important, or they may be "blocked" and not know how to proceed, but be too embarrassed to say anything. By doing this, you can help coach or guide the testers out of the rut, or determine what the blocker actually is. Is it their process? Is it another person on the team? Are they just "socially loafing"? Don't wonder, find out :).


18) Write test automation that attacks the business-logic level
GUI testing is very expensive, and often GUI testing is really "change detection" testing, which can be very brittle if UI elements change. With set-up, scaffolding, and test hooks, testers can write tests that exercise business-logic level functions. These tests will generally run much faster and be less brittle than GUI tests. Automated tests can verify dozens (hundreds? thousands?) of possible inputs for a given function. Test automation is an investment -- in the short run, testing costs always go up. Starting at the business-logic level might make it possible to see returns in days and weeks, not months or years.
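
To make the contrast concrete, here's a minimal sketch: shipping_cost is a made-up business rule, and the point is that many input combinations run in milliseconds with no browser, no locators, and nothing to break when the UI changes.

```python
# Sketch: exercising a business-logic function directly, no GUI involved.
import pytest

def shipping_cost(weight_kg: float, express: bool) -> float:
    """Toy business rule standing in for the real one."""
    base = 5.00 + 1.50 * weight_kg
    return round(base * (2.0 if express else 1.0), 2)

@pytest.mark.parametrize("weight,express,expected", [
    (0.0, False, 5.00),
    (1.0, False, 6.50),
    (1.0, True, 13.00),
    (10.0, True, 40.00),
])
def test_shipping_cost(weight, express, expected):
    assert shipping_cost(weight, express) == expected
```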


19) Develop a test automation library
If you do want to test at the graphical level, or want to have more powerful business-logic tests, you may want to develop re-usable functions. For a GUI, that might be login (taking two parameters and pushing the login button), search, tag, etc. At the business-logic level these will probably be object-oriented functions.
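
For the GUI case, a sketch of what one such re-usable function might look like with Selenium in Python; the element IDs and URL are hypothetical, and the payoff is that every test calls login() instead of re-scripting the same three steps.

```python
# Sketch: a reusable library function for GUI tests (Selenium, Python).
from selenium import webdriver
from selenium.webdriver.common.by import By

def login(driver, username: str, password: str) -> None:
    """Reusable building block: fill in the login form and submit."""
    driver.find_element(By.ID, "username").send_keys(username)
    driver.find_element(By.ID, "password").send_keys(password)
    driver.find_element(By.ID, "login-button").click()

# Usage in any test (hypothetical URL):
# driver = webdriver.Chrome()
# driver.get("https://app.example.com/login")
# login(driver, "tester", "secret")
```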

20) Develop or hire expertise
As a test manager, you can foster expertise with brown-bags and pairing; when you look to expand your team, look for skills that would round out the team. Help develop your team so that there is redundant expertise where possible. Avoid the information silo.

21) Get a return from conferences or don't go
Expect employees to write two 'what I learned' documents: things for us to do on Monday, and strategies to pursue over the next year. Use conferences to attract talent: send your employees with business cards and a list of open positions. Use conferences to retain talent: build the conference into each employee's annual professional development goals, and they'll be more likely to stick around to attend it. Use conferences to grow your network of friends with specific expertise. And if your staff comes back with ideas to implement on Monday, let them actually try them out!

22) Build a social network
Getting involved in local user groups, attending conferences, collaborating over the internet - all of these can provide a growing list of people with skills you do not have, people willing, even eager, to share ideas if you tap them on the shoulder. Instead of being "blocked" by a performance testing problem for a month and then calling a consultant, a few questions to your social network, some calls, and a couple of days of exploration might solve the problem directly.

23) Examine invalid bugs carefully
How many bugs are being marked invalid, or 'wrong', for various reasons? Each of those means a tester and a developer both invested time for no benefit. If enough bugs are invalid, take a look at the reasons why and try to prevent them in the future.

24) Build a staffing model that deals with the natural up-and-down need for test resources
Test projects have an ebb and flow. If the number of testers remains constant, you'll be stuck with a team that is either too small or too big. Build a staffing model that allows you to scale up and down quickly. Perhaps bring in tech support to test, hold a beta program, bring in an outsourcer, or consider crowdsourcing like uTest, or volunteer a project to Weekend Testers.

And finally ...

25) Ask the team to identify opportunities to drive out waste ... and implement them.
A good tester is inquisitive, curious, and critical. Such a tester is likely to think, "Why do we waste so much time doing X? I would think we could get by without it, or get the same benefit from doing Y."
So ask your testers what they would do to eliminate waste.

1 comment:

Tal E. said...

About #23 (invalid bugs) - there are usually three reasons for that:
1. the tester doesn't understand the design,
2. the person closing the bug doesn't understand the scenario, or
3. the bug is a duplicate.
A good test management tool can usually help you with all of those problems. For example:
1. if you can link requirements to tests, there's no confusion about functionality;
2. if you create mandatory fields that force the tester to write an accurate description, the person closing the bug can follow the scenario;
3. if you have an anti-duplication mechanism or a good search option, it can help you know in advance that you are trying to open a similar issue...