The Software Testing Club recently put out an eBook called "99 Things You Can Do to Become a Better Tester". Some of them are really general and vague. Some of them are remarkably specific.
My goal for the next few weeks is to take the "99 Things" book and see if I can put my own personal spin on each of them, and make a personal workshop out of each of the suggestions.
Suggestion #74: Good point and to keep 'em coming: experienced testers can benefit from asking less experienced testers to get a new/fresh view of a problem. This is especially true when "experienced" includes being very used to the product being tested (in that case, as a means to fight bias). - Erik Brickarp
This is part three of the trilogy on testers helping testers. The three suggestions and workshops cover similar territory, but they differ enough that each can be handled individually.
Experienced testers can help less experienced testers, but it also goes the other way. Less experienced testers can give experienced testers a fresh way of looking at a situation. This is especially true with a product or application the experienced tester knows backwards and forwards. "Been there, done that" is dangerous; it can create a bias towards the way that we personally view a product or feature.
Workshop #74: Gather a variety of testers together and examine a feature. Give just a basic framework of the acceptance criteria, and ask each tester to come up with their own approach to testing the feature. Keep the focus tight and the sessions brief. Review what each tester chose to do and, especially, ask them why they chose the approach they did. See if they were looking at areas you would not have considered.
This approach is, in a way, an application of the fable of "the blind men and the elephant". In that story, the blind men each touch a different part of the elephant, and in doing so, they describe wildly different things about the elephant. We, likewise, tend to see what we are personally curious about and personally find interesting. When we become really familiar with something, we also tend to follow the paths that we know well. One reason is that we think we are doing "what most people would do". That might be true, but more to the point, we often approach the areas that we can get through the quickest and in the most efficient manner. That may be fine, but could we do better?
By opening up the pool of participants, and not prescribing the methods, we can see what would interest or draw them in. We can also see what areas different people find interesting and of value. Take, for example, a writing application (MS Word, Google Docs, OpenOffice Writer, or some other WYSIWYG text editor). If we were to hand out a document with specific test cases and have ten testers run those tests, it's likely that the ten testers would cover the same ground, perform the same tests, and discover the same or similar issues. Some testers would be faster than others, and some testers might be more comprehensive in their reporting, but generally, the results would not vary greatly.
By contrast, give some very light guidelines, and let the testers approach the problem as they see fit. Rather than a bunch of prescriptive steps, say that you want to see what they find in a specific area, like the text formatting options in the toolbar. The important thing is that they need to be able to see the elements, use the elements, and then print to paper or a PDF for comparison. That's it. No more "official guidance". Give each tester fifteen minutes, and then review what they covered.
In this case, you will be far more likely to get interesting results, in ways you never anticipated. Why? Because we have now freed the testers to take the avenues that they find interesting.
- One tester might apply all of the options in order.
- Another tester might focus on how color and formatting options work together.
- Because I do a lot of "example" documents, I like to mix a bunch of different fonts together and see how they display.
- Another tester might be enamored with newspaper and magazine formatting, and as such, they may go nuts and experiment with a variety of column layouts, picture placements, captioning, etc.
- Another tester might emphasize configuration options, and see how many of them they could set as default values.
- Another tester might focus on keyboard mapping, customizing shortcuts so that every element in the editor can be called through the keyboard.
- Another tester might want to see just how many elements they can interpret based on the tool tips and float-over text tips.
These examples show that there is a lot of natural variety in each person's testing approach. It's likely that each review will surface some aspect that even experienced testers would not have considered. With a large test team, there is natural diversity. If allowed to be expressed, that diversity will bring to light things that one tester, no matter how skilled and well versed in the product, would be unlikely to uncover on their own. Don't get me wrong, it's possible that a single tester could be so brilliant and all-curious that they would cover everything a group of ten would cover independently and with wide latitude. It's possible, but in my own experience, those all-encompassing testers are extremely rare.
Each of us has a unique world view, and each of us expresses it in different ways. The things that naturally interest me might match the areas that interest you, but we will also probably have a great deal of variation. Those variations are important, and we need to encourage them. As testers, let's be open to learning from other testers, and from people who are not necessarily "testers" by profession. Like the parable of the blind men and the elephant, we are better informed when we get more people involved and leverage their own natural curiosity. The key is to allow that natural curiosity to come through, so guide, but do not dictate. Natural curiosity needs to be exactly that.