Tuesday, August 6, 2013

Always Use the Best Methodology Invented - Common Sense: 99 Ways Workshop #38

The Software Testing Club recently put out an eBook called "99 Things You Can Do to Become a Better Tester". Some of them are really general and vague. Some of them are remarkably specific.


My goal for the next few weeks is to take the "99 Things" book and see if I can put my own personal spin on each of them, and make a personal workshop out of each of the suggestions.


Suggestion #38: Always use the best methodology invented - Common Sense. Then choose your testing methodology... - Gil Bloom


There's an aphorism that says "common sense is not so common", but the fact is, our instincts and our experiences are what lead us to develop what is commonly referred to as "common sense". Our actions and experiences define our view of the world. In this light, we are the best proxy for our "customer" that our customers could hope for.

How we test will be determined by what we are testing, who we are testing for, what that "who" hopes to do with the product, and what makes sense to focus on under the circumstances.



Workshop #38: Build your common sense muscles by performing "context deadlifts"


I'm being a little silly with this suggestion… but just a little. The truth is, common sense means having the ability to size up a situation and, based on what the situation requires, determine a course of action that is "sensible".  

The fact is, we are not always rational beings. We can have great common sense in one area and lousy common sense in another. Generally speaking, though, the contexts in which we apply our thoughts, and the ways we reach conclusions, help us develop better common sense and help us plan what we want to do for a particular context. Therefore, if we want to strengthen our common sense, we need to regularly consider our contexts and practice taking on more challenging scenarios. Like deadlifts: we start light and progressively develop the ability to lift more and more over time.

Think of going for a drive. The posted speed is 55 miles per hour. Imagine the following scenarios:

- on an open freeway with clear conditions (sunny, no clouds)
- on a winding two lane road in the mountains (raining)
- during a blizzard driving on city streets

We could, conceivably, drive at the posted speed under all three conditions, but would you?

Common sense tells you that it would be foolish to drive 55 MPH in the snow, or on the winding mountain road in the rain. On the winding road, there's a high likelihood that we could skid out and crash. In the blizzard, we don't have the ability to see what's ahead of us, plus stopping could be greatly impeded, with us sliding through intersections or skidding into other vehicles.

Do we need to experience these situations to know the potential? I'd guess not, because we've likely heard enough examples to convince us that neither the rainy mountain road nor a blizzard on city streets is the time to drive that fast. We are applying context and considering cause and effect, and that helps us make a sound decision.

Consider a camera designed for a variety of uses: controlling a camera mounted on a stalk for performing arthroscopic surgery, sitting atop a snowboarder's helmet, or serving as a web cam on top of a computer. Arthroscopic surgery cameras require pinpoint precision, and the ability to maneuver smoothly and cautiously is very important. A snowboard helmet camera needs to be rugged enough to handle weather conditions, keep from breaking upon impact, and offer reasonable performance and recording options in the cold, as well as an interface we can interact with on the move. The webcam example will not require pinpoint precision or extreme durability, but ease of configuration and interoperability with multiple systems would be much more important.


Bottom Line:



It's a bit of a joke to say "it depends" for every testing issue, especially in the context-driven community, but the fact is, there's a great deal of "it depends" when dealing with different contexts for the same or similar tools. When we focus on the context, we naturally see that our approach depends on what matters most to the customer. Looking at different contexts, and applying what is reasonable for each, will help us focus on what really matters. One size does not fit all, so practicing to see what genuinely fits will help us be more effective.
