Tuesday, October 22, 2013

Understand Your Business and Customer Needs: 99 Ways Workshop #93

The Software Testing Club recently put out an eBook called "99 Things You Can Do to Become a Better Tester". Some of them are really general and vague. Some of them are remarkably specific.


My goal for the next few weeks is to take the "99 Things" book and see if I can put my own personal spin on each of them, and make a personal workshop out of each of the suggestions. 


Suggestion #93: Understand your business and customer needs, not just the requirements - Mike Hendry


I was so tempted to fold this into a combined post with the last one, because a lot of the suggestions would be the same. However, a funny thing happened on the way to throwing in the towel and saying "OK, this could use the same advice as the one I just wrote". I participated in a conversation on Twitter with another tester who was frustrated that "meaningless metrics" were getting their team nowhere. While they were still being asked for measurements, there was a chance that management might be open to another conversation. In the process of that conversation, we arrived at a piece of advice that I realized answered this suggestion pretty well. I love when things like that happen :).


Workshop #93: Sit down with your management/product team and have a frank discussion about risk. What are the real risks that your product faces? Get specific. Map out as many areas as you can think of. Identify key areas that would be embarrassing, debilitating, or even "life threatening" for the organization. Develop a testing strategy that focuses on one thing: mitigating those risks.


Most organizations couch testing in terms of coverage or bugs found. It's a metric, and it may even be a meaningful metric, but very often it's not. "Coverage" is vague. What do we mean? Are we talking about statement coverage? Branch coverage? Usually the term "test coverage" is used, and again, probe deeper and see if that term means what the speaker thinks it means. If someone asks whether "all possible test cases have been identified", we've got problems. At this point, it helps to explain and demonstrate that complete, exhaustive testing of all scenarios is impossible; the set of possible tests is effectively infinite.


In most of the places I have worked, "risk" has not been directly assessed, and there are valid reasons for this. True and honest risk assessments are hard to do. They are subjective. Risk to whom? Risk in what way? Under what circumstances would something we consider low risk become high risk?


Risk is not a sure thing. It's the probability that something could go wrong. The more something is used, the higher the probability that something will go wrong. Not all risks are weighted the same. Some risks are trivial and easily shrugged off (typos and cosmetic errors) because the "damage" is minor. Other risks are much more important, because the potential for damage is very high (an iFrame can open you up to cross-site scripting attacks). Risks are identifiable. They may or may not happen, but you have a handle on how they could happen.


Here's a process you can use. The next time you sit down for a story workshop (or whatever you call an initial exploration of a new feature idea and its implementation), take the time to ask the following questions (I've sketched one way to record the answers just after the list):


- What would be the downside if we didn't deliver this feature?
- What would be potential problems that would prevent us from implementing this feature?
- What other areas of the code will this feature interact with?
- In what capacity could we run into big trouble if something isn't configured or coded correctly?
- What are the performance implications? Could this new feature cause a big spike in requests?
- Is there a way that this new feature could be exploited, and cause damage to our product or customers?
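One way to keep the answers from evaporating after the meeting is to jot them into a lightweight risk register and give each item a rough likelihood and impact rating. Here is a minimal Python sketch of that idea; the 1-to-5 scales and the example entries are my own, purely for illustration, not part of any particular tool or process.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    likelihood: int  # 1 (rare) to 5 (almost certain)
    impact: int      # 1 (cosmetic) to 5 (embarrassing or debilitating)

    @property
    def score(self) -> int:
        # The "probability times damage" weighting discussed above.
        return self.likelihood * self.impact

# Answers from the story-workshop questions, captured as risk items.
risks = [
    Risk("Feature touches the billing module; a miscalculation is possible", 2, 5),
    Risk("New endpoint could spike request volume at month-end", 3, 4),
    Risk("Typo in the confirmation email copy", 4, 1),
]

# Highest-scoring risks first, so they get attention first.
for risk in sorted(risks, key=lambda r: r.score, reverse=True):
    print(f"{risk.score:>2}  {risk.description}")
```

Even a simple list like this makes the conversation concrete: everyone can see which items bubbled to the top and argue about the ratings instead of arguing in the abstract.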


Yes, I see some of you yawning out there. This is a blinding flash of the obvious, right? We all do this already when we design tests. I used to think the same thing… until I realized how much we were missing at the end of a project, when we opened it up to a much larger pool of participants. Then we saw issues we hadn't really considered become big time bombs. We all started working on interrupt tasks to fix and close the loop on areas that were now bigger issues. It's not that we hadn't tested (we had, and we did lots of testing), but we had placed too much of our focus on areas that were lower risk, and not enough on areas that were higher risk. Our test strategy had been feature based instead of risk based.


A risk analysis should consider a variety of areas, such as:


- defects in the features themselves and customer reaction to those defects
- performance of the feature under high load and many concurrent users
- overall usability of the feature and the user experience
- the difficulty of making changes or adapting the feature based on feedback
- the potential for the feature to leak information that a hacker could exploit


Each of these areas is a risk, but they do not all carry the same weight. At any given time, these risks can change in weight based on the feature, platform, and audience. Security may be a middle-weighted issue for an internal-only app, but much more heavily weighted for a public-facing one. Performance is always a risk, but have we considered peak times and volumes (healthcare.gov being a perfect recent example)?


Additionally, identify features that are critical to the success of the project, their visibility to users, their frequency of use, and whether there are multiple ways to accomplish a task or only one. The more an audience depends on a feature, the greater the risk, and the more focused the testing needs to be. Likewise, if the item in question is infrequently used, is not visible to a large audience, and there are other avenues or workarounds, then the risk is lower and the need for voluminous tests lessened.


Ideally, we would spend most of our time in areas that are high risk, and very little time in areas with little to no risk. Making that small shift in thinking can radically alter the landscape of your test cases, test coverage and focus.
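To make that shift concrete, here is one possible way to split a fixed testing budget in proportion to risk scores like the ones in the earlier sketch. The area names, scores, and 40-hour budget are invented for illustration; the point is only that effort follows risk rather than feature count.

```python
# Hypothetical: split a fixed testing budget (in hours) in proportion to
# each area's risk score, so the highest-risk areas get most of the time.
def allocate_hours(scored_areas: dict[str, int], total_hours: float) -> dict[str, float]:
    total_score = sum(scored_areas.values()) or 1  # avoid dividing by zero
    return {area: total_hours * score / total_score
            for area, score in scored_areas.items()}

# Scores carried over from a risk register like the one sketched earlier.
areas = {"billing integration": 10, "month-end load": 12, "email copy": 4}

for area, hours in allocate_hours(areas, total_hours=40).items():
    print(f"{area}: {hours:.1f} hours")
```

Nobody plans test time to the decimal point, of course; the exercise is useful because it forces the question "why are we spending so much here and so little there?"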


We would also like to guard against another insidious factor that can change this equation and balance, and that's time pressure. If we have to compress a schedule, we radically alter the risk profile. Issues that are relatively low risk when there is enough time to test become much higher risk when there is time pressure and a "death march" mentality to getting the product out the door.


Bottom Line:

Everyone's risks are going to be different. Every organization has different fears that keep it up at night. Making a checklist to consider every potential risk would be pointless, but a framework that lets us examine the risks most relevant to our product owners, company, and customers will help us set priorities that matter and place our efforts where they will have the most potential impact.


Again, risk isn't black and white. It might happen. We might do something that could cause us damage down the road. We might be performing lots of "cover our butt" tests for failures that really have a very low likelihood of occurring in the wild, while missing important areas that have a much higher chance of occurring. Shift the conversation. Move away from "how many tests have we performed?" and toward "how many risks have we mitigated, and are we focused on mitigating the right risks?"

