Next up is Jenna Charlton with a realistic look at the rhetoric of Risk-Based Testing. As many of us are well aware, a lot of branding and promises surround a variety of terms, and many of the phrases we like to use have a certain comforting ring to them. Risk-Based Testing is one of them. Think about what it promises: if we identify the areas of greatest risk and test around those areas, we can deliver the best bang-for-the-buck quality, and we can do it so much faster because we are not testing every single thing.
Sounds great, right? However, what does this ultimately tell us? We have said we care about risk, but what actually is risky? We are only alert to the risks we have thought about. The biggest fear I have when I think about doing a risk assessment is that I have made risk assumptions based only on what I know and can anticipate. Is that really a good risk assessment? It's an okay and workable one. However, if I'm not able to consider or understand certain parameters or areas because they are blind spots to me, I cannot really do a great risk assessment. My risk assessment is incomplete at best and flying blind at worst.
One of the first things that can help ground us in these considerations is to start with a simple question: "What am I most afraid of?" Understand, as a tester, what I am most afraid of is missing something important. I'm afraid of having shallow coverage and understanding. That's not necessarily something a general risk assessment is going to focus on. How many of us have said, "I don't know enough about the ins and outs of this system to give a full risk assessment here"? I certainly have. So what can I do? Much of the time, it's a matter of bringing up my concerns about what I know or don't know and being up-front about them: "I have a concern about this module we are developing because I do not feel I fully understand it, and thus I have foggy spots here and here." Sound familiar? What is the net result? Do we actually get a better understanding of the components, leading to a leaner testing plan because now we know the items better? Or do we double up our coverage and focus so we can "be sure" we've addressed everything? Here's where risk assessment breaks down and we fall back into the "do more testing, just to be sure" approach.
Something else that often doesn't get addressed: what is a risk at one point in time doesn't stay a risk forever. As the organization matures and covers those areas, the risk in them actually goes down. Still, how many of us have continued focusing on the "riskiest areas" because tradition tells us they are the riskiest, even though we have combed through every aspect of them? If you have made tests for a risky area, you've run them for an extended period, and no problems have been found (the tests pass all the time), what does that tell us? It could tell us we have inadequate tests (a real risk, to be sure), or it could tell us that this area has been thoroughly examined, we've tested it vigorously, and we now have a system in place to query multiple areas. In short, this area has moved into a state where it might be risky if something blows up, but as long as it doesn't, the risk is actually quite low. Thus, we now have both the ability and the need to reassess and consider which risks are the current ones, not yesterday's.
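To make that reassessment concrete, here's a minimal sketch of my own (not something from the talk) of how a team might flag long-quiet "risky" areas for review. The register fields, area names, and thresholds are all assumptions for illustration:

```python
from datetime import date, timedelta

# Hypothetical risk register: the areas, dates, and counts here are
# invented for the example, not a formal standard.
risk_register = [
    {"area": "billing", "last_failure": date(2021, 3, 1), "runs_since_failure": 412},
    {"area": "new reporting module", "last_failure": date(2024, 9, 20), "runs_since_failure": 3},
]

def due_for_reassessment(entry, min_clean_runs=100, min_quiet_days=180):
    """An area with a long, clean test history is a candidate for
    downgrading its risk rating, not for more of the same testing."""
    quiet_long_enough = date.today() - entry["last_failure"] > timedelta(days=min_quiet_days)
    return entry["runs_since_failure"] >= min_clean_runs and quiet_long_enough

for entry in risk_register:
    if due_for_reassessment(entry):
        print(f"{entry['area']}: long clean history -- is this still today's risk?")
```

The specific thresholds don't matter; the point is that the register gets revisited on a schedule instead of being treated as fixed truth.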
We have to come to grips with the fact that we will never cover every possible test, and as such, we will never fully erase the risk. We will never get it perfect, either. Still, we often operate under the assumption that we will be blamed if something goes wrong or that we made bad assumptions, and of course, we fear the retribution if we get it wrong. Thus, it helps to see how we can mitigate those fears. If we can quantify the risk and define it, then we can look at it objectively, and with that, we can better consider how we will address what we have found. Are we afraid of an outcome (nebulous), or are we addressing the risks we can see (defined and focused)? To be clear, we may get it wrong, or we may make a mountain out of a molehill. Over time, we might get better at that. Our goal is to deal with the molehills effectively while not missing the entire mountain.
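As a sketch of what "quantify and define" might look like in practice (again, my own illustration; the scales and numbers are assumptions), classic likelihood-times-impact scoring turns a nebulous fear into a ranked list we can actually act on:

```python
def risk_score(likelihood: int, impact: int) -> int:
    """Classic likelihood-times-impact scoring, each on a 1-5 scale."""
    return likelihood * impact

# A nebulous fear gives us nothing to act on...
fear = "Something will go wrong and I'll be blamed."
print(f"Nebulous: {fear!r} -- no score, no plan.")

# ...versus defined risks we can compare and address (invented examples).
defined_risks = [
    {"risk": "Checkout times out under peak load", "likelihood": 3, "impact": 5},
    {"risk": "Stale cache shows outdated prices", "likelihood": 2, "impact": 4},
    {"risk": "Typo in a settings tooltip", "likelihood": 4, "impact": 1},
]

# Sorted by score, the molehills separate themselves from the mountains.
for r in sorted(defined_risks, key=lambda r: risk_score(r["likelihood"], r["impact"]), reverse=True):
    print(f"{risk_score(r['likelihood'], r['impact']):>2}  {r['risk']}")
```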
Again, there's a chance we will miss things. There's a chance that something that matters to our organization will not get the scrutiny it deserves. Likewise, fear may keep us focusing on solidly functioning software over and over again because "it just pays to be safe," only for us to realize we are spending so much time on an older risk that isn't as relevant now. It's more art than science, but both are improved with practice and observation.