Wednesday, April 3, 2019

What not to Test: an #STPCon Live Blog Entry


“Help, I’m Drowning in 2 Week Sprints, Please Tell Me What not to Test”

This talk title speaks to me at such a profound level. Be warned, I may veer into tangent territory here. There are what I call "swells" that come with sprint madness and sprint fatigue. It's never constant; it's like a set of waves that you try to time. For those familiar with surfing, this likely makes sense. For those not familiar, waves tend to group together, and swells grow and shrink. These series of increasing and decreasing waves are referred to as "sets," and the goal is to time the set that feels good to you. Too early and you don't catch a wave. Too late and the wave wipes you out. In between are the rideable waves to drop in on.

Sprints are the software metaphor that goes with "timing the waves," with one problem: the timing of sprints is constant, while wave sets are not. Likewise, figuring out what matters for a sprint may take more or less time in any given sprint. Tasks like backlog grooming, story workshops, and sprint planning all come down to making sure that we understand what matters and what's actually available to us.

Risk-based testing is the idea that we focus our attention on the areas that present the most potential danger and work to mitigate that danger. We all know (or should know) that we can't get to everything, so we need to focus on the areas that really matter.
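To make that concrete, here's a minimal sketch of one common way to force the prioritization conversation: score each area by likelihood of failure and impact of failure, then test from the top of the list down. The feature names and numbers below are invented, purely for illustration; a real risk model would be richer.

```python
# Hypothetical sketch: rank test areas by a simple risk score
# (likelihood of failure x impact if it fails), each on a 1-5 scale.
# These areas and scores are made up for illustration.
areas = [
    {"name": "checkout",      "likelihood": 4, "impact": 5},
    {"name": "user settings", "likelihood": 2, "impact": 2},
    {"name": "report export", "likelihood": 3, "impact": 4},
    {"name": "login",         "likelihood": 2, "impact": 5},
]

for area in areas:
    area["risk"] = area["likelihood"] * area["impact"]

# Test the highest-risk areas first; defer the bottom of the list
# when the sprint clock runs out.
for area in sorted(areas, key=lambda a: a["risk"], reverse=True):
    print(f"{area['name']:15} risk={area['risk']}")
```

Even a crude score like this gives the team something to argue about, which beats silently testing whatever happens to be in front of you.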

Mary recommends that we place emphasis on testing ideas. Testing ideas should go beyond the acceptance criteria. We can easily be swayed to think that focusing on the acceptance criteria is the best use of our time, but often we discover that, with a little additional looking, we can find a variety of problems that acceptance criteria alone won't cover. We also need to be aware that we can range far afield, perhaps too far afield, if we are not mindful. Test ideas are helpful, but don't just play "what if" without asking the most basic question: "which area would be the riskiest if we didn't address it?"

An area where I find this happens (tangent time) is that I will be testing something and find that we have to deal with an issue related to our product but that has nothing to do with the stories in play. I am the owner of our CI/CD pipeline (note: that doesn't mean I'm the expert, just that I own it and am the one responsible for it working properly). If something happens to our CI/CD pipeline, who do you think is the first person to spring into firefighting mode? Are you guessing me? Congratulations! In a sprint, I don't have the luxury of saying "oh, sorry, I can't deal with pipeline issues, I have to finish testing these stories." Therefore, any time I have an issue such as a pipeline problem that needs to be addressed, I immediately put a spike into the sprint. I do my best to estimate how much time it will take and whether I can handle it myself (often the case) or need to pull in development or ops resources (also often the case). What happens over time is that we get a clearer picture not just of actual testing focus but also of the legitimate interruptions that are real and necessary to deal with. In a sprint, there is a finite amount of time and attention any of us can spend. Time and attention spent on one area means it is not spent elsewhere, and no, saying you'll stay up later to cover it is robbing your future self of effectiveness. If you are doing that, STOP IT!!!

Performing a test gap analysis is also helpful. In a perfect world, we have test cases, they've been defined, and we have enough information to create automated tests around them as the functionality comes together. Reality often scuttles that ideal condition, or at least leaves us coming up short. What we often discover is a range of technical debt. Some areas may be well covered and easily documented with test cases and automated tests. Other areas may prove stubborn to this goal (it may be as simple as "this is an area where we need to spend some time to determine overall testability").
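As a trivial sketch of the mechanics, a gap analysis can start as little more than a set difference between the cases we've defined and the ones automation actually covers. The test case IDs here are made up:

```python
# Hypothetical gap analysis: compare what we said we'd test against
# what the automation actually exercises. IDs are invented.
defined_cases = {"TC-101", "TC-102", "TC-103", "TC-104", "TC-105"}
automated     = {"TC-101", "TC-103"}

gaps    = defined_cases - automated   # defined but not yet automated
orphans = automated - defined_cases  # automated but undocumented

print("Not yet automated:", sorted(gaps))
print("Automation with no documented case:", sorted(orphans))
```

The interesting conversations start once the lists exist: some gaps are genuine debt, while others point at areas where testability itself is the real problem.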

The Pareto Principle is a rule of thumb; it's not absolute. Still, the old adage that twenty percent of something is going to give you eighty percent of the outcomes is remarkably resilient. That's why it's a rule of thumb in the first place.

Twenty percent of test ideas can help you find eighty percent of the issues.
Twenty percent of the application features will be used by eighty percent of the customers.

What does this mean? It means you need to get a read on what's actually being used. Analytics, and an understanding of them, are essential. More to the point, using analytics on your test systems matters, not just the production numbers. One thing that was driven home to me some time back is that analytics need to be examined and their configurations experimented with. Otherwise, yes, you can have analytics in place, but do you actually know if they are turned on in the right places? How would you know?

One more interesting avenue to consider: you cannot test everything, but you can come up with some interesting combinations. This is where the idea of all-pairs or pairwise testing comes into play. Testers may be familiar with the all-pairs terminology. It's basically an orthogonal array approach: you take a full matrix and, from that matrix, look at the unique pairs that can be created (some feature paired with some platform, as an example). By covering each unique pair, you can trim down a lot of the tests necessary. It's not perfect, and don't use it blindly. Some tests will require being run on every supported platform, and not doing so would be irresponsible. Still, prudent use of pairwise testing can be a huge help.
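To show the trimming in action, here's a small, self-contained sketch of a greedy pairwise pick. The parameters and values are invented, and dedicated tools (allpairspy in Python, for instance) do this more cleverly, but the core idea is just this:

```python
from itertools import combinations, product

# A greedy pairwise sketch (not an optimized orthogonal array).
# Parameter names and values are invented for illustration.
parameters = {
    "browser":  ["chrome", "firefox", "safari"],
    "platform": ["windows", "macos", "linux"],
    "locale":   ["en", "de"],
}
names = list(parameters)

# Every unique (parameter, value) pairing we need to see at least once.
needed = {
    ((n1, v1), (n2, v2))
    for n1, n2 in combinations(names, 2)
    for v1 in parameters[n1]
    for v2 in parameters[n2]
}

full_matrix = list(product(*parameters.values()))

# Greedily keep only full-matrix rows that cover a still-uncovered pair.
suite = []
for row in full_matrix:
    settings = list(zip(names, row))
    covered = set(combinations(settings, 2)) & needed
    if covered:
        suite.append(dict(settings))
        needed -= covered

print(f"{len(suite)} tests instead of {len(full_matrix)} full-matrix runs")
for test in suite:
    print(test)
```

Every unique pair still gets exercised at least once; what you give up is seeing every *triple* together, which is exactly the trade-off to weigh before using it on anything safety- or platform-critical.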



