Wednesday, April 11, 2018

Performance Test Analysis & Reporting - a 1 1/2 armed #LiveBlog from #STPCON Spring 2018

One of the aspects of performance testing that I find challenging is actually making sense of the performance issues that we face. It's one thing to run the tests. It's another to get the results and aggregate them. It's still another to coherently discuss what we are actually looking at and why it is relevant.

Mais Tawfik Ashkar makes the case that Performance Analysis is successful when people actually:

  • read the results
  • understand the findings
  • stay engaged, and most important of all,
  • understand the context in which these results are important

Also, what can we do with this information? What's next?

To be more effective, there are several things we need to consider when we are testing and reporting:

  • What is the objective? Why does this performance test matter?
  • What determines our Pass/Fail criteria? Are we clear on what it is?
  • Who is on the team I'm interacting with? Developers? BAs? Management? All of the above?
  • What level of reporting is needed? Does the reporting need to be different for a different audience? (generic answer: yes ;) )

What happens if we don't consider these? Any or all of the following:

  • Reports being disregarded/mistrusted
  • Misrepresentation of findings
  • Wrong assumptions
  • Confusion/Frustration of Stakeholders
  • Raising more questions than providing answers

Mais starts with an Analysis Methodology. Are my metrics meaningful? Tests pass or fail. Great. Why? Is the application functioning properly when under load/stress? How do I determine what "properly" actually means? What are the agreements we have with our customers? What are their expectations? Do we actually understand them, or do we just think we do?

By providing answers to each of these questions, we can ensure that our focus is in the right place and that we are able to confirm the "red flags" that we are seeing actually are red flags in the appropriate context.
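One way to keep those "red flags" grounded in agreed expectations is to encode the pass/fail criteria explicitly, so a failure is a defined threshold breach rather than a gut feeling. Here's a minimal sketch of that idea; the percentile names, threshold values, and sample data are all hypothetical, and the `percentile` and `evaluate` helpers are my own illustration, not anything from the talk.

```python
import math

def percentile(samples, pct):
    """Nearest-rank percentile of a list of response times (ms)."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

def evaluate(samples, criteria):
    """Check measured percentiles against agreed thresholds.

    criteria maps a label to (percentile, limit_ms); returns
    {label: (measured_ms, passed)} for each agreed threshold.
    """
    results = {}
    for name, (pct, limit_ms) in criteria.items():
        measured = percentile(samples, pct)
        results[name] = (measured, measured <= limit_ms)
    return results

# Hypothetical SLA agreed with the customer: p95 under 800 ms, p99 under 1500 ms.
criteria = {"p95": (95, 800), "p99": (99, 1500)}
response_times = [120, 250, 310, 480, 520, 640, 700, 760, 900, 1600]

for name, (measured, ok) in evaluate(response_times, criteria).items():
    print(f"{name}: {measured} ms -> {'PASS' if ok else 'FAIL'}")
```

The point isn't the arithmetic; it's that writing the thresholds down forces the conversation about what "properly under load" means before the report goes out, instead of after.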
