Monday, October 12, 2020

PNSQC 2020 Live Blog: “Rethinking Test Automation” with Paul Gerrard


I just realized that the last time I saw Paul in person was back in 2014 at Eurostar in Dublin, Ireland. I was really looking forward to seeing him again after so long but alas, I guess this will have to do.

It's interesting to see that the notion of being surprised by the troubles related to test automation has been with us since at least the nineties (and some could argue even longer; I remember dealing with issues and oddities back in the early 90s when I was first learning about Tcl/Tk and Expect). We still struggle with defining what test automation can do for us. Sure, it can automate our tests, but what does that really mean?




Tools are certainly evolving and look nothing like the tools we were using 30 years ago. Still, we are dealing with many of the same principles, and the scientific method has not changed. I share Paul's criticism that we are still debating what test automation does and what testers do. The issue isn't whether or not our tests work; it's whether the tests we perform actually gather data that can confirm or refute a hypothesis. As testers, we want to either confirm or refute the hypothesis. At the end of the day, that is what every relevant test needs to do. We can gather data, but does the data we gather give us meaningful information that tells us whether the software is working as expected? One could argue that assertions being true are passes... but are they? They prove we are seeing something we expect to see, but is that actually proving a hypothesis, or merely a small part of it? In short, we need people to look over the tests and their output to see if they are really doing what they should.
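To play with that idea a bit, here's a tiny, entirely made-up Python sketch of the gap between a green assertion and a confirmed hypothesis (the function and its defect are invented purely for illustration):

```python
# A minimal, hypothetical sketch: a passing assertion is only a sample,
# not a confirmed hypothesis. All names here are invented.

def sort_scores(scores):
    # Imagined implementation with a defect: it silently drops duplicates.
    return sorted(set(scores))

def test_sorting():
    # The assertion below is true, so the tool reports a pass...
    assert sort_scores([3, 1, 2]) == [1, 2, 3]
    # ...but the hypothesis "the function sorts any list" is not confirmed.
    # A human looking at sort_scores([3, 1, 3]) would notice the missing
    # duplicate; the single assertion above never asks that question.
```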

Paul suggests that we need to move away from scripted tests toward more model-based tests. OK, but what does that actually mean, and how do we actually do it? Paul makes the assertion that tools don't think; they support our thinking. What if we removed all of the logistics around testing? If stripped of our usual talismans, what would we do to actually test? Rather than stumble through my own verbiage, I'm stealing Paul's slide and posting it here:



The key here is that test automation misleads us: we think the tools are actually testing, and they are not. What they are doing is mapping out the steps we walk through and capturing/applying the data and results we get based on the actions we provide. The left side of the slide is exploration, the right is evaluation, and the middle is the testing, or rather the setting up so that we can test. Automation won't work if we don't have a clear understanding of what the system should do. Paul is emphasizing that the area we need to improve is not the execution of tests (our tools can do that quite adequately) but test design and test planning. In short, we need better and more robust models.
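Since Paul keeps coming back to models, here's a rough, hypothetical sketch of what "model-based" can look like in practice: describe the system as a small state model and generate test sequences from it, instead of hand-scripting each case (the states and actions below are invented):

```python
# A sketch, under invented assumptions, of deriving tests from a model.

MODEL = {
    # state: {action: next_state}
    "logged_out": {"login": "logged_in"},
    "logged_in":  {"logout": "logged_out", "open_cart": "cart"},
    "cart":       {"checkout": "logged_in", "logout": "logged_out"},
}

def walk(model, start, depth):
    """Generate action sequences of exactly `depth` steps from the model."""
    if depth == 0:
        yield []
        return
    for action, nxt in model.get(start, {}).items():
        for tail in walk(model, nxt, depth - 1):
            yield [action] + tail

if __name__ == "__main__":
    # Each generated path becomes a candidate test; the tester's effort
    # shifts from maintaining scripts to improving the model behind them.
    for path in walk(MODEL, "logged_out", 3):
        print(" -> ".join(path))
```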

The old notion of the human brain is that it is brilliant, random, and unpredictable, but slow and lazy. Machines are literal and unimaginative, but blindingly fast and able to do the same things over and over again. Combined, they make a formidable pair.

So what do we want the future to be for our tools? First of all, regression testing needs to look at impact analysis. How can we determine what our proposed changes might do? How can we stop being overly reliant on testing as an anti-regression measure? How can we meaningfully prove functionality? And how do we determine the optimal set of tests without guessing?
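To make the impact-analysis idea concrete for myself, here's a hedged little sketch, with a made-up coverage map and module names, of selecting only the tests touched by a change rather than rerunning everything:

```python
# A toy sketch of impact analysis: map changed modules to the tests that
# exercise them. The coverage map below is fabricated for illustration.

COVERAGE_MAP = {
    "billing.py": {"test_invoice_totals", "test_tax_rounding"},
    "auth.py":    {"test_login", "test_password_reset"},
    "reports.py": {"test_monthly_summary"},
}

def tests_impacted_by(changed_files):
    """Return the set of tests whose covered modules were touched."""
    impacted = set()
    for path in changed_files:
        impacted |= COVERAGE_MAP.get(path, set())
    return impacted

print(tests_impacted_by(["billing.py"]))
# -> {'test_invoice_totals', 'test_tax_rounding'} rather than the full suite
```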

Paul makes the case we need to understand the history of failures in our tests. Where can we identify patterns of changes? What are the best paths and data to help us locate failure-prone features? Manual testing will not be able to do this. Machine learning and AI will certainly get us closer to this goal.
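Here's a crude, fabricated example of what mining failure history could look like: rank features by how often tests touching them have failed, as a first rough signal of where risk clusters (the history records are invented):

```python
# A toy sketch of mining failure history to find failure-prone features.
from collections import Counter

HISTORY = [
    {"feature": "checkout", "failed": True},
    {"feature": "checkout", "failed": True},
    {"feature": "search",   "failed": False},
    {"feature": "login",    "failed": True},
    {"feature": "search",   "failed": False},
]

def failure_prone(history):
    """Count recorded failures per feature, most failure-prone first."""
    counts = Counter(r["feature"] for r in history if r["failed"])
    return counts.most_common()

print(failure_prone(HISTORY))
# -> [('checkout', 2), ('login', 1)]  -- a crude signal of where risk sits
```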

In short, we need to move from passive to active collaboration. We need to stop being the people at the end of the process. We need to be able and willing to provoke the requirements, and we also need to create better mental models so that we can better understand how to guide our efforts.

