Friday, April 21, 2023

The Dark Side of Test Automation: an #InflectraCON2023 Live Blog

 




Jan Jaap Cannegieter

Principal Consultant, Squerist


Jan starts out this talk with the idea from Nicholas Carr's book "The Glass Cage" that "the introduction of automation decreases the craftsmanship of the process that is automated". I've seen this myself a number of times. There's a typical strategy that I know all too well:

- explore an application or workflow
- figure out the repeatable paths and patterns
- run them multiple times, each time capturing a little more into scripts so I don't have to keep typing (see the sketch after this list)
- ultimately capture what I need to and make sure it passes.
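
To make that concrete, here's a minimal sketch of what that capture step tends to look like for me. The helper scripts, file names, and expected row count are all hypothetical placeholders; the only point is the shape of turning hand-typed steps into something repeatable.

```bash
#!/usr/bin/env bash
# capture_flow.sh -- the steps I kept typing by hand, saved as one script.
# Everything here (helper scripts, file names, expected count) is a made-up
# placeholder, not an actual project.
set -euo pipefail

# Step 1: reset to the known state I started from each time I explored.
./reset_test_data.sh

# Step 2: the repeatable path I found while exploring.
./import_orders.sh sample_orders.csv

# Step 3: the check I used to eyeball manually, now asserted.
actual=$(./report_orders.sh --format csv | wc -l)
expected=42

if [ "$actual" -eq "$expected" ]; then
  echo "PASS: report contains $expected rows"
else
  echo "FAIL: expected $expected rows, got $actual" >&2
  exit 1
fi
```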

The challenge is that, by the time I'm done with all this, that test will effectively run forever (or at least every time we do a build) and, unless it breaks, I honestly don't think about it any longer. The question I should be asking is, "If a test always passes, is it really telling us anything?" Of course, it tells me something if the test breaks. What it tells me varies: it may indicate a real problem, but it may also indicate a frailty in my test that I hadn't considered. Fix it, tweak it, make it pass again, and then... what?

I'm emphasizing this because Jan is. The fact that a test is automated doesn't tell us how good the testing is, only that we can run it over and over again; it gives us very little indication of the quality of the testing itself. Let me give an example from my own recent testing, which revolves around APIs. On one hand, I am able to find a variety of ways to handle GET and POST requests, but on the other, do I really know that what I am doing actually makes sense? I know I have a test or a series of tests, but do I actually have tests that are worth running repeatedly?
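
For instance, a bare-bones version of the kind of GET/POST check I'm describing might look like the sketch below. The endpoint and payload are invented for illustration, and that's exactly the point: the script can stay green forever without telling me whether these are the checks worth having.

```bash
#!/usr/bin/env bash
# api_smoke.sh -- a minimal GET/POST check. The endpoint and payload are
# hypothetical; only the shape of the test matters here.
set -euo pipefail

API="${API:-http://localhost:3000/api/widgets}"

# POST: create a record and expect a 201 back.
create_status=$(curl -s -o /dev/null -w '%{http_code}' \
  -X POST "$API" \
  -H 'Content-Type: application/json' \
  -d '{"name": "test-widget", "size": 3}')

if [ "$create_status" -ne 201 ]; then
  echo "FAIL: POST returned $create_status" >&2
  exit 1
fi

# GET: list records and expect the one we just created to show up.
if curl -s "$API" | grep -q '"test-widget"'; then
  echo "PASS: created widget appears in GET response"
else
  echo "FAIL: created widget not found in GET response" >&2
  exit 1
fi
```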

I appreciate that automation does something important, but it may not be the importance we really want. Automation makes test efforts visible. It's hard to quantify exploratory sessions in a way that is easy to understand; by comparison, it's easy to quantify the statement, "I automated twenty tests this week". Still, much of the time the energy I put into test automation saves me repetitive typing, which is great, but it doesn't find bugs for me or uncover paths I hadn't considered.

There are five common misunderstandings when it comes to test automation:

- the wish to automate everything

I have been in this situation a number of times and it typically becomes frustrating. More often than not, I find that I'm spending more time futzing with tooling than actually learning about or understanding the product. There are certainly benefits that come with automation, but thinking the machines will make the testing more effective and more frequent often misses the mark.

- you can save money with test automation

Anyone who has ever spent money on cloud infrastructure or on CI/CD pipelines realizes that more automated testing often doesn't save money at all; it actually increases cycles and spending. Don't get me wrong, that may very well be valuable and helpful in the long run, but expecting automation to ultimately save money is short-sighted, and in the short term it absolutely will not. At best, it will preserve your investment... which in many cases is the same thing as saving money, just not in raw dollar terms.

- automation makes testing more accessible

Again, automation makes testing more "visible" and "quantifiable", but I'd argue that it's not really putting testing into more people's hands or making them more capable. It does let whoever maintains the pipelines wrap their head around the coverage that exists, but is it really adding up to better testing? Subjective at best, though it's definitely a backstop to help catch regressions.

- every tester should learn how to program

I'd argue that every tester who ever takes a series of commands, saves them in a script, and then types one command instead of ten is programming. It's almost impossible not to. Granted, your programming may be in the guise of the shell, but it is still programming. Add variables and parameters and you are de facto programming. From there, stepping into an IDE involves a bit more learning, but it's not a radical step. In other words, it's not a matter of "Does every tester need to learn how to program?" We invariably will. To what level and at what depth is the broader question.
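
For what it's worth, here's the sort of thing I mean: a hypothetical smoke script where the host and retry count are parameters. The names are made up, but the moment you add a default value and a loop, you're programming.

```bash
#!/usr/bin/env bash
# smoke.sh -- ten hand-typed commands collapsed into one. The host name is a
# made-up example; the parameters and the loop are the point.
set -euo pipefail

# Parameters with defaults: these are already variables, already logic.
HOST="${1:-staging.example.com}"
RETRIES="${2:-3}"

for attempt in $(seq 1 "$RETRIES"); do
  if ping -c 1 "$HOST" > /dev/null 2>&1; then
    echo "PASS: $HOST reachable on attempt $attempt"
    exit 0
  fi
  sleep 2
done

echo "FAIL: $HOST unreachable after $RETRIES attempts" >&2
exit 1
```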
 
- automation = tooling

I'm going to argue that this is both a "yes" and a "no". As I said previously, you can do a lot of test automation using nothing but a bash shell (and I have lots of scripts that prove this point). Still, how do scripts work? They work by running commands, piping their output into other commands, and then, based on what comes back, doing one thing or another. Is this classic test tooling as we are used to thinking about it? No. Is it test tooling? Well, yes. Still, I think if you were to present this to a traditional developer, they would maybe raise an eyebrow if you explained it as test tooling.
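
A tiny, entirely made-up example of the pipe-and-branch pattern I'm describing:

```bash
#!/usr/bin/env bash
# log_check.sh -- pipe one command's output into another and branch on what
# comes back. The log path and error pattern are illustrative placeholders.
set -euo pipefail

LOG="${1:-/var/log/myapp/app.log}"

# Pipe: pull today's entries, then count the ones marked ERROR.
errors=$(grep "$(date +%F)" "$LOG" | grep -c 'ERROR' || true)

# Branch on what came down the pipe.
if [ "$errors" -gt 0 ]; then
  echo "FAIL: $errors error(s) logged today" >&2
  exit 1
else
  echo "PASS: no errors logged today"
fi
```

Nothing a traditional tool vendor would recognize as a test framework, but it can gate a build just fine.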

My rationale, and it seems Jan feels a similar way, is that we need to look at automated testing as more than just a technical problem. There are organizational concerns, perception issues, and communication issues. Having automation in place is not sufficient. We need a clear understanding of what the automation is providing. We need clarity on what we are actually testing. We need to understand how robust our testing actually is and how much of it is tangibly capturable in an automated test. What does our testing actually cover? How much does it cover? What does running one test tell us versus what ten tests tell us? Are we really learning more from the ten tests, or is it just a number to show we have lots of tests?

The real answer comes down to, "Why are we testing in the first place?" We test in the hope of getting information we can make judgment calls on, and ultimately, automated tests have a limited ability to make judgment calls (if they can make them at all). People need to analyze and consider what is going on and whether it is actually worthwhile. Automation has its place, to be sure, and I wouldn't want my CI/CD environments running without it, but let's not confuse having a lot of tests with having good tests.

