If you have ever been to any of Jenny Bramble's talks, you know what I mean by saying that. I always look forward to seeing Jenny do these presentations. I miss seeing her in person and hope to remedy that after the current unpleasantness is more under control.
Seriously, if you've never seen Jenny give a talk in person, she is very engaging and fun, a little bit irreverent, and always thoughtful.
The talk this go-around is about the "Soft Skills of Automation"... wait, what? How does automation have soft skills? Perhaps this is better stated as developing a mindset to deal with and talk about automation. While there are lots of coding frameworks, what is our personal mental framework? Part of the process is trying to look at how we would make something work to be automated in the first place. My approach has often been a bit of brute force:
1. Start with processes that I need to perform repeatedly.
2. Identify the areas within those processes that can actually be repeated.
3. Once those areas are mapped out, think about the variable items I need to deal with.
4. With a decent idea of what I need to run, figure out how to actually put it into place.
I have often joked that I am a good scientist and a good tactical individual, but I'm not really lazy enough to be a good automation engineer. I feel like I always have to go through these four steps, and I have to reach the level of "sick and tired" before I actually get something working. Does that mean my method is bad? Not necessarily, but it does mean that I pretty much have to get to a state of "fed up" before I really strive to change what I am doing. However, once I have a sequence down, then I will refine it all day long :).
Key to Jenny's talk is the fact that automation is not all or nothing. There are variations and a spectrum as to where it is appropriate/not appropriate and necessary/not as necessary. If we are doing CI/CD, it's essential for all steps. In the Exploratory phase, it's less so.
We are moving more towards machine-assisted testing, and I think that's a better starting point for talking about automation. Automation can be seen as a complex set of algorithms but, as I am fond of saying, if you find there are multiple commands you run together, then putting them in a file and running them with one command rather than five or ten is absolutely effective and usable automation.
Jenny brings up a neat idea called the Code Awareness Scale. How much knowledge of code do we need to have? The truth is, it's a sliding scale. In the areas within my immediate sphere of influence, I know quite a bit. However, when it comes to peripheral areas or apps and how they interact, I don't really know what's happening, nor do I really need to.
The more comfortable we are with the code, the better prepared we are to test the code. This is certainly true in my world. A lot of the things that I work with are specific to looking at how data comes in and how data goes out. Again, as I currently work with data transformations, the mechanical process of delivering files, setting up parameters, and verifying that the processes are run is relatively easy to automate. Processing and comparing files to confirm that what we started with matches what we ended up with for its destination usage... that's a little more daunting, and the area I'm most concerned about improving.
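That before/after comparison can start small. Here's a sketch under invented assumptions (the field names and CSV layouts are made up for illustration): load the "before" and "after" files into id-to-value maps, then report anything that changed or went missing in the transformation.

```python
import csv
import io

# Hypothetical before/after data: the source file as extracted, and the
# transformed file as delivered. Field names are invented for illustration.
before_csv = "id,amount\n1,10.00\n2,25.50\n"
after_csv = "record_id,total\n1,10.00\n2,25.50\n"

def load(text, id_field, value_field):
    """Map each record's id to the value we care about comparing."""
    rows = csv.DictReader(io.StringIO(text))
    return {r[id_field]: r[value_field] for r in rows}

def compare(before, after):
    """Return ids whose values changed, plus ids missing on either side."""
    changed = {k for k in before.keys() & after.keys() if before[k] != after[k]}
    missing = before.keys() ^ after.keys()
    return changed, missing

before = load(before_csv, "id", "amount")
after = load(after_csv, "record_id", "total")
changed, missing = compare(before, after)
print(changed, missing)  # both empty here: the transformation preserved the data
```

A real transformation will rename fields, reshape records, and apply business rules, so the `load` step is where most of the actual work (and thinking) ends up.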
One of the cool things that we can do, and that I talk about in my Testability talk, is getting familiar with log files and "watching them" in real time. By watching them and seeing what comes across, we get used to seeing the patterns, as well as what causes error conditions to surface. One of the neat skills I learned some years ago (though I still have a ways to go to really make it automated) is to find wherever we have created an error control condition and see what it will take to make it surface. Logs can help you see that. So can unit tests.
So let's say someone is already code-aware. How can we work on other areas? If we are focused on White-Box testing, what would it take to focus on true Exploratory testing sessions? How about getting involved with Usability or Observability? Accessibility or Responsive Design? All of these can help us look at software in a way that is less code-dependent. In short, get into the humanity of the application in question.
Jenny highlights some principles of Automation we should consider:
- emphasize reliability, value, speed, and efficiency
- collaboration helps us determine what kinds of tests we need
- testability is a huge factor. DEMAND IT!!!
- everyone should be able to review and examine code or participate in code reviews
- automation code is production code... treat it as such!
It's important to think about what/why we want to automate. Do we want to recapture time? Do we want to be confident our releases are solid? Do we want to be sure our deployments will be successful?
Sometimes, you may find that there are areas and steps that are easier to automate than others. It's also possible that some steps will require breaking out of what you might normally do. Let's take my example of data transformation. If I have a piece of middleware that is doing the transformation steps, I may find myself spending a lot of time automating interactions with an application that isn't even the application I'm supposed to be testing. Sometimes, the best thing to do is to step back and see if there's another way to accomplish what I want to do. Does it make sense to interact with the middleware's UI if making REST API calls will accomplish the same task? If I need to be dealing with file comparisons, I don't necessarily care about the steps to document the transformations (don't get me wrong, at times I care a lot about that), but often all I want is the starting and ending files for comparison purposes. Thus, a lot of the in-between steps can be removed and I can focus on performing the necessary steps to actually get the before and after files.
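The "skip the UI, hit the API" idea might look like the sketch below. To be clear, the base URL and endpoint paths here are entirely hypothetical; a real middleware product will document its own REST API for retrieving a job's inputs and outputs.

```python
from urllib.parse import urljoin
from urllib.request import urlopen

# Hypothetical middleware API; substitute your product's documented endpoints.
BASE_URL = "https://middleware.example.com/api/"

def file_url(job_id, which):
    """Build the URL for a job's 'input' or 'output' file."""
    return urljoin(BASE_URL, f"jobs/{job_id}/files/{which}")

def fetch_file(job_id, which):
    """Download the file body directly, skipping the UI entirely."""
    with urlopen(file_url(job_id, which)) as resp:
        return resp.read()

# Usage sketch: grab the before/after pair and hand them to a comparison step.
# before = fetch_file("1234", "input")
# after = fetch_file("1234", "output")
```

Two direct downloads replace a whole recorded sequence of clicks through someone else's UI, and the result is exactly what the comparison step needed anyway: the two files.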
One of the key differences is to be able to identify the aspects of a test the system needs. Computers are very literal, so they need things like locators, labels, and the actions associated with them to be correct. This is why I tend to focus on the literal steps to see what is required to get where I need to. Some years ago I used the metaphor of automation less as a train track and more as a taxi route. Taxi route automation is a lot more involved than a standardized train track route, but there could be some neat things to discover if you can get there.
Often, what I will do is I will use something like Katalon and actually record my steps to get to a particular place and I will see if there are other ways to do those steps (literally creating a folder full of driving directions). Once I have those, I will run them to get me where I want to go, and then I get out and poke things manually or identify other areas I might be able to automate. Again, a lot of automation doesn't need to be as formal as specific tests. A lot of it could just be simple data population or state changes to get to interesting places.
A final point Jenny makes is "don't over-automate". Automating everything sounds good on the surface but, over time, too much automation can add unnecessary steps and time for little benefit. Don't automate everything; automate the right things. If you get interested in some automation for its own sake, that's okay, but perhaps only check it in if it adds a tangible benefit. A lot of my automation isn't really testing; it's set-up, data population, and state change. It helps me, but it doesn't necessarily help the flow of testing itself. Rather than automate all of the things at a user level, perhaps make background steps that set up an environment that will be effective to test, and then spin it up ready to go.