
Monday, October 9, 2023

Learning, Upskilling, and Leading to Testing (Michael Larsen with Leandro Melendez at PNSQC)

You all may have noticed I have been quiet for a few hours. Part of it is that I was giving a talk on Accessibility (I will post a deeper dive into that later, but suffice it to say I shook things up a little, and I have a few fresh ideas to include in the future).

Also, I was busy chatting with our good friend Leandro Melendez (aka Señor Performo), and I figured it would be fun to share that here. I'm not 100% sure if this will appear for everyone or if you need a LinkedIn login. If you can't watch the video below, please let me know.

 

We had a wide-ranging conversation, much of it based on my recent experience as a testing trainer and how I got into that role (the simple answer is that a friend offered me an opportunity, and I jumped at it ;) ). That took us into the ways we learn, how we interact with that learning, and where we use various analogs in our lives, which in turn brought us to a learning duality I picked up from Ronald Gross's "Peak Learning" (Stringers vs. Groupers) and a little bit about how I got into testing in the first place.

It was a fun conversation to be part of, and I hope you enjoy listening to and watching it :).

Friday, April 21, 2023

The Dark Side of Test Automation: an #InflectraCON2023 Live Blog

 




Jan Jaap Cannegieter

Principal Consultant, Squerist


Jan starts this talk with an idea from Nicholas Carr's book "The Glass Cage": "the introduction of automation decreases the craftsmanship of the process that is automated". I've seen this myself a number of times. There's a typical strategy that I know all too well:

- explore an application or workflow
- figure out the repeatable paths and patterns
- run them multiple times, each time capturing a little more into scripts so I don't have to keep typing
- ultimately capture what I need to and make sure it passes (a rough sketch of what that can look like follows below)
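To make that last step concrete, here's a minimal sketch of what captured exploration often ends up looking like for me. The base URL, paths, and expected status codes here are hypothetical placeholders, not anything from Jan's talk or a real product:

```bash
#!/usr/bin/env bash
# check_login_flow.sh -- a hand-explored workflow, captured as a script.
# BASE_URL and the expected status codes are hypothetical placeholders.
set -euo pipefail

BASE_URL="${BASE_URL:-https://app.example.com}"

# Step 1: the login page should respond at all.
status=$(curl -s -o /dev/null -w '%{http_code}' "$BASE_URL/login")
if [ "$status" -ne 200 ]; then
  echo "FAIL: GET /login returned $status (expected 200)" >&2
  exit 1
fi

# Step 2: a bad password should be rejected, not accepted.
status=$(curl -s -o /dev/null -w '%{http_code}' \
  --data 'user=demo&password=wrong' "$BASE_URL/login")
if [ "$status" -ne 401 ]; then
  echo "FAIL: bad credentials returned $status (expected 401)" >&2
  exit 1
fi

echo "PASS: login flow checks"
```

Once something like that passes, it's exactly the kind of script that quietly gets wired into a build and never gets thought about again, which is the point of the next paragraph.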

The challenge is that, by the time I'm done with all of this, unless the test breaks, it will effectively run forever (or at least every time we do a build), and honestly, I stop thinking about it. The question I should be asking is, "If a test always passes, is it really telling us anything?" Of course, it tells me something if the test breaks. What it tells me varies: it may indicate that there's a problem, but it may also indicate a frailty in my test that I hadn't considered. Fix it, tweak it, make it pass again, and then... what?

I'm emphasizing this because Jan is. The fact that a test is automated doesn't tell us how good the testing is, just that we can run it over and over again; it gives us very little indication of the quality of the testing itself. Let me give an example from my own recent testing, which revolves around APIs. On one hand, I can find a variety of ways to handle GET and POST requests, but on the other, do I really know that what I am doing actually makes sense? I know I have a test, or a series of tests, but do I actually have tests that are worth running repeatedly?
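To illustrate the gap between "I have a test" and "I have a test worth running repeatedly," here's a hypothetical sketch; the API URL, endpoints, and JSON fields are made up for the example:

```bash
#!/usr/bin/env bash
# api_checks.sh -- hypothetical GET/POST checks against a made-up API.
set -euo pipefail

API="${API:-https://api.example.com/v1}"

# A shallow check: this "passes" as long as the service answers with a 2xx.
curl -sf "$API/widgets" > /dev/null
echo "GET /widgets is reachable"

# A slightly deeper check: POST a widget, then confirm the response actually
# echoes back the data we sent rather than just returning a happy status code.
response=$(curl -sf -X POST -H 'Content-Type: application/json' \
  --data '{"name":"test-widget","size":3}' "$API/widgets")

if ! printf '%s' "$response" | grep -q '"name":"test-widget"'; then
  echo "FAIL: created widget does not contain the name we sent" >&2
  exit 1
fi
echo "PASS: POST /widgets round-trips the payload"
```

Both checks are automated; only the second one tells me much when it passes.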

Automation does do something important, but it may not be the importance we really want: it makes test efforts visible. It's hard to quantify exploratory sessions in a way that is easy to understand; by comparison, it's easy to quantify the statement, "I automated twenty tests this week." Still, much of the time the energy I put into test automation saves me repetitive typing, and that part is great, but it doesn't specifically find bugs for me or uncover other paths that I hadn't considered.

There are five common misunderstandings when it comes to test automation:

- the wish to automate everything

I have been in this situation a number of times, and it typically becomes frustrating. More often than not, I find I'm spending more time futzing with tooling than actually learning about or understanding the product. There are certainly a variety of benefits that come with automation, but thinking the machines will make the testing more effective and more frequent often misses the mark.

- you can save money with test automation

Anyone who has ever spent money on cloud infrastructure or CI/CD pipelines realizes that having more automated testing often doesn't save money at all; it actually increases cycles and spending. Don't get me wrong, that may very well be valuable and helpful in the long run, but thinking that automation is ultimately going to save money is short-sighted, and in the short term it absolutely will not. At best, it will preserve your investment... which in many cases is the same thing as saving money, just not in raw dollar terms.

- automation makes testing more accessible

Again, automation makes testing more "visible" and "quantifiable," but I'd argue it's not really putting testing into more people's hands or making them more capable. It does allow whoever maintains the pipelines to wrap their head around the coverage that exists, but is it really adding to better testing? Subjective at best, though it is definitely a backstop that helps with regressions.

- every tester should learn how to program

I'd argue that every tester who takes a series of commands, saves them in a script, and then types one command instead of ten is programming. It's almost impossible not to be. Granted, your programming may be in the guise of the shell, but it is still programming; add variables and parameters and you are de facto programming (see the small sketch below). From there, stepping into an IDE takes a bit more learning, but it's not a radical step. In other words, it's not a matter of "Does every tester need to learn how to program?" We invariably will. To what level and at what depth is the broader question.
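As a tiny, hypothetical illustration of that point (the script name and log layout are made up): the moment a saved set of commands grows a parameter and a variable with a default, it's a program.

```bash
#!/usr/bin/env bash
# tail_errors.sh -- a saved set of commands that quietly became a program.
# Usage: ./tail_errors.sh <logfile> [pattern]
set -euo pipefail

logfile="$1"              # parameter: which log to inspect
pattern="${2:-ERROR}"     # variable with a default: what to look for

count=$(grep -c "$pattern" "$logfile" || true)
echo "$count occurrences of '$pattern' in $logfile"

# Show the last few matches so I don't have to scroll the whole log.
if [ "$count" -gt 0 ]; then
  grep "$pattern" "$logfile" | tail -n 5
fi
```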
 
- automation = tooling

I'm going to argue that this is both a "yes" and a "no." As I said previously, you can do a lot of test automation using nothing but a bash shell (and I have lots of scripts that prove this point). Still, how do scripts work? They call commands, pipe the output of one command into another, and then, based on what comes out, do one thing or do something else (a small example follows below). Is this classic test tooling as we are used to thinking about it? No. Is it test tooling? Well, yes. Still, I think if you were to present this to a traditional developer, they might raise an eyebrow if you described it as test tooling.
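Here's a small, hypothetical example of what I mean (the health endpoint and its JSON shape are invented for illustration): a few piped commands that decide pass or fail based on what flows through them.

```bash
#!/usr/bin/env bash
# Pipes as test tooling: count how many dependencies a (hypothetical) health
# endpoint reports as "down", and fail the check if there are any.
set -eu

down=$(curl -s https://app.example.com/health \
  | tr ',' '\n' \
  | grep -c '"status":"down"' || true)

if [ "${down:-0}" -gt 0 ]; then
  echo "FAIL: $down dependencies report as down" >&2
  exit 1
fi
echo "PASS: all dependencies report as up"
```

No test framework in sight, but it is absolutely automation, and absolutely a judgment encoded in tooling.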

My rationale, and it seems Jan feels a similar way, is that we need to look at automated testing as more than just a technical problem. There are organizational concerns, perception issues, and communication issues. Having automation in place is not sufficient. We need a clear understanding of what the automation is providing, clarity on what we are actually testing, and a sense of how robust our testing really is and how much of it is tangibly capturable in an automated test. What does our testing actually cover? How much does it cover? What does running one test tell us versus ten tests? Are we really learning more with the ten tests, or is it just a number to show we have lots of tests?

The real answer comes down to, "Why are we testing in the first place?" We test to get information we can make judgment calls on, and ultimately automated tests have a limited ability to make judgment calls (if they can make them at all). People need to analyze and consider what is going on and whether it is actually worthwhile. Automation has its place, to be sure, and I wouldn't want my CI/CD environments running without it, but let's not confuse having a lot of tests with having good tests.


Tuesday, October 12, 2021

Orchestrating your Testing Process with @joelmonte (#PNSQC2021 Live Blog)

 


Joel Montvelisky

I've been struggling lately with the fact that each of our teams does things a little bit differently. There's nothing necessarily wrong with that, but it does create a challenge in that one tester on our team would probably struggle to be effective with another team. We have a broad variety of software offerings under one roof, and many of those products were acquired through, you guessed it, acquisitions (I mean, how else do you acquire something ;) ).

Point being, there are a variety of tools, initiatives, and needs in place for each team, mainly because each of our teams originated in a different place but also because each team did some work and adopted processes before they were picked up by the main company.

I'm sure I've explained this over the years, but Socialtext, the company I started working for in 2012, was acquired by PeopleFluent. PeopleFluent had acquired a host of other companies along the way, as well as having its own core product. A few years ago, PeopleFluent itself was acquired by Learning Technologies Group (LTG) in the UK. Additionally, as of the past year, I work with the specialty team in the middle that tries to make it possible for each of the teams to play nice with everyone else (i.e. the Transformations or Integrations team). The neat thing is that there are a variety of products and roles to work with. The biggest challenge is that there's no real lingua franca in the organization. Not for lack of trying ;). At the moment, we as a company are trying to see if we can standardize on a platform and set of languages. This is a process, and I predict it will take a while before it becomes company-wide and fully adopted, if it ever actually is (note to my company: that's not a dig or criticism, just my experience over thirty years of observing companies. I'm optimistic but realistic, too ;) ).


That's just the automation landscape. It doesn't include the variety of manual test areas we have (and there are a lot of them). Each organization champions the idea of 100% automated testing. I don't particularly, but then again, I also don't worry about it too much, because I don't believe there is such a destination to arrive at. There is always going to be a need for Exploratory Testing, and as such there will always be a need and a focus for manual testing.

What this ultimately means is that we will likely always have a disjointed testing environment. There will likely never be "one ring to rule them all," and because of that, we will have disparate and varied testing environments, testing processes, and testing results. How do we get a handle on seeing all of the testing? I'm not someone who has a particular need for that, but my manager certainly does. My Director certainly does. They need a global view of testing, and I don't envy their situation.

Whew, deep breath... that's why I'm here in this talk: to see how I might help get a better handle on all of the test data and efforts, and how we can get the best information in the most timely fashion.

Joel's talk is about "Orchestrating the Testing Process." For those not familiar with music notation and arrangement, orchestration is the process of getting multiple instruments to work together off of the same score (the score being the notation for each and every instrument, written so that everyone plays together when warranted and at the right time when called for). Testing fits into this picture as well.



So what do we need to do to get everyone on the same page? Well, first of all, we have to realize that we are not necessarily even trying to get everyone on the same page in the literal sense. The parts need to work together, and they need to be understood together, but ultimately the goal of orchestration is that everyone works in concert, not that everyone plays in unison or even in close harmony.

Orchestration implies a conductor. A conductor doesn't play every instrument; generally speaking, a conductor doesn't play any instrument at all. They know where and when the operations need to take place. This may be through regular status meetings, or it may be through pipeline development. It may also mean that refactoring tests is as important as creating them. Test reporting, and gathering/distilling that information, becomes critical for successful conducting/orchestration (a tiny sketch of what that gathering can look like follows below).
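As one small, hypothetical sketch of that gathering/distilling (assuming each team drops JUnit-style XML reports into a shared results/<team>/ directory, which is an invented convention, not something our teams actually do):

```bash
#!/usr/bin/env bash
# tally_results.sh -- roll up test counts from each team's JUnit-style XML reports.
set -eu

results_dir="${1:-./results}"   # hypothetical layout: results/<team>/*.xml

for team_dir in "$results_dir"/*/; do
  team=$(basename "$team_dir")
  # Sum the tests= and failures= attributes across every report the team produced.
  tests=$(grep -ho 'tests="[0-9]*"' "$team_dir"*.xml | tr -dc '0-9\n' \
    | awk '{ total += $1 } END { print total + 0 }')
  failures=$(grep -ho 'failures="[0-9]*"' "$team_dir"*.xml | tr -dc '0-9\n' \
    | awk '{ total += $1 } END { print total + 0 }')
  echo "$team: $tests tests, $failures failures"
done
```

It's crude, and it says nothing about how good any of those tests are, but it's the kind of distilled, at-a-glance view a conductor needs.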

Is there a clean and elegant solution for this? No, not really; it's a hands-on process that requires coordination to be effective. As a musician, I know full well that to write hits, we have to just write a lot of songs. Over time, we get a little bit better at writing what might hit and grab people's attention. Even if writing "hits" isn't our goal, writing songs ultimately is, and that means we need to practice writing songs. The same goes for complex test environments: if we want to orchestrate those efforts, we need to write our songs regularly and deliberately.

