Showing posts with label automation.

Wednesday, October 16, 2024

The Test Automation Blueprint: A Case Study for Transforming Software Quality with Jeff Van Fleet (a PNSQC Live Blog)

Today, delivering high-quality software at speed isn't just a goal, it's a necessity. Whether your organization is a small Agile team or a huge corporation, creating a streamlined, efficient testing process can dramatically reduce costs and accelerate time to market. But how do you actually achieve that transformation? Jeff Van Fleet, President and CEO of Lighthouse Technologies, goes into depth with some practical tips and proven principles to guide organizations toward effective test automation.

One of the most important steps in transforming your organization’s approach to test automation is engaging your leadership team. Test automation initiatives often require significant investment in tools, training, and process changes—investments that can only happen with leadership support. Jeff highlights the importance of showing clear ROI by presenting leaders with real-time reporting dashboards that demonstrate how automation accelerates delivery and improves quality.

These dashboards provide visibility into the success of the test automation effort, making it easy for leadership to see the value in continuing to invest. Data-driven views and knowledge keep leadership engaged and committed to long-term quality improvement.

It's a big leap from manual testing to automation. I know, I've been there! Many manual testers may feel apprehensive about making that transition. However, Jeff emphasizes that with the right training and support, manual testers can successfully transition to automation and get fully involved in the new process. Lighthouse Technologies focuses on equipping testers with the tools, skills, and confidence to tackle automation.

We have to approach this training with empathy and patience. Many manual testers bring invaluable domain expertise, which, when combined with automation skills, can significantly enhance the quality of the testing process. Investing in your existing team, instead of sidelining them, can transform teams and build a strong, motivated automation workforce.

We've pushed the idea of shift-left testing for a while now. Many organizations are eager to adopt it, but few know how to implement it effectively. Moving testing earlier in the development cycle helps catch bugs before they snowball into more complex, costly issues.

By collaborating closely with developers to improve unit testing, teams can identify and address defects at the code level, long before they reach production. 

One of the challenges teams face is trying to implement automation while managing in-flight releases. Jeff offers practical strategies for balancing catch-up automation (automating legacy systems or current processes) with ongoing development work. His advice: start small, automate critical paths first, and build incrementally. This allows teams to gradually integrate automation without derailing existing release schedules.

Engaging with developers is another critical component of successful test automation. Often, there’s a disconnect between QA and development teams, but Lighthouse Technologies’ approach bridges that gap by partnering closely with developers throughout the testing process. By working together, developers and testers can create more effective test cases, improve unit test coverage, and ensure that automated tests are integrated seamlessly into the CI/CD pipeline.

For organizations looking to embrace test automation, the key takeaway is that it’s not just about tools—it’s about people, processes, and leadership. By following these principles, teams can accelerate their test automation efforts and create a culture of quality that drives both speed and innovation.

Tuesday, October 10, 2023

Automation, You're Doing It Wrong With Melissa Tondi (PNSQC)



This may feel a bit like déjà vu because Melissa has given a similar talk in other venues. The cool thing is I know each time she delivers the talk, it has some new avenues and ideas. So what will today have in store? Let's find out :).



What I like about Melissa's take is that she emphasizes what automation is NOT over what it is.

I like her opening phrase, "Test automation makes humans more efficient, not less essential" and I really appreciate that. Granted, I know a lot of people feel that test automation and its implementation is a less than enjoyable experience. Too often I feel we end up having to play a game of metrics over making any meaningful testing progress. I've also been part of what I call the "script factory" role where you learn how to write one test and then 95 out of 100 tests you write are going to be small variations on the theme of that test (login, navigate, find the element, confirm it exists, print out the message, tick the pass number, repeat). Could there be lots more than that and lots more creativity? Sure. Do we see that? Not often.

Is that automation's fault? No. Is it an issue with management and their desire to post favorable numbers? Oh yeah, definitely. In short, we are setting up a perverse expectation and reward system. When you gauge success in numbers, people will figure out the ways to meet that. Does it add any real value? Sadly, much of the time it does not.   

Another killer that I had the opportunity to work on and see change was the serial, monolithic suite of tests that takes a long time to run. I saw this happen at Socialtext, and one of the first big initiatives when I arrived there was the implementation of a Docker setup that would break our tests out into four groupings. Every test was randomized and shuffled to run on the four server gateways, and we would bring up as many nodes as necessary to run the batches of tests. By doing this, we were able to cut our linear test runs down from 24 hours to just one. That was a huge win, but it also helped us determine where we had tests that were not truly self-contained. It was interesting to see how tests were set up, and how many tests had been made larger specifically to allow us to do those examinations, but also to let us divvy up more tests than we would have been able to otherwise.
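
If you have never set up something like this, a rough sketch of the idea looks like the following. This is not the actual Socialtext tooling; the tests/ layout and the run_batch.sh runner are hypothetical placeholders, and the real work was in making each test self-contained.

# Shuffle the self-contained test files, split them round-robin into four
# groups, and run each group in parallel (each batch could target its own
# node or container).
ls tests/*.t | shuf | split -n r/4 - group_

for g in group_*; do
    ./run_batch.sh "$g" &
done
wait    # total wall time is now roughly the slowest batch, not the sum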

Melissa brought up the great specter of "automate everything". While, granted, this is impossible, it is still seen forlornly as "The Impossible Dream". More times than not, it becomes the process of taking all of the manual tests and putting them into code. Many of those tests will make sense, sure, but many of them will not. The amount of energy and effort necessary to cover all of the variations of certain tests becomes mind-numbing and, often, doesn't tell us anything interesting. Additionally, many of the tests created in this legacy manner are there to test legacy code. Often, that code doesn't have hooks that will help us with testing, so we have to do end runs to make things work. Often, the code is just resistant to testing or requires esoteric identification methods (and the more esoteric, the more likely it will fail on you someday).

I've also seen a lot of organizations that are looking for automated tests when they haven't done unit or integration tests at lower levels. This is something I realized having recently taught a student group learning C#. We went through the language basics and only later started talking about unit testing and frameworks. If I were to do it again, I would do my best to teach unit testing, even at a fundamental level, as soon as participants were creating classes that processed actions or returned a value beyond a print statement. Think about where we could be if every software developer was taught about and encouraged to use unit tests at the training wheels level!

Another suggestion that I find interesting and helpful is that a test that always passes is probably useless. Not because the test is necessarily working correctly and the code is genuinely good, but because we got lucky and/or we don't have anything challenging enough in our test to actually run the risk of failing. If it's the latter, then yes, the test is relatively worthless. How to remedy that? I encourage creating two tests wherever possible, one positive and one negative. Both should pass if coded accurately, but they approach the problem from opposite directions. If you want to be more aggressive, create more negative tests to really push and see if we are doing the right things. This is especially valuable if you have put time into error-handling code. The more error-handling code we have, the more negative tests we need to make sure our ducks are in a row.
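
As a loose illustration of the pairing in shell terms (the csv2json tool and the file names are made up; the structure is the point): the positive test passes when valid input is accepted, and the negative test passes only when bad input is refused.

# Hypothetical tool and files; the point is the positive/negative pairing.
test_valid_input_is_accepted() {
    ./csv2json --in good.csv --out out.json \
        || echo "FAIL: valid input was rejected"
}

test_malformed_input_is_rejected() {
    # this negative test only passes when the tool refuses the bad file
    if ./csv2json --in malformed.csv --out out.json ; then
        echo "FAIL: malformed input was accepted"
    fi
}

test_valid_input_is_accepted
test_malformed_input_is_rejected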

A final item Melissa mentions is the fact that we often rely on the experts too much. We should be planning for the option that the expert may not be there (and at some point, if they genuinely leave, they WON'T be there to take care of it). Code gets stale rapidly if knowledgeable people are lost. Take the time to include as many people as possible in the chain (within reason) so that everyone who wants to, and can, is able to check out builds, run them, test them, and deploy them.

Friday, May 6, 2022

Myths About Myths About Automation: An #InflectraCON Live Blog

First of all, thank you to everyone who came to my talk about "The Dos and Don'ts of Accessibility". Seriously, it's been a great feeling to know that a place that has been so pivotal in the lives and futures of deaf and hard of hearing individuals (Gallaudet University) is the setting for my talk. How cool is that :)? I'll sum up that talk at a later date but for right now, let's go to Paul Grizzaffi and talk about the "myths about the myths of automation" (and no that's not a typo, that's the literal title :) )

There are a lot of myths when it comes to automation, but are there now myths around the myths? According to Paul, yes, there are.

Is Record and Playback bad? No, not necessarily. It can in fact be a very useful tool in a stable environment where the front end doesn't change. It's not so good for systems under active development, especially if the front end is in flux.

Do you have to be a programmer to use automation tools? No, not necessarily but it will certainly help if you have some understanding of programming or have access to a programmer that you can work with. 

Does Automation Come from Test Cases? Not entirely. It can certainly provide value but it doesn't necessarily make sense to take all of your manual test cases and automate them. For a few valuable workflows, then yes, but if doing so will have you repeating yourself and adding time to repetitive steps, then it may not be the best use of your time. Your test cases can be helpful in this process and they can inform what you do but don't just automate everything for the sake of automating everything.

Does Automation Solve All Testing Problems? Come on (LOL!). Yeah, that was an easy one, but running a lot of tests quickly can seem like a high-value use of time when it may instead just be a lot of busywork that looks productive without producing much that is meaningful.

Will Automation Find all of your bugs? NO, 1,000 times NO!!! It can show you if a code change now renders an older test a failure, which you can then examine afterward. It can help you with more coverage because now you might be able to make a matrix that will cover a lot of options and run against an orthogonal array. That can be useful and provide a lot of test case coverage but that's not the same thing as finding all of the issues. 

Can we achieve 100% automation? Nope, at least not in the meaningful sense of 100% automation. You can certainly have a lot of workflows and matrices covered, and machines are much faster than humans. However, there will always be more workflows than you can automate. We're not there yet in regards to being able to automate 100% of the things. Even if we could, it will likely not be a good overall use of our time to automate all of the things. Automate the most important things? Sure.

Is There One Tool To Rule Them All? Absolutely not. Yes, shared code can be a benefit, and yes, buying many licenses can help unify a team or teams, but it's highly unlikely that a single tool is going to answer everything for everyone. That's not to say that there isn't value in a standard baseline. We use a number of libraries and functions that allow us to test across a variety of products, but no one tool covers everything.

Plain and simple, as in all things, context matters and no two teams are the same. Look at the myths you may be carrying and see how they measure up to the reality of your organization.

Thursday, November 4, 2021

Analytics Matter: What Are Your Users Really Doing? (an #OnlineTestConf 2021 Live Blog)

 



Let's have a bit of a metaphysical question... what do our customers want? Do we know? Do we really know? We want to say that we know what our customers want, but truly, how do we know? Are we asking them? Really asking them? If we are not looking at and trying to make sense of analytics, the truth is, no, we don't. We may know what we think they want or what's important to them. Analytics tell us what they really want and what they really do.


 


There are lots of neat tools that can help with this: Google Analytics, Adobe Analytics, and CoreMetrics, of course. I have experience with Pendo as well. Pendo is interesting in that it flags when customers actually use a particular page, function, or method. It's a valuable tool for seeing which functions and features are really being used.

Let's look at the idea that analytics should be added to a site after it launches. On the surface that's logical, but how about implementing them at the beginning of development? There's a lot of critical information you can discover to help your development by examining your analytics, not just when a site is live but also as you are putting it together. What development work is influencing your most critical areas? Your analytics may be able to tell you that.

 Another thing to realize is that analytics do not actually tell you anything by themselves. You may need to do some timed analysis and aggregating to actually get the real picture. One day's data may not be enough to make a correct analysis. Analytics are data points. You may need to do some specific analysis to determine what actually matters.

So how can we look at analytics from a tester's perspective? Amanda suggests using Charles Proxy or Fiddler, or a variety of browser plugins that can help you look at the data your analytics collect. These can look really spiffy, and it's cool to look at the data and see what does what when. However, there are a variety of ways that this data may be misleading. My blog has statistics and analytics that I look at on occasion (I've learned to spread out when I look at them, otherwise I get weirdly obsessed with what is happening when). Also, when I live blog, my site engagement goes through the roof. It's not because everyone suddenly loves me, it's because I just posted twelve-plus posts in the last two days (LOL!).

One of the most infuriating things to see is when I go back and notice a huge spike in my data. If it corresponds with a live blog streak, that's understandable. What's not is when I have a huge spike when I haven't been posting anything. What the heck happened? What was so interesting? Often it means that something I wrote was mentioned by someone, and then BOOM, engagement when I'm not even there. That happens a lot more often than I'd like to admit. I'd love to be able to say I can account for every data spike on my system but much of the time, I can't, just because it happened at a time I wasn't paying attention and also because it's not necessarily my site doing the work, it's someone else somewhere else causing that to happen (usually through a Tweet or a share on another platform like LinkedIn, Instagram, or Facebook).

Again, analytics are cool and all but they are just data. It's cold, unfeeling, dispassionate data. However, that cold, dispassionate data can tell you a lot if you analyze it and look at what the words and numbers actually mean (and you may not even get the "actually" right the first few times). Take some time and look through the details that the data represents. Make experiments based on it. See what happens if you roll out a feature to one group vs another (A/B testing is totally irrelevant if metrics are not present).

Analytics can be nifty. They can give you insights, and you can make decisions based on what's provided, but analytics by themselves don't really do anything for you. They are just data points. It's the analysis, consideration, and critical thinking performed on those data points that really matters.

Expect to Inspect – Performing Code Inspections on Your Automation (an #OnlineTestConf 2021 Live Blog)

Paul and I have been running into each other at conferences now for the better part of a decade. In addition to both being testing nerds, we are both metal nerds, too. Some of our best conversations have been half tech and half, "So, what do you think of the new Fates Warning album?" or whatever ;).

For today, Paul is talking about the fact that test automation code is legit and literal code. It's software development and deserves the same level of attention and scrutiny as production code. Thus, it makes sense to do code inspection on test automation code. When we are on a testing team or we have multiple testers to work with, we can have test team members work with us to inspect the code. Often, I have not had that luxury as I've either been the only tester on a project or I've been the only tester at a company. Thus, who inspects our code? Too often, nobody does, we are left to our own devices and we hope for the best. We shouldn't and Paul agrees with this.



 

The benefit of having code inspection is that we can have someone else help us see past our blind spots. Think of it the way we read our own writing. The danger is not that we can't proofread effectively. We certainly can. The real danger is that our brain bridges over our mistakes and interprets what we mean, so we can literally skip over blatant errors. Later, when we see them, we think, "How could I have missed that?" Well, it's easy, because you read it and your brain was a little too helpful. By the way, there is a cool technique if you ever find yourself having to proofread your own work... read it out loud as if you were delivering a speech, with mannerisms, speech patterns, inflections, etc. Why? It takes you enough out of the space that when you try to speak a misspelled word or clunky grammar, you hear it out loud and your slower-thinking brain will detect, "Hang on, here's an issue".

There are a number of tools that can be used to allow you to do both static analysis and dynamic analysis but what I find really helpful is to just hand over my tools to another developer or tester and say, "Hey, can you run through this for me"? The benefit here is that they can look at what I am running and what I am doing and they can see if my rationale makes sense. 

I have had numerous occasions where a developer has run my tool and come back and said, "Hey, I walked through this and while I get what you are doing, you're going the long way around". Then often they have either helped me do it more efficiently or they realize, "Oh, hey, I could probably help you with that", and that code inspection actually encourages a testability tweak in our code that I can then take advantage of.

We have a tools repository that our development manager and one of our principal engineers handle merge requests for and I am encouraged to make sure that the code that I write for automation is in sync with our master branch, as much as possible. Thus I make frequent pull requests and those two have a direct opportunity to inspect what I am doing with the code. Encourage these interactions and if your code isn't in a proper repo, fix that.

As Paul said at the end of the talk and many times during the talk, automation is code, treat it like it is. I concur :)!!!

Monday, October 11, 2021

Soft Skills of Automation (#PNSQC2021 Live Blog)

 



Yay!!! It's a Jenny talk :)!!!



If you have ever been to any of Jenny Bramble's talks, you know what I mean by saying that. I always look forward to seeing Jenny do these presentations. I miss seeing her in person and hope to remedy that after the current unpleasantness is more under control.




Seriously, if you've never seen Jenny give a talk in person, she is very engaging and fun, a little bit irreverent, and always thoughtful.




The talk this go-around is about the "Soft Skills of Automation"... wait, what? How does automation have soft skills? Perhaps this is better stated as developing a mindset to deal with and talk about automation. While there are lots of coding frameworks, what is our personal mental framework? Part of the process is trying to look at how we would make something work to be automated in the first place. My approach has often been a bit of brute force:

1. Start with processes that I might need to be done repeatably.

2. Set up and identify those areas that can actually be repeated.

3. Once I have those areas mapped out, now let's think about the variable items I need to deal with.

4. Now that I have a decent idea of what I need to run, how do I actually put it into place?

I have often joked that I am a good scientist and a good tactical individual but I'm not really lazy enough to be a good automation engineer. I feel like I always have to go through these four steps and I have to get to the level of "sick and tired" before I actually get something working. Does that mean my method is bad? Not necessarily but it does mean that I pretty much have to get to a state of "fed up" before I really strive to change what I am doing. However, once I have a sequence down, then I will refine it all day long :).

Key to Jenny's talk is the fact that automation is not all or nothing. There are variations and a spectrum as to where it is appropriate/not appropriate and necessary/not as necessary. If we are doing CI/CD, it's essential for all steps. In the Exploratory phase, it's less so.

We are moving more towards machine-assisted testing, and I think that's a better starting point to talk about automation. Automation can be seen as a complex set of algorithms but, as I am fond of saying, if you find there are multiple commands you run together, putting them in a file and running them with one command rather than five or ten is absolutely effective and usable automation.
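
As a throwaway illustration of that point (every step below is a made-up placeholder for whatever your own routine looks like; the wrapping is what matters):

#!/usr/bin/env bash
# refresh_env.sh -- five commands become one
set -e                        # stop at the first step that fails

git pull --quiet              # bring the working branch up to date
docker compose up -d db       # start the backing services
./load_fixtures.sh            # seed the test data
./run_smoke_tests.sh          # quick sanity pass
echo "environment ready"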

Jenny brings up a neat idea called the Code Awareness Scale. How much knowledge of the code do we need to have? The truth is, it's a sliding scale. In the areas within my immediate sphere of influence, I know quite a bit. However, when it comes to peripheral areas or apps and how they interact, I don't really know what's happening, nor do I really need to.
 


The more comfortable we are with the code, the better prepared we are to test the code. This is certainly true in my world. A lot of what I work with is specific to looking at how data comes in and how data goes out. Again, as I currently work with data transformations, the mechanical process of delivering files, setting up parameters, and verifying that the processes run is relatively easy to automate. Processing and comparing files to see that what we started with matches what we ended up with for its destination usage... that's a little more daunting and the area I'm most concerned about improving.

One of the cool things we can do, and that I talk about in my Testability talk, is getting familiar with log files and "watching them" in real time. By watching them and seeing what comes across, we get used to seeing the patterns, as well as what causes error conditions to surface. One of the neat skills I learned some years ago (though I still have a ways to go to really make it automated) is to find wherever we have created an error control condition and see what it will take to make it surface. Logs can help you see that. So can unit tests.
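
The mechanical part of "watching" can be as plain as the following one-liner (the log path and patterns are placeholders for whatever your application actually emits):

# Follow the log as it grows and surface only the lines worth reacting to.
tail -F /var/log/myapp/transform.log | grep --line-buffered -E "ERROR|WARN|Exception"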

So let's say someone is already code-aware. How can we work on other areas? If we are focused on White-Box testing, what would it take to focus on true Exploratory testing sessions? How about getting involved with Usability or Observability? Accessibility or Responsive Design? All of these can help us look at software in a way that is less code-dependent. In short, get into the humanity of the application in question.

Jenny highlights some principles of Automation we should consider:

- emphasize reliability, value, speed, and efficiency

- collaboration helps us determine what kinds of tests we need

- testability is a huge factor. DEMAND IT!!!

- everyone should be able to review and examine code or participate in code reviews

- automation code is production code... treat it as such!


It's important to think about what/why we want to automate. Do we want to recapture time? Do we want to be confident our releases are solid? Do we want to be sure our deployments will be successful?

Sometimes, you may find that there are areas and steps that are easier to automate than others. It's also possible that some steps will require breaking out of what you might normally do. Let's take my example of data transformation. If I have a piece of middleware that is doing the transformation steps, I may find myself spending a lot of time automating interactions with an application that isn't even the application I'm testing. Sometimes, the best thing to do is to step back and see if there's another way to accomplish what I want to do. Does it make sense to interact with the middleware's UI if making REST API calls will accomplish the same task? If I need to be doing file comparisons, I don't necessarily care about the steps that document the transformations (don't get me wrong, at times I care a lot about that), but often all I want is the starting and ending files for comparison purposes. Thus, a lot of the in-between steps can be removed and I can focus on performing the necessary steps to actually get the before and after files.
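
In shell terms, that shortcut might look something like this (the endpoint, token, and file names are entirely hypothetical; the point is bypassing the UI and going straight to the before-and-after files):

# Kick off the transformation over the middleware's REST API instead of its UI,
# then compare the delivered output against what we expected to get back.
curl -s -X POST "https://middleware.example.com/api/v1/transform" \
     -H "Authorization: Bearer ${API_TOKEN}" \
     -F "file=@input/customer_feed.csv" \
     -o output/transformed_feed.csv

diff <(sort input/expected_feed.csv) <(sort output/transformed_feed.csv) \
    && echo "transformation output matches expectations"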

One of the key differences is being able to identify the aspects of a test the system needs. Computers are very literal, so they need things like locators, labels, and the actions associated with them, spelled out correctly. This is why I tend to focus on the literal steps to see what is required to get where I need to go. Some years ago I used the metaphor of automation less as a train track and more as a taxi route. Taxi-route automation is a lot more involved than a standardized train-track route, but there could be some neat things to discover if you can get there.

Often, what I will do is I will use something like Katalon and actually record my steps to get to a particular place and I will see if there are other ways to do those steps (literally creating a folder full of driving directions). Once I have those, I will run them to get me where I want to go, and then I get out and poke things manually or identify other areas I might be able to automate. Again, a lot of automation doesn't need to be as formal as specific tests. A lot of it could just be simple data population or state changes to get to interesting places.

A final statement Jenny makes is "don't over-automate". Automating everything sounds good on the surface, but over time, too much automation can add unnecessary steps and time for little benefit. Don't automate everything. Automate the right things. If you get interested in some automation for the sake of it, that's okay, but perhaps only check it in if it adds a tangible benefit. A lot of my automation isn't really testing; it's set-up, it's data population, it's state change. It helps me, but it doesn't necessarily help the flow of testing itself. Rather than automate all of the things at a user level, perhaps make background steps that set up an environment that will be effective to test and then spin it up ready to go.

To add to this, also take a look at Test Automation University. Definitely worth digging into :).




Wednesday, October 14, 2020

PNSQC 2020 Live Blog: Are You Ready For AI To Take Over Your Automation Testing? with Lisette Zounon

 


All right! How is that for a title ;)? I give props for coming out swinging and yes, I am indeed curious as to whether or not AI will actually have a long-term impact on what I do as a tester or automation programmer. It feels weird to say that but since "Senior Automation Engineer" is my official title, yeah, I kind of care about this topic :).

 

Many tools are built around allowing us to automate certain steps, but in general, automation excels in the areas of the rote and the everyday repeatable. Automation is less good in dynamic environments and where there's a lot of variability. However, perhaps a better way to think about it is that automation struggles with areas that *we* feel are dynamic and not rote. Machine Learning can actually help us look for the repeatable, perhaps specifically in areas where we are not currently seeing those patterns.
 
We are seeing the growth of pattern recognition tools and visual validation. As we get further into the process, we see that there are more uses for visual validation tools. It's not just "is the picture in the same place?" My question would be: how can we leverage these tools for more approaches? As in most situations, a lack of imagination is not necessarily a lack of adventure or spirit but more often a lack of relevant experience. We tend to get mired in the specific details, and we tend to look at these little tedious issues as taking too much time, requiring too many steps, or not being flexible enough to actually be useful.

Lisette makes the case that AI can help with API tests, since API tests can be bounded (there's a set of commands, and those commands take a set of parameters), so they can be generated and they can be called. In addition, auto-healing tests are often touted as a step forward. I will confess I have seen self-healing tests in only a limited capacity, but that has more to do with what we use and what we currently test with rather than what is available and usable. I want to see more of this going forward and interact with it in a meaningful way.

I hear often the idea that AI will help to create more reliable automated tests. This comes down to agent counts keeping track of what is right or wrong by the definition of the software. It sounds cool on the surface but again, I'd love to see it in action. Lisette makes a good point that automation for the sake of automation doesn't really buy us anything. Our automation needs to serve a purpose so putting these AI tools to help us find critical paths or critical workflow steps and processes is worth the time. Again, I'm willing to give it a go :). 


Tuesday, October 13, 2020

PNSQC 2020 Live Blog: Breaking Down Biases and Building Inclusive AI with Raj Subrameyer

 


All right, here we go with Day 2!

First of all, I want to give props to Joe Colantonio and everyone else for the management and efforts of yesterday to keep everything on track. For those who are wondering what it is like to do a conference totally online, it's not always seamless but Joe handled issues and problems like a pro.  Some things that have been interesting changes:

There is a Virtual Expo so if you want to see what vendors are showing and do so with your own time and focus, you can check out the Virtual expo by clicking here.

The question and answer is being handled through an app called Slido and that makes for a clean way to ask questions and interact with each speaker rather than have to try to manage a Zoom chat feed. Again, a neat approach and well presented.

So for today's opening keynote, it's exciting to see friends that I interact with getting to be keynote speakers. Raj Subrameyer and I have interacted together for several years. He was also a recent guest on the Testing Show podcast talking about Tech Burnout (to be clear, not the topic of today's talk). If you'd like to hear our interview with Raj, you can check that out by clicking this link here.



Raj's talk is focused on building Inclusive AI. Sounds scary, huh? Well, it doesn't need to be. He opens by using three movies (2001: A Space Odyssey, Her, and Ex Machina). What was interesting about these movies is that they were science fiction and now, they are science fact. The point is sci-fi has caught up with our present. The question we might want to ask is, is this a good thing? It all comes down to how you look at it. Are you using Siri or Alexa regularly? To be honest, I do not use these very often, but I have worked with them, so I'm not a Luddite. Still, there is a small part of me that doesn't want to rely on these tools just yet. Is that a fear-based thing? A trust-based thing? Maybe a little bit of both. Do I really want to have these AI systems listening in on me? Well, if I'm someone who uses apps like Google, Amazon, Facebook, Instagram, or TikTok (hey, don't judge), I'm already training these systems. Alexa is just a voice attached to a similar system.

Let's face it, technology can be creepy. It can also be very interesting if we understand what is happening. AI systems are getting trained all of the time. Facial recognition, text recognition, voice recognition: these are all tweaked in similar ways. As Tariq King explained in a talk last year at TestBash in San Francisco, it's not anything sinister or even terribly complex. Ultimately, it all comes down to agents that keep score. When an agent gets something right, it increments a counter of the number of times it has successfully guessed or provided the right answer. It likewise decrements the counter when it gets things wrong. Over time, the counter helps figure out what is right more times than not. It's not perfect, it's not even intuitive, but it's not really super-human or even all that complicated. We just tend to make it and treat it as such.
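
If the counter framing sounds abstract, here is a toy sketch of the same bookkeeping in shell (entirely made up, nothing like a real training loop; it only shows the increment/decrement idea):

# Guess a label, check it against the known answer, adjust a running score.
score=0
for answer in yes yes no yes no; do
    guess=$( [ $((RANDOM % 2)) -eq 0 ] && echo yes || echo no )
    if [ "$guess" = "$answer" ]; then
        score=$((score + 1))    # reinforced: this guess was right
    else
        score=$((score - 1))    # penalized: this guess was wrong
    fi
done
echo "final score: $score"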

Raj points out that the neural network inside of each of our brains has a number of synaptic connections that, when calculated, equals the number of stars in our galaxy (maybe more) and to quote James Burke, "everybody has one!" The most powerful computers still pale in comparison to the connectivity and plasticity of a single human brain (though interconnected systems can certainly match or exceed single brains).

AI can be classified as weak and strong. Most of the systems that we interact with currently are classified as Weak AI systems. Yes, they can be trained to give a response and they can perform specific steps. Systems like Deep Blue can play chess and beat the best human players, but that is still an example of Weak AI. In short, the system can brute-force avenues and do it fast, but it can't really "think". Strong AI can think, and emote, and sympathize, and deal with situations in dynamic ways the way people do. So far, there are very few AI systems that can do that, if any, really.

I'll use an example from my own musical life. I've recently been shopping for guitar amplifier heads, older ones. My all-time favorite guitar tone ever comes from the Marshall JMP amplifier head, which was popular in the early to mid-1970s. Additionally, I also very much like the Carvin X100B Series III amplifier head. A Weak AI would be able to compare specs of both amps and give me a readout of which amp may have the best reaction to fault tolerance or to sonic frequencies. It will not, however, be able to tell me which amplifier head "sounds better". That's a human judgment and it's not something that data will necessarily be able to provide an answer for.

We may be familiar with the study where resumes were submitted using both typically "white" names and typically "black" names (or names traditionally seen as white or black). An AI system was trained on that data and, interestingly, it would reject resumes with "black" names twice as often as resumes with "white" names. That definitely invites a question... how did the system "learn" to do that? Was it trained to do that purely based on the text in the resumes, or did some bias enter the system from the programmers? It's an interesting question and hey, I know what I think about this (hint: humans biased the system), but I asked a Slido question, so let's see if it gets answered later ;).

Another thing to consider is that AI can be abused and it can also be fooled. In the world today with applications like Photoshop and video editing, deep fakes can be created. Provide enough deep fakes and systems can be trained with literally fake information and those systems can develop agent counts that are not based on reality. Scary but definitely feasible.

Ultimately, AI is as good as the data it is provided, the people who program the systems, and the algorithms that train them. Systems can "learn", but again, learning here is having a weighted count of something. The more it's "right", the higher the count, and the greater the odds that the "right" answer will actually be "right" in this case. Interesting stuff, to be sure, but I'd argue that the odds of these systems coming together and replacing human intuition and interaction are quite a way away. That's not an invitation to be complacent; it's a recommendation to spend time learning about these systems, how to better understand and interact with them, and to remember that we have a responsibility to make sure the systems we build are not just good quality but also fair to everyone.






Monday, October 12, 2020

PNSQC 2020 Live Blog: “Rethinking Test Automation” with Paul Gerrard


I just realized that the last time I saw Paul in person was back in 2014 at Eurostar in Dublin, Ireland. I was really looking forward to seeing him again after so long but alas, I guess this will have to do.

It's interesting to see that the notion of being surprised about the troubles related to test automation has been with us since the nineties at least (and some could argue even longer, as I remember having issues and dealing with oddities back in the early 90s when I was first learning about Tcl/Tk and Expect). We still struggle with defining what test automation can do for us. Sure, it can automate our tests, but what does that really mean?




Tools are certainly evolving and look nothing like the tools we were using 30 years ago. Still, we are dealing with many of the same principles. The scientific method has not changed. I share Paul's criticism that we are still debating what test automation does and what testers do. The issue isn't whether or not our tests work; it's whether the tests we perform are actually gathering data that can confirm or refute hypotheses. As testers, we want to either confirm or refute the hypothesis. At the end of the day, that is what every relevant test needs to do. We can gather data, but can the data we gather give us meaningful information to actually tell us if the software is working as expected? One could argue that assertions being true are passes... but are they? They prove we are seeing something we expect to see, but is that actually proving a hypothesis, or merely a small part of it? In short, we need people to look over the tests and the output to see if they are really doing what they should be.

Paul suggests that we need to move away from scripted tests to more model-based tests. OK, but what does that actually mean and how do we actually do that? Paul makes the assertion that tools don't think; they support our thinking. What if we removed all of the logistics around testing? If stripped of our usual talismans, what would we do to actually test? Rather than stumble through my verbiage, I'm stealing Paul's slide and posting it here:



The key here is that test automation misleads us, in that we think that tools are actually testing and they are not. What they are doing is mapping out the steps we walk through and capturing/applying the sets of data and results that we get based on the data and actions we provide. The left is the exploration, the right is the evaluation, the middle is the testing or the setting up so that we can test. Automation won't work if we don't have a clear understanding of what the system should do. Paul is emphasizing that the problem and the area we need to improve is not the execution of tests (our tools can do that quite adequately) but in test design and test planning. In short, we need better and more robust models. 

The old notion of the human brain is that it is brilliant, random, and unpredictable, but slow and lazy. Machines are literal and unimaginative, but blindingly fast and able to do the same things over and over again. Combined, they are formidable.

So what do we want to see the future be for our tools? First of all, regression testing needs to look at impact-analysis. How can we determine what our proposed changes might do? How can we stop being overly reliant on testing as an anti-regression measure? How can we meaningfully prove functionality? Also, how do we determine the optimal set of tests without guessing?

Paul makes the case we need to understand the history of failures in our tests. Where can we identify patterns of changes? What are the best paths and data to help us locate failure-prone features? Manual testing will not be able to do this. Machine learning and AI will certainly get us closer to this goal.

In short, we need to move from passive to active collaboration. We need to stop being the people at the end and work toward active collaboration. We need to be able and willing to provoke the requirements. We also need to create better mental models so that we can better understand how to guide our efforts.


PNSQC 2020 Live Blog: Of Machines and Men with Iryna Suprun

As is often the case at PNSQC, several of the talks are from people I have not seen speak before. Iryna Suprun is focusing her talk on areas of AI and Machine Learning. As she starts her talk, we look at the fact that there are few tools available where AI and Machine Learning are prominent and prevalent for individual users. Some hallmarks of the AI-based tools being marketed are codeless script generation, the ability to self-heal with changes to the environment (meaning the script can collect data about elements of the application itself), and the ability of Natural Language Processing to convert documentation into actual tests (this is a new one to me, so hey, I'm intrigued).

Comparing visual output against the expected design is becoming more sophisticated. More tools are supporting these features, and additional levels of comparison are being applied (not just pixel-to-pixel comparison these days).

So while we have these changes coming (or already here), how can we leverage these tools or learn how to use them in the first place?

The example tools tried out for these comparisons were Testim, Mabl, and TestCraft. What did they provide? All three allowed for a quick start, so she could learn and automate the same basic initial test case with each. All of the tools had recording implemented, which allows initial test cases to be created (TestCraft had a few extra setup steps, so it was not quite as easily started as the other two). Modifying and inserting/deleting steps was relatively fast.

So what challenges were discovered or associated with these tools? As could be expected, codeless script generation (recording) is good to get started, but its usefulness diminishes the more complex the test cases become. This is to be expected, IMO, as it has been the same issue with most automation tools that promise an easy entry. It's a place to start, but getting further will require proficiency and experience beyond what the recorder can provide. Self-healing is a useful feature, but we are still at a point where we have to be somewhat explicit as to what is actually being healed. Thus, calling it self-healing may still be a misnomer, though that is the goal. So how about self-generated tests? What data is actually being used to create these self-generated cases? This didn't seem to be very self-evident (again, this is me listening, so I may be misinterpreting). An example is checking that links work and point to legitimate end links. That tests that a link exists and can be followed, but it doesn't automatically mean that the link is useful to the workflow or that it will validate that the link is relevant. People still need to make sure that the links go somewhere that makes sense.

So even though we keep hearing that AI and Machine Learning are on the horizon, or even here already, changing the landscape, there's still a lot of underlying knowledge needed to make these tools work effectively. There's definitely a lot of promise here and an interesting future to look forward to, but we do not have anything close to a magic wand to wave yet. In other words, the idea that AI is going to replace human testers might be a possibility at some point, but that promise/scare is not quite ready for prime time. Don't be complacent; take the time to learn how these tools can help us and how we can then leverage our brains for more interesting testing work.
 

Friday, September 18, 2020

Bring Some Color to Your bash Shell Output

I'm not sure if this is something anyone will find valuable but it's been a neat little addition to my scripting as of late.

For those who have used Cucumber in the past for BDD, one of the visible elements is of course its paradigm of green, yellow, and red text that appears for a variety of things in your output on the screen. This division helps me quickly see if something seems out of place or if a command I expect to run completes correctly or not. As I do a lot with shell scripts, I decided to do some research and see if I could add a bit of that green, yellow, and red (and for some variety I even added blue to the mix). Here's how you can do that.

First, the echo command allows ANSI escape codes to be used to change the color of text. To do this, you call echo with the '-e' flag and wrap the word or words you want to highlight in a different color. You can see a list of ANSI escape codes and colors here. For my purposes, here's an example of a set of codes I keep in a shared shell library:


RED="\033[31m"
GREEN="\033[32m"
YELLOW="\033[33m"
BLUE="\033[34m"
RESET="\033[0m"


In practice, any time you use an echo -e statement, you can determine if the output you see is normal/expected, if it's an error, if it's a possible warning that doesn't rise to the level of an error, or if you want to display some information that fits some other purpose. Also, once you set a color, that color will remain in place unless you reset back to the default color option.

In practice it looks like this:

# Bail out with a red usage message if too few arguments were provided
if [ $# -lt 5 ] ; then
    echo -e "${RED}Usage: $0 [CUSTOMER] [DUNS] [VERSION] [TEST_TYPE] [filename] ${RESET}"
    exit 1
fi


# Print a green status message before running the (uncolored) command itself
if [[ "${TEST_TYPE}" = "ZIP" ]] ; then
    echo -e "${GREEN}Creating an initial ZIP archive${RESET}"
    /c/Program\ Files/7-Zip/7z.exe a -r ./archive-test.zip 001-EmptyFile.txt
fi



Note that the plain text is the output of the command itself; the green text is my own message, alerting me that everything is running as expected or that something isn't working how I want it to.

For each echo statement or set of statements you want to print out, you can set the color with an escape code, and then you can have that text be formatted as you see fit, anywhere from a single word to an entire line. You do need to end your statement with a RESET code to go back to regular output.
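
One small refinement I'd suggest (not part of the original scripts, just a convenience) is wrapping the codes in tiny helper functions so you never forget the RESET:

# Hypothetical helpers built on the variables defined above; each one
# resets the color automatically at the end of the line.
info()  { echo -e "${GREEN}$*${RESET}"; }
warn()  { echo -e "${YELLOW}$*${RESET}"; }
error() { echo -e "${RED}$*${RESET}"; }

info "Archive created successfully"
warn "Archive is larger than expected"
error "Archive creation failed"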

In any event, it's something I've found useful, I figured some of you might, too :).

Wednesday, March 11, 2020

New Title, Same Gig?

Well, today threw me for a bit of a loop.

Why, might you ask?

Today I received a new title. For the first time in my software testing career, it can be argued how much of my role as a tester is left. For the past almost thirty years my job titles have been some variation of the following (I'm going to say since 1994 as from 1991-1994, I was a Network Lab Administrator and an Engineering Tech):

Software Tester
Development Test Engineer
Quality Assurance Engineer
Applications Engineer (Basically a Customer Test Engineer)
Senior Quality Assurance Engineer

So what's different today? As part of my review I was asked if I would accept a new job title:

Senior Automation Engineer

I said "yes" but I will confess, a part of me is curious as to what that acceptance means going forward? Actually, I don't have to guess too much, I know, as I've been part of this conversation for a number of years leading up to this.

What this means is that I agree to spend the lion's share of my day inside of an IDE.
What this means is I agree to create code that will be reviewed and put into daily use.
What this means is I agree to step into a development role and all that that entails.
Additional to all of this, I also agreed to transition onto another team, outside of the one I have worked with for the past seven years.

Yep, I have a mixture of emotions today.

Am I ready for this? I'd like to say I think so.
Do I want to do this? As a matter of fact, yes.
Do I know enough to do this? That remains to be seen, probably not completely.
Am I comfortable with this change? Ah, now we get to the heart of the matter.

See, in large part, I'm not comfortable with this change. I'm having to put down a large part of my identity that I have fostered over the past decade. The titles and pronouns I've become so used to are at once still relevant and now totally irrelevant. A part of me is anxious, a little scared, a little concerned I may be biting off more than I can chew, especially with that Senior part. That means I'm supposed to mentor juniors. As a developer. I will confess that part feels really weird.

And yes, even though I am not necessarily comfortable, I'm excited to do this. The expectation has changed. The primary focus of my efforts has, too. In part, I was singled out as, "that person that always champions trying new avenues, looking at things in unorthodox ways, and not settling for how things have always been done." Really? They paid attention to that? I guess so. They also paid attention to my frequent frustrations that real-life testing needs often trumped my ability to make much headway in the automation space because other needs were prioritized first. They listened. They decided to let other people handle those aspects so I could better focus on coding effectively.

Have I mentioned I'm not even going to be doing this in a language and toolset I'm used to? Yeah, let me touch on that for a second. I've been a Mac focused tester and programmer for 10 years now. I've used Ruby, Java, Perl, and Python to an extent in those years. Today I'm sitting behind a relatively new Windows 10 laptop running .NET Core, Visual Studio and wrapping my head around C# patterns. Surprisingly, I'm not running for the hills. It's different but it's not fundamentally so. Sure, there's some tooling and practices I have to come to grips with, as the last time I spent any real-time with a .NET application of any kind was 2010. It's all so very weird, like I'm having to speak Dorian Greek... but at least I feel like I'm coming from Ionian Greek rather than from, say, Japanese (weird comparisons but hey, welcome to my brain today ;) ).

Long story short, I just signed up to have my world rocked and I am strangely OK with it.

Friday, October 25, 2019

A Book Commit: MetaAutomation #30DaysOfAutomationInTesting Day Two

I am writing this series based on the requirements for the "30 Days of Automation in Testing" series as offered by the Ministry of Testing. Yes, I realize I am almost 18 months late to this party, but work requirements and a literal change of development environments have made this the perfect time to take on the challenge.

Let's take a look at the Day Two requirement:

Begin reading an automation related book and share something youʼve learnt by day 30.

In a sense, this is actually a re-read, but a first-time full application. Several years ago, Matt Griscom approached me when I was visiting Seattle and told me about an interesting idea he had for a book. We exchanged several emails, talked about a few things here and there, and I looked over some early chapters. From that (and lots of talking to other, far more qualified people than me, believe me ;) ), Matt came out with the book "MetaAutomation".

Today, that book is now in its 3rd Edition and last year Matt gave me a copy and encouraged me to give it a read. Now that we are making this product transition over to C# and .NET Core (so as to work better with other teams who are already using that stack) it seemed a very good time to take Matt up on that offer :).

Here's a bit from the Amazon description of what MetaAutomation is all about:

"MetaAutomation describes how to do quality automation to ship software faster and at higher quality, with unprecedented detail from the system under test for better communications about quality and happier teams.

This book defines the quality automation problem space to describe every automated process from driving the software product for quality measurements, to delivering that information to the people and processes of the business. The team needs this to think beyond what the QA team or role can do alone, to what it can do for the broader team. Quality automation is part of the answer to all that is broken with “test automation.”

MetaAutomation is a pattern language that describes how to implement the quality automation problem space with an emphasis on delivering higher-quality, more trustworthy software faster. Much it depends on a radical, yet inevitable change: storing and reporting all the information from driving and measuring the software product, in a structured format that is both human-readable and highly suitable to automation. This change was not possible before the technology made available by this book.

Read this book to discover how to stop pouring business value on the floor with conventional automation practices, and start shipping software faster and at higher quality, with better communication and happier teams."

I have to admit, on the surface, those seem to be bold claims, and from what I read in the previous versions, there's a lot to digest here. Thus, I'm actually going to try something out during this month. If possible, I'm going to approach the rest of this month's activities, where relevant, by referencing this book and seeing if I can use its principles in my practice. In addition to doing a full-form book review at the end of the month, I hope to have had a chance to actually and completely internalize what I read here. In short, Matt, I'm not just going to read this book, I'm not just going to review it, I'm going to try my best to live it. Let's see what happens if I do exactly that :).




Thursday, October 24, 2019

Late to the Party, Still Going to Jam: Day One, 30 Days of Automation in Testing

OK, here we go, better late than never, right :)?

Regardless, I have decided I want to tackle this one next because I am knee-deep in learning things related to code and automation and retooling with a new stack and platform.

I'm in the process of learning how to navigate my way around Windows 10 on a Lenovo Think Pad, getting my bearings with Visual Studio, and playing with some of the finer points of C# and .NET Core. In short, it's a perfect time to take this on and perhaps augment a tectonic shift in a work environment with some additional skills to help balance it all out.

This is Day One of a Thirty Days Challenge, this time focusing on Automation in Testing.

Look up some definitions for 'Automation', compare them against definitions for 'Test Automation'.

Let's see what Lexico has to say about this (or at least what they had to say about it October 24, 2019):

automation: NOUN: mass noun

The use or introduction of automatic equipment in a manufacturing or other process or facility.

Origin: 1940s (originally US): irregular formation from automatic + -ation.


That makes sense, in that it is the natural sense of the word: something done to make something automatic. Granted, it uses the word automatic, which comes from the Greek word automatos: acting of itself. Thus, when we think of automatic, and by extension automation, we are thinking of devices and processes acting on their own behalf. Thanks, Lexico :).

So let's take a look at Test Automation. What does Lexico have to say?


No exact matches found for "Test Automation"

Interesting, and not really surprising, as it's not one word. The point is, there is no real dictionary definition for it. The closest thing we get is a Wikipedia entry (and yes, I'm a strong advocate of "caveat lector" when it comes to Wikipedia or anyplace else, but there's nothing wrong with starting there and then extending the search). Wikipedia starts with:

"In software testing, test automation is the use of software separate from the software being tested to control the execution of tests and the comparison of actual outcomes with predicted outcomes." 

Where did that come from? 

Kolawa, Adam; Huizinga, Dorota (2007). Automated Defect Prevention: Best Practices in Software Management. Wiley-IEEE Computer Society Press. p. 74. ISBN 978-0-470-04212-0.

Hmmmm... not trying to be critical here but a book selling the best practices of software management gets to define what test automation means? Am I the only one who finds that somewhat amusing?

OK, to be fair, test automation is still fairly new (though it is as old as any procedural programming steps, really, so it's been with us since the 1940s at least). Ultimately it comes down to the idea that we have machines, we want to control their execution, and we want to determine something about that execution that goes beyond them just doing the thing they do. Sorry if that's a bit breezy, I never promised you peer-reviewed scholarship ;).

Seriously though, minus the snark, we are looking to do a little more than repeat steps over and over. We want a way to determine that the steps being performed are actually being done right. Thus, test automation goes a bit beyond simply automating steps; it turns a repetitive, repeatable process into something a machine can both perform and check.
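To put a tiny bit of code behind that definition, here's about the smallest example I can think of, written with JUnit (Java rather than the C# I'm currently learning, only because it's the stack I lean on elsewhere on this blog). The ShoppingCart class is a made-up stand-in for whatever software is actually under test; the test class is the "separate software" that controls execution and compares actual outcomes to predicted ones.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class ShoppingCartTest {

    // Made-up stand-in for the software under test.
    static class ShoppingCart {
        private int totalCents = 0;
        void addItem(int priceCents) { totalCents += priceCents; }
        int total() { return totalCents; }
    }

    @Test
    public void totalReflectsAllItemsAdded() {
        ShoppingCart cart = new ShoppingCart();   // drive the software under test
        cart.addItem(250);
        cart.addItem(175);
        assertEquals(425, cart.total());          // compare actual vs. predicted outcome
    }
}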

Wednesday, October 16, 2019

Two Down, Eight More to Go - Tackling More "30 Days of Testing" Challenges

As a re-enactor, a performer, and a musician, I appreciate the fact that there is a need for regular practice in any endeavor.

- While I dress up like a pirate and participate in occasional stage shows, I need to actually practice swordwork so that I can be prepared and ready, as well as SAFE, during stage performances. In short, I need to keep my body in practice with rudiments and fencing drills.

- As a musician, I can't just show up and improvise (well, I can and I have and the results have been predictably embarrassing). To be able to play at any decent proficiency and dexterity, I must practice, even if the practice I do is to play things by ear so that I can do better live improvisation. The same goes for writing songs. If I want to write better songs, I have to (gasp!) write songs in the first place. It's a little silly to think I'm going to get inspiration and write the perfect thing every time. Likewise, if I use the music theory I do know and write songs with it, I may not make something brilliant every time but my odds of writing something good go way up. Much better than if I just wait for inspiration to strike.

- When I make clothes for historical garb or cosplay, I can't just expect to come in and knock everything out the first time in perfect order. I'm just not that skilled a tailor. I can, however, make mock-ups, practice, and try out ideas so I can get solid enough to make the items well.

Why should I think that as a blogger and as a tester I am just going to have intelligent things fall into my lap? The answer is "things probably won't, but they definitely won't if I don't practice or prepare for them."

This brings me back to the "30 Days" Challenges. For various reasons I looked at a number of them and said "oh, that would be cool, I will check that out later" or "hmmm, not quite in my wheelhouse, I may check that out further down the road." Any guesses how many of them I've come back to? Yep, I've not come back to any of them except for the two that I chose to hit immediately. Note that both of those were completed and I learned a lot from each of them. Let's have a look at a little graphic:


There are ten challenges there. Two are done, eight I've never started. Well, that's going to change. Next up is "30 Days of Automation in Testing". Why? I'm in the middle of learning how to set up C# and .NET Core for automation needs.

The problem is, we're already up to the 16th of October. Not a really convenient start time, right? Old me would say "OK, I'll start this beginning of November" and then I'd forget about doing it. I'd still feel good because I told the world I'd do it. I mean, who is going to check up on me, right? Well, that's a lame attitude and the answer is I'M GOING TO CHECK UP ON ME!!! 

By the way, expect me to talk about "Writing While ADHD" but I'm not going to promise a timeline for it just yet ;).

So what's my plan for the "30 Days of Automation in Testing"? Simple, I'm starting it today. Seems two posts a day should be enough to get me back on track and cover 30 days (that may be aggressive and ambitious but hey, fools rush in where consultants fear to tread ;) ).

Monday, October 14, 2019

Test Scenario Design Models - a #PNSQC2019 Live Blog

This year I decided to emphasize the Test Engineering track, and I'm not saying that just because my talk was part of that track. It is actually one of the key areas where I would like to see some personal improvement, both in what I do and in what my company does. With that in mind, I decided I want to check out "Test Scenario Design Models: What Are They and Why Are They Your Key to Agile Quality Success?"

Some interesting perspectives:

- 87% of respondents say management is on board with automated testing
- 72-76% of respondents say they are doing test automation and scripting
- Most organizations are between 0% and 30% automated
- Of those, about 40% of their tests are completely redundant, meaning they are really not worth anything

There seems to be a disconnect here. I do not doubt it in the slightest.

Systems are more complex, the proliferation of environments is continuing, manual testing will never be able to cover it all, and the automation we are doing is not even helping us tread water effectively.

Can I get a "HALLELUJAH!", people?!!

What could happen if we actually said, each sprint, "we are going to spend a couple of hours to actually get our testing strategy aligned and effective"? Robert Gormley encourages us to say "oh yeah, we dare!" :)


This is where the idea behind Test Scenario Design Models comes in.

The goals are:
- user behavior is king
- test cases are short, concise, and business language-driven.

We should not care if we have 4500 total test cases. What we should care about is that we have 300 really useful, high-quality tests. The numbers themselves aren't really relevant; the point is to have tests that are effective and to stop chasing quantity as though it were a meaningful metric.

So how do we get to that magical-unicorn-filled land of supremely valuable tests?

First, we want to get to a point where our tests are as specific as they need to be, but no more. Extensive test scenarios are not necessary and should not be encouraged. Additionally, we need to emphasize testing that is relevant at the User Acceptance Testing level. That's where we find the bugs, so if we can push that discovery earlier, we can free ourselves up to work on more important things.
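As a rough illustration of what "short, concise, and business language-driven" might look like in code (this is my own sketch, not an example from Robert's talk, and the store, steps, and data are all hypothetical), the test itself reads like the user behavior it protects, with the plumbing hidden behind a helper:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class ReturningCustomerCheckoutTest {

    // Hypothetical application-facing helper; in a real suite this would wrap
    // the UI or API so the test itself stays in business language.
    private final StoreFront store = new StoreFront();

    @Test
    public void returningCustomerSeesSavedAddressAtCheckout() {
        store.signInAs("returning-customer@example.com");
        store.addToCart("coffee grinder");
        store.beginCheckout();
        assertEquals("123 Main St", store.shippingAddressShown());
    }

    // Minimal stub so the sketch stands on its own; real step logic would live
    // in a page object or API client.
    static class StoreFront {
        void signInAs(String email) {}
        void addToCart(String item) {}
        void beginCheckout() {}
        String shippingAddressShown() { return "123 Main St"; }
    }
}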


How to Start a Test Automation Framework and Not Die Trying - a #PNSQC2019 Live Blog

Hands down, the best title of the conference :).

As someone who is knee-deep in the process of revamping an automation framework at my company, this is very timely and very relatable. Too often, we throw testers into the role of automation engineer with little to no real programming experience. Suffice it to say that making software testers' first programming project a testing framework doesn't guarantee failure, but it is certainly going to be a frustrating endeavor, no matter how we go about it.

Juan Delgado and Isaac Mende have written a paper that goes into depth about creating a formal and mature process for developing a framework. They lead off with the comment that failed automation efforts are too common and in many cases, a lack of guidance and understanding of principles is to blame.


Their examples use what is called the "UP Automation Framework". This set of libraries has been developed by students of Artificial Intelligence engineering at the Universidad Panamericana campus in Aguascalientes (I think this may be the Aguascalientes my great-grandmother's family is from; its proximity to Mazatlan is the reason I think that to be the case. Not apropos of anything, it just brings a smile to my face :) ).

Juan and Isaac make the point that an automation framework is just as deserving of a development methodology as any other project. This gets said often but I think it deserves repeating: for an automation project to be successful, it needs to be given a similar level of focus and attention as the main product's software development. It can't be seen as an afterthought or as a "nice to have" project.

Tuesday, April 2, 2019

Saab 99 GLE vs Mazda Miata MK1: Adventures in Car Restoration and Test Framework Building

Now that I have presented it a couple of times (and run some dress rehearsals leading up to them), I feel pretty good about the material in my workshop "How to Build a Testing Framework From Scratch". Actually, I need to take a small step back and say that the "From Scratch" part isn't really the truth. This workshop doesn't really "build" anything from the initial code level.

Instead, it deals with finding various components, piecing them together, and adding glue code in various places to make everything work together. As a metaphor for this, I like to make a comparison to restoring cars. I'm not super mechanically inclined, but like many people who were young and had more imagination than common sense, I harbored a desire to take an older car and "restore" it. My daughter has had a similar desire recently. Both of us undertook this process in our late teens/early 20s, but we had dramatically different experiences.

When I was younger, I had the opportunity to pick up relatively cheaply a 1978 Saab 99 GLE. It looked a lot like this:

1978 Saab 99 GLE hatchback automobile, burgundy paint


For those not familiar with Saab, it's a Swedish car brand that produced cars under that name from the late 1940s until 2012. It's a boutique brand, with a dedicated fan base. It has a few distinctive features, one of the entertaining ones being that the ignition (at least for many of the vehicles) was on the floor between the front seats. The key point is that not very many of them were made. It's a rare bird, and finding parts for rare birds can be a challenge. In some cases, I was not able to find original parts, so I had to pay for specialized aftermarket products, and those were expensive. It also had a unique style of transmission that was really expensive to fix. Any guesses on one of the major projects I had to undertake with this car? The price tag for that was $3,000, and that was in 1987 dollars :(. When it ran, it was awesome. When it broke, it was a pricey thing to fix. Sadly, over the few years I had it, the days when it didn't work or needed work outweighed the days when it worked in a way that made me happy. I ultimately abandoned the project in 1990. There were just too many open-ended issues that were too hard or too expensive to fix.

By contrast, my daughter has embarked on her own adventure in car restoration. Her choice? A 1997 Mazda MX-5 Miata MK1. Her car looks a lot like this:

1997 Mazda Miata convertible, red paint, black convertible top

Her experience with "restoring" her vehicle and getting it to the condition she wants it to be in, while not entirely cheap, has been a much less expensive proposition compared to my "Saab story" (hey, I had to put up with that pun for years, so you get to share it with me ;) ). The reason? The Mazda Miata was and is a very popular car; what's more, a large number of them were made, and they have a very devoted fan base. Because of that, Mazda Miata parts are relatively easy to find, and a large number of companies make aftermarket parts for them. With popularity and interest comes availability and access. Additionally, with its small size and relatively simple construction, there is a lot of work she can do on the car herself that doesn't require specialized parts or tools. In short, her experience has been night-and-day different compared to mine.

Have you stuck with me through my analogy? Excellent! Then the takeaway should be easy to appreciate. When we develop a testing framework, it may be tempting to go with something that is super new or has some specialized features that we fall in love with. There is a danger in loving something new or esoteric. There may or may not be expertise or support for the tools you like or want to use. There may be a need to build something that doesn't currently exist, and the more often that has to happen, the more tied into your solution you are and will have to be. That may or may not be a plus. By contrast, using something that is more ubiquitous, something that has a lot of community support, will be easier to implement and also easier to maintain and modify over time. It also allows greater flexibility to work with other applications, something an esoteric or dedicated framework with exotic elements may not offer.

Stay tuned in future installments as I tell you why I chose to use Java, Maven, JUnit, and Cucumber-JVM to serve as the chassis for my testing framework example. Consider it my deciding I'd rather restore a Mazda Miata over a Saab 99 GLE.
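As a small preview of how those pieces typically connect (a generic sketch, not the actual workshop code), a Cucumber-JVM suite usually hangs off a single JUnit runner class that Maven picks up during the test phase, with the feature files and step definitions living wherever you point the options. The paths and package name below are placeholders, and the exact import packages have shifted between Cucumber-JVM versions, so treat this as illustrative:

import org.junit.runner.RunWith;
import io.cucumber.junit.Cucumber;
import io.cucumber.junit.CucumberOptions;

// JUnit runner that Maven discovers and executes; Cucumber-JVM then reads the
// Gherkin feature files and binds them to the step definition classes.
@RunWith(Cucumber.class)
@CucumberOptions(
        features = "src/test/resources/features",  // where the .feature files live (placeholder path)
        glue = "com.example.steps",                 // package containing step definitions (placeholder)
        plugin = {"pretty"}
)
public class RunCucumberTest {
}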

The Second Arrow: an #STPCon Live-ish Blog Entry

Yesterday was the start of the workshops day at STP Con and I was happy to present for the second time "How to Build a Testing Framework From Scratch". It's done. I've had a chance to sleep on it after being emotionally spent from giving it. Now I can chat a bit about the experience and some lessons learned.

First, I was able to deliver the entire presentation in three hours, which blows my mind.

Second, I think the time spent talking about the reasoning behind why we might do certain things is every bit as important as the actual technical details.

Third, I've come to realize that there is an odd excitement/dread mix when presenting. Many people say they are most nervous the first time they give a talk or presentation. I've decided I'm more nervous the second time I present something. The first time, I may get through on beginner's luck, and if I do take an arrow in the process (meaning I realize areas where I messed up or could do better), that's in the moment; it's experienced, processed, and put away for future reflection.

I use the term "arrow" specifically because of an old podcast where Merlin Mann described this idea. Someone in battle feels the first arrow that hits them. It hurts, but it doesn't hurt nearly as much as the second arrow. The reason? The first arrow hits us by surprise; the second arrow we know is coming. It's the same impact, but because I've been there and done that, I'm often frustrated when my efforts to mitigate the issues I dealt with the first time don't work, or when something else I hadn't considered happens.

Much of this came down to making materials available to people in a way that was useful and timely. As I talked to a number of participants, we realized we had several similar problems:

- the materials were made available in advance but some people waited until the night before at the hotel to download them and discovered the hotel bandwidth couldn't handle it.

- the flash drive I handed off (though I did my best to make sure it was read/write on as many machines as possible) ended up as read-only on some machines. Thus it meant copying everything over to bring up the environment, which took close to a half hour for many people.

- even with all of this, I still had to hear (more times than I wanted to), "sorry, my Hyper-V manager is set up by my company. I can't mount the flash drive or open the files". Ugh! On the "bright side", that was a situation I couldn't control for or do anything about even if everything else worked flawlessly. Still, it was frustrating to have to tell so many people to buddy up with someone who could install everything.

So what did I learn taking my second arrow with this presentation?

1. The immediate install party will only ever work if everyone confirms that they are up and running well before the event. While the flash drives certainly help, they don't provide that large a time savings compared to just having everyone set up when they walk in.

2. The "set up" and "rationale" part of my talk... since it's a workshop, what I should be doing (I think) is getting into the nuts and bolts immediately and sharing the rationale around each part of the process as we get into it. As it was, my introductory material took about 40 minutes to get through before we fired up the IDE and explored the framework itself. That's too long. Granted, it's there so that people can get everything installed, but I think I can pace it better going forward.

3. Though the framework I offer is bare bones, I think I can comment better in the examples and should have some before-and-after examples that exercise different aspects and let people see them as a natural progression. Perhaps three Maven projects, each a further progression from the last. See the sketch just below for the flavor of commenting I mean.
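Something like this, for instance (the step text, page object, and data below are hypothetical placeholders of my own, not code from the workshop materials):

import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import static org.junit.Assert.assertEquals;

public class LoginSteps {

    // Hypothetical page object; in a real framework this is where the
    // Selenium/WebDriver plumbing would hide so the steps stay readable.
    private final LoginPage loginPage = new LoginPage();

    @Given("I am signed in as {string}")
    public void iAmSignedInAs(String username) {
        // Signing in through the UI on purpose here: the point of the example
        // is to show where setup happens, not to optimize it.
        loginPage.signIn(username, "not-a-real-password");
    }

    @Then("I see {string} on the welcome banner")
    public void iSeeOnTheWelcomeBanner(String expected) {
        // The assertion lives in the step, not the page object, so the page
        // object stays reusable across checks.
        assertEquals(expected, loginPage.welcomeBannerText());
    }

    // Minimal stub so the sketch stands on its own.
    static class LoginPage {
        private String user = "";
        void signIn(String username, String password) { this.user = username; }
        String welcomeBannerText() { return "Welcome, " + user + "!"; }
    }
}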

Don't get me wrong, I had a blast giving this workshop and I hope the participants enjoyed it as well. Still, I want to make it better going forward, and here's hoping I'll get another chance to present it at another conference without ending up taking the third arrow ;).