Wednesday, July 28, 2021

QA Open Season w/ panel people (@Xpanxion #QASummit 2021) : Live Blog

All right, here comes the last formal activity of the day. We are all gathered for a Q&A shootout with a panel of six participants:

  • Rachel Kibler
  • Carlos Kidman
  • Greg Paskal
  • Jason Bryant
  • Marcus Merrell
  • Matthew Heusser
The questions for this session were generated via Sli.do, and we've covered a number of them, such as:

"What is the path for software testers going forward if you may not specifically be aiming towards being a technical tester?"

The general consensus is that there are so many possible avenues to explore and get involved in that, even if you are not suited for automation, you need not worry that your career is over or that you will be replaced. If you apply your brain effectively in an organization and bring value with your efforts, you will run circles around any computer and script. Maybe not fast circles, but circles nonetheless.

What is the difference between the hype and reality behind AI and ML?

In general, the hype around replacing the human with a computer seems to inspire investors much more than it inspires organizations. AI and ML efforts should focus on data science, so that we can actually learn from the data we have already accumulated. Now that could be valuable (and I very much agree :) ).

How does your organization demonstrate the ROI on the testing investment?

The consensus is that if you lead with risk, the odds are that the C suite will start paying attention. Many ideas may take precedence at random times, but talk about the actual risks, lead with risk, and the C-level folks will hear you.

What are some ways to get testers to think more about quality?

Rachel voiced that she has a quality coach on every team but not necessarily a tester on every team. IOW, the role of tester may or may not be critical on a given team, but the role of quality itself certainly is.

A question that I will in no way be able to repeat because it was too verbose...?

Learn to ask better questions and learn when to avoid useless/needless buzzwords.

We interrupt this program to have a company jingle breakdown (you had to be there ;) ).

Can Unit Testing Be Used as Integration Testing?

Seems the consensus is that they are two different things. It's what you do with them that matters. Add two and two in your head: that's a unit test. Check the time... let me grab my watch... that's an integration test. Works for me :).
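
To make that analogy concrete, here's a tiny pytest-style sketch (my own illustration, not anything shown in the session): the first test depends on nothing but the function itself, while the second depends on an external component, the system clock.

```python
from datetime import datetime


def add(a, b):
    """The code under test: pure logic with no external dependencies."""
    return a + b


def test_add():
    # Unit test: "add two and two in your head" -- nothing outside
    # the function itself is involved.
    assert add(2, 2) == 4


def test_clock_is_past_2021():
    # Integration test: "check the time... let me grab my watch" --
    # this touches a component outside the code (the system clock).
    assert datetime.now().year >= 2021
```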

What is the #1 issue facing the QA world currently? What is the hottest trend?

The biggest issue is not testing the right thing. This extends to testing on the devices people actually use (including mobile devices and Internet of Things devices). The biggest trend is ignoring failures and moving on as though there are no issues, and that is problematic, to say the least. Observability is a hot property, and we are just at the beginning of what might be possible.

How do we get our management teams to focus on iOS testing (or mobile testing)?

It seems that in several organizations iOS is either not being actively tested or is a distant priority compared to other infrastructures. The answers tend to center around risk and the fact that things break. Quantify how bad things could be if iOS interactions were compromised or made unusable. My guess is that a lot of users would be locked out, which would effectively stop a revenue stream, and that should light a fire under some people.

And that's a wrap.... oh, and Marvel (LOL!)

How Holistic Testing Affects Product Quality with @janetgregoryca (@Xpanxion #QASummit 2021) : Live Blog

We're down to our final keynote and it's a pleasure to see Janet Gregory, if only virtually, this year. Since the border situation between the USA and Canada is still in question (and considering the outbreaks we are seeing, I don't blame her in the slightest), we're still getting to hear Janet talk about the value of DevOps and the fact that it genuinely works when the teams in question put in the time and energy to make it work.

Quality is always a vague and odd thing to get one's head around. What makes something good to one person may not be so excellent to someone else. In some areas it is objective, but much of the time it is subjective and not even related to the end product itself. Janet uses the example of a cup of coffee. For some, the best coffee is experienced black, so that every aspect of the flavor of the beans can be examined. For others, the best-crafted iced frappuccino with all of the extra flavors makes the experience a quality one. Does one approach invalidate the other? It really doesn't, but it matters a lot to the person in question at that point in time. Quality is what matters to the person experiencing the item, in the way they want to experience it.

So, how do you build quality into your product? In many cases, quality is not just one attribute but many that come together. Some may argue that Lamborghini sports cars are of high quality. I may or may not agree, but the cost of a Lamborghini puts it well out of the range where I will ever find out. Is the level of quality a consideration if you can't consider paying for it? If something is super affordable, does that automatically mean the product is of low quality? Not necessarily. I'm reminded of Splice, a video editing app that I use on my phone. Granted, I pay for it (about $3 a week), but the regularity of updates and the way the developers continually improve the product make it worth that expense for me. The price isn't so much that it discourages me, and the app provides enough value that I'm willing to keep paying for it.

Holistic Testing focuses on the idea that testing happens all the time. To that end, Janet is not a fan of the terms shift-left or shift-right testing. The real question is, "what do you mean you are not doing active testing at every stage of the process?" It does help to know all the areas where it makes sense to perform testing and why/when we would do it. It may honestly never have occurred to people that monitoring and analytics after a product is released fit into testing, and that testing can actually learn from these areas to help improve the product.

One of the best phrases a tester can use/encourage is "can you show me?" I find that when working with developers and testers, many misconceptions and miscommunications can be avoided just by asking this question. Using A/B testing, feature flags, or toggles to turn features on or off allows us to do testing in production without it being a scary proposition. We also get to observe what our customers actually do and use, and from that we can learn which features are actually used, or for that matter, even wanted in the first place. We may also discover that features we develop to serve one purpose are actually used in a different manner or for a different purpose than we intended. With that kind of discovery, we can learn how to better hit the mark or provide features that we may not even be totally aware are needed.
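
To make the feature flag/toggle idea a little more concrete, here's a toy sketch in Python (my own, not from Janet's talk; the flag name, rollout percentage, and checkout flows are all made up). The flag gates a new code path so only a small, deterministic slice of users sees it, which is what makes testing and observing in production far less scary:

```python
import hashlib

# In-memory flag config; a real system would pull this from a flag service.
FLAGS = {"new_checkout": {"enabled": True, "rollout_percent": 10}}


def is_enabled(flag_name: str, user_id: str) -> bool:
    """Deterministically bucket a user into (or out of) a gradual rollout."""
    flag = FLAGS.get(flag_name)
    if not flag or not flag["enabled"]:
        return False
    digest = hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < flag["rollout_percent"]


def checkout(user_id: str) -> str:
    # The toggle lets the new path run in production for a small cohort
    # we can observe, while everyone else stays on the proven path.
    if is_enabled("new_checkout", user_id):
        return "new checkout flow"
    return "legacy checkout flow"
```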

The key thing to realize is that there are testing initiatives that happen at every level of software development. It's important for us as organizations, not just as testers, to learn how to leverage that testing focus at all levels and be able to learn, experiment, confirm or refute, and then experiment again. It will take time, it will take involvement, it will take investment, and it will take commitment. Still, the more we are able to leverage these testing areas, the better our overall quality approach has the potential to be.

Managing The Test Data Nightmare with @AutomationPanda (@Xpanxion #QASummit 2021) : Live Blog

Wooo hoooo! The second talk is done, and I am officially free of obligations today :). However, the conference moves on, and this session covers an area that I personally struggle with. My day job involves a lot of data transformation, so I have a possibly endless range of test data that can be generated and transformed.

Test data shows up everywhere: not just the data needed to make the test work, but your browser choice, the artifacts you need to create, before-and-after dependencies, and so on.


Static data is often created before testing. It's good for slow or complicated data setup. It may make tests run faster, but it can make tests brittle as the data changes, and it may turn stale over time. Dynamic data gets created at run time by the tests themselves. It can avoid becoming brittle and is exclusive to the test that creates it, but it can slow down individual tests due to the creation overhead, and it will need to be cleaned up after the test is finished.
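
Here's a rough pytest-flavored sketch of the dynamic side of that tradeoff (my own illustration, not from the talk; the in-memory account "store" is a stand-in for a real system). The fixture creates fresh data at run time and cleans it up when the test finishes:

```python
import uuid

import pytest

# Static data: created before the run and shared by many tests. Fast to use,
# but it can drift or go stale as the system changes.
STATIC_ACCOUNT = {"id": "acct-001", "balance": 100}

# A stand-in "system under test": a simple in-memory account store.
_ACCOUNTS = {}


def create_account(balance):
    account = {"id": str(uuid.uuid4()), "balance": balance}
    _ACCOUNTS[account["id"]] = account
    return account


def delete_account(account_id):
    _ACCOUNTS.pop(account_id, None)


@pytest.fixture
def dynamic_account():
    # Dynamic data: created fresh at run time and exclusive to this test,
    # so it can't go stale or collide with other tests...
    account = create_account(balance=100)
    yield account
    # ...as long as we remember to clean up afterward.
    delete_account(account["id"])


def test_withdrawal(dynamic_account):
    dynamic_account["balance"] -= 40
    assert dynamic_account["balance"] == 60
```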

The truth is, I probably initially do about 70% manually configured data and about 30% dynamic/automated data creation. The intermediate data that I create is always created dynamically (that's the nature of data transformation tests). Ultimately, my goal is to be able to take data from our database and dynamically generate flat files.
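
For what it's worth, the shape of that goal looks something like this (a sketch of my own; SQLite stands in for our real database, and the table and column names are placeholders):

```python
import csv
import sqlite3


def export_flat_file(db_path, out_path):
    """Pull rows from the database and emit a pipe-delimited flat file
    that downstream transformation tests can consume."""
    conn = sqlite3.connect(db_path)
    try:
        rows = conn.execute("SELECT id, name, amount FROM transactions")
        with open(out_path, "w", newline="") as out:
            writer = csv.writer(out, delimiter="|")
            writer.writerow(["id", "name", "amount"])  # header row
            writer.writerows(rows)
    finally:
        conn.close()
```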

Additionally, there are a variety of test control inputs that we need to keep track of: our browser, our destination URL, basically all the information that can be entered and routed. There are also output references that we may or may not know anything about.
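
In practice, that means pulling those control inputs out of the tests themselves and into one place, something like this (my sketch; the environment variable names are invented):

```python
import os
from dataclasses import dataclass


@dataclass
class TestConfig:
    browser: str
    base_url: str
    timeout_seconds: int


def load_config() -> TestConfig:
    # Defaults keep the suite runnable locally; CI overrides via environment.
    return TestConfig(
        browser=os.environ.get("TEST_BROWSER", "chrome"),
        base_url=os.environ.get("TEST_BASE_URL", "https://staging.example.com"),
        timeout_seconds=int(os.environ.get("TEST_TIMEOUT", "30")),
    )
```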

The Quality Mind with Gwen Iarussi (@Xpanxion #QASummit 2021) : Live Blog

Well, that was fun. I just delivered a talk about Self-Healing Automation (spoiler: not really; it's much more opportunistic, using agents to help build a dynamic locator switch statement, but that's nowhere near as cool-sounding ;) ). Good group, great interactions, thanks to those who attended.

The next session is with Gwen Iarussi and we're talking about how to build a Quality mindset. Gwen focused on the past forty years and how we approached quality and software delivery, how the tools and processes we used developed and grew over those decades. The challenges we face today are similar but definitely "faster". We need to consider scalability as organizations often grow quickly and what worked one year will be wholly inadequate the next. More data, more interactions, more people, more, more, more! Always more!!! With this increase in infrastructure, knowledge of tools and tooling that we had last year is out of date already (not completely but there's a lot that happens in any given year).

"The quality of your thinking determines the quality of your life" -- A.R. Bernard

When we talk about replacing QA with automation, we are talking about replacing the human brain with machines. It's important to realize that machines are repetitive and exact but remarkably stupid unless told exactly what to do. Humans are slower and less suited to repetitive tasks, but we can come up with so many interesting avenues to explore. Our brains are pattern-recognition machines: we are very quick to catch on to patterns and then anticipate what comes next.

We can learn and believe a lot of things, and how we approach the way we learn and focus on tasks either gives us the impetus to succeed or holds us back. Neuroscience has shown that we all bring unique perspectives to our interactions. Our experiences inform both how we look at quality and how we approach testing.

Some first questions to ask:

  • Who uses our product? What is most important to them?
  • What is critical for my company's survival?
  • What tech do we need to invest in, and do we understand what we already have?
  • What tools and heuristics can help me make sense of what I am seeing?

Continuous Learning is critical to success. When we stop learning, we stop progressing. Areas like Systems Thinking, Design Theory, Human Psychology, Learning about the world, Cognition Games, and Testing Disciplines/Methods are all important areas to study and understand.

Knowing these things is not enough. We have to actually use what we know. That means we need to be at the table, in "the room where it happens". In short, we need to be engaged and involved. If we are not, it's our loss.

On The Road Again: Speaking Today at the @Xpanxion #QASummit (Live Blog)

Hi all!

I confess I have been struggling to keep up with this blog. I just haven't felt mentally in it. Additionally, it took me a little while to get things sorted out with my Twitter handle (in a neat twist of fate, the person who took the account decided to give it back to me, so I will be putting mkltesthead back into my bio again. It took me a while to make sure that the gift of my account back didn't come with some "extra stuff" that would have made my reality unpleasant, but thankfully that was not the case).

A couple of weeks back I was asked if I'd like to speak at the Xpanxion QA Summit being held in South Jordan, Utah, USA. Seeing as I had a number of friends participating in the program and I hadn't spoken in a live setting in nearly two years, I decided it was time to say "yes" and get back to live speaking. That is part of what I will be doing today. I will be giving two talks (actually, I'll be giving the same talk twice) about "Self-Healing Automation", or more to the point, "what self-healing automation actually is (in most cases) and how it's basically a switch statement that rebuilds itself".
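
For the curious, the core of that idea looks something like the sketch below (a bare-bones Python/Selenium illustration of my own; the element name and locator values are hypothetical). Each logical element gets a ranked list of locators; when the primary fails, the first fallback that works gets promoted to the front, which is how the "switch statement" rebuilds itself:

```python
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.by import By

# Ranked locator candidates for each logical element, best guess first.
LOCATORS = {
    "login_button": [
        (By.ID, "login-btn"),                     # preferred: fast, stable
        (By.CSS_SELECTOR, "button.login"),        # fallback 1
        (By.XPATH, "//button[text()='Log in']"),  # fallback 2: most brittle
    ],
}


def find(driver, name):
    """Walk the locator list and 'heal' by promoting the first one that works."""
    for index, locator in enumerate(LOCATORS[name]):
        try:
            element = driver.find_element(*locator)
            if index > 0:
                # The primary locator has drifted: log it and move the working
                # locator to the front so future lookups try it first.
                print(f"[heal] {name}: promoted {locator}")
                LOCATORS[name].insert(0, LOCATORS[name].pop(index))
            return element
        except NoSuchElementException:
            continue
    raise NoSuchElementException(f"No locator for '{name}' matched the page")
```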

The first talk is being given by Andrew Brown and the topic is "Why Do People Break Software Projects". Andrew predicts that in 2031 about 20% of software projects will fail. Many will be late or over budget. Some projects will take crazy risks. Many will work in silos. They will develop too much technical debt, they will add more processes that have no effect on quality, and their regression suites will be filled with junk. Sounds like today, huh? Well, that's the point. We've had these same problems for fifty-plus years. What are we missing? First, there's a technical part, and that changes all the time, but there is also a people/human part, and those problems don't really change. What's worse, we don't change them because we don't really understand those issues. The key thing to realize is that the human brain was never really designed to develop software. The fact that we can do it at all is kind of remarkable. The human mind is amazingly adaptable, but the technology we create quickly outstrips our effective understanding of it. Our thought processes have deep evolutionary roots, and many of our thoughts are much more primitive, tribal, and segmented. We are focused on survival and reproduction, and those things we do quite well, but they are far removed from the thought processes that help us develop software.

There are a lot of historical fears and issues; some might call this the lizard brain. Those fears and issues get to the heart of being human and why we struggle to get things done effectively. Often, we are overconfident. We see things the way we are, not the way they actually are.

Overall, this was a neat discussion with some interesting ideas shared. I see, and agree, that the areas we need to spend more time on are not the technological issues but the human ones.