Monday, August 25, 2025

Building Your Tester Survival Guide with Dawn Haynes: A CAST Live Blog

For the past couple of days, as we have been getting CAST ready to go, I've done a number of supply runs, food stops, and bits of logistical troubleshooting with Dawn Haynes, which has been a common occurrence over my years with CAST. Dawn and I have frequently been elbows deep in dealing with the realities of these conferences. One funny thing we quipped about is the fact that any time we appear at conferences together as speakers, we are somehow almost always scheduled at the same time. I thought that was going to be the case this time as well, but NO, the schedule has allowed us to not overlap... for ONCE :)!!!

I first learned about Dawn through her training initiatives long before I was actually a conference attendee or speaker. She appeared as a training course provider in "Software Test and Performance" magazine back in the mid 2000s. Point being, Dawn has been an expert in our field for quite some time, and thus, if Dawn is presenting on a topic, it's a pretty good bet it's worth your time to sit and listen. Dawn is the CEO and resident Testing Yogini at PerfTestPlus, so if you ever get the chance for a firsthand experience with her, I suggest taking it. For now, you get me... try to contain your excitement ;).

One key area that Dawn and I are aligned on and wholeheartedly agree with is that we, individually, as testers, quality professionals, whatever we call ourselves, are responsible for curating our own careers. If you have been in testing for an extended period, you have probably already had to reinvent yourself at least once or twice. Dawn wants to encourage all testers and quality professionals to actively develop their survival instincts. Does that sound dire? It should... and it shouldn't. Dawn's point is that testing is a flexible field, and what is required one day may be old hat and not needed the next. As testers, we are often required to take on different roles and aspects. During my career, I have transitioned a few times into doing technical support over active day-to-day testing. That's a key part of my active career curation. I've actually been hired as a tech support engineer, only for them to realize that I have had a long career in software testing, and the next thing I know, I'm back to actively doing software testing full time. In some cases, I have done both simultaneously, and that has kept me very busy. My point is, those are examples of ways that testing skills can be applied in many different ways and in many different jobs.

Consider automating things, doing DevOps, running performance or security audits, or looking at areas your organization may not be actively working on and playing around with them. As you learn more and bring more to the table, don't be surprised if you are asked to do more of it, or to leverage those skills to learn about other areas.

Some areas are just not going to be a lot of fun all of the time. Sometimes it will take a while to get the skills you need. You may or may not get dedicated time to do and learn these things, but even if you can spend just 20 minutes a day, those efforts add up. Yes, you will be slow, unsure, and wary at first. You may completely suck at the thing that you want to or need to learn. You may have deficiencies in the areas you need to skill up on. The good news is that's normal. Everyone goes through this. Even seasoned developers don't know every language, or every aspect of the languages they work with. If you are not learning regularly, you will lose ground. I like Dawn's suggestion of a 33/33/33 approach: learn something for work, reach out to people, and train and take care of yourself. By balancing these three areas, we can be effective over time and have the health and stamina to actually leverage what we are learning. We run the risk of burning ourselves out if we put too much emphasis on one area, so take the time to balance them and also allow yourself to absorb your learning. It may take significant time to get good at something, but if you allow yourself the time (not to excess) to absorb what you are learning, odds are you will be better positioned to maintain and even grow those skills.
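As a back-of-the-envelope sanity check on the "20 minutes a day" idea (my math, not Dawn's): 20 minutes across a five-day work week is about an hour and forty minutes, and over a 50-week year that works out to more than 80 hours, which is two full work weeks of deliberate practice you would not otherwise have.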

One of the best skills to develop is being collaborative whenever possible. Being a tester is great, but being able to help get the work done in whatever capacity we can is usually appreciated. A favorite phrase on my end is, "There seems to be a problem here... how can I help?" Honestly, to date I've never been turned down when I've approached my teams with that attitude.

Glad to have the chance to hear Dawn for a change. Well done. I'm next :).   



We're Back: CAST is in Session: Opening Keynote on Responsible AI (Return of the Live Blog)

Hello everyone. It has been quite a while since I've been here (this feels like boilerplate at this point, but yes, it seems conferences and conference sessions are what get me to post most of the time now, so here I am :) ).

I'm at CAST. It has been many years since I've been here. Lots of reasons for that, but suffice it to say I was asked to participate, I accepted, and now I am at the Zion's Bankcorp Tech Center in Midvale, UT (a suburb/neighborhood of Salt Lake City). I'm doing a few things this go around:

- I'm giving a talk about Accessibility and Inclusive Design (Monday, Aug. 25, 2025)

- I'm participating in a book signing for "Software Testing Strategies" (Monday, Aug. 25, 2025)

- I'm delivering a workshop on Accessibility and Inclusive Design (Wednesday, Aug. 27, 2025)

In addition to all of that, I'm donning a Red Shirt and acting as a facilitator/moderator for several sessions, so my standard live blog posts for every session will by necessity be fewer this time, as I physically will not be able to cover them all. Nevertheless, I shall do the best I can.


The opening keynote is being delivered by Olivia Gambelin, and she is speaking on "Elevating the Human in the Equation: Responsible Quality Testing in the Age of AI".

Olivia describes herself as an "AI Ethicist" and she is the author of "Responsible AI". This of course brings us back to a large set of questions and quandaries. A number of people may think of AI in the scope of LLMs like ChatGPT or Claude, and many may be thinking, "What's the big deal? It's just like Google, only the next step." While that may be a common sentiment, it's not the full story. AI is creating a much larger load on our power infrastructure. Huge datacenters are being built out that are making tremendous demands on power, on water consumption, and on pollution/emissions. It's argued that the growth of AI will effectively consume more of our power grid resources than if we were to convert everyone over to electric vehicles. Thus, we have questions to ask that go beyond just the fact that we are interacting with data and digital representations of information.

There's a common refrain that "just because we can do something doesn't necessarily mean that we should". While that is a wonderful sentiment, we have to accept that that ship has sailed. AI is here, it is present in both trivial and non-trivial uses, with all of the footprint issues that entails. All of us will have to wrestle with what AI means to us, how we use it, and how we might be able to use it responsibly. Note, I am thus far talking about a specific aspect of environmental degradation. I'm not even getting into the ethical concerns around how we actually look at and represent data.

AI is often treated as a silver bullet, something that can help us get answers for areas and situations we've perhaps not previously considered. One of the bigger questions/challenges is how we get to that information, and who/what is influencing it. AI can be biased based on the data sets it is provided. Give it a limited amount of data and it will give a limited set of results, based on the information it has or how that information was introduced/presented. AI as it exists today is not really "intelligent". It is excellent pattern recognition and predictive text presentation. It's also good at repurposing things that it already knows about. Do you want to keep a newsletter fresh with information you present regularly? AI can do that all day long. We can argue the value-add of such an endeavor, but I can appreciate that for those who have to pump out lots of content on a regular basis, this is absolutely a game changer.
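To make the "limited data in, limited answers out" point concrete, here's a toy sketch of my own (purely illustrative, not anything from Olivia's talk): a bare-bones next-word predictor that can only echo whatever patterns its tiny, deliberately skewed training text contains.

```python
from collections import Counter

# A toy "predictive text" model: count which word follows which.
# It has no understanding; it can only reflect its training data.
def train(corpus):
    words = corpus.split()
    pairs = Counter()
    for a, b in zip(words, words[1:]):
        pairs[(a, b)] += 1
    return pairs

def predict_next(model, word):
    candidates = [(b, n) for (a, b), n in model.items() if a == word]
    return max(candidates, key=lambda c: c[1])[0] if candidates else None

# Feed it a deliberately narrow, biased corpus...
model = train("testing is manual testing is manual testing is slow")

# ...and it confidently reflects that narrowness back at us.
print(predict_next(model, "is"))  # -> "manual" (2 occurrences beat "slow" at 1)
```

Scale that idea up by a few billion parameters and the mechanics get vastly more sophisticated, but the underlying dependency on what the training data happened to contain does not go away.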

There are of course a number of areas that are significantly more sophisticated, with data that is much more pressing. Medical imaging and interpreting the details provided is something that machines can crunch in a way that would take a group of humans a lot of time to do with their own eyes and ears. Still, lots of issues can come to bear because of these systems. For those not familiar with the "Texas Sharpshooter Fallacy", it's basically the idea of someone shooting a lot of shots into the side of a barn over time. If we draw a circle around the largest cluster of bullet holes, we can infer that whoever fired those shots was a good marksman. True? Maybe not. We don't know how long it took to shoot those bullets, how many shots are outside of the circle, the ratio of bullets inside vs. outside of the circle, etc. In other words, we could be making assumptions based on how we group things, assumptions our biases and prejudices lean on. Having people look at these results can help us counter those biases, but it can also introduce new ones based on the people who have been asked to review the data. To borrow an old quote that I am paraphrasing because I don't remember who said it originally, "We do not see the world for what it is, we see it for who we are". AI doesn't counteract that tendency, it amplifies it, especially if we are specifically looking for answers that we want to see.
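If you want to see the sharpshooter fallacy in action, here's a quick simulation (again my own sketch, purely illustrative): fire completely random "shots" at a wall, then draw the target circle after the fact around the densest cluster. The circle always looks impressive, even though there is no marksman at all.

```python
import random

# Simulate 200 completely random "shots" on a 10x10 barn wall.
random.seed(42)
shots = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(200)]

# After the fact, find the shot with the most neighbors within radius 1
# and paint our "bullseye" there -- exactly what the sharpshooter does.
def shots_in_circle(center, points, radius=1.0):
    cx, cy = center
    return [p for p in points
            if (p[0] - cx) ** 2 + (p[1] - cy) ** 2 <= radius ** 2]

best_center = max(shots, key=lambda s: len(shots_in_circle(s, shots)))
cluster = shots_in_circle(best_center, shots)

# A radius-1 circle covers ~3% of the wall, so a target chosen in
# advance should catch ~6 of 200 random shots. The after-the-fact
# circle catches noticeably more, despite the "shooter" being pure chance.
print(f"{len(cluster)} of {len(shots)} shots fall inside the hand-picked circle")
```

The point is not the numbers themselves; it's that choosing the grouping after looking at the data manufactures a pattern, which is precisely the trap to watch for when an AI (or a human) surfaces a cluster in medical images or anything else.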

Olivia is arguing, convincingly, that AI has great potential but also significant liabilities. It is an exciting aspect of technology, but it is also difficult to pin down as to what it actually provides. Additionally, based on its pattern-matching capabilities, AI can be wrong... a lot... but as a friend of mine is fond of saying, "The danger of AI is not that it is often wrong, it's that it is so confidently wrong". It can lull one into a false sense of authority or a false read on a situation. Things can seem very plausible and sensible based on our own experiences, but the data we are getting can be based on thin air and hallucinations. If those hallucinations scratch a particular itch of ours, we are more inclined to accept the findings/predictions that match our world view. More to the point, we can put our finger on the scale, whether we mean to or not, to influence the answers we get. Responsible AI would make efforts to help combat these tendencies, to help us not just get the answers that we want to have, but to challenge and refute the answers we are receiving.

From a quality perspective, we need to have a direct conversation as to what/why we would be using AI in the first place. Is AI a decent answer for writing code in ways we might not be 100% familiar with? Sure. It can introduce aspects of code that we might not know well. That's a plus, and it's a danger. I can question and check the quality of output for areas I know about or have solid familiarity with. I am less likely to question areas where I lack knowledge, or to actually look to disprove or challenge the findings.

For further thoughts and diving deeper on these ideas, I plan to check out "Responsible AI: Implement an Ethical Approach in Your Organization" (Kogan Page Publishing). Maybe y'all should too :).