
Monday, August 25, 2025

Building Your Tester Survival Guide with Dawn Haynes: A CAST Live Blog

For the past couple of days, as we have been getting CAST ready to go, I've done a number of errand runs, food stops, and bits of logistical troubleshooting with Dawn Haynes, which has been a common occurrence over my years with CAST. Dawn and I have frequently been elbows deep in dealing with the realities of these conferences. One funny thing we quipped about is that any time we appear at conferences together as speakers, we somehow end up scheduled at the same time (or at least a lot of the time). I thought that was going to be the case this time as well but NO, the schedule has allowed us to not overlap... for ONCE :)!!!

I first learned about Dawn through her training initiatives long before I was actually a conference attendee or speaker. She appeared as a training course provider in "Software Test and Performance" magazine back in the mid 2000s. Point being, Dawn has been an expert in our field for quite some time, and thus, if Dawn is presenting on a topic, it's a pretty good bet it's worth your time to sit and listen. Dawn is the CEO and resident Testing Yogini at PerfTestPlus, so if you want to get a firsthand experience with her, I suggest doing it if you can. For now, you get me... try to contain your excitement ;).

One key area that Dawn and I are both aligned on and wholeheartedly agree with: as individual testers, quality professionals, whatever we call ourselves, we are responsible for curating our own careers, and if you have been in testing for an extended period, you have probably already had to reinvent yourself at least once or twice. Dawn wants to encourage all testers and quality professionals to actively develop their survival instincts. Does that sound dire? It should... and it shouldn't. Dawn's point is that testing is a flexible field, and what is required one day may be old hat and not needed the next. As testers, we are often required to take on different roles and aspects. During my career, I have transitioned a few times into doing technical support over active day-to-day testing. That's a key part of my active career curation. I've been hired as a tech support engineer only for the company to realize that I have had a long career in software testing, and the next thing I know, I'm back to actively doing software testing full time. In some cases, I have done both simultaneously, and that has kept me very busy. My point is, those are examples of ways that testing skills can be applied in many different ways and with many different jobs.

Consider automating things, doing DevOps, running performance or security audits, or looking at areas your organization may not be actively working on and playing around with those areas. As you learn more and bring more to the table, don't be surprised if you are asked to do more of it or to leverage those skills to learn about other areas.

Some areas are just not going to be a lot of fun all of the time. Sometimes you will take a while to get the skills you need. You may or may not get the time to do and learn these things, but even if you can just spend 20 minutes a day, those efforts add up. Yes, you will be slow, unsure, and wary at first. You may completely suck at the thing that you want to or need to learn. You may have deficiencies in the areas that you need to skill up on. The good news is that's normal. Everyone goes through this. Even seasoned developers don't know every language or every aspect of the languages they work with. If you are not learning regularly, you will lose ground. I like Dawn's suggestion of a 33/33/33 approach: learn something for work, reach out to people, and train and take care of yourself. By balancing these three areas, we can be effective over time and have the health and stamina to actually leverage what we are learning. We run the risk of burning ourselves out if we put too much emphasis on one area, so take the time to balance those areas and also allow yourself to absorb your learning. It may take significant time to get good at something, but if you allow yourself the time (not to excess) to absorb what you are learning, odds are you will be better positioned to maintain and even grow those skills.

One of the best skills to develop is to be collaborative whenever possible. Being a tester is great, but being able to help get the work done in whatever capacity we can is usually appreciated. A favorite phrase on my end is, "There seems to be a problem here... how can I help?" Honestly, I've never to date been turned down when I've approached my teams with that attitude.

Glad to have the chance to hear Dawn for a change. Well done. I'm next :).   



We're Back: CAST is in Session: Opening Keynote on Responsible AI (Return of the Live Blog)

Hello everyone. It has been quite a while since I've been here (this feels like boilerplate at this point, but yes, it feels like conferences and conference sessions are what get me to post most of the time now, so here I am :) ).

I'm at CAST. It has been many years since I've been here. Lots of reasons for that, but suffice it to say I was asked to participate, I accepted, and now I am at the Zions Bancorp Tech Center in Midvale, UT (a suburb/neighborhood of Salt Lake City). I'm doing a few things this go around:

- I'm giving a talk about Accessibility and Inclusive Design (Monday, Aug. 25, 2025)

- I'm participating in a book signing for "Software Testing Strategies" (Monday, Aug. 25, 2025)

- I'm delivering a workshop on Accessibility and Inclusive Design (Wednesday, Aug. 27, 2025)

In addition to all of that, I'm donning a Red Shirt and acting as a facilitator/moderator for several sessions, so my standard live blog posts will by necessity be fewer this go around, as I physically will not be able to cover every session. Nevertheless, I shall do the best I can.


The opening keynote is being delivered by Olivia Gambelin, and she is speaking on "Elevating the Human in the Equation: Responsible Quality Testing in the Age of AI".

Olivia describes herself as an "AI Ethicist" and she is the author of "Responsible AI". This of course brings us back to a large set of questions and quandaries. A number of people may think of AI in the scope of LLMs like ChatGPT or Claude and be thinking, "What's the big deal? It's just like Google, only the next step." While that may be a common sentiment, it's not the full story. AI is creating a much larger load on our power infrastructure. Huge datacenters are being built out that are making tremendous demands on power and water consumption, and adding to pollution/emissions. It's argued that the growth of AI will effectively consume more of our power grid resources than if we were to entirely convert everyone over to electric vehicles. Thus, we have questions we need to ask that go beyond just the fact that we are interacting with data and digital representations of information.

The common refrain is "just because we can do something doesn't necessarily mean that we should". While that is a wonderful sentiment, we have to accept the fact that that ship has sailed. AI is here, it is present in both trivial and non-trivial uses, with all of the footprint issues that entails. All of us will have to wrestle with what AI means to us, how we use it, and how we might be able to use it responsibly. Note, I am thus far talking about a specific aspect of environmental degradation. I'm not even getting into the ethical concerns when it comes to how we actually look at and represent data.

AI is often treated as a silver bullet, something that can help us get answers for areas and situations we've perhaps not previously considered. One of the bigger questions/challenges is how we get to that information, and who/what is influencing it. AI can be biased based on the data sets it is provided. Give it a limited amount of data and it will give a limited set of results based on the information it has or how that information was introduced/presented. AI as it exists today is not really "intelligent". It is excellent pattern recognition and predictive text presentation. It's also good at repurposing things it already knows about. Do you want to keep a newsletter fresh with information you present regularly? AI can do that all day long. We can argue the value add of such an endeavor, but I can appreciate that for those who have to pump out lots of data on a regular basis, this is absolutely a game changer.

There are of course a number of areas that are significantly more sophisticated and where the data is much more pressing. Medical imaging and interpreting the details provided is something that machines can crunch in a way that would take a group of humans a lot of time to do with their eyes and ears. Still, lots of issues can come to bear because of these systems. For those not familiar with the "Texas Sharpshooter Fallacy", it's basically the idea of someone shooting a lot of shots into the side of a barn over time. If we draw a circle around the largest cluster of bullets, we can infer that whoever shot those shots was a good marksman. True? Maybe not. We don't know how long it took to shoot those bullets, how many shots are outside of the circle, the ratio of bullets inside vs. outside of the circle, etc. In other words, we could be drawing conclusions based on how we group the data, with our own biases and prejudices leaning on the result. Having people look at these results can help us counter those biases, but it can also introduce new ones based on the people who have been asked to review the data. To borrow an old quote that I am paraphrasing because I don't remember who said it originally, "We do not see the world for what it is, we see it for who we are". AI doesn't counteract that tendency, it amplifies it, especially if we are specifically looking for answers that we want to see.
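As a quick aside, this fallacy is easy to demonstrate with a few lines of code. Here's a minimal sketch of my own (not part of Olivia's talk): purely random "shots", with the circle drawn after the fact around whichever spot happens to be densest. The barn size, shot count, and circle radius are all arbitrary choices.

```python
import random

random.seed(42)

# 500 completely random shots on a 10x10 "barn wall"
shots = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(500)]

def hits_within(center, radius):
    """Count shots falling inside a circle of the given radius."""
    cx, cy = center
    return sum((x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2 for x, y in shots)

# The fallacy: AFTER the shots land, search for the best place to draw
# a circle of radius 1, then present that cluster as evidence of skill.
best_center = max(
    ((x / 2, y / 2) for x in range(21) for y in range(21)),
    key=lambda c: hits_within(c, 1.0),
)

inside = hits_within(best_center, 1.0)
expected = 500 * (3.14159 * 1.0 ** 2) / 100  # hits a circle that size gets by chance
print(f"Best circle: {inside} hits vs. ~{expected:.0f} expected by chance")
```

Random noise plus a well-chosen circle looks like marksmanship; the same thing happens when we let the data pick the hypothesis.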

Olivia is arguing, convincingly, that AI has great potential but also has significant liabilities. It is an exciting aspect of technology, but it is also difficult to pin down as to what it actually provides. Additionally, based on its pattern matching capabilities, AI can be wrong... a lot... but as a friend of mine is fond of saying, "The danger of AI is not that it is often wrong, it's that it is so confidently wrong". It can lull one into a false sense of authority or reality of a situation. Things can seem very plausible and sensible based on our own experiences, but the data we are getting can be based on thin air and hallucinations. If those hallucinations scratch a particular itch of ours, we are more inclined to accept the findings/predictions that match our world view. More to the point, we can put our finger on the scale, whether we mean to or not, to influence the answers we get. Responsible AI would make efforts to help combat these tendencies, to help us not just get the answers that we want to have but to challenge and refute the answers we are receiving.

From a quality perspective, we need to have a direct conversation as to what/why we would be using AI in the first place. Is AI a decent answer for looking at writing code in ways we might not be 100% familiar with? Sure. It can introduce aspects of code that we might not be super familiar with. That's a plus, and it's a danger. I can question and check the quality of output for areas I know about or have solid familiarity with. I am less likely to question areas I am lacking knowledge in, or to actually look to disprove or challenge the findings.

For further thoughts and diving deeper on these ideas, I plan to check out "Responsible AI: Implement an Ethical Approach in Your Organization" (Kogan Page Publishing). Maybe y'all should too :).


Saturday, May 20, 2023

Yes, you can test anywhere, even the gym

Taking a short break from the Accessibility content to ask some “what if” questions. Note, I’m not doing this to shame anyone or make broader comments, just to show that interesting things can be found everywhere :).

Thankfully, my gym is not very expensive, and during this "semi-enforced woodshedding period" I am getting to experience, I can still go to my local gym and get some much-needed body and mind adjustment. In any event, there's a lot of equipment in the gym with a mix of free weights, machines, cardio, and other apparatus to take advantage of.

With that, here’s my gym’s kettlebell rack:

My gym's kettlebell rack, with medicine balls and slam balls as well

And over here is a bigger apparatus with a number of pulleys and a heavy bag. 


Separate equipment rack with pulleys, rope, and heavy bag


What it also has on it are these QR codes. 

QR codes on the heavy bag rack

These are used for a variety of functions, including bringing up tracking and exercise routines to use. One of them is for the heavy bag… which makes sense, as there is a heavy bag here.


But here’s something interesting. What other code is here?

Connect Kettlebell QR Code


Huh. A kettlebell code. Okay, so let me scan that and pick up… oh, wait!

Turning from the kettlebell QR code to see the kettlebell rack, on the other side of the gym

That’s quite a distance between the kettlebell code and the actual kettlebells. Is there a problem here?

Again, I’m approaching this as a “what if” and trying to think why this would be arranged this way and it’s not as ridiculous as it looks on the surface. This gym has a sister facility in the East Bay (actually, several) and the one I attend from time to time has both of these pieces of equipment. There, these pieces are facing each other and in that case, having the QR code on the pole and turning around to grab the kettlebell makes perfect sense. However, my resident gym layout doesn’t allow for that, so they had to split these pieces of equipment up.

My point is, there may be perfectly good reasons why something is set up the way it is. Whether or not something is an issue is up to interpretation. I definitely don’t think the distance between the two is a feature. Still, the fix could be easy and inexpensive (heck, next to free). Merely taking a picture of the QR code, printing it, and taping it to the kettlebell rack would be a huge benefit. Ordering a more permanent sticker I could not imagine costing more than a couple of dollars.

So there you go, weird and random quality and testing musings on a Saturday morning. You’re welcome :).

Friday, May 6, 2022

Performance-Driven Development: An #InflectraCON Live Blog

As was once written, all good things must come to an end and as I have to do some interesting maneuvering to make sure I don't arrive late for my flight, this is the last talk I will be attending and my last missive for InflectraCON. It's been a lot of fun being here and here's hoping they'd like to have me back again next year :).

For the last talk, I'm listening to Mark Tomlinson talk about Performance (What? Shocker! (LOL!)). Specifically, he's talking about Performance-Driven Development. Sounds a bit like Test-Driven Development? Yeah, that's on purpose.

The idea behind Test-Driven Development (TDD) (Beck 2003; Astels 2003) is "test-first". You write a test, you then write just enough production code to PASS that test, then you refactor the code. Just as testing has "shifted left", performance is likewise shifting left.
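For anyone who hasn't seen that cycle in practice, here's a minimal sketch (my illustration, not Mark's; the slugify example is hypothetical) of red/green/refactor using Python's built-in unittest:

```python
import unittest

# Step 1 (red): this test is written BEFORE slugify() exists, so it fails.
class TestSlugify(unittest.TestCase):
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

# Step 2 (green): write just enough production code to pass the test.
def slugify(title):
    return title.lower().replace(" ", "-")

# Step 3 (refactor): clean the code up while keeping the test green
# (e.g. collapsing repeated whitespace), then repeat the cycle.

if __name__ == "__main__":
    unittest.main()
```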

Can the adaptive concepts used in TDD be applied to performance engineering and scalability? The answer is yes, and the result is "Performance-Driven Development" (PDD).

In short, we need to think through the non-functional requirements, the system design, and all of those "ilities" before we write any code. In other words, as we develop our features, we need to not just pass tests but have the performance constraints defined and confirmed as development progresses.
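Here's my own guess at what that could look like in code: a minimal sketch where the latency budget sits right next to the functional assertion (the 50 ms budget and the search_catalog function are hypothetical stand-ins, not from Mark's talk).

```python
import time

def search_catalog(query):
    # Stand-in for the real code under development.
    return [item for item in ["red shirt", "blue shirt"] if query in item]

def test_search_meets_latency_budget():
    start = time.perf_counter()
    results = search_catalog("shirt")
    elapsed_ms = (time.perf_counter() - start) * 1000

    assert results == ["red shirt", "blue shirt"]  # functional requirement
    assert elapsed_ms < 50, f"took {elapsed_ms:.1f} ms, budget is 50 ms"  # performance requirement

if __name__ == "__main__":
    test_search_meets_latency_budget()
    print("search is correct and within budget")
```

The point is that a performance regression fails the build the same way a functional regression does, instead of being discovered after release.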

Intriguing? Yes, but I'm wondering how this can be effectively applied. Part of the challenge that I see is that most of the sites I have seen use TDD tend to do so in a layered approach. We start with small development systems and then expand outward to demo, staging, and then production (at least where I work). It would be interesting to see how PDD would scale on each machine. Is there an obvious point where one would be seen to be working well and then as we make the step to jump up from demo to staging or staging to production, what tier do we see issues (or do we see issues immediately)? I confess that in many cases the performance enhancements happen after we've delivered the features in question. Often, we have realized after the fact that an update has had a performance hit on the system(s). The next question would be where we are able to put Performance testing into the initial development. I recall from many years ago how Selenium and JMeter can be an interesting one-two punch when developed in tandem, so it's definitely doable (whether or not concurrent Selenium and JMeter development makes sense, I'd have to say "your mileage may vary" but it is something I can at least wrap my head around :) ).

This seems like something that we might be able to address. I can only imagine my manager's face when I bring this up next week when I'm on our regular calls. I can imagine him just shaking his head and face-palming with "oh no, what is Michael going on about now?!" but hey, I'm willing to at least see if anyone else might be interested in playing along. Time will tell, I guess.

And with that, it's off to see just a little bit more of D.C. as I make my way back to National. Thanks InflectraCON and everyone who attended and helped make it possible. It's been fun and I've enjoyed participating. 

Until we meet again :)!!!

Wednesday, July 28, 2021

How Holistic Testing Affects Product Quality with @janetgregoryca (@Xpanxion #QASummit 2021) : Live Blog

We're down to our final keynote and it's a pleasure to see Janet Gregory, if only virtually, this year. The border situation between the USA and Canada is still in question (and considering the outbreaks we are seeing, I don't blame that in the slightest), but we're still getting to hear Janet talk about the value of DevOps and the fact that it genuinely works when the teams in question genuinely put in the time and energy to make it work.

Quality is always a vague and odd thing to get one's head around. What makes something good to one person may not be so excellent to someone else. In some areas it is objective but much of the time it is subjective and not even related to the end product itself. Janet uses the example of a cup of coffee. For some, the best coffee is experienced black, so that every sense of the flavor of the beans can be examined. For others, the best crafted iced frappuccino with all of the extra flavors makes the experience a quality one. Does one approach replace the validity of the other? It really doesn't but it matters a lot to the person in question at that point in time. Quality is what matters to a person experiencing the item in question and in the way that they want to experience it.

So, how do you build quality into your product? In many cases, quality is not just one figure but many that come together. Some may argue that Lamborghini sports cars are of high quality. I may or may not agree, but the cost of a Lamborghini puts it well out of the range where I will ever find out. Is the level of quality a consideration if you can't consider paying for it? If it is super affordable, does that automatically mean the product is of low quality? Not necessarily. I'm reminded of Splice, a video editing app that I use on my phone. Granted, I pay for it (about $3 a week), but the regularity of updates and the method of continually improving the product make it worth that expense for me. The price isn't so much that it discourages me, and the app provides a value that makes me willing to keep paying for it.

Holistic Testing focuses on the idea that testing happens all the time. To that end, Janet is not a fan of the terms shift-left or shift-right testing. The real question is, "what do you mean you are not doing active testing at every stage of the process?" It does help to know all areas where testing makes sense to perform and why/when we would do it. It may honestly have never occurred to people that monitoring and analytics after a product is released fits into testing and that testing can actually learn from these areas to help improve the product. 

One of the best phrases a tester can use/encourage is "can you show me?" I find that when working with developers and testers, many misconceptions and miscommunications can be avoided just by asking this question. Using A/B testing, feature flags, or toggles to turn features on or off allows us to do testing in production without it being a scary proposition. We also get to observe what our customers actually do and use, and from that we can learn which features are actually used, or for that matter, even wanted in the first place. We may also discover that features we develop to serve one purpose may actually be used in a different manner or for a different purpose than we intended. With that kind of discovery, we can learn how to better hit the mark or to provide features that we may not even be totally aware are needed.
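As a rough sketch of the mechanics (the flag name and rollout rule here are hypothetical examples of mine, not from Janet's talk), a feature flag is just a guarded branch that lets a cohort of real users exercise the new path while everyone else keeps the proven one:

```python
# Minimal feature-flag sketch; real systems store flags in a config
# service and assign cohorts by hashing the user ID.
FLAGS = {"new_checkout": False}  # flipped per user, per cohort, or globally

def is_enabled(flag, user_id):
    # Stand-in rollout rule: expose the new path to even-numbered users.
    return FLAGS.get(flag, False) or user_id % 2 == 0

def checkout(user_id):
    if is_enabled("new_checkout", user_id):
        return "new checkout flow"  # observed and tested in production by a cohort
    return "old checkout flow"      # everyone else keeps the proven path

print(checkout(2))  # new checkout flow
print(checkout(3))  # old checkout flow
```

If the new path misbehaves, the flag flips back off; no redeploy, no scary big-bang release.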

The key to realize is that there are testing initiatives that happen at every level of software development. It's important for us as organizations, not just us as testers, to learn how to leverage that testing focus at all levels and be able to learn, experiment, confirm or refute, and then experiment again. It will take time, it will take involvement, it will take investment, and it will take commitment. Still, the more that we are able to leverage these testing areas, the better our overall quality approach will have the potential to be.

On The Road Again: Speaking Today at the @Xpanxion #QASummit (Live Blog)

 Hi all!

I confess I have been struggling to participate with this blog. I just haven't felt mentally in it. Additionally, it took me a little while to get things sorted out with my Twitter handle (in a neat twist of fate, the person who took the account decided to give it back to me, so I will be putting mkltesthead back into my bio again. It took me a while to make sure that the gift of my account back didn't come with some "extra stuff" that would have made my reality unpleasant but thankfully that was not the case).

A couple of weeks back, I was asked if I'd like to speak at the Xpanxion QA Summit being held in South Jordan, Utah, USA. Seeing as I had a number of friends participating in the program and I hadn't spoken in a live setting in nearly two years, I decided it was time to say "yes" and get back to live speaking. That is part of what I will be doing today. I will be giving two talks today (actually, I'll be giving the same talk twice) about "Self-Healing Automation", or more to the point, "what self-healing automation actually is (in most cases) and how it's basically a switch statement that rebuilds itself".

The first talk is being given by Andrew Brown and the topic is "Why Do People Break Software Projects". Andrew predicts that software development in 2031 will still see about 20% of projects fail. Many will be late or over budget. Some projects will take crazy risks. Many will work in silos. They will develop too much technical debt, they will add more processes that have no effect on quality, and their regression tests will be filled with junk. Sounds like today, huh? Well, that's the point. We've had these same problems for fifty-plus years. What are we missing? First, there's a technical part, and that changes all the time, but there is also a people/human part, and those problems don't really change. What's worse, we don't change them because we don't really understand those issues. The key to realize is that the human brain was never really designed to develop software. The fact that we can do it is kind of remarkable. The human mind is amazingly adaptable, but the technology we create quickly outstrips our effective understanding of it. Our thought processes have deep evolutionary roots, and many of our thoughts are much more primitive, tribal, and segmented. We are focused on survival and reproduction, and those aspects we do quite well. Those are far and away removed from the thought processes that help us develop software.

There are a lot of historical fears and issues; some might call this the lizard brain. Those fears and issues are the ones that get to the heart of being human and why we struggle with getting things done effectively. Often, we are overconfident. We see things the way we are, not the way they should be seen.

Overall, this has been a neat discussion and some interesting ideas shared. I see, and agree, that the areas we need to spend more time on are not the technological issues but the human issues.

Tuesday, April 13, 2021

Starting Over On Twitter with TheTestHead

 It's a strange feeling realizing that ten years of communication and connection can be taken over and made irrelevant.

For those who didn't see yesterday's post, my Twitter account was hacked, and the email address on the account was reassigned to the person who hacked it. My attempts to contact Twitter about this have been answered thus far with:

"please respond with the email address associated with this account." 

Well, I would if that address was still mine but alas, it is not, and really, I only have myself to blame.

So let's have a little chat about this tale of woe, what I should have done and how I'm going to move on with this.

For starters, my plan to have "mkltesthead" be a ubiquitous, once-and-done tag has now run into a bit of a problem. Granted, "mkltesthead" is a bit arbitrary to begin with. I first came upon the idea when I wanted to name the TESTHEAD blog. I really wanted testhead as a Twitter handle and name to use, but I couldn't get it as it had already been taken. Thus the convention that started here spread out as a username in many places. During the pandemic, I admit that my Twitter participation was sporadic at best. I just wasn't in the mood to tweet, so I wasn't really paying attention to that account. Well, I paid attention yesterday, that's for sure, when I discovered I couldn't use it any longer!

To be clear, there are a couple things anyone who interacts with me should know:

First, I will not ask you for money, EVER! Granted, I may ask you to go over to Ensign Red's Bandcamp page and buy some music, but that's about it ;).

Second, why would I want to sell my account? For what purpose? Who else would benefit from being TESTHEAD?

Anyway, it's looking less and less likely that the account will be recoverable, so I have mentally prepared to move on. I have created a new account, and it looks like this:


Please note the new name. It's @TheTestHead. Seeing how easy it was to get that, I am a bit chagrined I never tried to change it to that before (LOL!). 

In any event, I will tell you what I suggest everyone do and what I should have done:

- update your password regularly. Even if you think you have a wildly creative password no one else will figure out, you may be surprised how easily passwords can be cracked nowadays.

- do an audit and see what devices and apps have access to your account(s). The more avenues for data flow, the more likely you will be a victim of a breach.

- if you have been delaying setting up multi-factor authentication, do so now. Make sure that you create barriers to people taking over your stuff. It may feel like an annoyance but trust me, having your account lifted and having to explain to people "no, that's not me asking for money" is much more annoying.

One might think that a seasoned software tester would be well aware of stuff like this. Just because we should be aware doesn't necessarily mean we always follow our own advice in our everyday practices. We can get lazy as well. This is just an example of how getting lazy can come back and bite us.

 In short, learn from me ;).


Monday, April 12, 2021

mkltesthead on Twitter has been Hacked


I am sorry that this gets to be what breaks my radio silence on the blog for 2021 but I guess if it motivates, it's a good thing, in that sense.

If you interact with me on Twitter with my account "mkltesthead", please be advised that the person you are communicating with as of some time today is not me. It is someone who hacked and has taken over my account. I'm in the process of trying to see what I can do to get it back, but there is the distinct possibility that I may not be able to.

Interestingly enough, the person who currently has it is willing to sell it back to me or anyone else. The only problem with that is that I have zero interest in doing so. It would be sad to lose a ten-year-and-running account, one that's associated with my name so thoroughly, but if I must start again, then I shall start again.

Hopefully, it won't come to that but please, if you see @mkltesthead for the next bit, and I haven't updated this page or made a new post, please assume the hacked account is still hacked and in the hands of the hacker. Also, if you'd like to do me a solid, call the hacker out and let them know it isn't me and you know it isn't me.

Thank you all very much!!!