Friday, October 15, 2021

Listen to the QA Summit 2021 After Party (The Testing Show, Episode 106)

 The latest episode of The Testing Show dropped early this morning. 

The Testing Show Episode 106 Graphic: People at a Conference

I realized today that I hadn't shared some interesting news about the distribution of the podcast lately. We've expanded our reach.

In addition to having the show on Apple Podcasts, you can now listen on Google Podcasts and on Spotify.

The Testing Show on Apple Podcasts

The Testing Show on Google Podcasts 

The Testing Show on Spotify

This show was recorded at the XPansion QASummit, an event I was added to late in the process as a speaker. When I realized that a number of participants I knew and had worked with on the podcast would be there, I decided to pack my microphone along and record two episodes of the podcast live at the event.

Our previous episode, which was billed as the "Pre Game show", featured Matt Heusser and me interviewing our friends Gwen Iarussi and Rachel Kibler about their talks, expectations, and areas of interest.

This latest episode is about our experiences just after the conference ended. Since this was recorded live, there is a fair amount of background noise but that gives you a bit of a feel of the event itself. 

Both Gwen and Rachel joined us for the after-party episode and we had the pleasure of meeting Pax Noyes, who joined us to talk about the conference and initiatives she is active with, notably "QA at the Point".

Please go have a listen and let us know what you think. Drop a comment and let us know what you'd like to hear us talk about next.

Thursday, October 14, 2021

The Genius of Offering an Ecosystem (25 for One?!!)

This is a little late to the party but, interestingly, this may be the best time to mention this. If you are an electronic musician or a computer-based musician, have a look at what may be the most epic sale ever offered.

This is not a sales pitch. I get nothing out of this... well, no, that's not entirely true. If enough people decide to take advantage and we cross the 25,000 user threshold, I can get that last free item being offered. [ETA: as of Wednesday, October 20, 2021, the 25,000-participant goal has been met and the 25 for 1 deal is now fully unlocked].

That being said, let me set the stage for this and let me say why I think this offering is brilliant.

IK Multimedia 25 titles for one promotion

For starters, this offer is courtesy of IK Multimedia, a digital/electronic music company located in Italy. They make a variety of hardware and software products. I'd guess their best-known items are their iRig interfaces, which make it possible for guitar and bass players, in particular, to run their analog instrument signal through a USB converter and record directly on a PC, Mac, iPhone, iPad, or tablet device. They also make a variety of software applications, ranging from keyboard sampling, bass modeling, and drum modeling software to a huge variety of plug-ins for recording and mastering.

IK Multimedia Group Buy status as of when I originally posted, Oct. 14, 2021

IK Multimedia is celebrating its 25th anniversary and as such, they have set up a "25 for 1" deal with the following conditions:

1. Customers have to purchase one product at the advertised price.

2. By purchasing that product, users can download another product of equal or lesser value for free.

3. For every 1,000 users that participate, they add a new product that can be downloaded for free (again, of equal or lesser value to what was originally purchased).

4. Their goal is to get to 25,000 users purchasing one product; at that point, each participant can download 24 products for free (hence the 25 for 1).

5. The offer isn't limited to a single purchase. If you purchase a second product, that second product gives you the same benefit (meaning if you bought two products, right now you can download 46 free titles).

Also, their qualifying products are both software and hardware-based, so you could buy a hardware item (guitar pedal, UNO Synth, iRig interface, etc.) and still qualify for the software deal.
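As a rough sketch of how the deal's arithmetic plays out (this is my reading of the conditions above, not IK's official terms), one free title unlocks per 1,000 participants, capped at 24 free titles per qualifying purchase:

```python
def free_titles(participants: int, purchases: int = 1) -> int:
    """Free titles under the 25-for-1 rules as I understand them:
    one free title unlocks per 1,000 participants, capped at 24
    per qualifying purchase (so 25 total titles per purchase)."""
    unlocked_per_purchase = min(participants // 1000, 24)
    return unlocked_per_purchase * purchases

# Two purchases at roughly 23,000 participants give 46 free titles,
# matching the post; at the 25,000 goal, each purchase gives 24.
```

Working backward from the "two purchases gets you 46 free titles right now" figure, that implies roughly 23,000 participants at the time of writing.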

I should note that this offer is available for a limited time; it is scheduled to end on October 31, 2021. That end-of-October date should be firm, as they have now met their 25,000-participant anniversary goal.

I could end the post here but really, that's not the point of why I am writing this. Understand, I am an IK Multimedia user. I was using their basic level Amplitube amp simulator and was quite happy with it.

Image of the IK Multimedia Amplitube 5 Amplifier Sim program screen

Still, the basic program leaves out a lot of features, which can be purchased individually through their Custom Shop if desired. When I first saw this opportunity, I did some math and figured that a full-price license of the program (at the time $299.00) was worth roughly the equivalent of five Custom Shop packs and then some. Add to that the promise of 24 additional titles for free? What's not to like?

When I saw they had bass guitar and drum modeling software packages, I said, "Ooh, that's cool". 

When I saw they had the ability to do full orchestral sample packs, I said, "Ooh, that's cool". 

When I saw they had mastering software, mixing software, acoustic treatment software, I said... well, I think you get the point by now ;).

To be clear, these programs are large. They take up a lot of space on one's hard drive so it's unlikely anyone will be able to use all twenty-five individual titles at the same time. The beauty is, once you have downloaded them and registered their licenses, you have the ability to add or remove a component at any time and redeploy it later. Why is this powerful? It means that I as an electronic music enthusiast can leverage what I might need at any given moment, use it, and then tuck it away until I need it again. What am I likely to reach for? Probably any of the two dozen IK products I've downloaded. Not for everything, of course, but now that I have an entire ecosystem of products that are designed to work both independently and together, why wouldn't I?

This is very smart marketing on the part of IK Multimedia. Now that I have all these programs and capabilities, let's say down the line I have a need for something that may not be in this ridiculous amount of software I've downloaded and implemented. Who am I likely to think of first for more? 

In any event, this is a definite niche area and I can understand if not everyone would want to go down this rabbit hole. Still, if you ever wanted to have the ability to explore the world of music system plug-ins in an expansive way, this is a sweet deal.

Wednesday, October 13, 2021

The Do's and Don'ts of Accessibility with @mkltesthead (#PNSQC2021 Follow Up Blog)


First of all, I wanted to say thank you to everyone who helped put on the Pacific Northwest Software Quality Conference (PNSQC). It's actually still happening, but I have a policy of not covering the Workshop Day, as that is an add-on expense and those who paid to participate deserve to have that experience for themselves. I'm looking forward to participating in those workshops today but, again, I will not be liveblogging those sessions.

I also wanted to say thank you to the attendees of PNSQC, and especially the attendees of my session, as your reviews and votes made it possible for my talk to be considered one of the three best presentations of the entire conference (specifically, the third best, so I can say my talk took Bronze :) ). Considering there were 50+ presentations, that's a high honor, and your survey comments and votes are what put me in that top three. Seriously, thank you; that made my day yesterday.

So what was my talk about? Here's a blurb from the site and links to the talk itself, plus my interpretive take, albeit this interpretation is a little slanted (I mean, I'm interpreting ME, after all ;) ).

Accessibility is a large topic and one that gets approached in a variety of ways. Often it is seen as having to work through a large checklist (the WCAG standard) and making sure that everything complies. While this is a great goal and focus, it is often overwhelming and frustrating, putting people in the unfortunate role of having to read and understand an entire process before they feel they can be effective.

My goal is to help condense this a little and give some key areas to focus on and be effective in identifying Accessibility issues quickly and helping testers become effective advocates. 

We will look at ways to find issues, advocate for fixing them, and help make strides toward greater understanding and focus moving forward. A little effort can provide a lot of benefit.

Here's the link to my Technical Paper

Here's the link to my Presentation

Ultimately, the key takeaway I aimed to impress on the participants was that WCAG, Section 508, and other technical checklists are important to understand. Tools like WebAim, Axe, Lighthouse, Funkify, and other Accessibility checkers/tools are important to understand. Having said that, my talk spent almost no time talking about the checklists or tools. Instead, I asked the participants to take some time to become aware of the variety of disabilities that people deal with (both primary and situational) and focus on being advocates for those individuals. If we live long enough, every one of us will deal with a primary disability of some kind.

Disabilities fall into various spheres (cognitive, mobility, visual, auditory) and even if we do not have a chronic/primary disability, we can find ourselves in situations that render us effectively disabled. If in those situational conditions, we find the products that we work with to be hard to use, imagine how hard/frustrating it is for those with chronic/primary disabilities. 

This talk mainly focused on the mindset of the tester and asked for testers to step up and be advocates. The earlier we address Accessibility in the development cycle, the easier it is for us to implement, make it actionable, testable, and provide services that will work effectively for the largest number of users, whether they need assistive technology or not. 

Tuesday, October 12, 2021

Mental Fitness is the X-Factor with Julie Wong (#PNSQC2021 Live Blog)

Photo of Julie Wong

One of the biggest challenges I think many of us have faced in the past year and a half plus is coming to grips with whatever "new normal" means. For me, working from home has been a reality for an extended period. I made my last visit to a formal office setting back in October of 2017. As such, I have actually been home at least twice as long as many others have had to be. That has been both a blessing and a curse, in the sense that while I have a work environment that has allowed me to work from home for an extended period, I was hit just as hard by how my reality changed during the COVID pandemic.

Back in 2019, I had the ability to go many places: I played shows in nightclubs, I performed at festivals, and I had many options to travel unhindered. Today, while there are more opportunities than there were last year, things still have not returned to anything that feels like normal. Thus, even veteran work-from-home people like me still find ourselves dealing with mental oddities because of the pandemic. It's taken a toll on me as well.

Thus it is with great interest that I find myself listening to Julie Wong and looking at how I might better handle my overall mental fitness and where I actually am at this point in time. Mental fitness is more than just thinking a lot or being "in shape". Being physically in shape is easier to quantify, albeit we at times have a messed-up vision of exactly what physically fit looks like and the number that can play on us. Trying to get a bead on what it actually means and looks like to be mentally fit is even more challenging.

Mental fitness is, effectively speaking, how we deal with stressful situations and our ability to do so with positivity. We currently live in a hurry-up-and-deliver culture where we have to aggressively perform for arbitrary times and reasons.

Julie shares the idea that there are 10 saboteurs that we deal with and 5 sage powers. If we come to understand who and what those are, we can better work with and talk down the areas that hold us back, while still allowing them to help us succeed.

The Judge is one saboteur, and these are the nine others that work alongside The Judge

To be clear, each of these areas is not necessarily "bad" by itself. There are good elements to each of these as well, but for our mental fitness in this context, each of these can be saboteurs to our mental health and happiness. In moderation, these can actually be good for us but in excess, each of these can absolutely sabotage us (though I'd argue playing the victim is always a saboteur behavior).

By contrast, we also have Sage Powers that we can uncover to help us. These are more right-brain-oriented areas, where we exercise empathy, curiosity, creativity, compassion, serenity, and laser-focused action.


Ultimately, what we put our attention and focus on is what comes to pass. If we feed our negative side, we reap negative consequences, even if the result could objectively be seen as good. By focusing on positive outcomes, we can be in a better place, even if we don't necessarily achieve what we hope to. That's profound and quite neat. It's wild to think I could succeed and be miserable and, by contrast, not achieve but still be happy. Ultimately, though, I think by focusing on the positive I can ultimately achieve (note to self: while the saboteurs can be good if not applied in excess, the sage powers work the opposite way. You can achieve them, but you have to focus on them more directly and almost try to get them in excess).

This was a nice way to end this conference. Thank you very much for this, Julie :).

GUI Test Automation for EDA Software with @rituwalia20 (#PNSQC2021 Live Blog)


Photo of Ritu Walia

We are already at the last track talk of the conference. Wow, that went quickly. It's funny how fast things go when you are typing incessantly. I suspect my fellow conference attendees appreciate me not pounding away on my keyboard (I confess, my keystrokes often sound like howitzers ;) ).

Chances are, unless we are lucky or work with some middleware component (hehe heh, that's my current active area) we are going to work with a GUI. In the case of what Ritu is talking about, she is describing Electronic Design Automation tools (EDA).

EDA fascinates me, as it feels very much like the analog of the CNC machines used to create items in the physical space (I'm most familiar with musical instrument manufacturing now using those tools). These machines have some amazing capabilities and some very complex software processes, and I can only imagine EDA tools fit a similar space.

This is an example of a product very much outside of my wheelhouse and my immediate question would be "How in the world would I test the software that does this stuff?" More to the point, how does someone work on Automating this? Apparently, the answer is "the same way we automate any other software with a front end". EDA software is complex, sure, but it still has a front end and that front end can be interacted with using a mouse and keyboard. Thus, that means that there is a way to interact with that mouse and keyboard and automate those actions like any other application.

To that end, there are a variety of areas that are similar to any other software application and can be driven.

From the talk, areas that need to be considered in EDA software are very similar to what we might see in any other software environment to automate:

• Error identification: determine the common user mistakes likely to be made when using the GUI
• Character risk level: determine which characters may create problems and when/where (e.g., using reserved characters incorrectly)
• Operation usage: determine if/when operations are used incorrectly in the application (e.g., loading an invalid rule file)
• Element relationships: determine if/when different settings or combinations of related elements create problems
• Limitations and boundary values: determine what issues are created when limits are exceeded, or boundary values not observed
• Performance and stress testing: typically observing time and memory consumption performance under extreme conditions
• Smoke testing: finding fundamental instability within builds, to prevent superfluous testing
• Real-world data: using actual data (e.g., customer data) that is not refined or limited, to ensure adequate coverage of customer-critical issues
• Exploratory testing: when bugs are found, performing random testing in the general area, or of elements created by the same developer, to look for additional bugs.
• Efficient bug reporting: giving back a clear bug report that can drive the efficiency of the bug fix
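Several of these areas translate directly into test data generation. Boundary values, for instance, can be sketched generically like this (the helper name is my own illustration, not something from the talk):

```python
def boundary_values(minimum: int, maximum: int) -> list:
    """Classic boundary-value picks for a numeric limit: just below,
    at, and just above each end of the valid range."""
    return [minimum - 1, minimum, minimum + 1,
            maximum - 1, maximum, maximum + 1]

# e.g. a rule-file field documented to accept values 1..100:
cases = boundary_values(1, 100)  # [0, 1, 2, 99, 100, 101]
```

Feeding each of these values through the GUI (or its underlying command layer) is where the "limits exceeded, or boundary values not observed" issues tend to surface.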

Also, hearing that they use Tcl/Tk brought a very nostalgic smile to my face, as I used Tcl/Tk back when I was at Cisco in the 90s. It's neat to hear that that framework and language are still being used. I wonder if they use Expect, too :)?

Orchestrating your Testing Process with @joelmonte (#PNSQC2021 Live Blog)


Photo of Joel Montvelisky

I've been struggling lately with the fact that each of our teams does stuff a little bit differently. There's nothing necessarily wrong with that, but it does make things a challenge, in that one tester on our team would probably struggle to be effective on another team. We have a broad variety of software offerings under one roof, and many of those products were acquired through, you guessed it, acquisitions (I mean, how else do you acquire something ;) ).

Point being, there are a variety of tools, initiatives, and needs in place for each team, mainly because each of our teams originated in a different place but also because each team did some work and adopted processes before they were picked up by the main company.

I'm sure I've explained this over the years, but Socialtext, the company I started working for in 2012, was acquired by PeopleFluent. PeopleFluent had acquired a host of other companies along the way, as well as having their own core product. A few years ago, PeopleFluent itself was acquired by Learning Technologies Group (LTG) in the UK. Additionally, as of the past year, I work with a specialty team that sits in the middle and tries to make it possible for each of the teams to play nice with the others (i.e. the Transformations or Integrations team). The neat thing is that there are a variety of products and roles to work with. The biggest challenge is that there's no real lingua franca in the organization. Not for lack of trying ;). At the moment, we as a company are trying to see if we can standardize on a platform and set of languages. This is a process, and I predict it will take a while before it becomes company-wide and fully adopted, if it ever actually is (note to my company: that's not a dig/criticism, just my experience over thirty years of observing companies. I'm optimistic but realistic, too ;) ).

That's just looking at the automation landscape. That does not include the variety of manual test areas we have (and there are a lot of them). Each organization champions the idea of 100% automated testing. I don't particularly, but I also don't worry about it too much, because I don't believe there is such a destination to arrive at. There is always going to be a need for Exploratory Testing, and as such there will always be a need and focus for manual testing.

What this ultimately means is that we will likely always have a disjointed testing environment. There will likely never be "one ring to rule them all," and because of that, we will have disparate and varied testing environments, testing processes, and testing results. How do we get a handle on seeing all of the testing? I'm not someone who has a particular need for that, but my manager certainly is. My Director certainly is. They need a global view of testing, and I don't envy their situation.

Whew, deep breath... that's why I'm here in this talk: to see how I might get a better handle on all of the test data and efforts, and how we can get the best information in the most timely fashion.

Joel's talk is about "Orchestrating the Testing Process". For those not familiar with music notation and arrangement, orchestration is the process of getting multiple instruments to work together and work off of the same score sheet (in this case, score meaning the notation for each and every instrument in a way that everyone is playing together when warranted and in the right time when called for). Testing fits into this area as well.

So what do we need to do to get everyone on the same page? Well, first of all, we have to realize that we are not necessarily even trying to get everyone on the same page in the literal sense. They need to work together, and they need to be understood together but ultimately the goal of orchestration is that everyone works together, not that everyone plays in unison or even in close harmony. 

Orchestration implies a conductor. A conductor doesn't play every instrument; generally speaking, a conductor doesn't play any instrument at all. They know where and when the operations need to take place. This may be through regular status meetings or it may be through pipeline development. It may also mean that refactoring of tests is as important as creating them. Test reporting, and gathering/distilling that information, becomes critical for successful conducting/orchestration.

Is there a clean and elegant solution for this? No, not really; it's a hands-on process that requires coordination to be effective. As a musician, I know full well that to write hits, we have to just write a lot of songs. Over time, we get a little bit better at writing what might hit and grab people's attention. Even if writing "hits" isn't our goal, writing songs ultimately is. That means we need to practice writing songs. The same goes for complex test environments: if we want to orchestrate those efforts, we need to write our songs regularly and deliberately.


Managing Mission-critical Products in Flight (Literally!) with Ben Berry (#PNSQC2021 Live Blog)


Headshot of Ben Berry
This is an interesting area I had not considered.  Ben Berry is the CEO of AirShip Technologies Group. Their product is unmanned aerial vehicles. think drones that can reach heights to release and deploy satellites. 

These are the definition of mission-critical products. If they don't perform as required, catastrophe can result, either in space, on the ground, or anywhere in between. To say the software used in these capabilities is complex and has some very specific and demanding requirements is an understatement. 

To quote Ben:

"AirShip Technologies Group’s VX Unmanned Aerial System is a reusable air platform for high altitude, micro-rocket launch for payload placement of 5G communications in low earth orbit (LEO) of up to six nanosatellite (6.5 lbs. each) or 100 picosatellite (2.2 lbs. each), or 1,000 femtosatellite (0.22 lbs. each). The autonomous VX delivers communications technologies to improve Satellite Communications (SATCOM) link resilience, throughput, and reduced user equipment when compared to SpaceX."

Okay, wow... now that got my attention. I admit the thought of testing something like this would be both amazing and terrifying.

Let's think about how we might test the following:

- R&D focused on the scientific benefits and commercial applications of on-demand microSAT 5G communications deployment;
- just-in-time launch capabilities for expanded 5G communications via miniaturized LEO satellites;
- design objectives that include resilient, interconnected 5G mesh communications;
- exploitation of the ultimate high ground of space communications;
- space-empowered SATCOM link resilience;
- strategic space force projection and operational agility;
- communications via 100x factor 5G bandwidth;
- modular interoperability among microSATs for communications that meet growing bandwidth and resiliency requirements.

Mind blown? Yeah, so is mine!

Any time I tend to think I've had a chance to work on some complex projects, I see something like this and go "nope, noooope". This sounds incredibly daunting and yet there is a part of me that thinks testing something like this would be a total rush (LOL!). However, to quote Dirty Harry... "A [person] has to know their limitations." 

The Testing Team Have Requirements Too with Moira Tuffs (#PNSQC2021 Live Blog)


All too often we see requirements being used for development, customer feature needs, business requirements, etc. To use Moira's company example: 

"speed improvements of 30%" 
"user should see plain English error messages." 

A big question right here... how do we actually confirm/verify that we can actually do/meet these requirements? In short, this hearkens back to one of my previous presentations... "Is this Testable?" 

We expect a 30% speed enhancement... but how do we determine that this 30% improvement has actually happened? 30% faster than what, exactly?

Basically, we as testers need to know what the baseline is and how to measure it. We need to have the initial hypotheses and understand them (it seems hypotheses is going to be my favorite word this conference (LOL!) ). 
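To make the baseline idea concrete, here's a minimal generic sketch (my own illustration, not anything from Moira's talk) of what "30% faster" actually requires us to measure:

```python
def percent_improvement(baseline: float, measured: float) -> float:
    """How much faster `measured` is than `baseline`, as a percentage
    of the baseline (positive means an improvement)."""
    return (baseline - measured) / baseline * 100.0

def meets_target(baseline: float, measured: float,
                 target_pct: float = 30.0) -> bool:
    """True if the measured run improves on the baseline by at least
    the target percentage."""
    return percent_improvement(baseline, measured) >= target_pct

# A page that took 10.0 s on the baseline build and takes 6.5 s now
# is 35% faster, so it meets a 30% target.
```

The point is that neither function can be evaluated without a recorded baseline, which is exactly the requirement the testing team has to surface.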

I can feel this deeply in my own organization. My current test role has me being the "newbie" of the team. I've worked with them for over 18 months now, and yet it seems every story has some new area I need to know about that I have never even heard of. My team has been together and working on our area of expertise (data transformations) for more than a decade. Yes, my team members, outside of myself and our scrum master, have a decade-plus of working with each other. That means there is a tremendous amount of explicit and implicit knowledge locked up in those brains that gets taken for granted, and I am the one who has to figure out which piece of implicit knowledge I'm missing.

This is interesting, as Moira is hitting a lot of the areas that I remember from doing my "Is This Testable?" talk. I am in full agreement that log files can be an absolute gem in helping define and highlight implicit requirements or areas that we should be paying attention to. It may take some time to understand what information is relevant and interesting, and there may be many log files to interact with. Here's where I recommend firing up your favorite terminal multiplexer and cutting up a screen window to tail a variety of log files. This gives us a chance to see whether we are actually seeing the data that is relevant to the transaction at hand.

Moira is using an example that has both a software and a hardware focus. The software can be squishy but if the hardware doesn't work right, there are real issues that may need a more direct interaction. It might be as simple as clearing a nozzle for the material to flow through (this is an example using a 3D printer as the hardware). It may be different where the software is specifically the differentiator. This is something I often deal with in my home studio and with my audio interfaces. I have multi-input systems with multiple outputs that can be routed. Thus for me to be effective, I have to have a firm understanding of the control software that allows me to create signal routes on the fly. Since these input and output routing values are not going to always be in place, there's a lot of interaction, setup, and teardown to be able to route effectively for different sessions and purposes. I'm mentioning this to mainly say that I feel for my testing kin over at Focusrite (and thank you for what you all do to make this less odious than it often can be :) ).

Let Newman drive Postman to REST with @ChristinaThalay (#PNSQC2021 Live Blog)


Photo of Christina Thalayasingam

My first comment on seeing this session was to replay the Jerry Seinfeld epithet "Newman" over and over in my head (LOL!). That is, of course, intentional because what is Newman in Seinfeld? He's a Postman ;). 

I have been interested in looking at how I can get more capability away from dedicated tools and to be drivable from the command line. Postman is a neat tool, to be sure, but it is also a tool that tends to require one to actively interact with it. Thus I am excited to hear Christina Thalayasingam talk about this.

Okay, so what is the purpose of a tool like Newman? For that matter, why would we need a tool like Postman? The key reason for Postman is that it is a tool designed to test API interactions. You send data in POST requests, formatted as XML, JSON, etc., and you then get a response based on what you send. This can be super helpful when you want to check whether parameters are set, or to get a specific value without having to dive into the UI to find those details.

Postman does this very well, to be sure, but again, as I stated in my intro, Postman is a standalone tool that I as a user interact with. I can script it, I can add automation elements but I can't practically run a script to do the things that I want Postman to do. This would be especially problematic in a CI/CD setting. Jenkins isn't going to fire up Postman. Well, it could, but my ability to interact with it remotely would be less than desirable, to say the least. However, the collections I have in Postman are valuable and usable. It would be cool to access those and run them as needed to qualify the API tests I've already created. 

Christina pointed out some additional tools they considered, including Paw (limited to Mac originally, though now making its way to Linux and Windows). Postman of course also works well, and I'm already using it. Insomnia is another tool that is interesting but has limitations in exporting its scripts and being used outside of itself. The goal here ultimately is to be able to control API tests with a command-line approach. I mean, if we want to get right down to it, curl can certainly do that (in fact, that's been my typical approach), but curl requires some finessing with what you get back and tweaking your return values so they can be used and shared/validated.

So what does Newman do? It's basically a collection runner. That's it. I could use some global variables and create data-driven testing runs one after another to make sure we don't get errors when we run at scale. We can add delay and timeout values to see how robust our environment is or where we would run into issues with load or performance. 
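Driving Newman from a script might look something like this sketch. The flag names follow the Newman CLI as I understand it (`--iteration-count`, `--delay-request`, `--timeout-request`), so verify them against `newman run --help` for your version; the collection file name is made up:

```python
def newman_command(collection, environment=None, iterations=1,
                   delay_ms=0, timeout_ms=None):
    """Build a `newman run` invocation as an argument list, suitable
    for subprocess.run() in a CI job such as Jenkins."""
    cmd = ["newman", "run", collection]
    if environment:
        cmd += ["--environment", environment]
    if iterations > 1:
        cmd += ["--iteration-count", str(iterations)]
    if delay_ms:
        cmd += ["--delay-request", str(delay_ms)]
    if timeout_ms:
        cmd += ["--timeout-request", str(timeout_ms)]
    return cmd

# e.g. subprocess.run(newman_command("api_smoke.postman_collection.json",
#                                    iterations=5, delay_ms=250))
```

The iteration count gives you the repeated data-driven runs, and the delay/timeout knobs are where the load and robustness questions mentioned above come in.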

I like how the output comes out as a table when it finishes, so it would be easy to share and view results. That alone is worth me taking a closer look at this. Also, the ability to create a variety of scenarios that run the same collection with different variables, for multiple iterations, sounds like a definitely useful addition.

Exploratory Testing Driven by Mind Maps with @claubs_uy (#PNSQC2021 Live Blog)


Photo of Claudia Badell
When it is applied correctly, in a targeted and purposeful manner, Exploratory Testing can be an effective and useful tool to any tester for learning, discovering new avenues of using the product (both intended and unintended), and developing additional questions for further investigation (in short, Exploratory Testing is the application of good science, IMHO).

Mind maps are a neat way to get ideas down on paper (and to be clear, paper or a whiteboard is much more my preferred method of creating them). I've worked with a few different versions on a computer, but I will confess I have often struggled with using them. That's mostly a personal problem, as I know many people who are well versed in and like using mind-mapping software.

So what do these two things have in common? I have my own thoughts on that, of course, as I think that mind maps can be a great way of organizing thoughts to create initial charters and to track what I am doing as I am doing it... but what does Claudia Badell have to say about it? That's what I'm here to find out :).

In my worldview, Exploratory Testing is a way to focus on learning about the system, its dependencies, and the ways to interact with it, and in the process, to feel out the limitations the system may present to us and how we might exploit those limitations to find potential vulnerabilities. If we are doing this by ourselves, or we are the lone tester on a team, then our exploration and a report of what we find might be sufficient. But what if we need to work with a broader team, and testing is a common activity amongst the entire team? This is where a lessons-learned brain dump or report is not necessarily sufficient, and an additional "information radiation" method is warranted and desired. In this case, mind maps, and specifically online mind maps, would be a good tool to share with others. Okay, deep breath; I guess my paper strategy isn't going to work here, but that's fine :).

Claudia recommends building a mind map for each feature or each story, and having them contain the test conditions, ideas, and variables to cover for that specific feature or story.

Each of these nodes, as it is created, could effectively be an Exploratory Testing session. For that matter, we could also create sessions for each leaf coming off of a node, as we see below:

Now, something I think could be cool (and it may be available, I just haven't used it) would be a way to attach an exported text file or image file to each leaf node. This way we could gather our notes from Exploratory Testing sessions and incorporate them within the mind map if desired (at least I could see that as a benefit; others' mileage may vary :) ).

Claudia suggests the idea of a meta-language so that we have a clear understanding of how the ideas are represented. Consider this sort of like Gherkin for mind maps, but not necessarily that strict. The meta-language doesn't just mean words; it can mean colors, symbols, images, and branching. The point, as I'm hearing it, is not so much what the meta-language is, but that the organization as a whole understands it, so that everyone who would use it can use it.

So overall, the goal of the mind maps is to be a reference for points to start from and avenues to consider for Exploratory Testing sessions. Each session will be unique, but we need not reinvent the wheel every single time (as appealing as that sounds at times ;) ). Instead, the mind map allows us to focus on developing hypotheses, and our exploratory sessions are our experiments. The data we gather can then be included with or imported back into the mind map, and now we have a collection of values we can use again later. Thus the mind map isn't a once-and-done item. Instead, it is continually updated and reused. Cool, this is already getting me a little more excited about moving away from my preferred paper method.

One of the lessons Claudia shared is that, by referencing and annotating a shared mind map and keeping exploratory notes there, the team as a whole can improve and update the testing ideas and testing strategy (see my comments yesterday on the value of a less formal test strategy; this seems like it might fit that bill beautifully). Also, I like the idea of iteratively adding and pruning information so that we are dealing with what is pertinent and important for each feature, rather than an ever-growing list of busywork. 

I have to admit, I entered curious and I'm leaving with a concrete plan as to how I can implement this with no permissions needed. I love when I get those, so thank you, Claudia :).

Day Two Now Underway: Built-In Quality (#PNSQC2021 Live Blog)

Photo of Derk Jan De Grood
Good morning and here we are on day two of the Pacific Northwest Software Quality Conference. It was interesting to see the effect of the program yesterday and my liveblogging approach. This is something I am used to doing and I have done it for a decade now, but I am definitely feeling the after-effects today. There was a lot to take in yesterday and I went to bed early last night (LOL!).

In any event, we are now into day two, and the first keynote is Derk Jan de Grood talking about "Built-In Quality". Now, when I hear the term "built-in" I almost always associate it with the idea of "bespoke", meaning it is literally built to fit into a place seamlessly. Think of a refrigerator or a microwave oven that blends perfectly with the kitchen decor. Those are what I usually think of as built-in. Built-in is a great concept and can look fantastic, but it comes at a pretty high cost: one cost that is obvious and up-front, and one that is not so obvious and comes later. I'm aware of this right now specifically because I recently had to replace a microwave oven in a cabinet space that was originally built-in twenty years ago. The issue? That model was no longer being made, and replacing it was not so simple. We had to go to great lengths to find a unit that would mostly fit and decide we were okay with it. For the most part, we are, as we care about the functionality of the unit (we need an oven that works reliably), but we can also see that, if we want it to look like it belongs seamlessly in that space, we are going to have to retrofit the cabinet to fit the purpose.

This may seem like a weird tangent but please work with me here ;). I mention this because I want to set the stage a little bit for this talk that is literally happening as I type. This is what I'm coming to as I think of "built-in" and the warning that goes off in my head now as I consider that term. 

As we talk about organizations and how they are going to deliver a quality product, this brings me back to the idea of "built-in" and how I often think about it. Built-in points to custom, well-crafted work, sure, but it also points to something that may not be easily repeatable. Teams that make quality changes with great inroads do so because of the people, the culture, and the processes that they all embrace and opt to work with. 

Derk also points out that the idea of built-in quality draws from this as well: it is easy to do this level of bespoke work and keep quality high when teams are small and projects are closely contained. Again, using my microwave example, we are able to make fit and finish work well within a small project (repair and replace is a different issue, but at the point of initial creation and deployment, we can do an amazing job in a small space). Now let's think about that level of quality for an entire house, or an entire building complex. The issue becomes much harder. Just because we can do something well in a small space does not necessarily mean that same level of focus and detail will automatically scale to larger projects. 

Now let's step away from my metaphor, which realistically will only take us so far as we talk about software quality. In many cases, we might think that building in quality just means we test more and screen the software more vigorously. Again, in a small organization or a small application, that may very well be a good strategy, but as the project becomes more complex and has more touch points and people involved, testing more and more falls prey to the law of diminishing returns. We also have to deal with the fact that time is against us; we can't just automatically do more testing when time demands we move more quickly. Either testing needs to be focused in a much tighter and more deliberate fashion, or the approach to building software itself needs to change. 

If we look at Agile today and how Agile is meant to have us release software, as well as CI/CD and a DevOps model of software development, the idea is to utilize that bespoke model of the craftsperson and focus on small, targeted changes more frequently. Going back to my house metaphor (okay, I guess it still matters a bit ;) ), we can consider this to be doing small remodels regularly, such as replacing a single light switch here and there, with a full focus on clean and effective construction and wiring techniques. Whereas if we had to replace all of the switches at once in the same amount of time, the overall quality across all of the switches would be, well, questionable at best. 

To come back to Derk's presentation, he has identified seventy-five possible areas where quality can be focused and built up inside our products. One of the most obvious is that everyone in the software development organization has literal ownership of quality. That means development and testing should be integrated at all levels and at all touchpoints of the product's life, both before and after the product ships. Additionally, Derk suggests we develop an organization-wide understanding of what the "Definition of Ready" means. How our pipeline is created, and how its steps interact with each other, is also critical.

Overall, built-in quality is not just something you can strive and wish for. It's understanding where the quality and skill aspects of development already exist and under what circumstances they are optimally applied. Expanding that level of quality will take time, talent, and energy, along with bringing people up to speed so they can be effective as the implementations grow. There are a lot of potential distractions and challenges that come into play as we try to expand this to larger organizations and more people. It's also important to realize that quality in and of itself is a cost, and that cost can often be very expensive (back to the built-in example of our microwave: odds are we could have gotten a really good unit that just sat on our counter, but we'd have had to trade valuable counter space for it, and in the end, that's the whole reason we built it into the cabinet to begin with). Thus it is important to consider what the quality we are providing actually buys us.

Monday, October 11, 2021

Model Based Testing – Shifting Right into the Real World (#PNSQC2021 Live Blog)

Jonathon Wright
Wow, it's already the end of day one. That went fast!

We're now down to our last keynote for the day: Jonathon Wright talking about Model-Based Testing. Serious props to Jonathon, as he is in Oxfordshire, UK, so he's already into tomorrow to talk to us today.

So we hear a lot about shift-left, meaning test at the earliest possible moment. That must mean shift-right is waiting until the very last moment to test... well, no, not really ;).

It might make more sense to look at shift-left and shift-right not so much as when to test, but rather what to test and in what capacity. Shift-left means we are testing requirements, initial development, unit tests, integration tests, system tests, and exploration, to help us learn as much about the product as we can before it goes out. 

To be clear, that's the middle part of this shift metaphor. Shift-left is all about what we do before the product ships. Shift-right is everything related to what the product does after it is deployed. Testing is still happening in earnest at this point; only now the testers are our customers, along with us monitoring and examining outcomes. Curious about what features are actually being used, and by whom? That's a shift-right process, relevant after the product ships. The point being, our testing efforts don't stop just because the product has gone out the door. In fact, a lot of interesting testing can only be done after it does.


Much of the process of looking at testing options can be broken down into a number of key steps, and each of those steps can be modeled or stubbed to help us look at how interactions relate to one another. Using last year's example of a COVID-19 contact tracing app, these model-based options can help us set up transactions and communications so that we can look at what the possible options for each area might be. The model looks at the details of what individuals do, or expect to do, and what happens at each node. What do we do if we determine we have been exposed? How can we determine that, using the contact tracing models? What happens if I test positive? What should I do? What does the app tell us to do with a positive test? Who can I contact, and who should I contact? Will that contact bubble out and through to others? Ultimately, how do we react to the data we receive under specific circumstances?
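To make the idea concrete, here is a minimal, hypothetical sketch of such a flow expressed as a state model; the states and events are my own invention for illustration, not from Jonathon's talk. A model-based test generates and walks paths through exactly this kind of structure and checks where each one lands:

```shell
#!/bin/sh
# A tiny, hypothetical state model for a contact-tracing app.
# transition STATE EVENT -> prints the next state (or "invalid").
transition() {
  case "$1:$2" in
    healthy:exposure_notified)  echo "exposed"   ;;
    exposed:negative_test)      echo "healthy"   ;;
    exposed:positive_test)      echo "isolating" ;;
    isolating:recovered)        echo "healthy"   ;;
    *)                          echo "invalid"   ;;
  esac
}

# Walk one path through the model, as a model-based test would:
# exposure notification, then a positive test, then recovery.
state="healthy"
for event in exposure_notified positive_test recovered; do
  state=$(transition "$state" "$event")
done
echo "final state: $state"
```

The interesting part is that once the model exists, a tool can enumerate paths we would never think to script by hand, including the "invalid" transitions that reveal where the app's behavior is undefined.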

As a musician, I am used to using modeling software for instruments (bear with me, this will make sense, I promise). A number of applications go to great lengths to model various instruments and the parameters we can change. One of the interesting tools I use is an application called Moto Bass. It can make whatever bass I want to create and, in addition, lets me move a number of the elements so that I can craft a sound, an attack, a playing method that might fool people into thinking it's a particular style of bass with a specific attack. It's 100% fake, as in no bass player is actually playing it, but with practice, I can literally create a bass line that would be indistinguishable from a real bass player. That may sound powerful, but the true power is in the modeling that is going on and how much I can manipulate those models. I can literally add a human approach, such as varying the plucking at different places on the string and creating movement in the playing. It's wild and defies logic, but it sounds remarkably convincing. 

In many ways, this is specifically what I am doing as I model testing. I'm looking to see what odd or unusual data sets can be surfaced and see where and how they will be represented. I find that quite cool :).