Wednesday, November 10, 2021

Is It Testable? A New Episode of The Testing Show is Now Available

 

Image placard for The Testing Show: Episode 107: Is This Testable?

It's that time again. Another episode of The Testing Show is now available. Last week we posted "Is This Testable?" This was a fun interview that Matt and I had with Gil Zilberfeld, and as an added bonus, I was able to add some input as a co-guest this go-around (as Testability is one of my talking points :).


I'm trying an experiment, so I'm hoping this works. I'm embedding the show in these posts to see if it appears when I publish them. If not, here's the link to the episode page.

As a follow-up, it was interesting to hear where Gil and I shared ideas and where we differed. Well, saying we "differ" isn't quite right. More accurately, it was interesting to see what we each tended to prioritize and where we focused our respective attention :).

Anyway, the show can tell you lots more than I can/should, so go have a listen, or a read if a transcript is more up your alley ;).

Thursday, November 4, 2021

About Human Issues and Some Possible Solutions (an #OnlineTestConf 2021 Live Blog)

 


Well, this is an interesting surprise. Joel had another talk scheduled that I had heard at PNSQC last month, and as I was considering what I might say that would be different, I saw a slide with a very different subject. Cool. This means I get to react to something fresh and unique.



So have you ever done scenario-based testing? That's where we get to describe a person we want to test with. We want to use meaningful data about that person, so we might invent some interesting details about them. I've done this a whole bunch; I use tools like Fake Name Generator, and I can't count how many anime series I have mined for names and relationships to set up testing environments.

Okay, that's nerdy... but what's the point here? The point is that when I do this, I am setting up users, I'm setting up relationships, I'm setting up scenarios to look at. All of that is true. What am I missing? I'm missing genuine humanity. I'm using cartoon characters and some made-up details but I'm not really considering flesh and blood humans. More to the point, I'm expecting interactions and reactions that might actually be meaningful inside of tools where that humanity has been stripped away. We think we're being human but only in the most superficial of ways.

Joel's main point here is not about testing. It's literally about being human. There are a lot of weird things we as people deal with and regularly have to make sense of. Impostor syndrome affects so many of us. We fear that we don't have the right, or the status/stature, to hold an opinion or speak to it. Every presentation I give is plagued with this. I always wonder, "why are people paying attention to anything I say?" Even though I've been doing these types of presentations for over a decade, and I've been focused on Accessibility as a primary topic for over six years now, I still wonder if what I'm saying makes sense, if I'm prepared, if I actually have any insights that matter. After the fact, I realize yes, but at that moment, I'm freaking out a little.

Joel also mentioned the Superman Complex, which is hilarious because I deal with this as well. Three decades as a scoutmaster has been a struggle because often I have to physically stop myself from doing all of the things. It takes serious restraint for me to not just jump in and take care of a lot of things. I have at times literally forced myself to sit on my hands and stand aside so that things can get done by others. Yes, it hurts at times.

Do you ever suffer through a case of "historical myopia"? I laugh about this because I'm literally hearing people talk about how great the music of the 80s was. The reason I laugh is that I have vivid memories of so many people talking about how much the music of that time SUCKED!!! Did it somehow become better over the ensuing three decades? No, our memories have become more selective. We are filtering and remembering the best of the best. It's easy to do with a somewhat distant past. It's a lot harder to do with the current present and a future that is yet to be written. Truth is, a lot of stuff in the past was terrible, and a lot of the stuff going on now will be judged way better than we consider it in the here and now.

Oftentimes we want to focus on that "fresh start" vibe, and we often put strange rituals around it. Why do I want to focus on better fitness but have to wait until Monday? Why do I want to save money but need to wait for the next month to start it right? Why do I need to wait until I get back from vacation to eat more healthily? The reason is we tend to feel like we will be more successful if we have a formal ritual to point to (this is why New Year's Resolutions are so popular). Rituals have power, but we give them too much power at times. What makes any day, hour, or minute better than "RIGHT NOW" to make a change or a determination to do something? The answer is, it doesn't. It just allows us to delay/defer making a decision or a commitment when we don't really want to. If we really wanted to be doing something, we would be doing it now, no delay, no ritual required. We'd just step up and we'd do it.


I appreciated this slide as I think it really does encapsulate what we actually have some control over. It's a way that we can be prepared, be mindful, and be focused without beating ourselves up for the things we can't actually control. This won't cure us of pessimism, and it won't solve our impostor syndrome. It won't assuage our guilt or our misgivings, but what it will do is help ground us and give us some perspective that we can use to our advantage.

Analytics Matter: What Are Your Users Really Doing? (an #OnlineTestConf 2021 Live Blog)

 



Let's have a bit of a metaphysical question... what do our customers want? Do we know? Do we really know? We want to say that we know what our customers want, but truly, how do we know? Are we asking them? Really asking them? If we are not looking at and trying to make sense of analytics, the truth is, no, we don't. I mean, we may know what we think they want or what we think is important to them. Analytics tell us what they really want and what they really do.


 


There are lots of neat tools that can help with this: there's of course Google Analytics, Adobe Analytics, and CoreMetrics. I have experience with using Pendo as well. Pendo is interesting in that it flags when customers actually use a particular page, function, or method. It's a valuable tool for us to see what functions and features are really being used.

Let's look at the idea that analytics should be added to a site after it launches. On the surface, that sounds logical, but how about implementing them at the beginning of development? There's a lot of critical information you can discover, and your development can benefit, by examining your analytics not just when a site is live but also as you are putting it together. What development is influencing your most critical areas? Your analytics may be able to tell you that.
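To make that concrete, here's a minimal sketch of what "analytics from day one" could look like: a tiny event-tracking client wired into a feature while it is still being built. The AnalyticsClient class, the endpoint, and the event names are all hypothetical stand-ins; real tools like Google Analytics or Pendo ship their own SDKs for this.

```python
# A minimal sketch of instrumenting a feature during development rather than
# after launch. The endpoint, event names, and AnalyticsClient class are all
# hypothetical; real analytics tools provide their own SDKs for this.
import json
import time
import urllib.request


class AnalyticsClient:
    """Sends simple named events to a (hypothetical) collection endpoint."""

    def __init__(self, endpoint: str, app_version: str):
        self.endpoint = endpoint
        self.app_version = app_version

    def track(self, event_name: str, **properties) -> None:
        payload = {
            "event": event_name,
            "app_version": self.app_version,
            "timestamp": time.time(),
            "properties": properties,
        }
        request = urllib.request.Request(
            self.endpoint,
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        try:
            urllib.request.urlopen(request, timeout=2)
        except OSError:
            # Never let analytics break the feature; a real client would queue and retry.
            pass


# Instrument the feature while you are still building it, so the question
# "what are people actually doing?" has data behind it on day one.
analytics = AnalyticsClient("https://example.test/collect", app_version="0.3.0-dev")
analytics.track("search_submitted", query_length=12, filters_used=["date"])
```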

Another thing to realize is that analytics do not actually tell you anything by themselves. You may need to do some analysis over time, and some aggregating, to actually get the real picture. One day's data may not be enough to draw a correct conclusion. Analytics are data points. You may need to do some specific analysis to determine what actually matters.

So how can we look at analytics from a tester's perspective? Amanda suggests using Charles Proxy or Fiddler, or you can use a variety of browser plugins to help you look at the data your analytics collect. These can look really spiffy, and it's cool to look at the data and see what does what, and when. However, there are a variety of ways that this data may be misleading. My blog has statistics and analytics that I look at on occasion (I've learned to spread out when I look at them, otherwise I get weirdly obsessed with what is happening when. Also, when I live blog, my site engagement goes through the roof. It's not because everyone suddenly loves me, it's because I just posted twelve-plus posts in the last two days (LOL!) ).

One of the most infuriating things to see is when I go back and notice a huge spike in my data. If it corresponds with a live blog streak, that's understandable. What's not is when I have a huge spike when I haven't been posting anything. What the heck happened? What was so interesting? Often it means that something I wrote was mentioned by someone, and then BOOM, engagement when I'm not even there. That happens a lot more often than I'd like to admit. I'd love to be able to say I can account for every data spike on my system but much of the time, I can't, just because it happened at a time I wasn't paying attention and also because it's not necessarily my site doing the work, it's someone else somewhere else causing that to happen (usually through a Tweet or a share on another platform like LinkedIn, Instagram, or Facebook).

Again, analytics are cool and all, but they are just data. It's cold, unfeeling, dispassionate data. However, that cold, dispassionate data can tell you a lot if you analyze it and look at what the words and numbers actually mean (and you may not even get the "actually" right the first few times). Take some time and look through the details that the data represents. Run experiments based on it. See what happens if you roll out a feature to one group vs. another (A/B testing is totally irrelevant if metrics are not present).
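For what it's worth, the "one group vs. another" idea doesn't need heavy machinery to reason about. Here's a rough sketch (with made-up names and numbers) of deterministically bucketing users into A and B groups and recording a metric per group, since without the metric the experiment tells you nothing:

```python
# A rough sketch of A/B bucketing plus a per-group metric. The experiment
# name, user IDs, and "conversion" metric are all illustrative.
import hashlib
from collections import defaultdict


def bucket_for(user_id: str, experiment: str) -> str:
    """Stable assignment: the same user always lands in the same group."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "B" if int(digest, 16) % 2 else "A"


conversions = defaultdict(lambda: {"users": 0, "converted": 0})

# Pretend these observations came from real traffic.
observations = [("user-1", True), ("user-2", False), ("user-3", True), ("user-4", False)]

for user_id, converted in observations:
    group = bucket_for(user_id, "new-checkout-flow")
    conversions[group]["users"] += 1
    conversions[group]["converted"] += int(converted)

for group, stats in sorted(conversions.items()):
    rate = stats["converted"] / stats["users"] if stats["users"] else 0.0
    print(f"group {group}: {stats['users']} users, conversion rate {rate:.0%}")
```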

Analytics can be nifty. They can give you insights, and you can make decisions based on what's provided, but analytics by themselves don't really do anything for you. They are just data points. It's the analysis, consideration, and critical thinking performed on those data points that really matters.

Expect to Inspect – Performing Code Inspections on Your Automation (an #OnlineTestConf 2021 Live Blog)

Paul and I have been running into each other at conferences now for the better part of a decade. In addition to both being testing nerds, we are both metal nerds, too. Some of our best conversations have been half tech and half, "So, what do you think of the new Fates Warning album?" or whatever ;).

For today, Paul is talking about the fact that test automation code is legit and literal code. It's software development and deserves the same level of attention and scrutiny as production code. Thus, it makes sense to do code inspection on test automation code. When we are on a testing team, or we have multiple testers to work with, we can have test team members work with us to inspect the code. Often, I have not had that luxury, as I've either been the only tester on a project or the only tester at a company. So who inspects our code? Too often, nobody does. We are left to our own devices and we hope for the best. We shouldn't be, and Paul agrees with this.



 

The benefit of having code inspection is that we can have someone else help us see past our blind spots. Think of it the same way we proofread our own writing. The danger is not that we can't proofread effectively. We certainly can. The real danger is that our brain bridges over our mistakes and interprets what we mean, so that we literally skip over blatant errors. Later, when we see them, we think "how could I have missed that?" Well, it's easy, because you read it and your brain was a little too helpful. By the way, there is a cool technique if you ever find yourself having to do it yourself... read it out loud as if you were delivering a speech: mannerisms, speech patterns, inflections, etc. Why? It takes you far enough out of the space that when you try to speak a misspelled word or clunky grammar, you hear it out loud and your slower-thinking brain will detect, "Hang on, here's an issue".

There are a number of tools that can be used to do both static analysis and dynamic analysis, but what I find really helpful is to just hand over my tools to another developer or tester and say, "Hey, can you run through this for me?" The benefit here is that they can look at what I am running and what I am doing, and they can see if my rationale makes sense.
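On the static analysis side, even a simple linter pass can act as a cheap "first inspector" before a human ever looks at the automation code. Here's a rough sketch of that idea, assuming pylint is installed and that "tests" is a stand-in for wherever your automation code actually lives:

```python
# A rough sketch of a "first inspector" for automation code: collect the test
# scripts and run a linter over them before asking a human reviewer to look.
# Assumes pylint is installed; "tests" is a stand-in directory name.
import subprocess
import sys
from pathlib import Path

automation_files = [str(p) for p in Path("tests").rglob("*.py")]
if not automation_files:
    sys.exit("No automation code found under tests/")

result = subprocess.run(
    ["pylint", *automation_files],
    capture_output=True,
    text=True,
)
print(result.stdout)

# A non-zero exit code means pylint flagged something worth a human look.
sys.exit(result.returncode)
```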

I have had numerous occasions where a developer has run my tool and come back and said, "Hey, I walked through this and while I get what you are doing, you're going the long way around". Then often they have either helped me do it more efficiently, or they realize, "Oh, hey, I could probably help you with that", and that code inspection actually encourages a testability tweak in our code that I can then take advantage of.

We have a tools repository that our development manager and one of our principal engineers handle merge requests for and I am encouraged to make sure that the code that I write for automation is in sync with our master branch, as much as possible. Thus I make frequent pull requests and those two have a direct opportunity to inspect what I am doing with the code. Encourage these interactions and if your code isn't in a proper repo, fix that.

As Paul said at the end of the talk and many times during the talk, automation is code, treat it like it is. I concur :)!!!

“Make it public!” And other things that annoy developers about testability



This is a fun session as I have already had about an hour-long conversation with Gil about this very topic for the next The Testing Show (hopefully to be out shortly :) ). Testability is also a bit of a pet topic of mine so it's really cool to hear Gil talking about this directly.

Testability is often one of those areas that can be hard to implement late in the game but is much easier to deal with earlier in the process. It's not a common topic in development circles, or at least that may seem to be the case. Sure, developers are encouraged (and in many cases required) to work on unit tests. However, unit testing does not translate to overall testability. Testability means putting hooks into a product, or developing it in a way that makes it easier to share information, so a tester can make a decision as to a feature's genuine fitness for use.

Another challenge we all face is the fact that developers are not interchangeable. What one developer does well may be totally outside the wheelhouse of another developer. Additionally, one developer may do very well delivering their pieces of code but struggle to integrate with other components. Testers can often help fill in the blanks, but often we run into places where a system that has been developed is frustratingly opaque. Often it is taken for granted that we will be looking at an application with a UI. Many of those interactions, to actually look at the workflows in question, may require the tester to set up their tests to interact with the UI multiple times just to get to a place that's useful.

An area that Gil and I agree on is that testers, if they want to advocate for testability, need to at least have a working knowledge of code and the ability to scan and read it. Not necessarily writing code at a developer level, but having enough proficiency to read code and get the gist of what's happening can be very helpful. Let's take a classic example: look at the error handling code and see what error messages have been coded in. Can you surface those messages? What would it take to do so? In this circumstance, I'd probably sit down and see if I could surface each of those messages at least once. If I can't do so, I would then get back with the developer and ask, "what will it take to surface this?" In some cases, it's "Oh, right, you need to configure this or do it this way", but sometimes we've been able to prove we can't really get there. These are examples where, if there were API hooks that we could interact with, we could more easily determine whether those messages could be properly surfaced or whether they were perhaps not necessary.
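As an illustration of that exercise (my own, not anything from Gil's codebase), here's how "surface every coded error message at least once" could be framed as a parameterized test against a hypothetical HTTP API. The endpoint, payloads, and expected messages are placeholders for whatever the error handling code actually contains:

```python
# A sketch of the "can I surface every coded error message at least once?"
# exercise, framed as a parameterized pytest against a hypothetical HTTP API.
# The endpoint, payloads, and expected messages are placeholders.
import pytest
import requests

API_URL = "https://example.test/api/orders"  # hypothetical endpoint

ERROR_CASES = [
    # (payload, expected error message found in the error handling code)
    ({"quantity": -1}, "Quantity must be a positive number"),
    ({"quantity": 1, "sku": ""}, "SKU is required"),
    ({"quantity": 1, "sku": "ABC", "coupon": "EXPIRED"}, "Coupon is no longer valid"),
]


@pytest.mark.parametrize("payload,expected_message", ERROR_CASES)
def test_error_message_can_be_surfaced(payload, expected_message):
    response = requests.post(API_URL, json=payload, timeout=5)
    assert response.status_code == 400
    assert expected_message in response.json().get("error", "")
```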

One of the other areas Gil points to as frustrating, both from a development and a testing standpoint, is code duplication. There's of course refactoring to remove duplicated code, but that duplicate code doesn't just end up being a drag on development. It can also stymie testing efforts, because we may think we have a fix in one place only to discover another place is still broken, because the code that got fixed in one place didn't get fixed in the other. This isn't as simple as adding a hook or putting an API call in, but it's definitely a testability issue that needs to be discussed.

The key point Gil is making is that we should make our interactions public. Specifically, we should have an API that we can use for testing purposes, and also for active production uses for those who want it. "But what if people call those APIs after we ship?!" Well, what about it? If exposing something via an API literally breaks your application, how robust is your original design? More to the point, does the risk of exposing elements and methods outweigh the benefits of easier testability and potentially more usability for the customers? I can't answer that for everyone, but from personal experience, making items available in the API and adding testability hooks has not come back to bite us later. In fact, it has actually benefitted us overall by giving our users other options they can use to complete their workflows, in some cases not even interacting with the application at all.
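To show what such a hook could look like (my own illustration, not Gil's design or any specific product), here's a sketch of a small read-only state endpoint that only gets registered when a test-hooks flag is enabled:

```python
# One way a testability hook can look: a read-only state endpoint that is
# only registered when a config flag is set. Names and structure are my own
# illustration, not from any real product.
from flask import Flask, jsonify


def create_app(enable_test_hooks: bool = False) -> Flask:
    app = Flask(__name__)
    jobs = {"pending": 3, "failed": 1}  # stand-in for real internal state

    @app.route("/api/checkout", methods=["POST"])
    def checkout():
        return jsonify({"status": "ok"})

    if enable_test_hooks:
        # Exposed deliberately: tests (or curious integrators) can see the
        # queue state without driving the UI repeatedly just to infer it.
        @app.route("/api/_state/jobs", methods=["GET"])
        def job_state():
            return jsonify(jobs)

    return app


# In a test environment: create_app(enable_test_hooks=True)
# In production: decide consciously whether the hook ships, per the talk.
```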

The key takeaway here, IMO, is that we as testers can absolutely influence this conversation, and the earlier we get involved (and encourage mocking, API calls, dependency injection, etc.), the better and more comprehensively we can test. The critical element, though, is that we need to be involved in this process early. The sooner we can get into a story being developed, the more likely we can have a positive effect on testability. The longer we wait, the more resistant the process of adding testability will become, not just from the developer's perspective but from the literal code perspective ;).


Why Should we Take Things Personally? (an #OnlineTestConf 2021 Live Blog)



Well, this is an intriguing title. I've long considered the advice that it is best to not take things personally or at least to not take things so personally that it causes me stress or grief. Still, I think I have a feeling where this talk might be going, so let's see if I'm right :). 



As a performing musician, I know full well how it feels to internalize everything that happens to me when I'm performing. Some of those things are entirely within my control. I can blame myself if my stringed instrument goes out of tune (and I can take the time to tune it again). I can blame myself when I sing something that causes me to stretch for notes. Of course, I also have the benefit of on-the-fly transposition. If I know my material and for a time I can't "hit the note", I can transpose to something that is harmonically interesting if not exactly equivalent. If I forget the words, I can either put in stand-in words or sing vocables until I get back on track (any singer who claims this has never happened to them is lying to you, by the way). Why am I going on this tangent? Other than the fact that it's my blog and I do that from time to time... my point is these are all areas that I have some control over. I have the ability to impact these areas and I have the ability to interact with and mitigate any issues.

What do I not have any control over? I can't do anything about other performers if they mess up. I can work with them to help them get back on track, but I can't play their part for them. Also, if an audience member decides I'm not to their taste, or they consider my talents lackluster, I have absolutely no control over that. Maybe with the next song I might be able to sway their opinion, but ultimately I have to accept the fact that I cannot sway everyone, nor can I convince everyone to like me. That's an example of something I cannot and must not take personally. Everything else I mentioned? That's in my control and, dagnabit, yes, I should take that personally.

As I hear Indranil discuss some of the challenges he has faced, what he chose to do likewise fits the metaphor I described above. If something is totally out of our sphere of influence, then there is little we can do, and we either consider ways to make it work or find a workaround for it. However, if there is an area where we have the ability to influence a decision, we should take it personally and get ourselves into the equation.

A key idea that Indranil is focusing on is "Test Automanuation"... wait, what?! It's the idea that you can mix test automation and manual steps where necessary, so that you can take care of the most repetitive steps while protecting the areas that genuinely require human interaction (his example is a bank application that uses a security protocol that literally cannot or must not be automated). This idea fits very nicely with the "taxicab" model of automation, where we do all that we can to get us to a particular point, and then we step out of the cab (we stop the automation process) and we look around or explore the area we have found ourselves at. After we are done, we get "back in the cab", we start the meter again, and we go to our next destination. Efficient? In context, it certainly can be. Effective? Oftentimes yes, very much so. Easy to implement? With static stopping points, yes: just create a drop point and run your test. At the moment, I just have a statement/method I drop into my script called "And Let Me See That" which allows me to set pre-determined stopping points. When I reach them, the script pauses and I can take control.
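The gist of it, in a generic Selenium-flavored sketch (this is not my actual implementation, and the URLs, locators, and credentials are placeholders), is just a deliberate pause that hands control over to a human:

```python
# A generic sketch of a "taxicab" stop point: drive the script to a known
# spot, pause so a human can explore, then resume. The URLs, locators, and
# credentials here are placeholders, not from a real application.
from selenium import webdriver
from selenium.webdriver.common.by import By


def and_let_me_see_that(label: str) -> None:
    """Pause the automation so a human can explore, then resume on Enter."""
    input(f"\n[PAUSED: {label}] Explore the app, then press Enter to continue... ")


driver = webdriver.Firefox()
try:
    driver.get("https://example.test/login")                       # hypothetical app
    driver.find_element(By.ID, "username").send_keys("demo_user")
    driver.find_element(By.ID, "password").send_keys("not-a-real-password")
    driver.find_element(By.ID, "login-button").click()

    and_let_me_see_that("post-login dashboard")                    # step out of the cab

    driver.get("https://example.test/settings")                    # back in the cab
    and_let_me_see_that("settings page")
finally:
    driver.quit()
```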

Indranil is pointing out in demos that there are ways that we individually can affect the development process, but odds are we will be limited and ineffective unless we get ourselves involved directly. The idea of asking for/advocating for testability is in and of itself "taking things personally". Ultimately, I'd say it comes down to knowing where your circle of influence/concern actually is. You may find yourself in areas where you will have no control whatsoever as to what happens. In those cases, don't take it personally. If there is a chance something will fall into your sphere of influence, then yes, take it personally and get involved. You may find that you make a change that helps many more than just yourself.

How to Tame Bugs in Production (an #OnlineTestConf 2021 Live Blog)

 


One of the interesting aspects of being a firefighter is the fact that you spend a surprisingly small amount of time actually fighting fires. Frankly, that's a good thing. What they do instead is study up on methods to fight fires, keep their equipment at the ready should they need it, and reach out into the community to help people avoid fires in the first place. Granted, fires are to be expected and they deal with them. They know that they can do a lot to be prepared, and they can do a lot to help mitigate fires in the first place, but no matter how much training and mitigation they do, fires will occur, and the best option for them is to be calm, in control, and deal with them when they happen.



To this end, bugs that appear in production are akin to the occasional fire. The point is, they happen. We can't prevent every single bug from making it into production any more than any fire department can prevent every fire from happening. At some point, they just appear due to conditions being right, and then the bugs have to be dealt with. In short, the fire needs to be put out. 

Elena points out in her talk that often, when bugs appeared in her product (in this case, she was talking about developing and shipping the game "Candy Crush"), they came from specific issues with third-party products that the app depended upon. Thus it was clear that there were going to be bugs no matter what happened. As such, it helps to realize that instead of panicking, it makes more sense to monitor the bug, see the impact, determine the best approach to fix it, and release an update in the least obtrusive way possible. In today's online-delivered and SaaS environments, this is a possibility. It's nothing like the days of game and software development where software went out on CDs or DVDs and was effectively treated as "eternal". Still, even in those days, we didn't just despair. Instead, we spent time looking at what caused the bug, how to avoid reaching that point, and then publishing this fact.

The first and foremost consideration is what the impact on users is. Show-stoppers would be where the game or app crashes or literally prevents users from being able to advance or complete a workflow. Elena shared an example where the game she was releasing had an issue with the Chinese language display on the login. The question was "do we release as is and patch, or do we fix it immediately?" This came down to a question of "how much of an issue would it be for an extensive group of users?" If it affects a small group of users, or it's fairly localized, or it's a situation that other context clues can help to get past (login pages tend to be pretty standard), then we can determine it may not be all that critical to stop everything and fix. As long as there is communication of the issue and a plan to fix it, for many users that's fine, even if the bug affects them.

In a game setting, often a way to mitigate a bug in production is to reward users who stick with you until the fix is applied. Something on the order of "Hey, thank you to everyone who was patient with us while we were working on this issue. As a thank you for your patience, we are awarding everyone 10,000 Gil" (yes, I'm a Final Fantasy fan, sue me ;) ). The point is, acknowledge the issue, be clear about how you will deal with it, look at a legitimate timetable, and then plan your course of action. This also emphasizes the benefit of preparing for issues before they arrive. It's a great opportunity for testers especially to examine rapid-response protocols or to look at some broader areas of testing that may not be as pertinent in the immediate moment.

In general, I think the firefighting approach makes the most sense but I appreciate the fact that Elena approached the firefighting metaphor from a different angle. We usually focus on the immediate fire and the panic that goes with it. Seasoned firefighters don't panic because they have trained for exactly these scenarios. We as software developers and testers would do well to emulate that example :).


Wednesday, November 3, 2021

Deming’s Management Philosophy (an #OnlineTestConf 2021 Live Blog)



It has been a while since I've seen a talk that referenced Deming. For those wondering what that means, Steve Hoeg is talking about Dr. William Edwards Deming, who was an American engineer, statistician, professor, author, lecturer, and management consultant. He helped develop the sampling techniques still used by the U.S. Department of the Census and the Bureau of Labor Statistics, and he studied the techniques of Japanese manufacturing and assembly. A key idea of his was that defects are not free but carry a significant cost. In his role helping with Japan's recovery from World War II, Deming developed principles that later cemented him as the "Father of Quality".

 


Deming's ideas and principles helped to shape Japanese manufacturing, where companies like Toyota were devastating the US auto industry by showing how Japan was able to make higher-quality products at lower costs. Deming is most well known (at least to me) for his 14 points, which I have put down here. Roll through these to understand the context of Steve's talk:

1. Create constancy of purpose toward improvement of product and service, with the aim to become competitive and to stay in business, and to provide jobs.

2. Adopt the new philosophy. We are in a new economic age. Western management must awaken to the challenge, must learn their responsibilities, and take on leadership for change.

3. Cease dependence on inspection to achieve quality. Eliminate the need for inspection on a mass basis by building quality into the product in the first place.

4. End the practice of awarding business on the basis of price tag. Instead, minimize total cost. Move toward a single supplier for any one item, on a long-term relationship of loyalty and trust.

5. Improve constantly and forever the system of production and service, to improve quality and productivity, and thus constantly decrease costs.

6. Institute training on the job.

7. Institute leadership. The aim of supervision should be to help people and machines and gadgets to do a better job. Supervision of management is in need of overhaul, as well as supervision of production workers.

8. Drive out fear, so that everyone may work effectively for the company.

9. Break down barriers between departments. People in research, design, sales, and production must work as a team, to foresee problems of production and in use that may be encountered with the product or service.

10. Eliminate slogans, exhortations, and targets for the workforce asking for zero defects and new levels of productivity. Such exhortations only create adversarial relationships, as the bulk of the causes of low quality and low productivity belong to the system and thus lie beyond the power of the workforce.

11a. Eliminate work standards (quotas) on the factory floor. Substitute leadership.

11b. Eliminate management by objective. Eliminate management by numbers, numerical goals. Substitute leadership.

12a. Remove barriers that rob the hourly worker of his right to pride of workmanship. The responsibility of supervisors must be changed from sheer numbers to quality.

12b. Remove barriers that rob people in management and in the engineering of their right to pride of workmanship. This means, inter alia, abolishment of the annual or merit rating and of management by objective.

13. Institute a vigorous program of education and self-improvement.

14. Put everybody in the company to work to accomplish the transformation. The transformation is everybody's job.


In addition to the 14 points, he also identified five Deadly Diseases companies dealt with:

1. Lacking consistency of purpose

2. Emphasis on short term

3. Employee rating

4. Job Hopping

5. Over-reliance on visible metrics (interesting observation for a statistician)


The big factor of Deming's points is not that each one of them is important on its own but that they form a holistic process. Steve mentioned a Deming example called the Funnel experiment, which takes a fixed system (a funnel suspended over a target) and observes where the marbles dropped through it actually land. One would expect that the process would be so precise that everything would hit the right place. In truth, there's a lot of calibration necessary, and getting it right means there are a lot of errors and issues to work through and recalibrate.
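If I'm remembering the funnel experiment correctly, part of the punch line is that naively recalibrating after every single drop (moving the funnel to compensate for the last error) actually widens the scatter compared to leaving the funnel alone, which is why the calibration has to be considered as part of the whole system. A quick simulation sketch of that idea (mine, not from Steve's slides):

```python
# A quick simulation of the funnel idea: drop marbles through a funnel aimed
# at a target, with natural random scatter. Rule 1 leaves the funnel alone;
# Rule 2 "recalibrates" after every drop by compensating for the last error.
import random
import statistics

random.seed(42)
DROPS = 10_000
NOISE = 1.0  # natural variation of each drop


def simulate(adjust_after_each_drop: bool) -> float:
    funnel_position = 0.0  # target is at 0
    landing_spots = []
    for _ in range(DROPS):
        landed = funnel_position + random.gauss(0, NOISE)
        landing_spots.append(landed)
        if adjust_after_each_drop:
            # Rule 2: move the funnel to compensate for the last error.
            funnel_position -= landed
    return statistics.stdev(landing_spots)


print(f"Rule 1 (leave funnel alone): spread = {simulate(False):.2f}")
print(f"Rule 2 (adjust every drop):  spread = {simulate(True):.2f}")
```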

Deming wanted to link his teachings together in a large Venn diagram that he called the System of Profound Knowledge:


The biggest problem Deming reported was that companies would take one or two principles, try to apply them, and then wonder why little if anything changed. It's one thing to copy principles. It's another thing entirely to actually know, embrace, and apply a unified theory as a cohesive system of practice. The principles can't be understood by breaking them down into component parts. They need to be practiced together, as an entire collection with all their relationships, to understand and implement the system.

An example from the US auto industry: they were inspecting and reverse-engineering Japanese cars, and while looking at individual parts, they couldn't understand the differences. It wasn't until they started looking at the entire assembly process that they recognized how well items fit together and how standardization (actual standardization of parts and pieces) made the assembly easier and more sound/effective.

Additionally, creating conflict and competition between co-workers did not motivate teams. Instead, it had the opposite effect. Rather than stack ranking and providing bonuses, encourage pride in work and acknowledgment of effort for everyone. Running good teams outpaces and outweighs rewarding individuals. 

Much thanks, Steve, it's been a few years since I examined Deming's points. Excellent update and reconsideration :).

Full-stack Testing in/is the New Normal (an #OnlineTestConf 2021 Live Blog)


So perhaps the most obvious question is "What exactly is a Full Stack Test Engineer?" and maybe a great follow-up question is "Is it even possible for a single tester to be a Full Stack Tester?" The more familiar corollary, the Full Stack Developer, is a developer who can create a solution or system at all layers and in all aspects of functionality. That means front end, back end, services, middleware, etc. Full Stack Testers do not necessarily have the same level of code understanding or acumen (if they did, it is likely they would actually be developers).


Thus the idea is that a full-stack tester is capable of testing at all layers and at all levels. Cool, OK. What exactly does that mean, and do such people really exist? I'd argue that any single tester who claims to cover everything is probably not really a full-stack tester in the sense of being expert level in all of the possible areas. However, it's highly likely you may already have a full-stack testing team, that is, if you have a handful of testers. If you have any single tester that covers everything, the next question is "to what depth?" Testers, in general, have to be generalists. Man, that's awful phrasing but work with me here. We often know a lot of areas to a shallow level. The deeper we go on any given area, the less likely we are to be able to provide that level of expertise in other areas. Can I be great at Accessibility? Sure. Does that lead to Usability? Absolutely. Can I leverage those skills into Responsive Design? Well, yeah. So does that make me an expert at performance? Okay, now we're starting to break down the paradigm. Being everything to everyone at an expert level as one person is a very rare unicorn, and I might argue that by the time you actually reach that level of expertise, you may already be obsolete.

Let's be clear here, Christina is not saying every person needs to have this perfectly filled-out square. It's impractical and it would take too much time for everyone to reach that level. However, what we can do is leverage the fact that most people are already T-shaped (meaning they have shallow expertise in a broad number of areas but can go deep on a few of them). Certainly, cross-training is going to be helpful, and ideally, anyone can be effective in any role if desired. Should everyone be an expert in everything? No, that's ridiculous. Should all of the areas be covered by the team in place, or should we build the team to be that? Now that is attainable.

So here's the clear takeaway. Let everyone have a strategy to get some level of coverage in all areas. Make sure that no member of your team is completely blind in an area. In my case, while I cover Accessibility fairly deeply, I can help get everyone on my team to a level where they can be effective, even if they don't understand literally all of the aspects. My talk today on the "Dos and Don'ts of Accessibility" would by itself be effective enough training to get everyone on a testing team far enough along to be effective with an Accessibility mindset. From there, the details of tools and processes can be added. Someone else on my team may have other skills that I might be able to learn from. Cycle these responsibilities, and everyone has effective knowledge of all areas, and each tester has some depth to draw on for specific areas. Granted, for smaller teams (or Lone Testers), yeah, this is going to be a lot harder, but it's a journey, not necessarily a set destination. Besides, by the time you get "there", then "there" will be somewhere else and you'll just have to keep walking anyway ;).


The Tester’s Role: Balancing Technical Acumen and User Advocacy (an #OnlineTestConf 2021 Live Blog)

 



Melissa starts her talk with the idea of balance and the observation that a pendulum ultimately seeks it. Radical swings will be met with radical swings in the opposite direction, but over time momentum and movement will cause the pendulum to slowly settle toward balance (and under the right conditions, keep moving in that direction).

Testers often find themselves being that pendulum. We are always having to move, adapt, and learn, and those movements can teach us a lot, give us our fair share of frustrations, and also help us grow and learn how to be effective. That's great, but effective for what? What is our purpose as a tester? Why are we ultimately really here? We can do a number of things and our role can change dramatically. We can sometimes get overloaded with tools and tooling and focus less on actual testing and user advocacy, so the goal is to keep in mind how to "get that balance right".


There are a lot of us who shift our focus between being testers, automation developers, build managers, customer support specialists, system administrators, accessibility advocates, and security analysts, or focusing on Localization or Internationalization, etc. Oftentimes we are a jack of all trades by necessity. We don't often really get to be an expert in any of those areas. I often laugh at the fact that my formal role is Senior Automation Engineer when easily a majority of my time is spent doing anything and everything not specifically focused on automation. This can be both good (as in, I have the ability to do a lot of things and I can be effective in a lot of areas) and bad (I never really get a good head of steam to get something working in a truly effective manner).

What happens when you are that "lone tester" on a team? How can you strike a balance? I have become interested in Alan Page and Brent Jensen's "Modern Testing" principles that they discuss frequently on the A/B Testing podcast. In many ways, the biggest enemy of my striking a balance isn't the demands on my time, it's how I specifically respond to them. Why am I doing the testing and the detailed test cases? Because I'm the tester. Am I the only one able to test? Of course not. Then why am I not encouraging others to test and get involved in the testing process? Probably because I'm standing in the way.

It's also a little bit myopic to think that I am the only one doing testing. What unit tests are in place? Why are they there? What do they do? More to the point, what do they *not* do? Is it possible that we are duplicating effort and not even aware of it? Have we done a basic risk assessment to see if we are actually hitting the areas that matter the most? Do we even know what areas actually matter to our customers? We can learn a lot from interacting with developers and seeing what they have put in place. I remember, a little over a decade ago, reading James Whittaker's book "How to Break Software" and considering his first recommendation: "Look at the code and find any error handling message that has been created. Figure out how to surface every one of those messages at least once". I attempted this and I found a tremendous number of issues. Why? Because the error handling had issues in and of itself. Had I not tried this approach, I may never have uncovered them.

Ultimately, it comes down to realizing that we are often the ones most guilty of that wild pendulum swing. There are more efficient ways to do things. There are more effective uses of our time. We have the ability to change the narrative. Perhaps the first step is to stop being our own worst enemy ;).

Testers – The Constant Chameleon (an #OnlineTestConf 2021 Live Blog)

So what do you know about chameleons? They are those cool little reptiles that have prehensile tails, grippy claws, and tongues that can shoot out beyond the length of their entire bodies. Neat, but... what? Oh, right, they are often capable of selective camouflage, and when I say that they are often capable, I mean they are really good at it. It's an evolutionary advantage if you don't want to be eaten ;).



To this end, the goal of a "testing chameleon" is to fit in effectively with the organization and to be effective in the organization. That means it is likely that the tester(s) are going to be the ones who will need to adapt the most and be the most open to change and chaos to be effective and beneficial to an organization.  

I've had my share of experiences being a Lone Tester, often as a practical tactic of being embedded in a functional team for an extended period. I have also had the experience of being the Lone Tester, period, as in my company had a single tester, and I was it. That's a wild place to be (and I'm using wild in the uncharted and unnavigated sense, not the "party animal" way ;) ). Often that means I get to put on several hats, and sometimes balance several hats on my head at the same time. In addition to being a tester, I have also been tech support, rapid response, administration, ops, customer advocate, and trade show demonstrator. About the only thing I haven't done or been is direct sales, and frankly, I'm okay with that!

One of the most valuable skills a tester can bring to the table is advocacy. As a teacher of the BBST Bug Advocacy course for AST (and actually about to wrap up a session of that class this week), I encourage testers to consider themselves an advocate first and an operator second. We don't test by just working with the software. We can test with our words. We can test with our input and questions. That doesn't mean we insert ourselves into every situation, but it does mean that we assert a bit of effort and inquiry as early as possible in the development process. I often go back to something Jon Bach said at a Rose City SPIN meetup session about a decade ago: one of the most effective times we have as a tester is in the initial development workshops when stories are first being proposed. Jon encourages people to "provoke the requirements". That sounds bold and brash, but really what it means is, "we should be testing as soon as these conversations start". If a requirement is set, we need to know what it means, how it will be accomplished (as much as possible at that point), and how we might be able to effectively perform the testing we need to do. This is the absolute best time to address testability.

Ultimately, testing goes beyond the tester, and while we may lead the charge in that regard, our ultimate goal is to make sure that effective testing is done, whether or not we specifically do the testing in question. We may ultimately find ourselves being a resource others use to do effective testing, and we can be coaches and cheerleaders rather than hands-on button-pushers. That experience will vary from team to team, but it's definitely a possibility. Don't fear it :).

Delivering Quality Software at Speed (an #OnlineTestConf 2021 Live Blog)



It's another conference I am attending. With that, hello again. I hope you will enjoy this live blog extravaganza that I do when I attend these conferences. A word of warning. These are stream-of-consciousness affairs. I do not pretend to do any great editing with these. They are my notes and impressions in the moment. If you choose to follow along as I write these, they are oftentimes disorganized and may appear very random. Realize, this is not me giving a blow-by-blow of the talk. Rather, it's what I'm thinking about as I am hearing the talk and how it may be relevant to me. Possibly, my musings may be relevant to you as well. Hey, here's hoping :).


I am enjoying these banner cards, they make for a nice encapsulation of the event and the person presenting. In this case, I get to hear from an old friend. I've known Huib now for over a decade and I enjoy hearing his presentations. He started the presentation with a poll asking how we consider and look at our organization, how we communicate, and how we actually deliver software. On many of the answers I could have gone either way, and there seemed to be a handful of questions that were a clear YES or NO; the results from the participants also seemed to mirror my thoughts. Areas where I said a hard YES or NO, it seems a lot of others did as well. I'm happy to see I'm not a huge outlier (LOL!). It was also interesting to see that the areas where I was "Ehhh, could go either way" tended to be a 50/50 split.


At the core of these questions is the idea of speed vs. effectiveness. Do we go faster because we are better at what we do, or is the fact that we do good work consistently what allows us to move faster? I think there's a case to be made that both are true. To borrow from Adam Neely, "Repetition legitimizes". That's a musical term, but it also tends to be the case with software: we legitimize a lot of our actions and methods, and the more legitimized they are, the better we feel about doing/using them. If we find that an approach doesn't work so well, we can likewise de-legitimize those processes and move away from them. To be clear, it is easier to legitimize new methods and processes than it is to de-legitimize an entrenched process. It's possible, but inertia often works against us.

Huib makes the point that a Formula One car, while very fast, will likely not be very effective in the hands of a novice who doesn't know how to drive it (and before people comment, the mechanics of driving a vehicle are for the most part standard, but I will not pretend that I would be able to drive a Formula One race car the way a professional racer does; they trained to get that good and, over time, they learned the tricks of the trade to be effective).

Likewise, just because we have amazing software tools doesn't mean that throwing tools at software problems and issues will effectively solve our problems. There is research, implementation, and practice. Yes, we as teams developing and releasing software have to practice to get good at what we do. As individuals, there is a lot that we can do but we do not operate in vacuums. We operate with flesh and blood people and those people will both help and hinder our efforts.

One of the key things to realize is that the speed of an organization (actually, the speed of anything) is about 1/3 the responsibility of the infrastructure, while 2/3 comes down to the behavior of the participants. A Formula One race car might be an amazing machine, but if I have a phobia of going fast, me behind the wheel might not be the best strategy (that's just for comparison's sake; to be honest, I'm pretty cool at speed but realize I'd need the practice to be able to corner or evade effectively).

I've often felt this on the team I work with right now. Even though I have been with this team for over 18 months, I am by far "the newbie on the team". The level of knowledge the team has, and the approaches taken to date, are things that have been developed over the course of a decade, and this crew has been together every bit as long. Thus, there is a level of understanding and training I perpetually need to be going through to hope to catch up with the rest of the team. Fortunately, I get a lot of help from my teammates and I learn something new every day, sometimes lots of things a day. One of the things we frequently do is record sessions and conversations where we demo ideas and consider tools, processes, and methods of doing things. I have the ability to go back and review these sessions and really see what I know and understand. It also gives me a chance to go back and see if my practice sessions are on track and focused on the things that matter.

Sometimes, even with the learning, training, and practice, we also have to realize there is only so much gas in the tank and only so many refills practically available (I'm going to drive this Formula One metaphor into the ground... ba dum, tshhh ;) ). The point being, there's a balance that has to be maintained. We can only go so fast before quality suffers, but also, there's only so much testing we can and should do and still be fast and effective.