Friday, May 6, 2022

Performance-Driven Development: An #InflectraCON Live Blog

As was once written, all good things must come to an end, and since I have to do some interesting maneuvering to make sure I don't arrive late for my flight, this is the last talk I will be attending and my last missive from InflectraCON. It's been a lot of fun being here, and here's hoping they'd like to have me back again next year :).

For the last talk, I'm listening to Mark Tomlinson talk about Performance (What? Shocker! (LOL!)). Specifically, he's talking about Performance-Driven Development. Sounds a bit like Test-Driven Development? Yeah, that's on purpose.

The idea behind Test-Driven Development (TDD) (Beck 2003; Astels 2003) is "test-first": you write a test, you write just enough production code to pass that test, then you refactor the code. Just as testing has "shifted left", performance is likewise shifting left.
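To make the cycle concrete, here is a minimal sketch of the red-green-refactor loop in Python with pytest (my own hypothetical example, not from the talk): the test is written first, and the implementation is only as much code as the test demands.

```python
# A minimal TDD sketch (hypothetical example, not from the talk).

# Step 1 (red): write the test first; it fails because the
# production code doesn't exist yet.
def test_cart_total_applies_percentage_discount():
    cart = [10.00, 20.00, 30.00]
    # Expect a 10% discount applied to the summed cart.
    assert cart_total(cart, discount=0.10) == 54.00

# Step 2 (green): write just enough production code to pass the test.
def cart_total(prices, discount=0.0):
    return round(sum(prices) * (1 - discount), 2)

# Step 3 (refactor): clean up while keeping the test green,
# then repeat the cycle for the next behavior.
```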

Can the adaptive concepts used in TDD be applied to performance engineering and scalability? The answer is yes, and the result is "Performance-Driven Development" (PDD).

In short, we need to think through the non-functional requirements, the system design, and all of those "ilities" before we write any code. In other words, as we develop our features, we need not just to pass tests but to have the performance constraints defined and confirmed as development progresses.
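Here is one way I could imagine that looking in practice: a sketch (my own, not from the talk) where a performance budget is asserted right alongside the functional check, so the constraint is exercised on every run. The function names and the 0.5-second budget are assumptions for illustration.

```python
import time

# Hypothetical PDD-style check (my own sketch, not from the talk):
# the performance constraint lives next to the functional test, so
# the budget is confirmed as development progresses.
SEARCH_BUDGET_SECONDS = 0.5  # assumed non-functional requirement

def search_catalog(query, catalog):
    # Naive stand-in for the feature under development.
    return [item for item in catalog if query in item]

def test_search_meets_functional_and_performance_requirements():
    catalog = [f"widget-{i}" for i in range(100_000)]
    start = time.perf_counter()
    results = search_catalog("widget-99", catalog)
    elapsed = time.perf_counter() - start
    # Functional requirement: the search finds matching items.
    assert results, "search should find matching items"
    # Non-functional requirement: the search stays inside its budget.
    assert elapsed < SEARCH_BUDGET_SECONDS, (
        f"search took {elapsed:.3f}s, over the {SEARCH_BUDGET_SECONDS}s budget"
    )
```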

Intriguing? Yes, but I'm wondering how this can be effectively applied. Part of the challenge I see is that most of the sites I have seen use TDD tend to do so in a layered approach. We start with small development systems and then expand outward to demo, staging, and then production (at least where I work). It would be interesting to see how PDD would scale at each tier. Is there an obvious point where it works well, and then, as we make the jump from demo to staging or staging to production, at what tier do we see issues (or do we see issues immediately)? I confess that in many cases the performance enhancements happen after we've delivered the features in question. Often, we have realized after the fact that an update has had a performance hit on the system(s). The next question is where we can put performance testing into the initial development. I recall from many years ago how Selenium and JMeter can be an interesting one-two punch when developed in tandem, so it's definitely doable (whether or not concurrent Selenium and JMeter development makes sense, I'd have to say "your mileage may vary", but it is something I can at least wrap my head around :) ).

This seems like something we might be able to address. I can only imagine my manager's face when I bring this up on our regular call next week. I can imagine him just shaking his head and face-palming with "oh no, what is Michael going on about now?!" but hey, I'm willing to at least see if anyone else might be interested in playing along. Time will tell, I guess.

And with that, it's off to see just a little bit more of D.C. as I make my way back to National. Thanks InflectraCON and everyone who attended and helped make it possible. It's been fun and I've enjoyed participating. 

Until we meet again :)!!!

Myths About Myths About Automation: An #InflectraCON Live Blog

First of all, thank you to everyone who came to my talk about "The Dos and Don'ts of Accessibility". Seriously, it's a great feeling to know that a place that has been so pivotal in the lives and futures of deaf and hard-of-hearing individuals (Gallaudet University) is the setting for my talk. How cool is that :)? I'll sum up that talk at a later date, but for right now, let's head over to Paul Grizzaffi's talk about the "myths about the myths of automation" (and no, that's not a typo, that's the literal title :) )

There are a lot of myths when it comes to automation, but are there now myths around the myths? According to Paul, yes, there are.

Is Record and Playback bad? No, not necessarily. It can in fact be a very useful tool in a stable environment where the front end doesn't change. It's not so good for systems under active development, especially if the front end is still in flux.

Do you have to be a programmer to use automation tools? No, not necessarily, but it will certainly help if you have some understanding of programming or have access to a programmer you can work with.

Does Automation Come from Test Cases? Not entirely. It can certainly provide value, but it doesn't necessarily make sense to take all of your manual test cases and automate them. For a few valuable workflows, yes, but if doing so has you repeating yourself and adding time to repetitive steps, it may not be the best use of your time. Your test cases can be helpful in this process and can inform what you do, but don't just automate everything for the sake of automating everything.

Does Automation Solve All Testing Problems? Come on (LOL!). Yeah, that was an easy one, but running a lot of tests quickly can seem like a high-value use of time when it may instead be busywork that looks productive without accomplishing much that is meaningful.

Will Automation Find all of your bugs? NO, 1,000 times NO!!! It can show you if a code change now causes an older test to fail, which you can then examine afterward. It can help you with more coverage because you might be able to build a matrix that covers a lot of options and run it against an orthogonal array. That can be useful and provide a lot of test case coverage, but that's not the same thing as finding all of the issues.
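As a rough illustration of the matrix idea (my sketch, not Paul's), here is how quickly a full option matrix grows, and why orthogonal-array/pairwise selection is used to prune it. All the option values here are hypothetical.

```python
from itertools import product

# Hypothetical option matrix for a web app under test.
browsers = ["chrome", "firefox", "safari"]
platforms = ["windows", "macos", "linux"]
locales = ["en-US", "de-DE", "ja-JP"]

# Full cross-product: 3 x 3 x 3 = 27 configurations.
full_matrix = list(product(browsers, platforms, locales))
print(len(full_matrix))  # 27

# An orthogonal-array / pairwise approach would instead pick a much
# smaller subset that still covers every *pair* of option values at
# least once (9 rows suffice for three 3-level factors), trading
# exhaustiveness for speed -- which is exactly why "lots of coverage"
# is not the same thing as "finding all the bugs".
```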

Can we achieve 100% automation? Nope, at least not in any meaningful sense of 100% automation. You can certainly have a lot of workflows and matrices covered, and machines are much faster than humans. However, there will always be more workflows than you can automate. We're not there yet in regard to being able to automate 100% of the things, and even if we could, it would likely not be a good overall use of our time to automate all of the things. Automate the most important things? Sure.

Is There One Tool To Rule Them All? Absolutely not. Yes, shared code can be a benefit, and yes, buying many licenses can help unify a team or teams, but it's highly unlikely that a single tool is going to answer everything for everyone. That's not to say there isn't value in a standard baseline. We use a number of libraries and functions that allow us to test across a variety of products, but no one tool covers everything.

Plain and simple, as in all things, context matters and no two teams are the same. Look at the myths you may be carrying and see how they measure up to the reality of your organization.

Are You Using Your Leadership Voice? An #InflectraCON Live Blog

Amy Jo Esser is starting out her talk by sharing how she wanted to try out for cheerleading when she was younger, how she did not make the team the first time, and why. While she wanted to be a cheerleader, she didn't really realize what it would take to be effective in that role. After she was told the reasons she was not chosen, she took the feedback and applied it to the next year's tryout. She practiced many things, the most important being that, as a cheerleader, you need to use your voice and it needs to carry.

I have a similar story: I spent years developing my voice to perform on stage as a singer. I developed a rhino-thick hide (metaphorically speaking) and I focused on getting out there and communicating with people, especially to promote and sell tickets. I jokingly told people that I trained myself to be an extrovert. Whether or not that is accurate, I do know that learning to project and learning to promote helped me considerably. But just as the quietest voice in the room will not be effective, being confident, able to project, and able to interact with people doesn't necessarily mean I am using my voice effectively, especially as a leader. So what might I learn/consider/apply today and going forward?

As a lyricist, I tend to place an emphasis on the words that I use, but if I'm being frank, strip away the music and just read my words, and more times than not they read like bad poetry. To be fair, most lyrics read like bad poetry when stripped from the music and vocal delivery. It's the emotion of the voice that sells it. More to the point, it's the swelling and falling away, the dynamics and the delivery, that make the difference. If I sing monotone or quietly for the entire song, much of the impact is lost. Going completely overboard also loses the plot, and then no one can take you seriously. It's the ability to measure and gauge when to be quiet and when to be bombastic that makes the song work.

Too often, we suffer from two big issues with our voice. The first is not being assertive and struggling to make ourselves heard. As leaders, we can't be timid and we can't be shy about speaking out. Our internal voice (what Seth Godin likes to call our "Lizard Brain") tries to keep us quiet and reserved. It also tends to encourage us to couch our words with a lot of filler speech (true story: this resonates, as my most-viewed TikTok is me talking about how I mostly avoid filler words in my speech ;) ). To have a more confident voice, we should eliminate or limit filler words and also make a point to limit the "semantic bleaching" that we do. What's semantic bleaching? It's when we "overstate", or when we really, completely want to be sure that people really understand the totally valuable thing that we want to share... or I could just say "I want to ensure people understand". The latter is direct. The former is semantic bleaching.

Something any speaker can learn from singers is standing and breathing from the diaphragm. This is what singers often call "back breathing", where you feel your lower obliques and spinal erectors expand, and then you expel air by squeezing from the obliques and the spinal erectors. We have a phrase in singing: "sing from your groin". It may sound a little crude, but if you do, you will be surprised how well your breathing and breath support are focused.

One of the things that editing a podcast has helped me do is to stop, pause, and allow my brain and my mouth to synchronize. That pause may feel as though it is forever. Record yourself speaking and review. You may find that the long pause was much shorter in real-time as you listen back. Think about where you would see a period or a comma in your speech. Try to do your best to pause at these points and be deliberate as you speak. I also use hand gestures to help me do this. I often joke that I am part Italian so hand gestures are genetic (LOL!) but they do help considerably.

We have five building blocks we can use to change our speech patterns: pitch, pace, tone, melody, and volume. These are areas where there is a lot of play and variation, but knowing when and where to make these changes can make a dramatic difference in your received message.

There are a lot of benefits to practicing speaking, especially if you want to be a leader. The key part of leading is encouraging people to follow you, and the best way to do that is to have a voice that persuades and encourages people to want to follow you in the first place. The key here is that we have to practice it and make it a part of our everyday communication. By doing so, our voices will come across as more confident, and that confidence will radiate out and help position us to have people ready and interested in listening to us and what we say.

From Fear To Risk: An #InflectraCON Live Blog

Next up is Jenna Charlton with a realistic look at the rhetoric of Risk-Based Testing. As many are well aware, there's a lot of branding and promise surrounding a variety of terms, and many of the phrases we like to use have a certain comforting ring to them. Risk-Based Testing is one of them. Think about what it promises: if we identify the areas of greatest risk and test around those areas, we can deliver the best bang-for-the-buck quality, and we can do it so much faster because we are not testing every single thing.

Sounds great, right? However, what does this ultimately tell us? We have said we care about risk, but what actually is risky? We are only alert to the risks we have thought about. The biggest fear I have when doing a risk assessment is that I have made risk assumptions based only on what I know and can anticipate. Is that really a good risk assessment? It's an okay and workable one. However, if I'm not able to consider or understand certain parameters, or areas that may be blind spots to me, I cannot really do a great risk assessment, so my risk assessment is incomplete at best and flying blind at worst.

One of the first things that can help ground us in these considerations is to start with a simple question: "What am I most afraid of?" Understand, as a tester, what I am most afraid of is missing something important. I'm afraid of having shallow coverage and understanding. That's not necessarily something that a general risk assessment is going to focus on. How many of us have said, "I don't know enough about the ins and outs of this system to give a full risk assessment here"? I certainly have. What can I do? Much of the time, it's a matter of bringing up my concerns about what I know or don't know and being up-front about them. "I have a concern about this module we are developing because I do not feel I fully understand it, and thus I have foggy spots here and here." Sound familiar? What is the net result? Do we actually get a better understanding of the components, leading to a leaner testing plan because now we know the items better? Or do we double up our coverage and focus so we can "be sure" we've addressed everything? Here's where risk assessment breaks down and we fall back into the "do more testing, just to be sure" approach.

Something else that often doesn't get addressed is that what is a risk at one point in time doesn't stay that way; as the organization matures and covers those areas, the risk in them actually goes down. Still, how many of us have continued focusing on the "riskiest areas" because tradition has told us they are the riskiest, even though we have combed through every aspect of them? If you have made tests for a risky area, you've run them for an extended period, and no problems have been found (the tests pass all the time), what does that tell us? It could tell us we have inadequate tests (a real risk, to be sure), or it could tell us that this area has been thoroughly examined, we've tested it vigorously, and we now have a system in place to query multiple areas. In short, this area has moved to where it might be risky if something blows up, but as long as it doesn't, the risk is actually quite low. Thus, we now have the ability, and the need, to reassess and consider which risks are the current ones, not yesterday's.

We have to come to grips with the fact that we will never cover every possible test, and as such, we will never fully erase the risk. Also, we will never get it perfect. Still, we often operate under the assumption that we will be blamed if something goes wrong or that we made bad assumptions, and of course, we fear the retribution if we get it wrong. Thus, it helps to see how we can mitigate those fears. If we can quantify the risk and define it, then we can look at it objectively, and with that, we can better consider how to address what we have found. Are we afraid of an outcome (nebulous), or are we addressing the risks we can see (defined and focused)? To be clear, we may get it wrong, or we may make a mountain out of a molehill. Over time, we might get better at that. The danger is dealing with the molehills effectively but missing the entire mountain.

Again, there's a chance we will miss things. There's a chance something that matters to our organization will not get the scrutiny it deserves. Likewise, fear may keep us focusing on solidly functioning software over and over again because "it just pays to be safe", only to realize we are spending so much time on an older risk that isn't as relevant now. It's more art than science, but both are improved with practice and observation.

Alphabet Soup - What Do DevOps, DevSecOps, DevTestOps, SecDevOps, Etc. Really Mean? An #InflectraCON Live Blog

Good morning, everyone! Day two and a rainy day in the nation's capital. Might make for some interesting maneuvering as I get to go home tonight but there are still plenty of things to talk about for the remainder of the day.

The starting keynote for today is Jeffery Payne talking about the proliferation of acronyms and initialisms that have grown out of the portmanteau of Development and Operations. We all know that today as DevOps. Of course, like any catchy/sticky term, we can't leave well enough alone, and we add other aspects to it. With that, we now have (as the title shows) DevSecOps, DevTestOps, SecDevOps, and if we want to get a little more creative (for some definition of the term), I guess we can have DevPerfOps and even BizDevTestSecPerfOps! So what is the point of all of this? If we end up with something like BizDevTestSecPerfOps... how is that in any meaningful way different from general everyday software development?

Realistically speaking, the whole point of DevOps as it was originally introduced was the idea that by combining the disciplines of Development and Operations, we would be able to develop and deploy our solutions more quickly. By having our Development and Operations teams working in tandem, we'd be able to deploy more frequently, make smaller changes more often, and test those smaller changes more frequently rather than gather up a bunch of features and drop them all at once. The net goal is that we create a better quality product and deliver value to our customers more frequently and effectively. That's the promise, in any event. How well that is done will vary from organization to organization. Additionally, the DevOps methodology is really a balance of accelerating change while providing operational stability. Still, even with this rapid change and stable deployment, there are a lot of things that can fall by the wayside, such as: where is the testing performed? Who is doing it? What is the quality of that testing? What is the endpoint of that testing, as in, does it actually deliver the quality expected/desired?

Over time, as these principles have been applied and examined, we notice a variety of items that are not entirely addressed by DevOps alone. How do we focus on security? How do we focus on general testing? What do we do about performance? What do we do about usability? Over time, we get feedback on the changes we make and deploy. The bigger question is "what do we do with that feedback?" Do we learn from the feedback we receive? Do we integrate it into the process of development and delivery? How do these processes affect Lead Time, Deployment Frequency, Mean Time to Restore (if needed), and our Change/Fail Percentage? The key point is: what are we gaining in benefits when it comes to speeding up feature delivery?

Additionally, some environments and markets see better benefits relative to the risks of implementing these processes. Updating Netflix frequently, even if an update has bugs or areas of annoyance? The aggravation level is low, and not really meaningful in the broader sense of things. By contrast, if we have issues with speed and delivery in a product that monitors a person's pacemaker, bugs or missed features are not just undesired; they can be dangerous and life-threatening. Thus, DevOps and its ilk will not be beneficial in all areas.

This proliferation is going to continue, and software development today looks very different for most teams than it did ten to fifteen years ago. Getting systems to work the way we want will always be a challenge, and the discipline to manage development and deployment will always be an area of interest. There will certainly be changes and a desire to add more coverage areas and more "ilities" to the list. Ultimately, we will have to pick and choose which areas we are able to do well, and how quickly. As Jeffery points out, there are a variety of augmentations that can be included, but there's nothing really special about any of them. We have to pay attention to what we are doing and how well we are doing it, be early with our development and testing ideas, work with them in a focused fashion, and once they are out, monitor for any issues and be ready to react and update to address the feedback we receive. With that, cooking metaphors fit pretty well: our goal is to make our collective soup in a way that we don't burn it or create something that is dreadful to consume. Recipes require balance, and so does the variety of DevAlphabetSoupOps. Understanding the recipe and how to cook it will spell the difference between a great soup and an inedible one.

Thursday, May 5, 2022

Built-In Quality – How Do You Build Quality In? An #InflectraCON Live Blog

I confess that the title of this talk has me instinctively saying, "This is a sarcastic title, right?" Granted, Lean and SAFe have the idea of "Built-In Quality" as one of their main principles, but again, I have been trained and conditioned to be skeptical of such a thing. Still, let's give Derk-Jan de Grood the benefit of the doubt ;). What does Built-In Quality mean, and what could it look like?

In a variety of larger projects, especially in the physical sphere, a lack of quality isn't just inconvenient; it can literally be a matter of life and death. Thus, sure, we want to look at the abilities we can leverage to increase quality. Still, is this a matter of semantics? Can you actually build quality in? Of course, the answer is yes: you can take care while the product is being built to help ensure that its quality is as high as it can be. Again, though, what does Built-In Quality actually mean, and what does it actually look like?

Ultimately, in my worldview, quality has to be present from the very beginning. Let's consider something I'm somewhat familiar with in regard to building a guitar. The first and foremost area to consider is the wood that makes up the body and the neck of the guitar. If there are hidden cracks inside the wood, it doesn't really matter if the rest of the guitar is top-notch. An undiscovered but spreading crack in the interior of the wood would counteract any and all quality I might put into the frets, tuners, bridge, pickups, etc. In this example, if I were to scan the wood to look for cracks or imperfections, eliminating board stock that would not let me cut a solid piece free of structural cracks, that would "build in" some quality in that instrument.

Likewise, we need to look at our applications the same way. To build in quality means we as an organization need to be willing to go into our processes and practices with as much attention as my hypothetical luthier scanning the wood for potential defects before making cut #1.

In a software example, we of course can and should apply similar ideas. If we have processes that address quality from the very beginning of the software development cycle, we can likewise create processes and practices that allow us to test effectively from the earliest point of effort.

Agile Coaches Can Wear Many Hats: An #InflectraCON Live Blog

As I read the description of this talk, I decided I needed to see what it was about. As someone who has been involved in a lot of peripheral roles along with being a tester, I know how it feels to slide into the role of "coach" for a variety of things. Thus I was intrigued by what Steve Moubray would be talking about here.

While I wouldn't call myself a proper "Agile Coach" I do get the ideas behind coaching and how we can do what is needed in a number of situations. I've been a Scoutmaster, a Ren Faire Performer, a founder of a competitive snowboard team, and a number of other things as well. As such, the discipline of coaching is something I'm intimately familiar with.

There are a variety of situations where we might need to get involved and apply different skills as they are required. We may not be able to cover everything all the time but we may well be actively called upon to wear any number of hats (and by the way, Steve is actually wearing each of these hats as he goes ;) ).

An experienced Agile Coach will put on 12 different hats.

1. Teacher (Graduation Cap)

2. Mentor (Hogwarts Wizard’s Hat)

3. Facilitator (Party Hat)

4. Coach (Ted Lasso’s Hat)

5. Consultant (Surgery Scrub Cap or my Judge’s Wig)

6. Dojo Coach (Karate Kid Headband)

7. Project Manager (Hard Hat with Head Lamp)

8. Auditor (Green Accountant Hat)

9. Agile Police (NYPD Hat)

10. Spy (Sherlock Holmes Hat)

11. Deliver This Product! (Crash Helmet)

12. Yes Person (Propeller Beanie)

This is a great and fun way to bring the point home, and props for what I think will be the most creative talk of the event.

Tech, Gaming, and Metaverse: An #InflectraCON Live Blog

Jennifer Bonine is a speaker I've been able to hear at conferences for as long as I've been giving talks (since 2010). It's been fun to hear her talk about a variety of projects she's been part of over the past decade-plus, and it's been intriguing to hear her talk about e-sports teams, CryptoKitties, and other areas that are being collectively wrapped up in the phrase "The Metaverse", AKA Web 3.0. The idea here is the notion that we are "creating digital replication of experiences in created worlds outside of our physical world".

OK, that sounds wild. What specifically does that mean? 

The idea of Web 3.0 is that we do more than read and write stuff on the web. Going forward (and even now), we can effectively "own" a part of Web 3.0 with things like CryptoKitties, NFTs, and so on. We are also creating digital economies, and this even extends to countries changing their economies over to working with cryptocurrencies (I'm not going to pretend I understand the logic or ramifications behind that, but it's intriguing).

The idea of the Metaverse is that we all have the ability to literally become the metaverse, not just participate in it. In many ways, the apps we interact with (games, shopping, etc.) allow us to create our own experiences. There are, for example, contact lenses that let the user bring up a screen to view information entirely shielded from anyone else seeing or interacting with it.

Granted, a lot of this is foggy in my own head, but I am sure that I am interacting with this stuff day in and day out. However, I can already see that a lot of what I used to do in traditional media spaces (television, movies, etc.) has given way to online interactions. I was joking at dinner last night that, when it comes to time spent, YouTube and podcasts are probably my biggest time commitments for digital interactions (I'm still a bit of a snob music-wise in that I really like physical media, but I have bought my share of digital-only music; I'm still a full-album purchaser, btw, and I doubt that will ever change ;) ).

One thing that fascinates me is the growth of gameplay channels. I find it wild that there is a dedicated audience for watching other people play video games, and yet, at a certain level, I can understand it. If you are not particularly adept at gameplay mechanics (raises hand), I can totally understand the allure of watching a gamer who is good at it do a run-through of a favorite game, so that you can watch the playthrough and experience the game without the anxiety of dying over and over or getting frustrated when you hit a wall.

One area that I find both interesting and alarming is that there are a variety of algorithms at play here, and my question is "who is watching the watchers?" There's a documentary called "Coded Bias" that I have not seen but am now interested in seeing (this also ties into Carina Zona's talk "The Consequences of an Insightful Algorithm", or at least I think that's the title; I reference it a lot, you'd think I'd have it memorized by now ;) ). The point is that we have to be aware that unconscious biases exist and can, if we are not careful, get tied into these developing systems and perform/interact in ways that are not intended (or in some cases, entirely intended).

A lot of what developed in Web 2.0 has literally created a psychological addiction to technology. There's a literal sense of needing to go to our apps and interact with them. Many companies rely on this interaction, but there are also companies working on tools to help people better manage, and perhaps wean themselves off, an overabundance of need for social networking connections. If this becomes a reality, with a real emphasis on helping people get past things like anxiety and depression, I'm excited about it.

The relationship between humans and technology is evolving and getting closer all the time. While I'm glad we're not quite to the point of cybernetic implants being all the rage (really, Ghost in the Shell can wait a while longer, as far as I'm concerned), I do find it interesting to see where these devices are heading and how we are interacting with them.


I’m A BA Girl In Agile World: An #InflectraCON Live Blog


Mindy Bohannon leads off with the idea that the Business Analyst (BA) can do many things, including own products, help with design, and also help with testing. Being a BA allows for a lot of leeway as to what they actually do and when they actually do it.

I've often been curious as to what a BA actually does. They can bring a number of things to the table; they can be:

- the Proxy Product Owner

- the User Experience Expert

- the Tester (or one of them)

- the Scrum Master

As well as the more typical roles of:

- Data Analyst

- System Analyst

- Process Analyst

Okay, that's neat and all, but for those of us in the back... what is Business Analysis in the first place? A BA defines what the need is and then recommends solutions that deliver actual value. They also define and validate the solutions that meet business needs, goals, and objectives.

The core focus of a BA? Communication: working with stakeholders, facilitating conversations and work needs, prioritizing those needs, and defining/explaining the requirements and how they are implemented. An important skill to emphasize is a flexible mindset. The requirements as originally defined may change, and they may need to change rapidly and frequently. BAs will need to continually assess and adjust based on what they learn, and they may need to operate on more of a just-in-time development model for requirements. In short, Agile requires flexibility and a willingness to work on things up to and including the last responsible moment (and maybe the last irresponsible moment :) ).

The key to note here is that the BA is a much more versatile role than just creating requirements. If we have them, leverage them.


How to Nudge Your Way Through Agile Testing: An #InflectraCON Live Blog

I confess the title alone convinced me to come to this talk. I've never met or interacted with Ard Kramer before. Ard calls himself a "Qualisopher" (a quality philosopher; nerdy to a fault, and I like him already ;) ). Ard starts out with an interesting question: "Do you drive better than the average driver?" When we ask such a question, we have to think about what "average" is. What does it represent? Are we more favorable toward ourselves or less? In other words, do we perceive ourselves as being better than others, at least in an average sense?

We have a lot of biases, as the question helps illustrate, and those biases are actually things that we can leverage to our advantage. Marketing uses biases to help sell all the time. Maybe we can use that knowledge to help us. This is where "nudging" comes in. By leveraging people's biases, we can encourage behaviors. By encouraging behaviors, we effectively give them a nudge. An example used in the talk: a waste receptacle was wired for sound so that any time something was thrown into the bin, the noise of a falling object (in a cartoonish manner) was played back. The sound was amusing, so for many people, it was fun to use that waste bin. Over time, that bin had more garbage thrown into it than other receptacles that weren't wired for sound. By making it a little more fun, the municipality changed/nudged the behavior of its residents.

There are two main systems people put into play when working on things: one is the sense of the "Doer", while the other is that of the "Planner". While most of us like to think we operate as the latter, we are actually much more focused on the former. Both have their benefits and advantages, but they are different from one another. The question then comes down to: how can we use these models to help us "nudge" people in desired directions? In the sense of testing, how do we feel about the product we work on? Is it a product I can stand behind? If so, then it is likely we can nudge people to come along for the ride we hope to take them on.

An effect that is used is "the default option". In many cases, this is an area that few people explore. If the default is either too low or too high, there is a likelihood that people will not be happy with the results. It is safe to say, though, that if something requires a lot of interactions and confirmations, the abandonment rate for completing a task or creating an order increases considerably. Thus, a reasonable set of defaults is helpful, but the defaults need to be explained in a way that makes sense and is appreciated by the user(s).

Another effect that is helpful is consistency. When people do things and are encouraged to do them, they often show greater commitment. Why? Because when people give themselves the ability to accomplish tasks for and by themselves, it solidifies ownership of the process and thus greater consistency in accomplishing and succeeding at the task.

Additionally, things we are asked to sign off on tend to get a greater commitment. Why? We are not anonymous at this point, we are now putting out there our own desires and focuses. By signing our name to a process, we own it, we own the outcome, and that either gives us great pride or great pause. Either way, we tend to be focused on what we are doing because we are known entities doing so.

An interesting approach that I had not heard of before is "the Zeigarnik Effect". The idea here is to leave things open or unresolved, because we are more likely to remember things that are not completed. Once we have completed something, well, completely, we tend to offload the learning we have done; not completely, but somewhat, and then it is hard to recall where everything is. Ending a given day's efforts with a question allows us to hang on to what we have learned.

Another clever approach is to trigger an unconscious behavior. One example was the idea that trains that had quiet car designations struggled to encourage people to honor the principle of the quiet car. What they did to solve the problem was paint the cars to suggest/represent the users were in a library. What's the typical behavior of people in libraries? They tend to treat it as a space to be silent, or at least quiet. By painting the cars to look like actual libraries, they helped set people's behaviors as to how they would react and interact. Frankly, I think that is clever :).

There are lots of methods we can use to encourage the things we need to do and the things we expect of others. Nudges are just that: they are not extreme, they are not unpleasant, and they are often simple things that can help guide people to move just a little bit one way and accomplish tasks in a better way.

Data, Not Software, Is Eating the World: An #InflectraCON2022 Live Blog

I found this topic both intriguing and believable, in the sense that what matters is not merely that software applications exist and are being used, but the output of those applications and what it ultimately means for the present and future of usage and how we interact with them.

I remember well the days before the Internet, when I was working (barely) as a musician and my whole computer life resided on three floppy disks. That's not an exaggeration. I had three disks: one for WordPerfect and writing, one with Lotus 1-2-3 and my band's financial data, and one with dBase III+ and my band's database/mailing list. Those three disks were indeed precious; literally everything that mattered to us, including the actual software programs themselves, resided on those disks. This was back in 1990. Less than a year later, I would be working at a company where I would be using network-based systems, interacting with USENET newsgroups, and generating far more data than my original three floppy disks could contain. Nowadays, those three disks couldn't even hold a single photo taken at today's standard resolution.

The data that we produce today is staggering. Trying to make sense of it, and to have actionable processes come out of it, is a challenge. Some companies make this easier than others. Some details are easier to track than others. Some data points are simple to keep track of; others are much more difficult simply because the volume and types of data being collected are disparate and hard to correlate/quantify.

Data produced by applications can come from many sources, and unless we can set parameters around it, we will struggle to make sense of it. Additionally, in some cases, the applications we use may make getting that data harder than it needs to be or should be. Let's take a simple example from a popular application today, TikTok. While I can see how the videos I posted in the last week are doing compared to others, I can't see longer trends beyond 60 days. What are my top ten all-time most-viewed videos? I have no clue short of scanning my output, getting the view counts for all of the videos, and then ranking them. It seems this would/could be a valuable metric, but they don't make it easy for me to figure out. Granted, TikTok is perhaps a trivial example, but my point is that there is a ton of data that would be interesting and helpful to know about, yet I have no clear path to get to that information or make it useful for me.
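For what it's worth, the ranking itself is trivial once the raw numbers are accessible, which is rather the point; the hard part is getting the data out of the app at all. A quick sketch with made-up numbers (TikTok offers no such export that I know of):

```python
# Hypothetical per-video stats; the hard part is getting this data
# out of the app in the first place, not the ranking itself.
videos = [
    {"title": "filler words", "views": 48_200},
    {"title": "conference recap", "views": 3_100},
    {"title": "accessibility tips", "views": 12_750},
]

# All-time top ten by view count: one sort, descending.
top_ten = sorted(videos, key=lambda v: v["views"], reverse=True)[:10]
for rank, v in enumerate(top_ten, start=1):
    print(f"{rank}. {v['title']}: {v['views']:,} views")
```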

The point of scientific experiments is to collect data and analyze it. Many apps allow us to create a variety of interactions, but what are those interactions for? What is the output of those interactions? How can we leverage them? These questions matter for everyone from the individual user all the way up to the largest organizations. In short, before data can be useful and informative, we have to be able to access it in the first place (and prevent access by those who should not have it). More to the point, data has to be reliable, repeatable, and usable in a capacity where we can make sense of it and put actions in motion based on what we have received.


Discussions Testers Should No Longer Be Having: Live From #InflectraCON2022

Again, there's a limit to the network reliability this go-around, so I'm keeping the posts light on images for now. I will upload pictures later to augment these posts, so consider that a heads-up.

It's always fun to see friends of mine at these events. Mike Lyles and I have talked about many things over the years, and I'm excited to see how he has developed over the past few years. It's been great to see his book "The Drive Through Is Not Always Faster" get some traction as well. It's somewhat ironic that his book came out in late 2019, and then when COVID hit, the philosophy had to change a little bit; in many ways, the drive-through became the only way to get anything ;).

Mike's talk is about the conversations that software testers just shouldn't still be having. Testers have a critical voice in the alignment of teams. We have to be at the table from the beginning. It's no longer sufficient to wait for developers to drop code in your lap to start testing. We have a place starting with the design meetings. Our questions need to be asked early and often. We have the ability to help make a more testable product, as well as a better quality product, by actively provoking and making sure we have a clear understanding of what the product needs to have. One of the key things to consider is that we are the ones meant to hold people accountable. That doesn't mean we have to be perpetually adversarial. The fact is, most of the interactions we have are not personal or meant to be.

Mike points to Stephen Covey's fifth habit of highly effective people: "seek first to understand, then to be understood". For that to work, we need to listen, and we need to listen attentively. More to the point, Mike encourages Empathetic Listening: listening with the intent to understand. Yet Mike is talking about the discussions we shouldn't be having, so what gives? The point is that we are having these conversations in the wrong way, and we can do better overall.

Part of this comes down to a "debating response" that is instilled in us. In part, this is not a desire to be understood or to seek understanding; it's us trying to be right regardless of the other participants' feelings or experiences. To get out of that, it helps to be empathetic in our conversations. Some examples of conversations that we can have a lot better are:

- why did QA miss this bug?

- anybody can do testing

- it works on my machine

- we don't have enough time/budget to test effectively 

- we can test this in production

- we need 100% automation

Depending on your experiences, this list may seem challenging or like no big deal. The difference is in how we deal with and talk about these things. The point is that these conversations aren't going anywhere, but there's a good chance they can be had a lot better and a lot more effectively when we, as empathetic listeners, try to understand what is needed/required without getting defensive or shutting down. By actively communicating the realities and the opportunities, we can be truthful and effective with those we work with.


Live From Gallaudet University, It's InflectraCON 2022

I'm excited to be in Washington, D.C. this week. It's the first time since last year that I've been able to travel for a conference, and it's also the first time I've been in the U.S. capital since I was a teenager.

I am at InflectraCON, which is an interesting blend of both Agile Testing and DevOps. More to the point, it's the first event held since the merger of Inflectra and Software Test Professionals (STP). Thus, for those who have read or seen me talking about STP-CON, this is the evolution of STP-CON in a different jacket, so to speak.

This conference is being held at Gallaudet University, which for me is a poignant place to be delivering my particular talk. As many of you know, one of my primary focuses is Accessibility and Inclusive Design. The talk I will be delivering for this conference is "The Dos and Don'ts of Accessibility". I can think of no better place to deliver this talk than at the U.S.A.'s pre-eminent university for the education of deaf and hard-of-hearing people.

One of the challenges we face at this event is, of course, the fact that we are hammering this poor university's network, so I will do my level best to make updates as I can. My goal is to get talks out as they are happening, but I may need to delay my postings. Regardless, as always, I intend to summarize and pontificate on each talk given, so watch this space.