Monday, December 31, 2018

And You May Find Yourself

I just realized that this is the ninth installment of this little year-end feature for my blog. I started writing it early in 2010, so it is nearly a decade old. Much has changed, and I've done and learned a lot in those almost nine years. However, I still manage to find a way to come back to this joke and see if the lyrics to Talking Heads' "Once in a Lifetime" will line up with my life's experience. Hence the title this time around.

It's that time again, the end of another year. With it comes a chance to reflect on some of what I've learned, where I've been, what I could do better, and what I hope to do going forward.

Some may notice that the blog entries have been fewer here this year. There are a variety of reasons for that, but chief among them is that I've been writing guest blog posts over at the TestRail Blog. Thus, I've "found myself" stepping into broader topics, many of them related to software testing and software delivery, accessibility, inclusive design, and automation techniques. One of my most recent entries is here:

Let the Shell Be Your Pal

The Testing Show had an interesting year. While we have scaled back to monthly shows, we have had a variety of interesting topics and broad discussions on software testing, software delivery and quite a bit of coverage of Artificial Intelligence and Machine Learning. In fact, that was the topic of our latest show, so I'd encourage anyone interested to drop in and have a listen:

The Testing Show: Testing with AI and Machine Learning

Last year, I talked about my transition to being a 100% remote worker. This is the first full year that I worked remotely. My verdict? Mixed, to be truthful. On the plus side, I never have to leave my house. On the negative side, that can become a self-fulfilling prophecy. When I was working in Palo Alto, the daily ritual of traveling to the train station and walking around and exploring Palo Alto was a great way to break up my day. Now that I'm at home full time, I really have to remind myself to make those diversions and get up and get out.

This year, in our family, my daughter accepted a full-time mission call for our church. She has been in São Paulo, Brazil since early May, and she will be returning home in early November of 2019. Our weekly emails have been a very bright spot and something I look forward to. My son is still in Los Angeles, working with a recording studio and handling a number of site-management duties so he can live there (seriously, just think about that: how cool is it that my son literally lives in a multi-track recording studio? ;) ). He's doing a lot of photographic and graphic artwork for a variety of performers, so he's "living the dream". How lucrative is it? That's always up to interpretation, and as you might guess, he's not telling me much (LOL!). Not that I blame him. I remember full well how it felt to be a performer almost thirty years ago. Creatively, I was on cloud nine. Financially, I struggled. I still wouldn't have changed any of those years, and I'm pretty sure he feels the same.

On the speaking front, I'm still focusing on Accessibility and Inclusive Design. I've also expanded into developing a workshop devoted to building a testing framework from scratch. Well, building a framework from available parts and connecting them to each other is a more appropriate description. This is an initiative I've worked on with my friend Bill Opsal. The response to this workshop has been great, and I've appreciated the feedback I've received to make it better. I presented it at the Pacific Northwest Software Quality Conference back in October and I will present it again in April at the Software Test Professionals Conference (STPCon). I should also mention that the materials I used to put this workshop together have been rolled out as a new framework approach at my company. It's neat when my speaking engagements and workshop presentations can filter back into my day-to-day work :).

If the following reads almost verbatim to last year's, it's because the sentiment is the same. My thanks to everyone who has worked with me, interacted with me, been part of The Testing Show podcast as a regular contributor or a guest, shared a meal with me at a conference, come out to hear me speak, shown support for the Bay Area Software Testers meetup, and otherwise given me a place to bounce ideas, think things through, and been a shoulder to cry on or just heard me out when I feel like I'm talking crazy. Whether you have done that just a little bit or a whole lot, I thank you all.

Here's wishing everyone a wonderful 2019.

Thursday, November 8, 2018

Live Blogging at #testbash: Same Approach, Different Location

Hello everyone and welcome to #testbash San Francisco.

As many of you know, one of the things I actively do when I attend conferences is live-blog the sessions I attend. I am doing/have done this at #testbash, but instead of those posts appearing on TESTHEAD, they are posted at Ministry of Testing's forum "The Club", specifically in the TestBash San Francisco section.

Please stop by and have a read or several, as there's plenty of posts there :).

Thursday, October 11, 2018

Results of the Install Party - a #pnsqc workshop followup

Yesterday I said I was interested in seeing how far we could go with solving the install party dilemma. That is where a bunch of people sitting in a room try to get the code or application installed so that it can be useful. Often this turns into a long process of trying to determine the state of people's machines, struggling to see why some machines work and some don't, and overcoming other obstacles. It's not uncommon for an hour or so to go by before everyone is in a working state, or at least everyone who can be.

Bill Opsal and I thought that making a sandbox on a Virtual Machine would be a good way to go. By supplying two installers for VirtualBox, we would be able to have the attendees install VirtualBox, set up the virtual machine, boot it and be ready to go. Simple, right? Well...

First of all, while Macs tend to be pretty consistent (we had no issues installing to Macs yesterday), PC hardware is all over the map. I had a true Arthur Carlson moment yesterday (he's the station manager on "WKRP in Cincinnati", who famously said in one episode, "As God is my witness, I thought turkeys could fly").

Well, in that classic fashion "as God is my witness, I thought all Operating Systems supported 64-bit configurations in 2018".

Oh silly, silly Testhead!!!

To spare some suspense: for a number of participants with older PC hardware, the option to select a Linux 64-bit guest operating system wasn't even available. Selecting a 32-bit system presented the users with a blank screen. Not the impression I wanted to make at all. Fortunately, we had a lot of attendees who were able to load the 64-bit OS without issue. Some other details I hadn't considered, but we were able to overcome:

- Hyper-V configured systems don't like running alongside VirtualBox, but we were able to convert the .vdi file to a .vhd file and import the guest OS into Hyper-V

- One of the participants had a micro notebook with 2 GB of RAM for the whole system. That made it difficult to give the guest enough memory to run in a realistic way.

Plus one that I hadn't considered and couldn't... one attendee had a Chromebook. That was an immediate "OK, you need to buddy up with someone else".

In all, about eight of the 28 participants were unable to get the system working for them. By the time we got everyone sorted and settled and we felt sure we could continue, 30 minutes had elapsed. That's better than the hour I'd routinely experienced, but that is still, to me, an unacceptable number of people who couldn't get their systems to work.

Talking with other workshop facilitators, I learned we had all tried a variety of options, and the one I think I will likely use going forward is the "participant install prerequisite" that one of the instructors instituted. He encouraged all of the participants to contact him before the course started to make sure they could install the environment. If they couldn't, they would work out together what was needed to be able to do so. While this takes more time for all involved prior to the workshop, it is balanced by the fact that all attendees are confirmed ready to go at the start. My goal was to speed up that readiness by using a sandbox environment that was already set up. It was partially successful, but now I know there are other variables that I need to pay closer attention to. Good things to keep in mind for next time.

Wednesday, October 10, 2018

Lifting Radio Silence - Building a Testing Framework from Scratch(*) at #PNSQC

Last year, my friend Bill Opsal and I proposed something we thought would be interesting. A lot of people talk about testing frameworks, but if you probe deeper, you realize that what they are actually after is an end-to-end solution to run their tests. More often than not, a "testing framework" is a much larger thing than people realize, or at least what they are envisioning is a larger thing.

Bill and I started out with the idea that we would have a discussion about all of the other elements that go into deciding how to set up automated testing, as well as to focus on what a framework is and isn't.

The net result is the workshop that we will be delivering today (in about three hours as I write this).

We will be presenting "Building a Testing Framework from Scratch (*)". The subtitle is "A Choose Your Own Adventure Game". In this workshop, we will be describing all of the parts that people tend to think are part of a testing framework, how essential they are (or are not), and what you can choose to do with them (or choose to do without them). Additionally, we are giving all participants a flash drive that has a fully working, albeit small, testing framework with plenty of room to grow and be enhanced.

OK, so some of you may be looking at the title and seeing the asterisk. What does that mean? It means that we need to be careful with what we mean by "From Scratch". When Bill and I proposed the idea, it was from our impression of "starting with nothing and going from there", and that is what we have put together. Not being full-time programmers, we didn't realize until later that it could also be interpreted as "coding from the ground up". To be clear, that is not what this is about. Neither Bill nor I have the background for that. Fortunately, after we queried the attendees, we realized that most were coming to it from the perspective of our intended meaning. We did have a couple who thought it was the latter, and we gave them the option of finding a workshop that would be more appropriate for their expectations ;).

In the process, we also agreed we would do our best to try to overcome another challenge that we had experienced in workshops for years; the dreaded "install party". That's the inevitable process of trying to get everyone to have the software running on their systems in as little time as possible. This has been a long-running challenge and workshop coordinators have tried a variety of ways to overcome it. Bill and I decided we would approach it in the following manner:

  1. Create a virtual machine with all code and examples, with a reference to a GitHub repository as a backup.
  2. Give that Virtual machine to each participant on a flash drive with installers for VirtualBox.
  3. Encourage each participant to create a virtual machine and attach to the virtual disk image on the flash drive.
  4. Start up the machine and be up and running.

Today we are going to see how well this goes with a room of twenty-eight people. We will test and see if we are successful (for science!).

Tomorrow and in the coming days, I will share the results of the workshop, the good, bad, and ugly that we witnessed (hopefully much of the first but if we get some of the second or third I want to see how we can do better), as well as some of the decisions we made in making the materials that we did. We hope you will join us :).

Taking My Own Advice - A New Look for TESTHEAD

One of the comments that I made during my talk on Monday was that you could go to great lengths, make your site Accessible, pass all of the WCAG recommendations, and still have an experience that is less than optimal. That point was driven home to me this morning by a message from a reader who really enjoyed the material but found the white-on-black text hard to read and too small (even though it was set up to be in compliance).

Therefore, for the first time in many years, I stepped back, reconsidered the blog aesthetics vs the blog's usefulness and I redid everything.

  • The white-on-black look is gone.
  • The contrast level has been pumped up (I may do some more tweaking on this).
  • The default font is larger.
  • I will have to go back and check the images to make sure that the alt tags are still there, but the goal is that every image has an alternative description.
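
That last item is easy to script. Here's a minimal sketch, using only Python's standard-library HTML parser, of how I could scan a page for images missing alt text (the sample markup is purely illustrative, not from this blog):

```python
from html.parser import HTMLParser

class AltChecker(HTMLParser):
    """Collects the src of every <img> tag lacking a non-empty alt attribute."""
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            # Empty alt="" is treated as missing here; decorative images
            # may legitimately use it, so adjust to taste.
            if not attr_map.get("alt"):
                self.missing.append(attr_map.get("src", "(no src)"))

def images_missing_alt(html):
    checker = AltChecker()
    checker.feed(html)
    return checker.missing

page = '<p><img src="a.png" alt="A chart"><img src="b.png"></p>'
print(images_missing_alt(page))  # ['b.png']
```

Running something like this over a blog export would give a quick to-do list of images needing alternate descriptions.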

My goal in the next few weeks is to re-evaluate this change and then ratchet up the WCAG 2 coverage.

In other words, I ask you all to "pardon the dust" as I reset the look and feel of my home away from home. As always, I appreciate feedback and suggestions for making my words and message as available to all as possible :).

Tuesday, October 9, 2018

The Lost Art of Live Communication - a #pnsqc Live Blog

Wow, have we really already reached the end of the program? That was a fast two days!

Jennifer Bonine is our closing keynote and her talk is centered on the fact that we seem to be losing the ability to actually communicate with people. We are becoming "distanced". I've found that there are a handful of people that I actually talk to on the phone. It's typically limited to my family or people who I have a close connection to.

I work from home so I really have to make an effort to get up and out of my house. Outside of meetings and video calls I don't directly interact with my co-workers. There are no water cooler chats. We do use our chat program for intra-work communication, but otherwise, there really isn't any random communication. Is this good or bad? Jennifer is arguing, and I'd say successfully, that it's a bit of both, with a solid lean towards bad.

What is not debatable is that we are definitely communicating less in person and in real time. Is this causing a disconnect with families? I think it's highly likely. Jennifer does as well.

How much of our communication is non-verbal? What lengths do we have to go to to make sure that a text message carries the full nuance that in-person communication does? When we explain something to someone, how do we know they have actually received and processed the message effectively? Outside of an in-person discussion, we really don't. Often, even in an in-person discussion, a lot may well be lost. Communication styles are unique to individuals, and different people communicate and receive information differently.

I read a great deal, so I have a vocabulary that may go over the heads of many of the people I communicate with. I pride myself on trying to "Speak Dude" as much as possible, but my sense of speaking "Dude" may still include a lot of words that people may not understand. Having a big vocabulary can be cool, but it's not necessarily a plus if the people I am communicating with don't get the words that I am using.

Jennifer suggests that, perhaps, one of the biggest positives of AI-based test automation making inroads has less to do with the fact that it can automate a bunch of testing and more to do with the fact that it can free up our minds for lots of other things, things that are potentially a lot more interesting than the repetitive tasks.

We had a conversation break that amounted to "what would we want to do/be if we had one year to live and had a million dollars in the bank?" It was interesting to me to see that, after a very short time to think, I knew what I wanted to do. With those parameters, I would want to gather my wife and children and just tour the world. Go to places I've never been or visit places my kids have seen and I haven't. I'd love to have my daughters show me their experiences and memories of their time in Japan. I'd love my older daughter to be able to show me the areas she has been living in while she has been in Brazil (she'll be there for another thirteen months so I hope this experiment can be paused until she returns ;) ). The neatest part of this is how quickly that clarity comes.

Communication takes time, it takes energy, and it takes commitment. I'm on board with being willing to make a better effort at communicating better. Not necessarily communicating more but certainly upping the quality of the communication I participate in.

Testing all the World’s Apps - a #pnsqc Live Blog

In my world, I basically test one app. I test its appearance on the web and I test it as it appears on a variety of mobile devices. That's about it. By contrast, Jason Arbon runs a company that tests lots of apps. Like, seriously, LOTS of apps!

Jason asks us what we would do if we had to test a huge number of apps. Would we try to test each of them as a unique project? It's logical to think that we would, but Jason points out that that's not needed. A lot of apps reuse components from SDKs, and even when elements are unique, they are often used in a similar manner. What if you had to test all of the world's apps? How would you approach your testing differently?

Three major problems needed to be solved to test at this scale:

Reuse of test artifacts and test logic

By developing test approaches at as high a level as possible, we can create test templates and methods for producing test artifacts in a reliable, or at least close to uniform, manner. Over time, there is a way to look for common components and build a series of methods to examine the places an element might be and how it might be interacted with. Chances are that many of the steps will be very close in implementation.

Reliable test execution

Once the patterns have been determined and the set of possible locations has been mapped, it is possible to create a test harness that will go through and load a variety of apps (or a suite of similar apps) to make sure that the apps can be tested. It may seem like magic, but it's really leveraging the benefit of reused and reusable patterns.

One challenge is that a lot of services introduce latency to testing over the Internet. By setting up queuing and routing of test cases, the cases that need to be run get the priority that they need.

Unique ways to correlate and report on the test results

The reporting structure that Jason shows includes the type of app, the page type, and the average load time for each page. This allows for an interesting view of how their own app relates to or competes with other apps. Wild stuff, I must say :).

The Do Nots of Software Testing - a #pnsqc Live Blog

Melissa Tondi makes a point that, at this and many other conferences, we learn a lot of things we should do. She's taking a different tack and suggesting things we should not be doing.

Do NOT Be the Enabler

I can relate to this in the sense that, in a number of sprints or times of challenge, we jump in and become the hero (sometimes). However, there is a danger here: it can communicate to others on the team that things can wait, because the test team will be the ones to make sure it gets in under the wire. Possibly, but that also means there may be things we miss because testing is left until the end.

Risk-based and context-driven testing approaches can help here, and we can do quality testing without necessarily driving ourselves to distraction. Ultimately, though, if we can see that testing is about to enter an enabling phase, we as a team need to figure out how to keep that enabling from happening. As Scrum Master on my team, I am actually in a good position to make sure this doesn't happen (or at least I can give my best effort to help see that it doesn't).

Do NOT Automate Everything

I agree with this. I think there are important aspects that can and should be automated, especially if it helps us avoid busywork. However, when we focus on the "Everything", we lose focus and perhaps miss developing better solutions, not to mention having to charge through a lot of busywork. We should emphasize that automation should make us more efficient. If we are not achieving that, we need to push back and ask why, or determine what we could/should be doing. In my world view, what I want to determine is "what am I doing that takes a lot of time and is repetitious?" Additionally, some methods of automation are better than others. Simon Stewart gave a great talk on this at STPCon in Newport Beach this year, so I suggest looking up that talk and reviewing it.

Do NOT Have "QA-Only" Sprints or Cycles

If you hear the term "hardening sprint", then that's what this is. The challenge here is that a lot of regression testing needs to be processed, with the potential of losing both momentum and time as we find areas that don't mesh well together. Melissa describes "The ABC Rule":

- Always Be Coupled (with dev)
  Try to keep development and testing work coupled as closely as possible

Do NOT Own All Testing

Software testing needs to happen at all levels. If a team's developers have all of the testing aspects performed by the testing group, that's both a danger and a missed opportunity. By encouraging testing at all levels of development, the odds of delivering a real quality product go up considerably.

Do NOT Hide Information

This is trickier than it sounds. We are not talking about lying or hiding important things that everyone should know. This is more the danger of implicit information that we might know and act on without even being aware of it. We need to commit to making information as explicit as we possibly can, and certainly do so when we become aware of it. If we determine that information needs to be known and then we don't act on it, we are just as guilty as if we had deliberately hidden important information.

Talking About Quality - a #PNSQC Live Blog

Kathleen Iberle is covering a topic dear to my heart at the moment. As I'm going through an acquisition digestion (the second one: my little company was acquired, and that acquiring company has recently been acquired itself), I am discovering that the words we are used to and the way we defined quality are not necessarily in line with the new acquiring company's. Please note, that's not a criticism; it's a reality, and I'm sure lots of organizations face the same thing.

In many ways, there are lots of conversations we could be having, and at times there are implicit requirements that we are not even aware we have. I consider that an outside dependency: if I don't know I need to do something, I can't do it until that knowledge is given to me so I can act on it. There's a flip side to this as well. That is the implicit requirement where there's something I do all the time, so much that I don't even think about it, but someone else has to replicate what I am doing. If I can't communicate to them that there is a step they need to do, because I literally jump through it so fast that I don't notate it, can I really be mad when they don't know how to do that step?

Many of us are familiar with the idea of SMART Goals, i.e. Specific, Measurable, Achievable, Relevant and Time-based.  This philosophy also helps us communicate requirements and needs for products. Taking the time to make sure that our goals and our stories take into account whether or not they add up to the SMART goal model is a good investment in time.

An interesting distinction that Kathleen is making is the differentiation between a defect and technical debt. A defect is a failure of quality outside of the organization (i.e. the customer sees the issues). Technical debt is a failure of quality internal to the team (which may become a defect if it gets out in the wild).

An approach that hearkens back to older waterfall testing models (think the classic V model) is the idea of each phase of development and testing having a specific gate at each point in the process. Those gates can either be ignored (or used only at the very end of the process) or given too much attention, out of scope or context for the phase in question. Breaking up stories into smaller atomic elements can help improve this process because the time from initial code to delivery might (can) be very short. Using terms like "Acceptance Test", "Definition of Done", "Spike", "Standard Practice", etc. can help us nail down what we are looking at and when. I have often used the term "Consistency" when looking at possible issues or trouble areas.

Spikes are valuable opportunities to gain information and to determine whether there is an objective way to measure the quality of an element or a process. They are also great ways to determine whether we need to tool up or gain skills we don't yet have (or need more ability with) to be effective.

Risk Based Testing - a #PNSQC Live Blog

It's a fact of life. We can't test everything. We can't even test a subset of everything. What we can do is provide feedback and give our opinion on the areas that may be the most important. In short, we can communicate risk, and that's the key takeaway of Jenny Bramble's talk. By the way, if you are not here, you are missing out on Dante, the deuteragonist of this presentation (Dante is Jenny's cat ;) ).

Jenny points out right off the bat that words are often inadequate when it comes to communicating. That may sound like unintentional irony, but I totally get what Jenny is saying. We can use the same words but have totally different meanings. One of the most dangerous words (dangerous in its fluidity) is "risk". We have to appreciate that people have different risk tolerances, often on the same team. I can point to my own team of three testers, and I can feel in our discussions that risk is often a moving target. We often have to negotiate what the level of risk actually is. We get the idea that risk exists, but how much and for whom is always up for discussion.

Jenny points out that risk has a variety of vectors. There's a technical impact, a business impact, and a less tangible morale impact. When we evaluate risk, we have to determine how that risk will impact us. What is the likelihood that we will experience failure in these scenarios? I often have these discussions when it comes to issues that I find. Rather than just come out and say "this is a bug!", I try to reach a consensus on how bad the issue might be. This is often done through discussions with our product owner, asking questions like "if our customers were to see this, what would your impression be?" I likewise have similar discussions with our developers, and often just asking questions prompts people to look at things or to say "hey, you know what, give me a couple of hours to harden this given area".

Risk isn't always limited to the feature you are developing at the given moment. A timetable changing is a risk. Third-party interactions can increase risk, sometimes considerably. If your infrastructure is online, consider where it is located (Jenny is from North Carolina and, as many are probably aware, a hurricane recently swept through and made a mess of eastern North Carolina; imagine if your co-lo were located there).

Ultimately, what it comes down to is being able to perform an effective risk assessment and have a discussion with our teams about what those risks are, how likely they are to happen, and ultimately how we might be able to mitigate those risks.

Jenny has a way of breaking down a risk matrix to make it a numerical value. Take the level of likelihood and the level of impact, multiply the two numbers, and that gives you the risk factor. A higher number means higher risk and higher efforts to mitigate. Lower values mean lower risk and therefore lower cost to mitigate.
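
That multiplication is simple enough to sketch in a few lines of Python. To be clear, the feature names and ratings below are my own made-up illustration, not examples from Jenny's talk; I'm only assuming the common 1-5 scale for likelihood and impact:

```python
def risk_factor(likelihood, impact):
    """Multiply likelihood (1-5) by impact (1-5) to get a 1-25 risk factor."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact should each be rated 1 to 5")
    return likelihood * impact

# Hypothetical ratings for a few features: (likelihood, impact)
features = {
    "checkout flow":  (4, 5),  # likely to break, very customer-visible
    "admin report":   (2, 3),
    "legacy tooltip": (1, 1),
}

# Rank the features from riskiest to least risky.
for name, rating in sorted(features.items(),
                           key=lambda kv: -risk_factor(*kv[1])):
    print(f"{name}: risk factor {risk_factor(*rating)}")
```

Sorting by the factor gives the team a quick, shared ordering to argue about, which in my experience is where the real risk conversation starts.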

"This feature has been drinking heavily and needs to go to rehab!" Best. Risk. Based. Metaphor. Ever (LOL!).

This is my first time seeing Jenny present, though I see her comments on Twitter frequently. If you haven't been to one of her presentations, may I strongly suggest that, should she be speaking at a conference near you, you make it a priority to see her speak? Excellent, my work here is done :)!