Thursday, October 11, 2018

Results of the Install Party - a #pnsqc workshop followup

Yesterday I said I was interested in seeing how far we could go toward solving the install party dilemma. That's the ritual where a room full of people try to get the code or application installed so that it can actually be used. Often this turns into a long process of determining the state of everyone's machines, puzzling out why some machines work and some don't, and overcoming other obstacles. It's not uncommon for an hour or so to go by before everyone is in a working state, or at least everyone who can be.

Bill Opsal and I thought that making a sandbox on a Virtual Machine would be a good way to go. By supplying two installers for VirtualBox, we would be able to have the attendees install VirtualBox, set up the virtual machine, boot it and be ready to go. Simple, right? Well...

First of all, while Macs tend to be pretty consistent (we had no issues installing to Macs yesterday), PC hardware is all over the map. I had a true Arthur Carlson moment yesterday. Arthur Carlson, the station manager of "WKRP in Cincinnati", famously said in an episode, "As God is my witness, I thought turkeys could fly".



Well, in that classic fashion "as God is my witness, I thought all Operating Systems supported 64-bit configurations in 2018".

Oh silly, silly Testhead!!!

To spare some suspense, for a number of participants who had older PC hardware, the option to select a 64-bit Linux guest operating system wasn't even available. Selecting a 32-bit system presented those users with a blank screen. Not the impression I wanted to make at all. Fortunately, a lot of attendees were able to load the 64-bit OS without issue. There were some other details I hadn't considered, but we were able to overcome them:

- Hyper-V configured systems don't like running alongside VirtualBox, but we were able to convert the .vdi file to a .vhd file and import the guest OS into Hyper-V (a rough sketch of that conversion appears after this list).

- One of the participants had a micro notebook with 2 GB of RAM for the whole system. That made it difficult to give the guest enough memory to run in a realistic way.

Plus one that I hadn't considered and couldn't... one attendee had a Chromebook. That was an immediate "OK, you need to buddy up with someone else".
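
For the curious, the Hyper-V conversion mentioned in the first bullet is essentially a one-liner with VirtualBox's own tooling. Here is a rough sketch, assuming VBoxManage is on the PATH; the file names are placeholders, not our actual workshop image:

```python
# Rough sketch of the .vdi-to-.vhd conversion mentioned above, assuming VirtualBox's
# VBoxManage tool is installed and on the PATH. File names are placeholders.
import subprocess

subprocess.run(
    ["VBoxManage", "clonemedium", "disk", "workshop.vdi", "workshop.vhd", "--format", "VHD"],
    check=True,
)
# The resulting .vhd can then be attached to a new virtual machine in Hyper-V Manager.
```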

In all, about eight of the 28 participants were unable to get the system working for them. By the time we got everyone sorted and settled and felt sure we could continue, 30 minutes had elapsed. That's better than the hour I'd routinely experienced, but it still left what is, to me, an unacceptable number of people who couldn't get their systems to work.

Talking with the other workshop facilitators, I learned we had all tried a variety of options. The one I think I will use going forward is the "participant install prerequisite" that one of the instructors instituted. He encouraged all of the participants to contact him before the course started to make sure they could install the environment. If they couldn't, they would work out together what was needed to get there. While this takes more time for everyone prior to the workshop, it is balanced by the fact that all attendees are confirmed ready to go at the start. My goal was to speed up that adoption by using a sandbox environment that was already set up. It was partially successful, but now I know there are other variables I need to pay closer attention to. Good things to keep in mind for next time.

Wednesday, October 10, 2018

Lifting Radio Silence - Building a Testing Framework from Scratch(*) at #PNSQC

Last year, my friend Bill Opsal and I proposed something we thought would be interesting. A lot of people talk about testing frameworks, but if you probe deeper, you realize that what they are actually after is an end-to-end solution to run their tests. More often than not, a "testing framework" is a much larger thing than people realize, or at least what they are envisioning is a larger thing.

Bill and I started out with the idea that we would have a discussion about all of the other elements that go into deciding how to set up automated testing, as well as to focus on what a framework is and isn't.

The net result is the workshop that we will be delivering today (in about three hours as I write this).



We will be presenting "Building a Testing Framework from Scratch (*)". The subtitle is "A Choose Your Own Adventure Game". In this workshop, we will be describing all of the parts that people tend to think are part of a testing framework, how essential they are (or are not), and what you can choose to do with them (or choose to do without them). Additionally, we are giving all participants a flash drive that has a fully working, albeit small, testing framework with plenty of room to grow and be enhanced.

OK, so some of you may be looking at the title and seeing the asterisk. What does that mean? It means that we need to be careful with what we mean by "from scratch". When Bill and I proposed the idea, our reading was "starting with nothing and going from there", and that is what we have put together. Not being full-time programmers, we didn't realize until later that it could also be interpreted as "coding a framework from the ground up". To be clear, that is not what this is about; neither Bill nor I have the background for that. Fortunately, after we queried the attendees, we found that most were coming to it with the interpretation we intended. We did have a couple who expected the latter, and we gave them the option of finding a workshop that would better meet their expectations ;).

In the process, we also agreed we would do our best to overcome another challenge that we have experienced in workshops for years: the dreaded "install party". That's the inevitable process of trying to get the software running on everyone's systems in as little time as possible. It has been a long-running challenge, and workshop coordinators have tried a variety of ways to overcome it. Bill and I decided we would approach it in the following manner:


  1. Create a virtual machine with all code and examples, with a reference to a GitHub repository as a backup.
  2. Give that Virtual machine to each participant on a flash drive with installers for VirtualBox.
  3. Encourage each participant to create a virtual machine and attach it to the virtual disk image on the flash drive (a rough sketch of the equivalent VBoxManage calls follows this list).
  4. Start up the machine and be up and running.
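
For anyone who would rather script steps 3 and 4 than click through the VirtualBox UI, here is a rough sketch of the equivalent VBoxManage calls. The VM name, memory size, and disk path are placeholders, and the guest type assumes the 64-bit Linux image our flash drive carried:

```python
# Rough sketch of scripting the VirtualBox setup with VBoxManage (assumes VirtualBox is
# installed and VBoxManage is on the PATH). VM name, memory, and disk path are placeholders.
import subprocess

VM_NAME = "pnsqc-workshop"                    # placeholder name
VDI_PATH = "/media/flashdrive/workshop.vdi"   # placeholder path to the supplied disk image

def vbox(*args):
    subprocess.run(["VBoxManage", *args], check=True)

# Create and register a 64-bit Linux guest, give it some memory, add a SATA controller,
# attach the supplied disk image, and boot the machine.
vbox("createvm", "--name", VM_NAME, "--ostype", "Ubuntu_64", "--register")
vbox("modifyvm", VM_NAME, "--memory", "2048", "--cpus", "2")
vbox("storagectl", VM_NAME, "--name", "SATA", "--add", "sata")
vbox("storageattach", VM_NAME, "--storagectl", "SATA",
     "--port", "0", "--device", "0", "--type", "hdd", "--medium", VDI_PATH)
vbox("startvm", VM_NAME)
```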
Today we are going to see how well this goes with a room of twenty-eight people. We will test and see if we are successful (for science!).

Tomorrow and in the coming days, I will share the results of the workshop, the good, bad, and ugly that we witnessed (hopefully much of the first but if we get some of the second or third I want to see how we can do better), as well as some of the decisions we made in making the materials that we did. We hope you will join us :).

Taking My Own Advice - A New Look for TESTHEAD

One of the comments that I made during my talk on Monday was that you could go to great lengths, make your site Accessible, pass all of the WCAG recommendations, and still deliver an experience that is less than optimal. That point was driven home to me this morning by a message from a reader who really enjoyed the material but found the white-on-black text hard to read and too small (even though it was set up to be in compliance).

Therefore, for the first time in many years, I stepped back, reconsidered the blog aesthetics vs the blog's usefulness and I redid everything.


  • The white on black look is gone.
  • The contrast level has been pumped up (I may do some more tweaking on this).
  • The default font is larger.
  • I will have to go back and check the images to make sure the alt tags are still there, but the goal is that every image has an alternate description.


My goal in the next few weeks is to re-evaluate this change and then ratchet up the WCAG 2 coverage.

In other words, I ask you all to "pardon the dust" as I reset the look and feel of my home away from home. As always, I appreciate feedback and suggestions for making my words and message as available to all as possible :).

Tuesday, October 9, 2018

The Lost Art of Live Communication - a #pnsqc Live Blog




Wow, have we really already reached the end of the program? That was a fast two days!

Jennifer Bonine is our closing keynote and her talk is centered on the fact that we seem to be losing the ability to actually communicate with people. We are becoming "distanced". I've found that there are a handful of people that I actually talk to on the phone. It's typically limited to my family or people who I have a close connection to.

I work from home so I really have to make an effort to get up and out of my house. Outside of meetings and video calls I don't directly interact with my co-workers. There are no water cooler chats. We do use our chat program for intra-work communication, but otherwise, there really isn't any random communication. Is this good or bad? Jennifer is arguing, and I'd say successfully, that it's a bit of both, with a solid lean towards bad.

What is not debatable is that we are definitely communicating less in person and in real time. Is this causing a disconnect with families? I think it's highly likely. Jennifer does as well.

How much of our communication is non-verbal? What lengths do we have to go to so that a text message carries the full nuance that an in-person conversation does? When we explain something to someone, how do we know they actually received and processed the message effectively? Outside of an in-person discussion, we really don't. Often, even in an in-person discussion, a lot may be lost. Communication styles are unique to individuals, and different people communicate and receive information differently.

I read a great deal, so I have a vocabulary that may go over the heads of many of the people I communicate with. I pride myself on trying to "speak Dude" as much as possible, but my sense of speaking "Dude" may still include a lot of words that people don't understand. Having a big vocabulary can be cool, but it's not necessarily a plus if the people I am communicating with don't get the words I am using.

Jennifer suggests that, perhaps, one of the biggest positives of AI-based test automation making inroads has less to do with the fact that it can automate a bunch of testing and more to do with the fact that it can free up our minds for lots of other things, things that are potentially far more interesting than the repetitive tasks.

We had a conversation break that amounted to "what would we want to do/be if we had one year to live and had a million dollars in the bank?" It was interesting to me to see that, after a very short time to think, I knew what I wanted to do. With those parameters, I would want to gather my wife and children and just tour the world. Go to places I've never been or visit places my kids have seen and I haven't. I'd love to have my daughters show me their experiences and memories of their time in Japan. I'd love my older daughter to be able to show me the areas she has been living in while she has been in Brazil (she'll be there for another thirteen months so I hope this experiment can be paused until she returns ;) ). The neatest part of this is how quickly that clarity comes.

Communication takes time, it takes energy, and it takes commitment. I'm on board with being willing to make a better effort at communicating better. Not necessarily communicating more but certainly upping the quality of the communication I participate in.

Testing all the World’s Apps - a #pnsqc Live Blog


In my world, I basically test one app. I test its appearance on the web and I test it as it appears on a variety of mobile devices. That's about it. By contrast, Jason Arbon runs a company that tests lots of apps. Like, seriously, LOTS of apps!

Jason asks us what we would do if we had to test a huge number of apps. Would we treat each of them as a unique project? It's logical to think that we would, but Jason points out that that's not actually necessary. A lot of apps reuse components from SDKs, and even when elements are unique, they are often used in a similar manner. What if you had to test all of the world's apps? How would you approach your testing differently?

Three major problems needed to be solved to test at this scale:

Reuse of test artifacts and test logic

By developing test approaches at as high a level as possible, we can create test templates and methods for generating test artifacts in a reliable, or at least close to uniform, manner. Over time, you can look for common components and build a series of methods that know where an element might be located and how it might be interacted with. Chances are that many of the steps will be very close in implementation.
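
As an illustration of what "methods that know where an element might be" could look like, here is a toy sketch using Selenium. The candidate locators and the login-button example are my own assumptions for illustration, not Jason's actual implementation:

```python
# A toy sketch of reusable test logic: one routine that knows the handful of places a
# login button usually lives, rather than a hard-coded locator per app. The candidate
# locators below are illustrative assumptions.
from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException

LOGIN_BUTTON_CANDIDATES = [
    (By.ID, "login"),
    (By.NAME, "login"),
    (By.CSS_SELECTOR, "button[type='submit']"),
    (By.XPATH, "//button[contains(., 'Log in') or contains(., 'Login')]"),
]

def find_login_button(driver):
    """Try each common locator in turn; many apps reuse the same few patterns."""
    for how, what in LOGIN_BUTTON_CANDIDATES:
        try:
            return driver.find_element(how, what)
        except NoSuchElementException:
            continue
    raise NoSuchElementException("no recognizable login button on this page")
```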


Reliable test execution

Once the patterns have been determined and the set of possible locations has been mapped, it is possible to create a test harness that will go through and load a variety of apps (or a suite of similar apps) to make sure that the apps can be tested. It may seem like magic, but it's really leveraging the benefit of reused and reusable patterns.

One challenge is that a lot of services introduce latency when testing over the Internet. By setting up queuing and routing of test cases, the cases that need to run first get the priority they need.
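
Here is a minimal sketch of what that kind of prioritized routing could look like; the priority scheme and the test-case names are made up for illustration:

```python
# A minimal sketch of prioritized test-case routing: lower numbers run first.
# The priorities and test-case names are made up for illustration.
import heapq

queue = []
for priority, case in [(3, "cosmetic layout check"),
                       (1, "login smoke test"),
                       (2, "checkout regression")]:
    heapq.heappush(queue, (priority, case))

while queue:
    priority, case = heapq.heappop(queue)
    print(f"dispatching (priority {priority}): {case}")
```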

Unique ways to correlate and report on the test results

The reporting structure that Jason shows includes the type of app, the page type, and the average load time for each page. This allows for an interesting view of how your own app relates to or competes with other apps. Wild stuff, I must say :).





The Do Nots of Software Testing - a #pnsqc Live Blog


Melissa Tondi makes a point that, at this and many other conferences, we learn a lot of things we should do. She's taking a different tack and suggesting things we should not be doing.

Do NOT Be the Enabler

I can relate to this in the sense that, in a number of sprints or times of challenge, we jump in and become the hero (sometimes). However, there is a danger here: it can communicate to others on the team that things can wait, because the test team will be the ones to make sure the work gets in under the wire. Possibly, but that also means there may be things we miss because we are left to the very end.

Risk-based and context-driven testing approaches can help here, and we can do quality testing without necessarily driving ourselves to distraction. Ultimately, though, if we can see that testing is about to enter an enabling phase, we as a team need to figure out how to keep that enabling from happening. As Scrum Master on my team, I am actually in a good position to make sure this doesn't happen (or at least I can give my best effort to see that it doesn't, if possible).

Do NOT Automate Everything

I agree with this. I think there are important aspects that can and should be automated, especially if it helps us avoid busywork. However, when we focus on the "everything" we lose focus and perhaps miss developing better solutions, not to mention having to charge through a lot of busywork. We should emphasize that automation helps make us more efficient. If we are not achieving that, we need to push back and ask why, or determine what we could or should be doing instead. In my world view, what I want to determine is "what am I doing that takes a lot of time and is repetitious?" Additionally, some methods of automation are better than others. Simon Stewart gave a great talk on this at STPcon in Newport Beach this year, so I suggest looking up that talk and reviewing it.

Do NOT Have "QA-Only" Sprints or Cycles

If you hear the term "hardening sprint", then that's what this is. The challenge is that a lot of regression testing needs to be processed all at once, and we have the potential of losing both momentum and time as we find areas that don't mesh well together. Melissa describes "The ABC Rule":

- Always Be Coupled (with dev)
  Try to keep development and testing work coupled as closely as possible

Do NOT Own All Testing

Software testing needs to happen at all levels. If a team's developers leave all of the testing to the testing group, that's both a danger and a missed opportunity. By encouraging testing at all levels of development, the odds of delivering a real quality product go up considerably.

Do NOT Hide Information

This is trickier than it sounds. We are not talking about lying or hiding important things that everyone should know. This is more about the danger of implicit information that we know and act on without even being aware of it. We need to commit to making information as explicit as we possibly can, and certainly do so when we become aware of it. If we determine that information needs to be known and then don't act on it, we are just as guilty as if we deliberately hid it.


Talking About Quality - a #PNSQC Live Blog


Kathleen Iberle is covering a topic dear to my heart at the moment. As I'm going through an acquisition digestion (the second one: my little company was acquired, and that acquiring company has recently been acquired itself), I am discovering that the words we are used to, and the way we defined quality, are not necessarily in line with the new acquiring company. Please note, that's not a criticism, it's a reality, and I'm sure lots of organizations face the same thing.

In many ways, there are lots of conversations we could be having, and at times there are implicit requirements we are not even aware we have. I consider that an outside dependency: if I don't know I need to do something, I can't do it until someone gives me the knowledge I need to act on it. There's a flip side to this as well: the implicit requirement where there's something I do all the time, so routinely that I don't even think about it, but someone else has to replicate it. If I jump through a step so fast that I never write it down, and so never communicate that the step exists, can I really be upset when someone else doesn't know how to do it?

Many of us are familiar with the idea of SMART goals, i.e. Specific, Measurable, Achievable, Relevant and Time-based. This philosophy also helps us communicate requirements and needs for products. Taking the time to make sure that our goals and our stories measure up to the SMART model is a good investment of time.

An interesting distinction that Kathleen is making is the differentiation between a defect and technical debt. A defect is a failure of quality outside of the organization (i.e. the customer sees the issues). Technical debt is a failure of quality internal to the team (which may become a defect if it gets out in the wild).

An approach that hearkens back to older waterfall testing models (think the classic V model) is the idea of each phase of development and testing having a specific gate. Those gates can either be ignored (or only applied at the very end of the process), or they are given too much attention out of scope or context for the phase in question. Breaking stories up into smaller, atomic elements can help improve this, because the time from initial code to delivery can be very short. Using terms like "Acceptance Test", "Definition of Done", "Spike", "Standard Practice", etc. can help us nail down what we are looking at and when. I have often used the term "consistency" when looking at possible issues or trouble areas.

Spikes are valuable opportunities to gain information and to determine whether there is an objective way to judge the quality of an element or a process. They are also great ways to figure out whether we need to tool up or build skills we don't yet have (or don't have enough of) to be effective.


Risk Based Testing - a #PNSQC Live Blog


It's a fact of life. We can't test everything. We can't even test a subset of everything. What we can do is provide feedback and give our opinion on the areas that may be the most important. In short, we can communicate risk, and that's the key takeaway of Jenny Bramble's talk. By the way, if you are not here, you are missing out on Dante, the deuteragonist of this presentation (Dante is Jenny's cat ;) ).

Jenny points out right off the bat that words are often inadequate when it comes to communicating. That may sound like unintentional irony, but I totally get what Jenny is saying. We can use the same words and have totally different meanings. One of the most dangerous words (dangerous in its fluidity) is "risk". We have to appreciate that people have different risk tolerances, often on the same team. I can point to my own team of three testers and feel in our discussions that risk is often a moving target. We often have to negotiate what the level of risk actually is. We get the idea that risk exists, but how much and for whom is always up for discussion.

Jenny points out that risk has a variety of vectors. There's a technical impact, a business impact, and a less tangible morale impact. When we evaluate risk, we have to determine how that risk will affect us. What is the likelihood that we will experience failure in these scenarios? I often have these discussions when it comes to issues that I find. Rather than just coming out and saying "this is a bug!", I try to build a consensus on how bad the issue might be. This is often done through discussions with our product owner, asking questions like "if our customers were to see this, what would your impression be?" I likewise have similar discussions with our developers, and just asking questions often prompts people to look at things or to say "hey, you know what, give me a couple of hours to harden this area".

Risk isn't always limited to the feature you are developing at the given moment. A timetable changing is a risk. Third-party interactions can increase risk, sometimes considerably. If your infrastructure is online, consider where it is located (Jenny is from North Carolina and, as many are probably aware, a hurricane recently swept through and made a mess of eastern North Carolina; imagine if your co-lo were located there).

Ultimately, what it comes down to is being able to perform an effective risk assessment and have a discussion with our teams about what those risks are, how likely they are to happen, and ultimately how we might be able to mitigate those risks.

Jenny has a way of breaking a risk matrix down into a numerical value: take the level of likelihood and the level of impact, multiply the two numbers, and that gives you the risk factor. A higher number means higher risk and higher effort to mitigate. Lower values mean lower risk and therefore lower cost to mitigate.
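
To make the arithmetic concrete, here is a minimal sketch of that likelihood-times-impact scoring; the 1-to-5 scales and the example features are my own illustration, not Jenny's actual matrix:

```python
# A minimal sketch of likelihood-times-impact risk scoring. The 1-5 scales and the
# example features are illustrative assumptions.
def risk_factor(likelihood, impact):
    """Both values on a 1 (low) to 5 (high) scale; a higher product means higher risk."""
    return likelihood * impact

examples = [
    ("payment processing", 2, 5),   # unlikely to break, catastrophic if it does
    ("marketing banner",   4, 1),   # breaks often, but nobody is badly hurt
]

for name, likelihood, impact in examples:
    print(f"{name}: risk factor {risk_factor(likelihood, impact)}")
```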

"This feature has been drinking heavily and needs to go to rehab!" Best. Risk. Based. Metaphor. Ever (LOL!).

This is my first time seeing Jenny present, though I see her comments on Twitter frequently. If you haven't been to one of her presentations, may I strongly suggest that, should she be speaking at a conference near you, you make it a priority to see her speak? Excellent, my work here is done :)!

Adventures in Modern Testing - a #PNSQC Live Blog From #OneOfTheThree


As a long-time listener to Alan Page and Brent Jensen's "A/B Testing" podcast, I consider myself part of the early contingent of "One of the Three". For those wondering what this means: when the show first started, they joked that there were really only three or so people at Microsoft who even bothered to listen to it. As the show grew in popularity, the joke grew to mean that there were still only three people listening, just at any one time. Longtime listeners have taken some pride in regularly referring to themselves as "One of the Three", and I'm one of those people. Anyway, that's a long buildup to me saying that I've listened to and read Alan for a number of years (plug: check out his book "The A Word" for some great thoughts on the plusses and minuses of software test automation).

Alan and Brent have been focusing on an initiative dedicated to "Modern Testing Principles", and many episodes of A/B Testing have been dedicated to these principles. Happily, Alan isn't just throwing out the ideas, he's putting them in context and sharing his recent adventures and real-world experiences getting to these principles. Alan was part of the Xbox One team in 2011-2013, and he describes those two years as "the best five years of his life" ;). It was in this role that he began thinking about what would become the core of the modern testing ideas, specifically the test org as a community.

Testers are often not all part of the same group in an organization. Often, they are working on a number of different teams in different places, and often those people are working on similar problems in isolation. I can appreciate this, as I work on a small team (it used to be its own company) that was acquired by a larger company (with a number of functional teams with testing groups), which in turn has been acquired by another company (with already existing testing groups). How often do all of these people communicate? Simple answer: outside of my immediate team (three testers) and a handful of conversations with a few other team leads (about six people in total), that is all of the "testers" I have actually spoken to in my entire organization. I know we have something in the neighborhood of fifty or so testers throughout the entire organization. Who is willing to bet that we have similar challenges and issues that we could probably solve or make movement on if we were all able to communicate about them? I'd be willing to take that bet.

Something else to consider is that we need to overcome the hubris that developers can't test, or at least not as well as a dedicated tester. It's wrong and it's selling developers short. That's not to say there aren't a lot of developers who don't want to test; I've worked with plenty of those people over the years, but I've also worked with several developers who were, and are, excellent testers. Leveraging our interactions with developers, and encouraging teaching and shared learning, can help foster their growth in testing skills and abilities, and that can free us to focus on other areas we haven't had time to work on or consider.

Like Alan, I tend to get bored easily and I look for new areas to tickle my brain. Often, we need to be willing to dive into areas we may not be qualified for or somehow find a way to get involved in ways we may not have been able to be involved before. Alan's suggestion is.... well, don't outright lie... but if you express at least an interest in an area and show some ability or desire to develop said ability, we may be surprised how many organizations will let us run with new initiatives. That's how I became a Build Manager and the Jenkins Product Owner. Am I a Jenkins expert? Not even close, but I'm working with it all the time and learning more and more about it every day. I wouldn't have been given that choice if I just asked for it.

We should get used to the idea of testing without testers? Wait, what?!! Hold on, it's not what you think... or at least it's not as much of what you might think. For a variety of teams, it is possible to do testing, and do good testing, without having a dedicated testing team. I get it, why would a tester be excited about that proposition? Overall, if we are not the sole people responsible for testing, we can actually look at areas that are not being addressed or might be less covered. While I still do a lot of testing, I spend significant time as a Scrum Master and as a Release Manager and Build Manager. Neither of those roles is specific to testing, but both are enhanced by my being a tester, and both allow me to exercise some dominion and address quality issues in a more direct way.

Traditional testing harms business goals by creating unnecessary delays, by over-focusing on specification correctness over actual quality, by trying to be a "safety net" for the organization, and by treating testing as a dedicated specialty with little interaction and an isolated approach. Wouldn't we rather have everyone focused on creating solutions that are better quality from the start, where everyone understands how to test and makes that a part of the process?

In Alan's model of the Modern Tester, we are more of a Quality Coach. We focus on speeding up the team. We do some testing as well, but on the whole, we use our expertise to help others test. In that process, Alan and Brent and the rest of "The Three" have gone through and vetted what the Modern Testing Principles are:

The seven principles of Modern Testing are:


  1. Our priority is improving the business.

    We need to understand where the pain points are so that we can help change the culture and way things are done. We should place priority on delivering quality and customer satisfaction over code or functional correctness.
  2. We accelerate the team, and use models like Lean Thinking and the Theory of Constraints to help identify, prioritize and mitigate bottlenecks from the system.

    If testing is the bottleneck, then it needs to be addressed. If delivery is the issue, that needs to be addressed. Find the bottleneck and figure out how to mitigate those issues if they can't be eradicated.
  3. We are a force for continuous improvement, helping the team adapt and optimize in order to succeed, rather than providing a safety net to catch failures.

    Pair testing with developers, tooling and building good infrastructure can be a big help, as well as serially working on items to completion rather than having a million things in-process that move so slowly that they don't really progress.
  4. We care deeply about the quality culture of our team, and we coach, lead and nurture the team towards a more mature quality culture.

    Test experts will be able to leverage their skills and help others develop those skills. Don't think of this as losing influence or the team losing the need for you. The opposite is true; the more effective coaching testers can provide, the more helpful they can be and as such, more indispensable.

  5. We believe that the customer is the only one capable of judging and evaluating the quality of our product.

    At the end of the day, the decision of good enough isn't ours. We need to be willing and able to let go and encourage the product owner and the customers to let us know if they are happy with what we have delivered.
  6. We use data extensively to deeply understand customer usage and then close the gaps between product hypotheses and business impact.

    We need to get a better understanding of what the data actually says. What features are really being used and are important? How can we know? How can we break down what the customer is actually doing and how positive their experience is?
  7. We expand testing abilities and know-how across the team; understanding that this may reduce (or eliminate) the need for a dedicated testing specialist.

    This is the most daring part. If we are really willing to embrace this, we might work ourselves out of a job. If we are truly good enough to do this, it shouldn't be that big of a concern, because there are a LOT of organizations out there that are not even close to this. A "Modern Tester" will always be needed, but perhaps not in the same place all of the time.
As I've said, I've listened to these ideas come together over the past couple of years and it's really cool to hear this in this condensed format after all this time. Alan has a bold and interesting take on the transitioning of testing into the brave new world. It's a compelling view and, frankly, a view I'd like to embrace in a greater way.


Rise of the Machines - a #pnsqc live blog


All right, it's day 2, the last day of the technical program and we are starting off with Tariq King's talk "Rise of the Machines". The subtext of this talk is "Can Artificial Intelligence Terminate Manual Testing?" In many ways, the answer is "well, kind of..."

In a lot of ways, we look at machine learning and AI through a lens that Hollywood has conditioned us to. Our fears and apprehensions about robotic technology outstripping humanity have been a part of our common lore for the past 100 years or so. Counter to that is the idea that computers are extremely patient rocks that will only do exactly what we tell them to. My personal opinion, as far as anyone might care, is somewhere in between. It's not a technological problem, it's an economic one. We are already watching a world develop where machines have taken the place of people. Yes, there are still people maintaining and taking care of the grooming and feeding of those machines, but it's a much smaller percentage of people than were doing that work as little as ten years ago.

Recently, we have seen articles about software developers who have automated themselves out of their jobs. What does that tell us? Does it mean we are reaching a point where our software is outstripping our ability to contribute? I don't think so. I think in many cases we may have reached a point where a machine can replace a person who has ceased to look for broader and greater questions. Likewise, is it possible for machines to replace all manual testing? The answer is yes if we are just looking at the grunt work of repetition. The answer is more nuanced if we ask "will computers be able to replicate that exploratory sense and think of new ways to look for more interesting problems?" Personally, I would say "not yet, but don't count the technology out". It may be a few decades, maybe more, but ultimately we will be replaced if we stop looking for wider and more interesting problems to solve.

We focus on Deep Blue beating the grand master of chess, AlphaGo beating the grand master of Go, and Watson beating Ken Jennings in Jeopardy (not just beating him, but getting to the buzzer so much faster that Ken never got the chance to answer). Still, is that learning, or is that brute force and speed? I'd argue that, at this point, it's the latter, but make no mistake, that's still an amazing accomplishment. If machines can truly learn from their experience and become even more autonomous in their solutions, then yes, this can get to be very interesting.

Machine Learning is in the process of re-inventing how we view the way that cars are driven and how effective they can be. Yes, we still hear about the accidents and they are capitalized on, but in that process, we forget about the 99% of the time that these cars are driving adequately or exceptionally, and in many cases, better than the humans they are being compared to. In short, this is an example of a complex problem that machines are tackling, and they are making significant strides.

So how does this relate to those of us who are software testers? What does this have to do with us? It means that, in a literal brute force manner, it is possible for machines to do what we do. Machines could, theoretically, do exhaustive testing in ways that we as human beings can't. Actually, let me rephrase that... in ways that human beings won't.

The Dora Project is an example of bots doing testing using methods very similar to those humans use. Granted, Dora is still quite a ways away from being a replacement for human-present testing. Make no mistake, though, she is catching up and learning as she goes. Dora goes through the processes of planning, exploring, learning, modeling, inferring, experimenting, and applying what is learned to future actions. If that sounds like what we do, that's no accident. Again, I don't want to be an alarmist here, and I don't think Tariq is trying to be an alarmist either. He's not saying that testers will be made obsolete. He's saying that the people who are not willing to, or aren't interested in, jumping forward and trying to find those next bigger problems are the ones who probably should be concerned. If we find that we are those people, then yes, we very well should be.

Monday, October 8, 2018

How Do We Fix Test Automation? - a #pnsqc Live Blog


Well, this should be interesting :).

OK, I'm poking a little fun here because I know the presenter quite well. I met Matt Griscom at a Lean Coffee event in Seattle when I was speaking at ALM Forum in 2014. At the time he was talking about this idea of "MetaAutomation" and asked if I would be willing to help review the book he was working on. I said sure, and now four years later he's on his Third Edition of the book and I've been interested in seeing it develop over the years.

Matt's talk is "How to Fix “Test Automation” for Faster Software Development at Higher Quality"

So let's get to the first premise: test automation is broken? I don't necessarily disagree, and I definitely think there's a lot of truly inefficient automation out there, but Matt goes further and says it's genuinely broken. We write tests that don't tell us anything. We see a green light, we move forward, and we don't even really know whether the automation is doing what we intend it to do. We definitely notice when it's broken or a test fails, but how many of our passing tests are actually doing anything worth talking about?

In Matt's estimation, this is where we can do better, and he has the answer... well, he has an answer; nobody has THE answer just yet, unless that answer is "42" (just seeing who is paying attention ;) ). This is where Matt's method of MetaAutomation comes in. His approach removes the need to automate existing manual tests, because the issues those tests would catch would already be caught in the code if the approach were implemented. Matt's emphasis is "if it doesn't find bugs, don't bother".

The idea is that the code includes self-documenting checks that show where issues are happening and communicate at each level. We shouldn't think of these as unit tests; at the atomic level of unit tests, error checking is already handled and MetaAutomation might be overkill. Outside of atomic unit tests, though, it's meant to be useful to everyone from Business Analysts and Product Owners to software testers and developers. The real promise of MetaAutomation, according to Matt, is that QA should be the backbone of communication, and MetaAutomation helps make that happen. Bold statements, but to borrow from Foodstuff/Savor's Annie Reese... "MetaAutomation.... what is it?!"
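
To make the "self-documenting checks" idea a little more concrete, here is a toy sketch of my own. To be clear, this is not MetaAutomation itself, just an illustration of checks that record what happened at each step so a failure report reads like a narrative rather than a bare red/green light:

```python
# NOT MetaAutomation itself -- a toy illustration of self-documenting checks where every
# step records what it did and the result, and the report is structured data.
import json

def run_steps(steps):
    record = []
    for name, action in steps:
        try:
            detail = action()
            record.append({"step": name, "status": "pass", "detail": detail})
        except Exception as exc:  # report and stop at the first failing step
            record.append({"step": name, "status": "fail", "detail": str(exc)})
            break
    return record

if __name__ == "__main__":
    report = run_steps([
        ("open login page", lambda: "HTTP 200 in 120 ms"),
        ("submit credentials", lambda: "redirected to /dashboard"),
    ])
    print(json.dumps(report, indent=2))
```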

Basically, MetaAutomation comes down to a Pattern Language that can be summed up in the following table (too hard to write and I don't want to mess this up):

(The pattern language table appears as an image in the original post.)

Here's a question to consider... "In Quality Automation, what would you like your software team to do better?" You have one minute. No, seriously, he asked that question, set a timer and asked us to talk amongst ourselves ;). A tells B, then B tells A.

What did we come up with?

- It would be nice to improve visibility on tests that don't run due to gating conditions.
- It would be nice to improve the understanding of what tests are actually doing and how they do it.
- It would be good to get a handle on the external dependencies that may mess up our tests.
- It would be great if the testers and developers could talk to each other sooner.

Granted, this is a tough topic to get to the bottom of in a blog post of a single session, but hey, if you are interested in hearing more, check out http://metaautomation.net/ for more details.



Effective CI - a #pnsqc live blog

Hi everyone! Miss me ;)?

My talk is finished and on the whole, I think it went well.

As a paper reviewer, I get the chance each year to help proofread and make suggestions for papers so that they can be published in the proceedings. This talk was one of the ones I reviewed, so I definitely had to see the presentation.


Ruchir Garg is with Red Hat, and his talk focused on methods to maximize the chance of success for incoming merge requests. His focus is on creating automated checks that ensure, before CI ever runs, that the code meets the guidelines the team has decided on (there are enough tests, the change references the feature it adds or what it specifically fixes, it maintains compatibility, etc.). If the criteria are met, the tool allows the job to progress. If they aren't, it stops the job before the merge request goes any further.

In reality, this isn't test automation. It is more accurately described as process automation. It helps enforce the company's criteria for what needs to be in place; if a change doesn't pass, it never gets sent on to CI. I have to admit, it's an interesting idea :).
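
To give a flavor of what such a pre-CI gate could look like, here is a hypothetical sketch. The specific criteria, branch name, and file patterns are my own illustration, not Red Hat's actual tool:

```python
# A hypothetical sketch of pre-CI "process automation": check team guidelines on a merge
# request before handing it to CI. Criteria, branch name, and patterns are illustrative.
import re
import subprocess

def git(*args):
    out = subprocess.run(["git", *args], capture_output=True, text=True, check=True)
    return out.stdout

def gate():
    problems = []
    message = git("log", "-1", "--pretty=%B")
    files = [f for f in git("diff", "--name-only", "origin/master...HEAD").splitlines() if f]

    # Guideline 1: the change should reference the feature it adds or the issue it fixes.
    if not re.search(r"(Fixes|Implements)\s+#\d+", message):
        problems.append("commit message does not reference an issue or feature")

    # Guideline 2: code changes should come with test changes.
    code_changed = any(f.endswith(".py") and not f.startswith("tests/") for f in files)
    tests_changed = any(f.startswith("tests/") for f in files)
    if code_changed and not tests_changed:
        problems.append("no tests were added or updated")

    return problems

if __name__ == "__main__":
    issues = gate()
    if issues:
        print("Merge request held back before CI:", *issues, sep="\n - ")
        raise SystemExit(1)
    print("Guidelines met; hand the job off to CI.")
```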


Automating Next-Generation Interfaces - a #pnsqc Live Blog



Normally, I'm not all that interested in attending what I call "vendor talks", but I made an exception this time because the topic interested me and I've been curious about it for a while.

In "How to Automate Testing for Next-Generation Interfaces" Andrew Morgan of Infostretch covers a variety of devices, both common and emerging. Those of us who are most comfortable with working with web and mobile apps need to consider that there are a variety of devices we are not even interacting with (think bots, watches, voice devices like Siri and Alexa. These require a fundamentally different approach to testing. Seriously, how does one automate testing of Siri short of recording my voice and playing it back to see how well it does?

Additionally, we have a broader range of communications (WiFi, Bluetooth, biometric sensing, etc.). Seriously, how would someone automate testing of my FitBit Surge? How do we test face detection? How about Virtual Reality?

To accomplish this, your project team must be able to successfully pair device hardware capabilities and intelligent software technologies such as location intelligence, biometric sensing, Bluetooth, etc. Testing these systems and interfaces is becoming an increasingly complex task. Traditional testing and automation processes simply don’t apply to next-generation interfaces.

OK, so that's a good list of questions, but what are the specifics? What does a bug look like in these devices and interfaces? Many of these issues are around user experience and the usefulness of the information. If you are chatting with a bot, how long does it take for the bot to figure out what you are talking about? Does it actually figure it out in a way that is useful to you? Amazon Alexa has a drop-in feature where you can log into an Alexa device and interact with it. Can these features be abused? Absolutely! What level of security do we need to be testing?

Other things to consider:

- How are we connecting?
- How are we processing images?
- How are we testing location-specific applications?
- Is the feature effectively dealing with the date and time well?
- How do we handle biometric information (am I testing a fingerprint or the interaction with that fingerprint?)

At this point, we are into an explanation of what Infostretch provides and some examples of how they are able to interact with these devices (they have dedicated libraries that can be accessed via REST). The key takeaway is that there are a lot of factors that are going to need to be tested, and I'm intrigued by how to start addressing these new systems.



Mobile Testing Beyond Physical Reach - a #PNSQC Live Blog


One of the additions to my reality as of late has been working with Browserstack, a company that provides a pool of physical devices as well as emulated devices for mobile and desktop testing. It's an interesting service in that it allows for a lot of variation in device interactions. Still, while it has a good selection, it certainly doesn't pretend to have every possible option out there. Plus, those services come at a cost. That's not to say that maintaining your own variety of devices doesn't have its own costs, but over time, it may make sense to keep that testing local. Still, how can we leverage our devices in an efficient way and also leverage our time and attention?

In "Mobile Testing Beyond Physical Reach", Juan de Dios Delgado Bernal is describing a physical setup that he uses as m “Mobile Testing Farms”. By chaining a variety of devices together via an extended powered USB hub, his team is able to work with and automate a variety of devices remotely from a desktop or laptop. In short, the goal is to be able to test smartphones beyond physical reach.

The idea is that you simply sit down, open your browser, and access the Smartphone Test Farm (STF). In this session, Juan demonstrated how to drive an STF using automated Python scripts.

It's easier to interact with emulators, but emulators certainly don't take the place of an actual device. Additionally, there are a number of states that a mobile device can be in, and emulators are not able to replicate these (or at least, I'm not familiar with how they would). With a device farm, the various conditions and states of the devices can be readily manipulated and tested. Juan uses a variety of Python scripts so that the tests he is running can cycle through the devices in the farm.
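
For a rough idea of what cycling a check across farm devices could look like, here is a sketch of my own. It assumes the farm's phones are already visible to adb (with openSTF that typically means an "adb connect" per device first), and the per-device check is deliberately trivial; this is not Juan's actual script:

```python
# A rough sketch of cycling a trivial check across every device the farm exposes to adb.
# Assumes adb is installed and the farm devices have already been connected.
import subprocess

def adb(serial, *args):
    out = subprocess.run(["adb", "-s", serial, *args],
                         capture_output=True, text=True, check=True)
    return out.stdout.strip()

def connected_devices():
    out = subprocess.run(["adb", "devices"], capture_output=True, text=True, check=True)
    # Lines look like "<serial>\tdevice"; the header line is skipped automatically.
    return [line.split("\t")[0] for line in out.stdout.splitlines()
            if line.endswith("\tdevice")]

for serial in connected_devices():
    # Trivial per-device check: confirm the device answers and report its Android version.
    version = adb(serial, "shell", "getprop", "ro.build.version.release")
    print(f"{serial}: Android {version}")
```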

Juan demonstrated these scripts in real time (hence why we see him at the computer :) ), and it's intriguing to see the potential and options available with a relatively simple and (arguably) reasonably priced apparatus. From my perspective, much of what I look at is focused on web and mobile applications. The examples Juan is showing actually examine literal phone calls and the quality of the signal and the call, which, frankly, I think is really cool. It's outside of my typical wheelhouse, but it gives me some ideas as to how I might leverage a farm like this. I'm not sure it will cause me to stop using Browserstack, but it's definitely worth considering.

Follow the Road to Quality - a #PNSQC Live Blog


Greetings friends! It's been way too long since I've done a live blog series, so you are either lucky or unfortunate but nevertheless, I'm here for you these next couple of days ;).

Let's start with a few personal logistics. I'm excited to be presenting twice at PNSQC this year. First, I will be presenting today around lunchtime on "Future Proofing Your Software". The title is a misnomer of sorts, as I'm not talking about technologically future-proofing your code, but anthropologically so. I'm talking specifically about software and aging; if we are lucky enough, all of us will grow older and get to deal with everything that entails. What makes this talk special this year is that PNSQC invited me to speak on this topic. As I've been focused on it for a number of years, I am really happy to be presenting it.

My second contribution is a workshop I will be giving on Wednesday called "Building a Testing Framework from Scratch". This is a co-presentation with my friend Bill Opsal (cute side note: Bill and I both live in the Bay Area but to date, with the exception of one day, we have only met up with each other at this conference). Hence we decided we should do something together and this workshop is it. I won't be live blogging about it specifically but I will be talking about it in the days following the conference.

Let's get started with the first talk. Michael Mah is the first speaker and, true to form, the talk starts out with a technical difficulty (really, what testing conference would be complete without at least one of these? Maybe with it happening at the beginning of the program it's a good omen ;) ). One of the factors we often miss when talking about software delivery is how the business itself actually delivers. We spend a lot of time and attention (deservedly so) making sure that the software we develop works and that we can deliver it on time with the best quality we can. Do our businesses do the same? Often the answer is "no". Michael points out that when organizations scale up, they expect there will be an increase in bugs. Double the people, double the bugs. Seems logical, right? The truth is, the bug count grows far faster than the head count: double your team and you often increase your bug count by four times.

Additionally, we often believe that senior developers are smart, efficient and "the better developers" compared to junior programmers. The data shows otherwise. Senior developers often have a higher bug count than their junior collaborators. To bastardize a line from The Eagles, "did they get tired or did they just get lazy?" In truth, it's a bit of both. There's a corollary with pilots: the incidence of plane crashes goes up with more experienced pilots. It seems we tend to get complacent at a certain point, and that is when we are at our most dangerous.

Michael describes a number of companies that have tried to bring Agile up to a scale larger than many smaller Agile teams ever experience. To succeed in those larger environments, planning and execution are critical. You need a roadmap and you need to be aggressive with your backlog. Additionally, you have to know your limitations as an organization and be realistic about them. I've gone through this process, with a small company becoming part of a larger company and that larger company itself becoming part of an even bigger company. The adjustment has not always been comfortable, and it has taken a different way of thinking and executing to get to the point where we can reliably deliver on sprint goals. At the same time, the more often we meet those goals, the better we get at determining what is real and sustainable. The key is to realize that it's definitely not a one-size-fits-all option, but with time and practice, you get better at achieving your goals and objectives.

Speed is great and if you can do it with high quality, that's also great. The bigger question is "do the desired outcomes of your business also match this speed and quality?" In short, it's great we are making stuff the right way but are we making the right thing? Are the features we are developing providing the maximum value for our organization?