Friday, March 15, 2019

I'm Gonna' Get Close To You - #30DaysOfTesting Testability

Since I don't want to put too many posts in one place, I'll use my catch-up time to mention some other things going on in the TESTHEAD world while I do this. I don't want to be too gratuitous and self-promoting. Also, thanks to Queensryche for having a song all about monitoring to save my song-title titles (say that ten times fast ;) ). OK, their version is about stalking, but why quibble ;).

First off, there's a new episode of The Testing Show up over at Qualitest and on Apple Podcasts. This month Matt and I are talking with Lisa Crispin of mabl and Jessica Ingrassellino of SalesForce.org about Advanced Agile and DevOps, and how those things are both similar and different. I think we did a good interview this month. Well, I almost always think we do good interviews, but I especially enjoyed doing this one, so I hope you will go and give it a listen.

Anyway, on with the "Thirty Days of Testability". As you might notice, today is the fifteenth. This is entry five. Catch-up is in full effect. You have been warned ;).

What monitoring system is used for your application? Do the alerts it has configured reflect what you test?

Again, we are going to be considering Socialtext as it is currently implemented as a standalone product, because that's all I can really talk about. Before this week, I had only a partial understanding of what we actually do for this, with some holes in that knowledge. I'm fortunate in that I have a couple of great Ops people who are associated with our team. I should also mention that Socialtext can be deployed in two ways: the first is a hosted SaaS option and the second is a local installation. We leave the monitoring of the local installations to the small percentage of our customers who prefer to do that. The majority of our customers utilize our SaaS option, and therefore we host their servers. To that end, we use the following (I'm pretty sure I'm not spilling any trade secrets here, so this should be OK. If this post changes dramatically between when I post it and tomorrow, well, I'll have learned differently ;) ). Anyway:

For monitoring the systems (CPU, disk space, HTTPS, HTTP), we use a tool called Nagios.

For monitoring site uptime, HTTPS lookup, and time to respond, we use an external app called Alertra.

In addition to those two tools, we also have a variety of hand-rolled scripts that allow us to scan all of our servers, looking for specific aspects of Socialtext instances and services to see if there are any issues that need attention. Examples include the IP address, the hostname as viewed by the public (and that it is accessible), whether certain key services are running (replication, cron, mail, ntp, our scheduler, our search implementation), what version that particular server is running, and so on. A rough sketch of that kind of check follows below.
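To give a flavor of what one of those hand-rolled checks might look like, here is a minimal sketch in Python. The service names, disk threshold, and URL are placeholders for illustration, not our actual configuration:

    #!/usr/bin/env python3
    # Minimal health-check sketch (illustrative only; the services, threshold,
    # and URL are hypothetical stand-ins, not our production values).
    import shutil
    import subprocess
    import urllib.request

    SERVICES = ["cron", "ntpd", "postfix"]   # stand-ins for key services
    DISK_WARN_PERCENT = 90                   # made-up threshold
    SITE_URL = "https://example.com/"        # placeholder instance URL

    def service_running(name):
        """True if at least one process exactly matches the service name."""
        return subprocess.run(["pgrep", "-x", name],
                              capture_output=True).returncode == 0

    def disk_usage_percent(path="/"):
        usage = shutil.disk_usage(path)
        return usage.used / usage.total * 100

    def site_reachable(url):
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.status == 200
        except OSError:
            return False

    if __name__ == "__main__":
        for svc in SERVICES:
            print(f"{svc}: {'OK' if service_running(svc) else 'NOT RUNNING'}")
        used = disk_usage_percent()
        print(f"disk: {used:.1f}% used" + (" (WARN)" if used > DISK_WARN_PERCENT else ""))
        print(f"site: {'OK' if site_reachable(SITE_URL) else 'UNREACHABLE'}")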

The second part of the question deserves a legitimate answer, and that is "Yes" and "No". Yes, in that some of the alerts map to what we test, but no, in that there are a lot of areas we don't actively test as consistently as we should. The chat with our ops team was definitely enlightening and has given me some ideas of what I can do to improve on that front. What are those things? Well, other than verifying that we are actively doing things that affect and trigger those alerts, I will have to ask that you respect the fact that I am now veering into trade secret territory, and I kinda' like the idea of keeping my job ;).

Who You Gonna' Call? #30DaysOfTesting Testability

Wow, time flies when you are doing actual work and trying to get your talk and workshop done and rehearsed for STPCon (seriously, if you are coming up to STPCon in Northern California the first week of April, come join my talk and workshop, or if I'm not your cup of tea, at least say "hi" ;) ).

Anyway, on with the "Thirty Days of Testability". As you might notice, today is the fifteenth. This is entry four. Yeah, I'm going to be playing catch-up. You have been warned ;).


Do you know what your top three customer impacting issues are? How could you find out?

This is surprisingly easy for my immediate team. It's less so for my extended team. I have no idea who is coming in at what point here, so I'll say this for those who are new.


Currently, I work for Socialtext. Socialtext used to be a standalone product and, for a number of customers, it still is. However, back in 2012, Socialtext was acquired by a company called PeopleFluent. PeopleFluent specializes in HR tools, as their name might indicate. PeopleFluent is also a much larger company by comparison (Socialtext as a standalone group is all of ten people). Last year, PeopleFluent was acquired by Learning Technologies Group (LTG), located in the UK and with offices all over the world. Thus, when I have to think about my top three customer impacts, I have to ask clarifying questions. Are we talking about Socialtext? PeopleFluent? LTG? Interestingly enough, since Socialtext is the base platform that many of the PeopleFluent and LTG products run on and interact with, it's entirely possible that Socialtext will not even enter into the top issues of anyone outside of Socialtext, while at other times a Socialtext issue can be the number one issue for everyone. Still with me? Excellent :).

So to keep things simple, I'll focus on Socialtext standalone and how I would determine what the biggest issues are for us. The simple answer is that I can reach out to our secret agent in the field... OK, it's nowhere near that cool. We don't really have a secret agent, but we do have a great customer engagement engineer and, frankly, a lot of the time that's just as good. I can count on one hand the number of times I have announced an update on our staging server (read: our personal production server) and not heard a reply from this person of "hey, what's new on staging?" They make it their responsibility to be clear and up to date on every single feature and every option available in the product. They also make a lot of sample accounts and customizations to our product to push the edges of what the product can actually do. If there is any question as to what is a pain point or an issue with a customer, any customer, they are my first point of contact. Sure, we have a CRM and a bug database, but the majority of the time, if I really want to see what is happening and what's really important, I know who I am going to call... or bring up in a chat message. I mean, come on, this is 2019 after all ;).

Thursday, March 7, 2019

Sponsoring a "New" Book? #30DaysOfTesting Testability

A couple of observations for when you start looking at book titles related to niche topics in software testing: first, they can be hard to find; second, they can be mighty pricey. Sure, "what is the cost when it comes to learning?" True, but there are a variety of factors one has to consider, such as, "OK, I really don't have close to $100 to drop on a specialty book at this exact moment."

Having said that, occasionally one finds interesting things in interesting places, and sometimes those things are in the process of percolating. It is in this spirit that I have chosen to make my mark for "Day Three" of this challenge... yes, I'm writing about it on "Day Seven"; let's not quibble here.


Begin reading a book related to testability and share your learnings by Day 30.

To this end, I've decided to help fund a book that is in the process of being written (or at least, being written for Leanpub distribution). Said book?



Team Guide to Software Testability


Learn practical insights on how testability can help bring team members together to observe, understand and control customer needs, ensuring fitness and predictability of deliveries.

Now I will have to say up front that I will probably not be able to provide a 100% complete read of this book, because the book is literally still being written as I purchase it. However, I will be more than happy to review what has been written and post my findings and takeaways from it by the end of March. Who knows, perhaps more of the book will be delivered by that time and I'll be able to offer some more details when that happens.
This is what the book says it will ultimately include:
Table of Contents
  • Foreword
  • Introduction
  • Why is testability important
  • What does hard to test feel like
  • What does testable feel like
  • What leads to testability being neglected
  • What is covered in this book
  • How to use this book
  • Feedback and suggestions
  • 1. Use a testability inception deck to visualize current team and system state and create an environment for improvement
  • 2. Adopt testability mapping to expose common smells of hard to test architectures
    • 2.1 Gathering data on poor architectural testability to detect systemic problems
    • 2.2 Low testability architectures contribute to slow feedback and deficient decision making
    • 2.3 Identify the symptoms of poor architectural testability
    • 2.4 Exercise: Measure the impact of testing smells on your architectural testability
    • 2.5 Understand how testable architecture can impact your team’s testing efforts
    • 2.6 Summary
  • 3. Use risk and incident data to remedy architectural design problems which inhibit feedback from testing
  • 4. Adopt ephemeral development environments to diversify testing techniques early and create shorter feedback loops
  • 5. Utilize events and metrics to model risks in production for continuous improvement of your test strategy
  • 6. Adopt incident postmortems to maintain a testability focus as part of your team’s continuous improvement strategy
  • Terminology
  • References
  • About the authors
  • Notes
It seems like a good place to start, and I for one like to know I'm helping to fund progress on books I'd like to see written. Win-Win!!!

Big Log - #30DaysOfTesting Testability Challenge

Anyone who has read my blog for a little while knows that I tend to fit song titles into my blog post titles because it's just silly fun I like to have. Do you know how hard it is to find song titles related to logs ;)? Robert Plant to the rescue (LOL!).

OK, seriously, I'm getting caught up with the 30 Days of Testability Challenge and here's the second checkpoint.

Perform some testing on your application, then open your application's log files. Can you find the actions you performed in the logs?

I hate to tattle on my application, but it's not so much that I can't find what I need in the log files; it's that there are so many log files and situational log files that it's a process to figure out exactly what I'm looking at. I'm mentioning this because we need to keep a clear understanding of what we mean when we say "your application". Do we mean the actual application I work with? Do we mean the extended suite of applications that plug into ours? I mention this because, for each of the components that make up our application, there is a log file or, in some instances, several log files to examine.

We have a large log file that is meant to cover most of our interactions, but even then there are so many things flying past that it can be a challenge to figure out exactly what is being represented. Additionally, there are logs for a number of aspects of our application, and they are kept in separate files, such as:

- installation and upgrades
- authentication
- component operations
- third-party plug-ins
- mail daemons
- web server logs
- search engine logs

and so on.

To this end, I have found that using screen, tmux or byobu (take your pick) and splitting one of my windows into multiple panes allows me to watch a variety of log files at the same time and see what is actually happening at any given moment. Some logs fly by so fast that I have to look at individual timestamps to see dozens of entries corresponding to a single second, while other logs get updated very infrequently, usually when an error has occurred.

With that in mind, I'm a little torn as to my preference. Having monster log files to parse through can be a real pain. However, having to keep track of a dozen log files to make sense of the big picture is also challenging. Putting together an aggregator function so that I can query all of the files at the same time and look for what is happening can be a plus, but only if they use a similar format (which, unfortunately, isn't always the case).
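Here's a minimal, format-agnostic sketch of a middle ground: follow several log files at once and prefix each line with its source file, a poor man's alternative to juggling terminal panes. The file paths are made up for illustration:

    #!/usr/bin/env python3
    # Minimal "multi-tail" sketch: follow several log files at once and prefix
    # each line with its source. The paths below are hypothetical examples.
    import time

    LOG_FILES = ["/var/log/app/main.log",
                 "/var/log/app/auth.log",
                 "/var/log/app/mail.log"]

    def follow(paths, poll_interval=1.0):
        handles = {}
        for path in paths:
            fh = open(path, "r", errors="replace")
            fh.seek(0, 2)                  # jump to end of file, like tail -f
            handles[path] = fh
        while True:                        # note: does not handle log rotation
            saw_output = False
            for path, fh in handles.items():
                line = fh.readline()
                if line:
                    saw_output = True
                    print(f"{path}: {line}", end="")
            if not saw_output:
                time.sleep(poll_interval)

    if __name__ == "__main__":
        follow(LOG_FILES)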

Based on just this cursory look, what could I suggest to my team about log files and testability?


If we have multiple log files, it would be a plus to have them all be formatted in a similar way:

 - log_name: timestamp: alert_level: module: message

repeated for each log file.

Having an option to gather all log files into an archive each day (or whatever time interval makes the most sense).

Make it possible to bring these entries together into the same file and parse them, so we can determine what is happening and whether we are generating errors, warnings, or info messages that help us figure out what is going on (see the sketch after these suggestions).

Finally, if at all possible, try to make the messages written to the log files as human-readable as we can.
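To make those suggestions concrete, here's a minimal sketch of the kind of aggregation that becomes possible once every log follows the same layout. The field order matches the format proposed above; the file glob and ISO-style timestamps are assumptions for illustration:

    #!/usr/bin/env python3
    # Minimal aggregation sketch for logs written as
    # "log_name: timestamp: alert_level: module: message".
    # The glob pattern and ISO timestamp format are assumptions.
    import glob
    from datetime import datetime

    def parse_line(line):
        parts = line.rstrip("\n").split(": ", 4)
        if len(parts) != 5:
            return None                    # skip lines that don't fit the format
        log_name, stamp, level, module, message = parts
        try:
            when = datetime.fromisoformat(stamp)
        except ValueError:
            return None
        return when, log_name, level, module, message

    def merged_entries(pattern="/var/log/app/*.log", levels=("WARN", "ERROR")):
        entries = []
        for path in glob.glob(pattern):
            with open(path, errors="replace") as fh:
                for line in fh:
                    parsed = parse_line(line)
                    if parsed and parsed[2] in levels:
                        entries.append(parsed)
        return sorted(entries)             # tuples sort by timestamp first

    if __name__ == "__main__":
        for when, log_name, level, module, message in merged_entries():
            print(f"{when.isoformat()} {level:<5} [{log_name}/{module}] {message}")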


Tuesday, March 5, 2019

What Does Testability Mean To Me?: #30daysoftesting

I've decided that I need to get back into the habit of writing stuff here, just for me and anyone else who might happen to want to read it. It's true that any habit one starts is easy to keep rolling. It's just as easy to stop a habit and literally not pick it up again for vague reasons. I haven't posted anything since New Year's Eve and that's just wrong. What better way to get back into the swing of things than a 30 Days of Testing challenge?

This time around the challenge is "Thirty Days of Testability". As you might notice, today is the fifth. Therefore I'm going to be doing some catch-up over the next couple of days. Apologies ahead of time if you are seeing a lot of these coming across in a short time frame :).

So let's get started, shall we?

Day 1: Define what you believe testability is. Share your definitions on The Club.

First, let's go with a, perhaps, more standard definition and see if I agree or if I can actually add to it. Wikipedia has an entry that describes overall testability as:

The logical property that is variously described as contingency, defeasibility, or falsifiability, which means that counterexamples to the hypothesis are logically possible.
The practical feasibility of observing a reproducible series of such counterexamples if they do exist. 
In short, a hypothesis is testable if there is some real hope of deciding whether it is true or false of real experience. Upon this property of its constituent hypotheses rests the ability to decide whether a theory can be supported or falsified by the data of actual experience. If hypotheses are tested, initial results may also be labeled inconclusive.

OK, so that's looking at a hypothesis and determining if it can be tested. Seems a little overboard for software, huh? Well, not necessarily. In fact, I think it's a great place to start. What is the goal of any test that we want to perform? We want to determine if something can be proven correct or refuted. Thus, we need to create conditions where a hypothesis can either be proven or refuted. If we cannot do either, then either our hypothesis is wrong or the method by which we examine that hypothesis isn't going to work for us. For me, software testability falls into this category.

One of the aspects I think is important is to look at ways we can determine if something can be performed or verified. At times that may simply be our interactions and our observations. Let's take something like the color contrast on a page. I can subjectively say that light gray text over a dark gray background doesn't provide a significant amount of color contrast. Is that hypothesis testable? Sure, I can look at it and say "it doesn't have high enough contrast." But that is a subjective declaration based on observation and opinion. Is it a test? As I've stated it, no, not really. What I have done is made a personal observation and declared an opinion. It may sway other opinions, but it's not really a test in the classic sense.

What's missing?

Data.

What kind of data?

A way to determine the actual contrast level of the background versus the text.

Can we do that?

Yes, we can, if we are using a web page example and we have a way to reference the values of the specific colors. Since colors can be represented by hexadecimal values or RGB numerical values, we can make an objective observation as to the differences in various colors. By comparing the values of the dark gray background and the light gray text, we can determine what level of contrast exists between the two colors.

Whether we use a program that computes the comparison or an application that prints out a color contrast ratio, what we have is an objective value that we can share and compare with others.

"These two colors look too close together"... not a testable hypothesis.

"These two colors have a contrast ratio of 2.5:1 and we are looking for a contrast ratio of 4.5:1 at the minimum" ... that's testable.

In short, for something to be testable, we need to be able to objectively examine an aspect of our software (or our hypothesis), be able to perform a legitimate experiment that can gather actual data, and then allow us to present that data and confirm or refute our hypothesis.

So what do you think? Too simplistic? Am I overreaching? Do you have a different way of looking at it? If so, leave a comment below :). 

Monday, December 31, 2018

And You May Find Yourself

I just realized that this is the ninth installment of this little year-end feature for my blog. I started writing it early in 2010, which means it is nearly a decade old. Much has changed, and I've done and learned a lot over those almost nine years. However, I still manage to find a way to come back to this joke and see if the lyrics to Talking Heads' "Once in a Lifetime" line up with my life's experience. Hence the title this go-around.

It's that time again, the end of another year, and with it a chance to reflect on some of what I've learned, where I've been, what I could do better, and what I hope to do going forward.

Some may notice that the blog entries have been fewer here this year. There are a variety of reasons for that, but specifically, I've been writing guest blog posts over at the Test Rail Blog. Thus, I've "found myself" stepping into broader topics, many of them related to software testing and software delivery, accessibility, inclusive design, and automation techniques. One of my most recent entries is here:

Let the Shell Be Your Pal

The Testing Show had an interesting year. While we have scaled back to monthly shows, we have had a variety of interesting topics and broad discussions on software testing, software delivery and quite a bit of coverage of Artificial Intelligence and Machine Learning. In fact, that was the topic of our latest show, so I'd encourage anyone interested to drop in and have a listen:

The Testing Show: Testing with AI and Machine Learning


Last year, I talked about my transition to being a 100% remote worker. This is the first full year that I worked remotely. My verdict? It's mixed, to be truthful. On the plus side, I never have to leave my house. On the negative side, that often becomes a self-fulfilling prophecy. While I was working in Palo Alto, the daily ritual of traveling to a train station and walking and exploring around Palo Alto was a great way to break up my day. Now that I'm at home full time, I really have to remind myself to make those diversions and get up and get out.

This year, in our family, my daughter accepted a full-time mission call for our church. She has been in Sao Paulo, Brazil since early May and she will be returning home in early November of 2019. Our weekly emails have been a very bright spot and something I look forward to. My son is still in Los Angeles, working with a recording studio and handling a number of site management duties so he can live there (seriously, just think about that: how cool is it that my son literally lives in a multi-track recording studio ;) ). He's doing a lot of photographic and graphic artwork for a variety of performers, so he's "living the dream". How lucrative is it? That's always up to interpretation and, as you might guess, he's not telling me much (LOL!). Not that I blame him. I remember full well how it felt to be a performer almost thirty years ago. Creatively, I was on cloud nine. Financially, I struggled. I still wouldn't have changed any of those years and I'm pretty sure he feels the same.

On the speaking front, I'm still focusing on Accessibility and Inclusive Design. I've also expanded into developing a workshop devoted to building a testing framework from scratch. Well, building a framework from available parts and connecting them to each other is a more appropriate description. This is an initiative I've worked on with my friend Bill Opsal. The response to presenting this workshop has been great and I've appreciated the feedback I've received to make it better. I presented it at the Pacific Northwest Software Quality Conference back in October and I will present it again in April at the Software Test Professionals Conference (STPCon). I should also mention that the materials I used to put this workshop together have also been rolled out as a new framework approach at my company. It's neat when my speaking engagements and workshop presentations can filter back into my day-to-day work :).

If the following reads almost verbatim from last year, it's because the sentiment is the same. My thanks to everyone who has worked with me, interacted with me, been part of The Testing Show podcast as a regular contributor or a guest, shared a meal with me at a conference, come out to hear me speak, shown support for the Bay Area Software Testers meetup, and otherwise given me a place to bounce ideas and think things through, or been a shoulder to cry on, or just heard me out when I feel like I'm talking crazy. Regardless of whether you have done that just a little bit or a whole lot, I thank you all.

Here's wishing everyone a wonderful 2019.

Thursday, November 8, 2018

Live Blogging at #testbash: Same Approach, Different Location



Hello everyone and welcome to #testbash San Francisco.


As many of you know, one of the things I actively do when I attend conferences is live blog the sessions I attend. I am doing/have done this at #testbash, but instead of those posts appearing on TESTHEAD, they are posted at Ministry of Testing's forum "The Club", specifically in the TestBash San Francisco section.

Please stop by and have a read or several, as there's plenty of posts there :).

Thursday, October 11, 2018

Results of the Install Party - a #pnsqc workshop followup

Yesterday I said I was interested in seeing how far we could go toward solving the install party dilemma. That's where a bunch of people sitting in a room try to get the code or application installed so that it can be useful. Often this turns into a long process of determining the state of people's machines, struggling to see why some machines work and some don't, and overcoming other obstacles. It's not uncommon for an hour or so to go by before everyone is in a working state, or at least everyone who can be.

Bill Opsal and I thought that making a sandbox on a Virtual Machine would be a good way to go. By supplying two installers for VirtualBox, we would be able to have the attendees install VirtualBox, set up the virtual machine, boot it and be ready to go. Simple, right? Well...

First of all, while Macs tend to be pretty consistent (we had no issues installing on Macs yesterday), PC hardware is all over the map. I had a true Arthur Carlson moment yesterday (he was the station manager on "WKRP in Cincinnati" who famously said in an episode, "As God is my witness, I thought turkeys could fly").



Well, in that classic fashion: "As God is my witness, I thought all operating systems supported 64-bit configurations in 2018".

Oh silly, silly Testhead!!!

To spare some suspense, for a number of participants who had older PC hardware, the option to select a 64-bit Linux guest operating system wasn't even available. Selecting a 32-bit system presented the users with a blank screen. Not the impression I wanted to make at all. Fortunately, we had a lot of attendees who were able to load the 64-bit OS without issue. There were some other details I hadn't considered but that we were able to overcome:

- Hyper-V configured systems don't like running alongside VirtualBox, but we were able to convert the .vdi file to a .vhd file and import the guest OS into Hyper-V

- one of the participants had a micro-book that had 2 GB of RAM for the whole system. That made it difficult to give the guest enough memory to run in a realistic way.

Plus one that I hadn't considered and couldn't have... one attendee had a Chromebook. That was an immediate "OK, you need to buddy up with someone else".

In all, we had about eight people out of the 28 participants who were unable to get the system working for them. By the time we got everyone sorted and settled and felt sure we could continue, 30 minutes had elapsed. That's better than the hour I'd routinely experienced, but we still had what is, to me, an unacceptable number of people who couldn't get their systems to work.

Through talking with other workshop facilitators, I found we had all tried a variety of options, and the one I think will likely be the one I use going forward is the "participant install prerequisite" that one of the instructors instituted. He encouraged all of the participants to contact him before the course started and make sure they could install the environment. If they couldn't, they would work out what was needed for them to be able to do so. While this might take more time for all involved prior to the workshop, it would be balanced by the fact that all attendees were confirmed ready to go at the start of the workshop. My goal was to speed up that adoption by using a sandbox environment that was all set up. It was partially successful, but now I know there are other variables that I need to pay closer attention to. Good things to keep in mind for next time.

Wednesday, October 10, 2018

Lifting Radio Silence - Building a Testing Framework from Scratch(*) at #PNSQC

Last year, my friend Bill Opsal and I proposed something we thought would be interesting. A lot of people talk about testing frameworks, but if you probe deeper, you realize that what they are actually after is an end-to-end solution to run their tests. More often than not, a "testing framework" is a much larger thing than people realize, or at least what they are envisioning is a much larger thing.

Bill and I started out with the idea that we would have a discussion about all of the other elements that go into deciding how to set up automated testing, as well as to focus on what a framework is and isn't.

The net result is the workshop that we will be delivering today (in about three hours as I write this).



We will be presenting "Building a Testing Framework from Scratch (*)". The subtitle is "A Choose Your Own Adventure Game". In this workshop, we will be describing all of the parts that people tend to think are part of a testing framework, how essential they are (or are not), and what you can choose to do with them (or choose to do without them). Additionally, we are giving all participants a flash drive that has a fully working, albeit small, testing framework with plenty of room to grow and be enhanced.

OK, so some of you may be looking at the title and seeing the asterisk. What does that mean? It means that we need to be careful with what we mean by "From Scratch". When Bill and I proposed the idea, it was from our impression of "starting with nothing and going from there", and that is what we have put together. Not being full-time programmers, we didn't realize until later that it could also be interpreted as "coding from the ground up". To be clear, that is not what this is about. Neither Bill nor I have the background for that. Fortunately, after we queried the attendees, we realized that most were coming to it from the perspective of our intended example. We did have a couple who thought it was the latter, and we gave them the option of finding a workshop that would be more appropriate for their expectations ;).

In the process, we also agreed we would do our best to try to overcome another challenge that we had experienced in workshops for years: the dreaded "install party". That's the inevitable process of trying to get everyone to have the software running on their systems in as little time as possible. This has been a long-running challenge and workshop coordinators have tried a variety of ways to overcome it. Bill and I decided we would approach it in the following manner:


  1. Create a virtual machine with all code and examples, with a reference to a GitHub repository as a backup.
  2. Give that virtual machine to each participant on a flash drive, along with installers for VirtualBox.
  3. Encourage each participant to create a virtual machine and attach to the virtual disk image on the flash drive.
  4. Start up the machine and be up and running.
Today we are going to see how well this goes with a room of twenty-eight people. We will test and see if we are successful (for science!).

Tomorrow and in the coming days, I will share the results of the workshop, the good, bad, and ugly that we witnessed (hopefully much of the first, but if we get some of the second or third I want to see how we can do better), as well as some of the decisions we made in putting the materials together. We hope you will join us :).

Taking My Own Advice - A New Look for TESTHEAD

One of the comments I made during my talk on Monday was that you could go to great lengths, make your site accessible, pass all of the WCAG recommendations, and still have an experience that is less than optimal. That point was driven home to me this morning by a message from a reader who really enjoyed the material but found the white-on-black text hard to read and too small (even though it was set up to be in compliance).

Therefore, for the first time in many years, I stepped back, reconsidered the blog's aesthetics versus its usefulness, and redid everything.


  • The white on black look is gone.
  • The contrast level has been pumped up (I may do some more tweaking on this).
  • The default font is larger.
  • I will have to go back and check the images to make sure that the alt text is still there, but the goal is that every image has an alternative description.


My goal in the next few weeks is to re-evaluate this change and then ratchet up the WCAG 2 coverage.

In other words, I ask you all to "pardon the dust" as I reset the look and feel of my home away from home. As always, I appreciate feedback and suggestions for making my words and message as available to all as possible :).