Friday, March 15, 2019

I'm Gonna' Get Close To You - #30DaysOfTesting Testability

Since I don't want to put too many posts in one place, I'll use my catch-up time to mention some other things going on in the TESTHEAD world while I do this. I don't want to be too gratuitous and self-promoting. Also, thanks to Queensryche for having a song all about monitoring to save my song title titles (say that ten times fast ;) ). OK, their version is about stalking but why quibble ;).

First off, there's a new episode of The Testing Show up over at Qualitest and on Apple Podcasts. This month Matt and I are talking with Lisa Crispin of mabl and Jessica Ingrassellino of SalesForce.org about Advanced Agile and DevOps and how those things are both similar and different. I think we did a good interview this month. Well, I almost always think we do good interviews but I especially enjoyed doing this one so I hope you will go and give it a listen.

Anyway, on with the "Thirty Days of Testability". As you might notice, today is the fifteenth. This is entry five.  Catchup in full effect. You have been warned ;).

What monitoring system is used for your application? Do the alerts it has configured reflect what you test?

Again, we are going to be considering Socialtext as it is currently implemented as a standalone product because that's all I can really talk about. Before this week, I had a partial understanding of what we actually do for this and some holes in that knowledge. I'm fortunate in that I have a couple of great Ops people who are associated with our team. I should also mention that Socialtext can be deployed in two ways. First is a hosted SaaS option and second is a local installation. We leave the monitoring of the local installations to the small percentage of our customers who prefer to do that. The majority of our customers utilize our SaaS option and therefore we host their servers. To that end, we use the following (I'm pretty sure I'm not spilling any trade secrets here, so this should be OK. If this post changes dramatically between when I post it and tomorrow, well, I'll have learned differently ;) ). Anyway:

For monitoring the systems (CPU, disk space, HTTPS, HTTP), we use a tool called Nagios.

For monitoring site uptime, HTTPS lookup, and time to respond, we use an external app called Alertra.

In addition to those two tools, we also have a variety of hand-rolled scripts that scan all of our servers looking for specific aspects of Socialtext instances and services to see if there are any issues that need attention. Examples include checking the IP address, confirming that the hostname as viewed by the public is accessible, verifying that certain key services are running (search, replication, cron, mail, ntp, our scheduler, our search implementation), and noting which version a particular server is running.
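To give a feel for what those hand-rolled checks look like in spirit, here is a minimal sketch in Python. The hostnames, the service names, and the assumption of a systemd-based host are all mine for illustration; the actual Socialtext scripts check far more than this and aren't shown here.

```python
#!/usr/bin/env python3
"""A rough sketch of the kind of hand-rolled health check described above.
The hostnames, service names, and systemd assumption are placeholders for
illustration only; they are not the actual Socialtext scripts."""

import socket
import subprocess
import urllib.request

HOSTS = ["app1.example.com", "app2.example.com"]   # hypothetical server names
SERVICES = ["cron", "postfix", "ntp"]              # hypothetical service names


def check_host(hostname):
    """Resolve the hostname and confirm the site answers over HTTPS."""
    ip_address = socket.gethostbyname(hostname)
    with urllib.request.urlopen(f"https://{hostname}/", timeout=10) as response:
        return ip_address, response.status


def service_is_running(name):
    """Ask the local init system whether a service is active (assumes systemd)."""
    result = subprocess.run(["systemctl", "is-active", "--quiet", name])
    return result.returncode == 0


if __name__ == "__main__":
    for host in HOSTS:
        try:
            ip_address, status = check_host(host)
            print(f"{host} ({ip_address}) answered with HTTP {status}")
        except Exception as error:
            print(f"ALERT: {host} check failed: {error}")
    for service in SERVICES:
        state = "running" if service_is_running(service) else "NOT running"
        print(f"{service} is {state}")
```

In practice you would feed output like this into whatever alerting you already have (Nagios, in our case) rather than just printing it to a terminal, but the shape is the same: resolve, poke, report.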

The second part of the question deserves a legitimate answer, and that answer is both "Yes" and "No". Yes, in that some of the alerts map to what we test, but no in that there are a lot of areas we don't actively test as consistently as we should. The chat with our ops team was definitely enlightening and has given me some ideas of what I can do to improve on that front. What are those things? Well, other than verifying that we are actively doing things that affect and trigger those alerts, I will have to ask that you respect the fact that I am now veering into trade secret territory and I kinda' like the idea of keeping my job ;).

Who You Gonna' Call? #30DaysOfTesting Testability

Wow, time flies when you are doing actual work and trying to get your talk and workshop done and rehearsed for STPCon (seriously, if you are coming to STPCon in Northern California the first week of April, come join my talk and workshop, or if I'm not your cup of tea, at least say "hi" ;) ).

Anyway, on with the "Thirty Days of Testability".  As you might notice, today is the fifteenth. This is entry four. Yeah, I'm going to be playing catchup. You have been warned ;).


Do you know what your top three customer impacting issues are? How could you find out?

This is surprisingly easy for my immediate team. It's less so for my extended team. I have no idea who is coming in at what point here, so I'll say this for those who are new.


Currently, I work for Socialtext. Socialtext used to be a standalone product and for a number of customers, it still is. However, back in 2012, Socialtext was acquired by a company called PeopleFluent. PeopleFluent specializes in HR tools, as their name might indicate. PeopleFluent is also a much larger company by comparison (Socialtext as a standalone group is all of ten people). Last year, PeopleFluent was acquired by Learning Technologies Group (LTG), based in the UK with offices all over the world. Thus, when I have to think about my top three customer impacts, I have to ask clarifying questions. Are we talking about Socialtext? PeopleFluent? LTG? Interestingly enough, since Socialtext is the base platform that many of the PeopleFluent and LTG products run on and interact with, it's entirely possible that Socialtext will not even enter into the top issues of anyone outside of Socialtext, and at other times a Socialtext issue can be the number one issue for everyone. Still with me? Excellent :).

So to keep things simple, I'll focus on Socialtext standalone and how I would determine what the biggest issues are for us. The simple answer is I can reach out to our secret agent in the field... ok, it's nowhere near that cool. We don't really have a secret agent but we do have a great customer engagement engineer and frankly, a lot of the time that's just as good. I can count on one hand the number of times when I have announced an update on our staging server (read: our personal production server) and not heard a reply from this person of "hey, what's new on staging?" They make it their responsibility to be clear and up to date with every single feature and every option available in the product. They also make a lot of sample accounts and customizations to our product to push the edges of what the product can actually do. If there is any question as to what is a pain point or an issue with a customer, any customer, they are my first point of contact. Sure, we have a CRM and a bug database but the majority of the time, if I really want to see what is happening and what's really important, I know who I am going to call... or bring up in a chat message. I mean come on, this is 2019 after all ;).

Thursday, March 7, 2019

Sponsoring a "New" Book? #30DaysOfTesting Testability

A few observations come up when you start looking at book titles related to niche topics in software testing. First, they can be hard to find. Second, they can be mighty pricey. Sure, what is the cost when it comes to learning, right? True, but there are a variety of factors one has to consider, such as "OK, I really don't have close to $100 to drop on a specialty book at this exact moment."

Having said that, occasionally one finds interesting things in interesting places, and sometimes those things are still in the process of percolating. It is in this spirit that I have chosen to make my mark for "Day Three" of this challenge... yes, I'm writing about it on "Day Seven", let's not quibble here.


Begin reading a book related to testability and share your learnings by Day 30.

To this end, I've decided to help fund a book in the process of being written (or at least to be written for Leanpub distribution). Said book?



Team Guide to Software Testability


Learn practical insights on how testability can help bring team members together to observe, understand and control customer needs, ensuring fitness and predictability of deliveries.

Now I will say up front that I will probably not be able to provide a 100% complete read of this book because the book is literally still being written even as I purchase it. However, I will be more than happy to review what has been written so far and post my findings and reactions by the end of March. Who knows, perhaps more of the book will be delivered by that time and I'll be able to offer some more details when that happens.
This is what the book says it will ultimately include:
Table of Contents
  • Foreword
  • Introduction
  • Why is testability important
  • What does hard to test feel like
  • What does testable feel like
  • What leads to testability being neglected
  • What is covered in this book
  • How to use this book
  • Feedback and suggestions
  • 1. Use a testability inception deck to visualize current team and system state and create an environment for improvement
  • 2. Adopt testability mapping to expose common smells of hard to test architectures
    • 2.1 Gathering data on poor architectural testability to detect systemic problems
    • 2.2 Low testability architectures contribute to slow feedback and deficient decision making
    • 2.3 Identify the symptoms of poor architectural testability
    • 2.4 Exercise: Measure the impact of testing smells on your architectural testability
    • 2.5 Understand how testable architecture can impact your team’s testing efforts
    • 2.6 Summary
  • 3. Use risk and incident data to remedy architectural design problems which inhibit feedback from testing
  • 4. Adopt ephemeral development environments to diversify testing techniques early and create shorter feedback loops
  • 5. Utilize events and metrics to model risks in production for continuous improvement of your test strategy
  • 6. Adopt incident postmortems to maintain a testability focus as part of your team’s continuous improvement strategy
  • Terminology
  • References
  • About the authors
  • Notes
It seems like a good place to start and I for one like to know I'm helping to fund progress on books I'd like to see written. Win-Win!!!

Big Log - #30DaysOfTesting Testability Challenge

Anyone who has read my blog for a little while knows that I tend to fit song titles into my blog posts because it's just silly fun I like to do. Do you know how hard it is to find song titles related to logs ;)? Robert Plant to the rescue (LOL!).

OK, seriously, I'm getting caught up with the 30 Days of Testability Challenge and here's the second checkpoint.

Perform some testing on your application, then open your application's log files. Can you find the actions you performed in the logs?

I hate to tattle on my application but it's not so much that I can't find what I need to in the log files, it's that there are so many log files and situational log files that it's a process to figure out exactly what is being looked at. I'm mentioning this because we need to keep a clear understanding of what we mean when we say "your application". Do we mean the actual application I work with? Do we mean the extended suite of applications that plug into ours? I mention this because, for each of the components that make up our application, there is a log file or, in some instances, several log files to examine.

We have a large log file that is meant to cover most of our interactions but even then, there are so many things that fly past that it can be a challenge to figure out exactly what is being represented. Additionally, there are logs for a number of aspects of our application and they are kept in separate files, such as:

- installation and upgrades
- authentication
- component operations
- third-party plug-ins
- mail daemons
- web server logs
- search engine logs

and so on.

To this end, I have found that using screen, tmux or byobu (take your pick) and splitting one of my windows up into multiple panes lets me keep an eye on a variety of log files at the same time and see what is actually happening. Some logs fly by so fast that I have to look at individual timestamps to see dozens of entries corresponding to a single second, while other logs get updated very infrequently, usually when an error has occurred.

With all that said, I'm a little torn as to my preference. Having monster log files to parse through can be a real pain. However, having to keep track of a dozen log files to make sense of the big picture is also challenging. Putting together an aggregator function so that I can query all of the files at the same time and look for what is happening can be a plus, but only if they use a similar format (which, unfortunately, isn't always the case).
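For what it's worth, here is a rough sketch of what that aggregator function could look like. It assumes (and it's a big assumption, per the above) that every file begins each line with the same ISO-8601 style timestamp; the file names are hypothetical.

```python
"""A minimal sketch of the aggregator idea: merge several log files into one
timeline ordered by timestamp. It assumes every file starts each line with the
same ISO-8601 style timestamp, which (as noted above) is often the part that
isn't actually true. File names are hypothetical."""

from datetime import datetime
from pathlib import Path

LOG_FILES = ["app.log", "auth.log", "mail.log"]
TIMESTAMP_FORMAT = "%Y-%m-%dT%H:%M:%S"  # assumed prefix, e.g. 2019-03-07T10:15:42


def parse_lines(path):
    """Yield (timestamp, source, line) for every line we can parse."""
    for line in Path(path).read_text(errors="replace").splitlines():
        try:
            stamp = datetime.strptime(line[:19], TIMESTAMP_FORMAT)
        except ValueError:
            continue  # quietly skip lines that don't match the assumed format
        yield stamp, path, line


def merged_view(paths):
    """Return all parsable lines from all files, sorted into one timeline."""
    entries = []
    for path in paths:
        entries.extend(parse_lines(path))
    return sorted(entries)


if __name__ == "__main__":
    for stamp, source, line in merged_view(LOG_FILES):
        print(f"{source}: {line}")
```

The obvious next step would be filtering by level or module, but even a flat, merged timeline makes the "dozens of entries per second" problem much easier to reason about.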

Based on just this cursory look, what could I suggest to my team about log files and testability?


If we have multiple log files, it would be a plus to have them all be formatted in a similar way:

 - log_name: timestamp: alert_level: module: message

repeated for each log file.

Have an option to gather all the log files into an archive each day (or whatever time interval makes the most sense); there's a rough sketch of the formatting and rotation ideas after these suggestions.

Make it possible to bring these elements together into a single file that can be parsed, so we can determine whether we are generating errors, warnings, or info messages that help us figure out what is going on.

Finally, if at all possible, try to make the messages put into the log files as human-readable as we can.
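To make the first two suggestions a bit more concrete, here is a small sketch using Python's standard logging module. The field layout mirrors the log_name: timestamp: alert_level: module: message idea above, and the rotation settings (midnight, thirty days of history) are just placeholder choices.

```python
"""A small sketch of the first two suggestions: one shared format and daily
rotation, using Python's standard logging module. The field layout mirrors the
log_name: timestamp: alert_level: module: message idea; the rotation settings
are placeholder choices."""

import logging
from logging.handlers import TimedRotatingFileHandler


def build_logger(log_name, filename):
    """Create a logger that writes one consistently formatted, daily-rotated file."""
    formatter = logging.Formatter(
        fmt=f"{log_name}: %(asctime)s: %(levelname)s: %(module)s: %(message)s"
    )
    handler = TimedRotatingFileHandler(filename, when="midnight", backupCount=30)
    handler.setFormatter(formatter)

    logger = logging.getLogger(log_name)
    logger.setLevel(logging.INFO)
    logger.addHandler(handler)
    return logger


# Hypothetical usage: each component keeps its own file, but every file shares
# the same layout, so a later aggregation pass has only one format to parse.
auth_log = build_logger("auth", "auth.log")
auth_log.info("user login succeeded")
auth_log.warning("password retry limit reached")
```

The point isn't this specific library; it's that when every component writes the same fields in the same order, the aggregation and parsing described earlier stops being a per-file adventure.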


Tuesday, March 5, 2019

What Does Testability Mean To Me?: #30daysoftesting

I've decided that I need to get back into the habit of writing stuff here just for me and anyone else who might happen to want to read it. It's true that any habit that one starts is easy to keep rolling. It's just as easy to stop a habit and literally not pick it up again for vague reasons. I haven't posted anything since New Year's Eve and that's just wrong. What better way to get back into the swing of things than a 30 Days of Testing Challenge?

This time around the challenge is "Thirty Days of Testability".  As you might notice, today is the fifth. Therefore I'm going to be doing some catchup over the next couple of days. Apologies ahead of time if you are seeing a lot of these coming across in a short time frame :).

So let's get started, shall we?

Day 1: Define what you believe testability is. Share your definitions on The Club.

First, let's go with a, perhaps, more standard definition and let's see if I agree or if I can actually add to it. Wikipedia has an entry that describes overall testability as:

- The logical property that is variously described as contingency, defeasibility, or falsifiability, which means that counterexamples to the hypothesis are logically possible.
- The practical feasibility of observing a reproducible series of such counterexamples if they do exist.
In short, a hypothesis is testable if there is some real hope of deciding whether it is true or false of real experience. Upon this property of its constituent hypotheses rests the ability to decide whether a theory can be supported or falsified by the data of actual experience. If hypotheses are tested, initial results may also be labeled inconclusive.

OK, so that's looking at a hypothesis and determining if it can be tested. Seems a little overboard for software, huh? Well, not necessarily. In fact, I think it's a great place to start. What is the goal of any test that we want to perform? We want to determine if something can be proven correct or refuted. Thus, we need to create conditions where a hypothesis can either be proven or refuted. If we cannot do either, then either our hypothesis is wrong or the method we are using to examine that hypothesis isn't going to work for us. For me, software testability falls into this category.

One of the aspects that I think is important is to look at ways we can determine if something can be performed or verified. At times that may simply be our interactions and our observations. Let's take something like the color contrast on a page. I can subjectively say that light gray text over a dark gray background doesn't provide a significant amount of color contrast. Is that hypothesis testable? Sure, I can look at it and say "it doesn't have a high enough contrast", but that is a subjective declaration based on observation and opinion. Is it really a test? As I've stated it, no, not really. What I have done is made a personal observation and declared an opinion. It may sway other opinions but it's not really a test in the classic sense.

What's missing?

Data.

What kind of data?

A way to determine the actual contrast level of the background versus the text.

Can we do that?

Yes, we can, if we are using a web page example and we have a way to reference the values of the specific colors. Since colors can be represented by hexadecimal values or RGB numerical values, we can make an objective observation as to the differences in various colors. By comparing the values of the dark gray background and the light gray text, we can determine what level of contrast exists between the two colors.

Whether we use a program that computes the comparison for us or an application that prints out a color contrast ratio, what we have is an objective value that we can share and compare with others.
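As a rough sketch, here is what such a program could look like in Python, using the WCAG relative luminance and contrast ratio formulas; the specific gray values in it are my own made-up example, not pulled from any real page.

```python
"""Turning "these colors look too close" into a number, using the WCAG
relative luminance and contrast ratio formulas. The two gray values at the
bottom are made up for illustration."""


def linearize(value):
    """Linearize one sRGB channel (0-255) per the WCAG definition."""
    c = value / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4


def relative_luminance(hex_color):
    """Relative luminance of a color given as a hex string like '#c0c0c0'."""
    hex_color = hex_color.lstrip("#")
    r, g, b = (int(hex_color[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b)


def contrast_ratio(color_one, color_two):
    """WCAG contrast ratio: (lighter luminance + 0.05) / (darker luminance + 0.05)."""
    lighter, darker = sorted(
        (relative_luminance(color_one), relative_luminance(color_two)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)


# Hypothetical light gray text on a dark gray background.
ratio = contrast_ratio("#999999", "#4d4d4d")
print(f"Contrast ratio: {ratio:.2f}:1 (WCAG AA looks for at least 4.5:1 for body text)")
```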

"These two colors look too close together"... not a testable hypothesis.

"These two colors have a contrast ratio of 2.5:1 and we are looking for a contrast ratio of 4.5:1 at the minimum" ... that's testable.

In short, for something to be testable, we need to be able to objectively examine an aspect of our software (or our hypothesis), be able to perform a legitimate experiment that can gather actual data, and then allow us to present that data and confirm or refute our hypothesis.

So what do you think? Too simplistic? Am I overreaching? Do you have a different way of looking at it? If so, leave a comment below :).