Sunday, March 31, 2019

Book Review: Team Guide to Software Testability: a #30DaysOfTesting Testability #T9y Entry

Woohoo!!! Day 30 completed on March 31! Truth be told, this was a bit much and I have no one to blame but myself for how this turned out. Still, I feel like I learned a lot and covered a lot of ground. Some of it felt familiar but there was quite a bit of new information and perspectives I had a chance to look at and think about how to implement with my team. It's time to bring the formal "30 Days of Testability" to a close, and to do that, I'm going to review a book whose sudden availability, I realized when I got to the Ask Me Anything post, wasn't an accident: one of the authors of the challenge, Ash Winter, is also one of the authors of this book. How convenient :)!

What book did you choose on Day 3 and what did you learn from it?


The book I chose was "Team Guide to Software Testability" by Ash Winter and Rob Meaney.

This version was published with Leanpub on 2019-03-08. Again, mighty convenient :)! While the book is listed as being 30% complete, there is plenty to consider and chew on with regard to testability and how to approach it for you, your team, and your applications. To borrow a line from the Introduction:

"We want to show that testability is not only about testers and, by extension, not solely about testing. It is almost indistinguishable how testable your product is, from how operable and maintainable
your product is. And that’s what matters to your (organisation’s) bottom line, how
well your product fits your customer’s needs."

Testability goes hand in hand with predictability. When an application responds in a predictable manner, we are able to interact with better certainty that we will see the results we expect. That, in turn, helps inform the testability or lack thereof of our applications.

Testability Mapping allows users and teams to get a better feel for the areas of their application that are working well and those that still need work. When we have a low understanding of the underlying architecture, we often struggle with anything approaching effective testing. To help address that, it's important to take stock of the testability of our applications and see how far afield that might take us (applications are rarely monolithic today; they have dependencies and other components that may or may not be obvious).

Our testing environments should be set up and configured in a way that gives us the maximum understanding of their underpinnings. Doing so lets us get started with testing and receiving meaningful feedback from the application. Paying attention to these environments and periodically reassessing their testability helps make sure that complacency doesn't come into play. Environments are not static; they grow and develop, and the technologies that are good for one period of time may be inadequate later.



Bottom Line:

Even at 30% complete, there is a lot of good information in this book. Is it worth purchasing as it currently is, with the idea that more will arrive over time? I say "yes". If the idea of helping your team develop a testability protocol sounds exciting and necessary, there is a lot to like in this book. Check it out!!!


Grinding Halt: a #30DaysOfTesting Testability #T9y Entry

So close to the end! Just a couple more to go and I'll be able to call this challenge "surveyed". I won't really be able to call it "done" or fully "completed" until all of these aspects are put into place and we've moved the needle on each of them. Still, this has been an active month, with a ridiculously active final week (alas, nobody to blame for that but me). Nevertheless, for those who want to play along, check out the "30 Days of Testability" checklist and have some fun with me.

Do you know which components of your application respond the slowest? How could you find out?

This is a tricky question as there are different levels of what could be causing slowness. In the first instance, there are native components within our application itself, as opposed to those we consume from other groups through microservices. Honestly, the ability to render full wiki pages as widgets with all of their formatting makes for some interesting interactions, and some of them take time. Rendering some seriously complex HTML and CSS in these pages, then displaying that as a widget, and then displaying that widget in a Responsive interface just takes time, and in certain instances, yes, it can be felt.

Other areas are a little harder to define but fortunately, we do have a way to determine how long they take. Our application has a scheduler that runs in the background and every major interaction gets logged there. Want to know how long it takes to process ten thousand users and add them to the system? I can look that up and see (hint: it's not super quick ;) ).
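For anyone wanting to answer the "how could you find out?" part with something concrete: if your scheduler (or any subsystem) already logs durations, a few lines of scripting will turn that log into a "slowest jobs" report. Here's a minimal sketch in Python; the job=... duration_ms=... log format and the scheduler.log filename are invented for illustration, not what our scheduler actually writes.

```python
# Sketch: rank the slowest background jobs from a scheduler log.
# The log format here is hypothetical ("job=<name> duration_ms=<n>");
# adjust the regex to whatever your scheduler actually writes.
import re
from collections import defaultdict

LINE = re.compile(r"job=(?P<job>\S+)\s+duration_ms=(?P<ms>\d+)")

def slowest_jobs(log_path, top=10):
    durations_by_job = defaultdict(list)
    with open(log_path) as log:
        for line in log:
            match = LINE.search(line)
            if match:
                durations_by_job[match.group("job")].append(int(match.group("ms")))
    # rank by the worst observed run for each job
    ranked = sorted(durations_by_job.items(),
                    key=lambda item: max(item[1]), reverse=True)
    for job, durations in ranked[:top]:
        print(f"{job}: worst {max(durations)} ms over {len(durations)} runs")

if __name__ == "__main__":
    slowest_jobs("scheduler.log")
```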

The other areas that are challenging are where we consume another product's data via microservices to display that information. This isn't so much an issue of fast vs. slow as it is an issue of latency and availability. Sometimes there are things beyond our control that make certain interactions feel "laggy". On the plus side, we have similar tools we can use to monitor those interactions and see if there are areas where we can improve system and network performance.

Watching the Detectives: a #30DaysOfTesting Testability #T9y Entry

I realize I probably should feel a little bit sheepish right now at the silly puns I use for many of these titles. I should... but I don't ;). Besides, we are almost done and next week you can watch me shift gears and Live Blog about STP-Con, so just bear with me for a couple more "30 Days of Testability" posts :).


Pair with an internal user or customer support person and explore your application together. Share your findings on The Club.


This is actually a pretty recent experience for me in that through our LTG merger I became acquainted with a new manager, a new overall test director, a new VP of Engineering, and a CTO all at the same time. By interacting with each of these people, I've had the chance to show what Socialtext is, what it isn't, and what we need to do so that we can get all of our moving parts to work in context together. Each interaction helped me see both the areas that people understood about our product and areas where there were some gaps in understanding or context.

Because of the nature of how we configure our product suite (this is going outside of Socialtext now), we had several people discussing our product (the broader PeopleFluent/LTG product offerings in this case), and when it clicks with people exactly what our product does and how it does it, there's this smile and a reckoning of what goes where and why. Don't get me wrong, sometimes there are emotional reactions other than smiles, but smiles are thankfully the more common ones. Additionally, it's also neat to see what other arms of our broader product do and how they interact with and integrate with our platform. Consider this a strong "two thumbs up" to demoing and looking at as much of your product with as many people from as many business interests as you can. My guess is you will learn a lot from those interactions. I certainly have and will most likely continue to :).

Stop Me If You Think That You've Heard This One Before: a #30DaysOfTesting Testability #T9y Entry

Hah! I've finally found a way to work that title into a TESTHEAD post (LOL!). Seriously, though, I'm hoping that after this flurry of posts, you all still like me only slightly less than you used to... OK, enough of that, let's get back to the "30 Days of Testability", shall we?

Use source control history to find out which parts of your system change most often. Compare with your regression test coverage.

I already know the answer to this since it's been a large process. Our most changed code is literally our front end. The reason is straightforward: we are redesigning it so that it will work as a Responsive UI. That means everything around the front end is getting tweaked. Our regression testing, therefore, is in the spin cycle; it's getting majorly overhauled. Our legacy interface, on the other hand, is doing well and will still be there for those who choose to use it, so that adds an exciting challenge as well.

The biggest challenge I am personally facing is that the tests we have for our legacy interface are solid and they work well, but they are almost totally irrelevant when it comes to our Responsive interface. The IDs are different, the rendering code is different, the libraries that are used are different. The workflows are similar and in many ways close to the same, but they don't quite lend themselves to a simple port with new IDs. Thus I'm looking at the changes we are making and figuring out how we can best automate where it makes sense to. Needless to say, it's never dull.
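For anyone who wants to do the same churn-versus-coverage comparison, the raw data is already sitting in source control. Here's a minimal sketch in Python that just wraps plain git log; run it from inside a clone of the repo and compare the top entries against where your regression tests actually spend their time.

```python
# Sketch: find the most frequently changed files in the last 90 days.
# Run from inside a clone of the repository you care about.
import subprocess
from collections import Counter

def churn(days=90, top=20):
    out = subprocess.run(
        ["git", "log", f"--since={days} days ago",
         "--name-only", "--pretty=format:"],
        capture_output=True, text=True, check=True).stdout
    # each non-empty line is a file path touched by some commit
    counts = Counter(path for path in out.splitlines() if path.strip())
    for path, changes in counts.most_common(top):
        print(f"{changes:4d}  {path}")

if __name__ == "__main__":
    churn()
```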


Told You So: a #30DaysOfTesting Testability #T9y Entry

Well, here we are, home stretch. Almost all the way through the "30 Days of Testability" Challenge. There are several of these I haven't done yet and I'm sorely tempted to go back and do them retroactively. Yeah, let's get this one finished first before I demonstrate I've completely lost my mind ;).

Relationships with other teams affect testability. Share your experiences on The Club.

When we discuss just Socialtext, we are actually very small. When discussed within PeopleFluent, there are many more groups and departments with products that interact with us. Extend out to LTG and that number grows even more. Put simply, we have seven or so different business units and about twice that many distinct products that interact with Socialtext. Thus, yes, we are very aware that our product may work flawlessly (haha, but work with me here), but if we cannot show another business unit's product in ours, we're just as broken as they are.

Recently, we have focused on greater communication and interaction with a variety of team leads so that we can discuss how our product interacts with theirs and how we can simplify/streamline approaches to help make the interaction smoother. Primarily this is done through microservices, so there has been a recent uptick in our focus and attention on testing those interactions on all fronts.

One of the ways I try to enable better testability is with my install and config project. To that end, I try to see how many of the components of other business units I can get to install and run on a given appliance and be useful. As our platform is the chassis that everything else rides along in, we are the ultimate consumer and presenter. To that end, verifying that any and all options I can configure are working at the same time helps with that aim.

Saturday, March 30, 2019

Fear of the Unknown: a #30DaysOfTesting Testability #T9y Entry

Last one for today. My plan is to finish up the final entries tomorrow and thus finish this "30 Days of Testability" challenge on time :).

Ask your team if there are any areas of the system they fear to change. How could you mitigate that fear?

I've talked a bit about this and again, one has to be careful not to tattle too much on one's company. Again, I don't think this would be much of a surprise to anyone, so I think I'm on safe ground here. As I've stated before, our company developed the original version of our product in 2004. At the time, it was written primarily in Perl. That was a skill that was prevalent at the time and we had some of the best Perl development talent on staff. Over the past fifteen years, that has changed and the prevalence of Perl has diminished. Additionally, the number of people proficient in Perl on our engineering team has shrunk while newer technologies are more in demand and we are staffing for those demands.

What this means is there are some areas of legacy code that, while they work well, raise genuine concern that changes will be difficult to make and maintain, along with real uncertainty as to what the effects of those changes would be. To that end, we have taken the approach of modernizing components with newer languages and shifting over to those newer components wherever possible. This allows us to slowly shrink down the dependencies on those older modules and lessen their footprint. At some point, we will reach a minimum where we will have to say "OK, we have cut this down as far as we can go and now we need to go this last mile." That is an ongoing process and one that will probably take years to fully complete.

Time in the Trenches: a #30DaysOfTesting Testability #T9y Entry

Ahhh, today's topic (well, this numbered topic, I'm not actually doing it on the designated day) is one that is near and dear to my heart, too. I think that many are the software testers who have also had some tenure doing tech support in either an official or secondary capacity. As this is another entry in the "30 Days of Testability" challenge, feel free to follow along and try out the day's exercises for yourself :).

What could you learn about your application’s testability from being on call for support? This eBook could help you get the most out of taking support calls.

The answer is "a great deal" and this comes from several years of personal experience. Customer Support engineers have a special kind of testing skill if they have been at it any length of time. It's what I refer to as "forensic testing" and many support engineers treat each call like an active crime scene. The best of them tend to be really quick at getting necessary information and if at all possible, getting to the heart of the matter fast and being able to retrace steps necessary to recreate a problem.

That skill I found very helpful, not just for finding and confirming customer-reported bugs but also for understanding the various pain points that customers deal with. Getting into the customer's frame of reference and appreciating the challenges they are facing can quickly help orient our everyday testing efforts. Over time, we get a much clearer view of what matters to them.

If your support engineer isn't involved in an active firefight, ask them if they'd mind you shadowing them for a bit and listening in on their calls or working through an active issue. As one who has been both an observer and active support personnel, I can assure you that you will learn a great deal regardless of your testing acumen and experience.

Digging in the Dirt: a #30DaysOfTesting Testability #T9y Entry

Coming up on the home stretch. Thanks to all who are reading these whenever you might be. For your own scorecard, go to "30 Days of Testability" and you can follow along :).

Share an article about application log levels and how they can be applied.

Ah, log files. I love them. I hate them. I really can't live without them. They are often a mess of information, and not a super glamorous topic but definitely worth talking about.


I found Erik Dietrich's article "Logging Levels: What They Are and How They Can Help You" to be interesting. I like his comment that logging can range everywhere from:

"Hey, someone might find this interesting: we just got our fourth user named Bill."

to

"OH NO SOMEONE GET A FIRE EXTINGUISHER SERIOUSLY RIGHT NOW."

As someone with an application that is already logging-heavy, I appreciate the ability not just to have various logging levels, but to have a method to dynamically set them. At the moment (again, I don't want to tattle, but I doubt we're the only ones dealing with this) our logging levels can be changed, but the change typically requires a restart of the application so that it gets picked up by all of the subsystems.

I think it would be cool to make a little interface that would give me a dashboard specific to each log, so that I could view it and then, by selecting a radio button and sending a POST command, set that individual log to whatever level I'm interested in. Hmmm, may have just found a project to dig into ;).
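Since I'm daydreaming about that project anyway, here's roughly what the "POST to change a log level without a restart" part could look like. This is a minimal sketch in Python only; the logger names, port, and JSON shape are all made up for illustration, our actual product is not a Python service, and a real version would obviously need authentication.

```python
# Sketch: a tiny endpoint that changes a logger's level at runtime,
# no restart required. Logger names and the port are illustrative only.
import json
import logging
from http.server import BaseHTTPRequestHandler, HTTPServer

class LogLevelHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = json.loads(self.rfile.read(length) or b"{}")
        name = body.get("logger", "")          # "" means the root logger
        level = body.get("level", "INFO").upper()
        logging.getLogger(name).setLevel(level)
        self.send_response(200)
        self.end_headers()
        self.wfile.write(f"{name or 'root'} -> {level}\n".encode())

if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)
    # e.g. curl -X POST localhost:8099 -d '{"logger": "myapp.search", "level": "DEBUG"}'
    HTTPServer(("localhost", 8099), LogLevelHandler).serve_forever()
```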

Built for Speed: a #30DaysOfTesting Testability #T9y Entry

Is anyone else excited about the fact that the Stray Cats are touring this year? Just me? Feels really strange to see how old some of my heroes and contemporaries in the music world are. That means that I must be... (shakes head vigorously)... oh no, we're not going down that road right now. NOPE!!!

So, more "30 Days of Testability"? Fantastic!

How long does it take to set up a new test environment and start testing? Could this be faster?

This is a passion project for me as I have been actively working on ways to speed this up for years. We deploy our software to cloud-based servers (depending on the configuration it can be one or several) and for that purpose, I prefer testing with devices that match that environment. In our world, we call these installations and configurations "appliances", so when I talk about an install, an appliance could mean a single machine or multiple machines connected together to form a single instance. The most common is an all-in-one appliance, meaning a single machine with everything on it.

I have a project that I have worked on for a couple of years now that builds up all of the things that I normally do to set up a system for testing. For any software tester that wants to get some "code credit" in the source repo, this is actually a really good project to start with. Much depends on the system you are working with, but if you are developing and deploying a Linux-based application, there is a huge amount that can be done with native shell scripts. Tasks such as:

- setting up host details for environments
- provisioning them for DNS
- setting up security policies
- configuring networking and ports
- downloading necessary libraries and dependencies
- standard installation and configuration commands for an application
- setting up primary databases and populating them with starter data

All of these can be readily set up and controlled with standard shell scripts. With a typical appliance setup, there are three stages:

- machine setup and provisioning so it can respond on the network
- basic installation of the application
- post-installation steps that are heavy on configuring the appliance and importing starter and test data

One of the nicer things at standup or retrospectives is when someone mentions an item that should be part of installation or setup and often I can just say "oh, grab the appliance setup repo, I have that taken care of" or "hey, that is a good idea, let me drop that into the appliance setup repo".

The speed of setup is something I'm a little bit obsessed with. I frequently run timing commands so I can see how long a given area takes to set up and configure and my evergreen question is "OK, am I doing this in an efficient enough way? How can I shave some time off here or there?" It's also fun when someone asks "hey, how long will it take to get a test environment set up?" and I can give them an answer to within forty-five seconds, give or take ;).
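If you want to copy the timing habit, it bolts onto a staged setup very easily. Here's a minimal sketch in Python that runs each stage as a shell script and reports how long it took; the three stage script names are placeholders, not the contents of my actual appliance setup repo.

```python
# Sketch: run each appliance setup stage as a shell script and time it,
# so "how long does environment setup take?" always has a current answer.
# The stage script names below are placeholders.
import subprocess
import time

STAGES = [
    ("provision host", "./01-provision-host.sh"),
    ("install application", "./02-install-app.sh"),
    ("post-install config + test data", "./03-post-install.sh"),
]

def run_stages():
    grand_total = 0.0
    for label, script in STAGES:
        start = time.monotonic()
        subprocess.run(["bash", script], check=True)   # stop if a stage fails
        elapsed = time.monotonic() - start
        grand_total += elapsed
        print(f"{label}: {elapsed:.1f}s")
    print(f"total environment setup: {grand_total:.1f}s")

if __name__ == "__main__":
    run_stages()
```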

Could I make setup faster with containers or other tools? I certainly can but since we are not at this point deploying containers for customers, that's a question for another time :).

You Have One Job: a #30DaysOfTesting Testability #T9y Entry

It's the weekend, and I am nine posts away from finishing this challenge. Let this be a lesson to everyone out there: don't let time get away from you. Just because you can write ten blog posts in a day doesn't mean you should. Live blogging is an exception.

My rules, I made them up ;).

Anyway, more "30 Days of Testability". Let's do this.

Unit tests provide insight into an application's testability. Pair with a developer to explore some unit tests.

We have had a longstanding rule that done isn't really done until unit tests are in place. We have a variety of work styles. Some do their unit tests first as a "Test Driven Development" approach and others do them afterward. Those who do the former have said that it doesn't save them time up front, but it helps them stay focused and they reap benefits down the road. Those who don't take the TDD approach typically say that it's mostly because of how they think. They prefer to tinker with ideas and see where they will lead first, and TDD messes with that. Who is right? Probably both of them. Regardless, no stories cross the line without unit tests, so they get done one way or another :).

As for myself, I've been experimenting with two unit test runners for the purpose of a new automation framework approach. As we are looking at Cucumber JVM, I've been working with both JUnit and TestNG. Both are interesting in what they do and how they glue things together. I find myself thinking in the JUnit frame of mind mainly because I've used it several times over my career, while TestNG is a newer approach for me. We also have a variety of unit test frameworks in play for our front-end code (Jasmine and Protractor being two examples). I definitely value looking through them as they give me a clearer idea as to what each method and module is capable of handling. I especially like looking into the error handling, though I wonder if my teammates feel the same way ;).

In any event, if you want to give yourself a boost in understanding your application, unit test reading and discussing is a pretty good approach. 

Friday, March 29, 2019

The Landscape Is Changing: a #30DaysOfTesting Testability #T9y Entry

Yay, two-thirds of the way done and I have a weekend to finish up. Daunting but I think doable :). Head over to "30 Days of Testability" to play along.

Think about what’s currently stopping you from achieving higher testability. Share your findings on The Club.

Am I being a spoilsport by writing everything here? I do use the hashtag, so hopefully that helps. I will follow up and put the findings on the threads after I finish. In any event, what are the things that are preventing or hindering higher testability? I think I have answered that already, but here's a short review.

- An expansive system that can be configured in a large variety of ways.
- Widgets that are specific to content generation, content representation, and meta-data.
- Microservices that connect to a lot of different things and groups that I need to get a better understanding of.
- Aging components that are in the process of being updated.

Again, I don't think any of those are terribly controversial and they are probably similar to a lot of organizations. The good thing is that the more exposure I get to the various components and the more I work with them, the clearer the issues become and the better I can speak to them. You'd think that I'd know a platform I've been working on for 6 1/2 years very well, but I learn new stuff all the time. Hey, beats the alternative ;).

When You Don't See Me: a #30DaysOfTesting Testability #T9y Entry

All right, almost to the two-thirds mark. 11 more entries to go. To play along at home, feel free to follow along with the "30 Days of Testability" Challenge.

Your dependencies can constrain your testability. Head over to The Club to visualize your application's boundaries.


Again, there's a limit to what I can share in a public blog, but this was indeed useful for seeing what aspects could be limiting, especially since the application itself isn't a monolith but ties into several other libraries, secondary tools, web servers, search technologies, JavaScript libraries (yep, we have a few), and other tools and apps that our broader company offers, along with how we communicate with those apps. Long story short, it's not trivial.

The microservices architecture that we have is really interesting and occasionally infuriating. It adds to the complexity of testing when there are external groups that control what is being displayed and how it looks or if it's even available. Thus we don't have a comprehensive test infrastructure in place to deal with it all yet. I feel confident we will get there at some point, but again, it's not a trivial challenge in the slightest.

Our Lips Are Not Sealed: a #30DaysOfTesting Testability #T9y Entry

Quick question... who knows that this song was co-written by Terry Hall with Jane Wiedlin? Also, how many out there like the Fun Boy Three version better? Just me? OK, then ;).

This entry might feel strangely specific, but that's literally because we just did our retro and these were top of mind, so lucky you, here are three peeves I'm trying to cope with (I'm not going to completely tattle, but they do work into some areas I'm hoping to resolve shortly).

Click the following link for the full "30 Days of Testability" list.

Note the top 3 challenges you have while testing and raise them at the team retrospective.


1. Struggling to deal with an aging platform. I've talked about this in the past and I don't think this is any type of surprise, but when Socialtext was initially developed, it was written in Perl. For a time, I think that Socialtext may have had some of the premier Perl programming talent in the world (that's not hyperbole; the fact that Audrey Tang was our principal engineer actually makes that true). However, in the past few years, Perl has lost a lot of its position as a go-to language and turnover has made it so that the Perl expertise we used to have is significantly less. Thus we are migrating to newer languages and infrastructures. It's going to take a while.

2. Mobile testing in an effective way. This is my current struggle and one that I am actively investigating. I have a host of mobile devices that I am currently using, but they are aging and we don't have the budget to replicate my setup for every tester. I actually like BrowserStack as a solution and I'm in the process of lobbying for it, since it has a large and ever-expanding battery of devices available without having to dedicate physical resources.

3. Making naming conventions for modules more consistent. This is mainly for automation purposes and, to the engineering team's credit, it's something we've done well with new development. However, there's still a fair bit of older code that is in need of updating, so that's an area I'd like to see us spend some time on (and yes, I'm willing to roll up my sleeves to help make that happen :) ).

There are other things I could mention but again, I like my job and thus there's a limit to what I can talk fully openly about.

Three of a Perfect Pair: a #30DaysOfTesting Testability #T9y Entry

Goofy wordplay aside, I had a strong appreciation for the Mark 2 version of King Crimson, of which this title was the last iteration (Discipline was my favorite album from this lineup. Yes, I'm sure you are totally thrilled to know about that ;) ).

OK, less prattle, more "30 Days of Testability".

Pair with a developer to see if they can improve something that you find difficult or time consuming to test. We have a handy guide from Lisa Crispin on Pairing with Developers: A Guide for Testers that’s worth a read.

I'm fortunate in that I have developers that are willing to pair and look at stuff with me. As we have G-Suite as our main communication platform among the team, it's easy to demo stuff and talk out what is being tested. We are a small team and technically speaking, with management out of the equation, our engineering team is equally balanced between programmers and testers. Thus it's not an imposition or overly burdensome to get together and pair with developers.

Lisa's article is a great resource and while I have little to add to it specifically, I can share a few things that I have found help considerably when pairing with developers.

1. Have a specific goal for the session: Develop a very specific charter around an area that is either difficult to test or that requires knowledge the developer has and I don't; being that specific really helps with this process. Often when I approach with a question that has very clear parameters (or as clear as I can make them), I'm much more likely to be able to block out time with them. More often than not, we range farther than that specific area because we're into it and we're both able to get more done together than separately (yeah, that happens a lot :) ).

2. Block out a specific time interval: set an appointment and set the desired length (about an hour is my usual suggestion, no more than two). We frequently blow past the single hour set aside, but we're usually good about heeding the two-hour limit. Beyond two hours we tend to lose focus, but up to that point is usually very effective.

3. Do your homework in advance: This goes with number 1, but if I'm going to try to understand something that's going on in the code, it helps for me to have reviewed it first (if possible). Greenfield development efforts don't always allow for that, but since our current efforts are mostly focused on revamping existing functionality, there are plenty of opportunities for me to read up and understand the parameters the developers are working with. I may not understand all of it (frequently I don't) but at least I am ready to start sessions with a list of questions.

Welcome to the House of Fun: a #30DaysOfTesting Testability #T9y Entry

I've been listening to Madness the last few days as they were a beloved band to me when I was a teenager. For those curious as to why I've been on an 80s ska/new wave reminiscence as of late, it's because I've been reminded yet again that another of my teenage heroes has left us. Roger Charlery aka "Ranking Roger" of The (English) Beat and General Public, passed away March 26, 2019.

Anyway, I like the cheeky title as it's a phrase I often use when I give demos, though the topic of the song is decidedly different (I'll leave it to curious readers to figure that one out ;) ).

Alright, more "30 Days of Testability" cheekiness below.

Conduct a show and tell of the latest features in your application for staff outside your immediate team. Capture their feedback and share with your team.

I do this every release. It's one of my jobs as the release manager. Every release we have historically gathered together a cross-section of the broader engineering, sales, and support teams so that we can show them the new stuff going out with our next release.

Additionally, we have as part of our "definition of done" to demo any feature we are working on to our product owner and/or our chief sales engineer. This is a great time to walk through the feature, show as many parameters as we can, and listen to their questions. Much of the time it's a fairly straightforward "show and tell", but every once in a while we have some deeper discussions. These longer conversations often serve to point out that the new feature we are demonstrating may have some follow-on stories to consider. Often, it's a chance to see and determine how well we have met the product owner's/representative's expectations so that we can adjust accordingly in the future. Needless to say, I'm a fan of this approach and strongly encourage it :).


Write it on Your Hand: a #30DaysOfTesting Testability #T9y entry

The Steve Martin reference yesterday was not a song so that broke my streak. Back to song lyrics/titles and a chance to mix Marvelous 3 with my blog. Can't pass that up. Anyway, more "30 Days of Testability" to savor. Please try to contain your excitement ;).

Share a blog post that you found interesting related to testability. Don’t forget to use the hashtag #30DaysofTesting if you share on Twitter.

Technically I've already done this as I shared Alan Richardson's blog post about Testability versus Automatability in my previous post "A Touch of Evil". I could say "hey, already done!" but what's the fun in that? Thus I'm going to go in a different direction today. It's a bit old now but I still think it has some good points to it.

Guide: Writing Testable Code

Granted, this post is associated with coding practices and ways to make the code better suited for unit testing, but there are still a lot of good things to consider here from a non-coder's perspective:

- are we putting flow control in constructors? If so, why?
- are we passing objects that get defined but never used?
- are we consistent with the objects we are passing?
- can we follow the flow easily for what a method is doing?
- does a single method do too much?

This article covers many more situations, but the real takeaway is that these coding warning signs are also things that we as testers can consider when we have testability conversations.
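To make the first couple of bullets a bit more concrete, here's a small illustration in Python (the class and function names are invented for the example, they are not from the guide). The first version does work and branching inside its constructor; the second takes its collaborators as arguments, which is exactly what makes swapping in fakes during a test trivial.

```python
# Sketch: the kind of refactor the guide is pointing at.

# Harder to test: the constructor does work and hides its dependencies.
# (connect_to_production_db is a stand-in for any side-effecting setup.)
class ReportMailerHard:
    def __init__(self):
        self.db = connect_to_production_db()   # hidden dependency, runs at construction
        if self.db.is_replica():                # flow control in the constructor
            raise RuntimeError("refusing to run against a replica")

# Easier to test: dependencies are passed in, the constructor only assigns.
class ReportMailer:
    def __init__(self, db, mailer):
        self.db = db
        self.mailer = mailer

    def send_daily_report(self):
        rows = self.db.fetch_daily_totals()
        self.mailer.send("daily report", rows)

# In a test, both collaborators can be trivial fakes:
class FakeDB:
    def fetch_daily_totals(self):
        return [("widgets", 3)]

class FakeMailer:
    def __init__(self):
        self.sent = []
    def send(self, subject, body):
        self.sent.append((subject, body))

if __name__ == "__main__":
    mailer = FakeMailer()
    ReportMailer(FakeDB(), mailer).send_daily_report()
    print(mailer.sent)   # [('daily report', [('widgets', 3)])]
```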

Let's Get #t9y: A #30DaysOfTesting Testability Entry

OK, I'm a dork. Don't know why I didn't think of this until now, but here's a cute way to look at testability. Leaning on the decomposing post from earlier:

"Want to think about "Testability"? Make it "T9Y" (tiny)!"

Yes, I realize that #t9y is already associated with "Terminology" but I'm still going to suggest we testers make this a thing ;). All right with that little piece of silliness out of the way, let's see what "30 Days of Testability" goodness we have on tap this go around.

Watch the Testability Ask Me Anything on the Dojo. Post any additional questions on The Club.

First of all, props to Ash Winter for being available to answer questions and thanks to Vernon Richards for hosting the event. A lot of ground was covered and it's impossible to give more than a basic rundown here (seriously, go listen).

There are a fair number of areas that are worthwhile to look at when considering testability. Risk assessment, the "hookiness" of an application, the variety of ways to access information, the usability of an application, and ways to recognize the "smells" associated with each of those areas were all covered in detail, and enjoyably so.

There was also a list of about two dozen questions that didn't get answered and, truth be told, I'm hard-pressed to add any additional ones. If I had to pick from the list the questions I'd personally like to have answered or considered in greater detail, they would be:

  • How would you approach the business to convince them they need to focus on testability from the start of a project?
  • In a high functioning development team automated testing is often viewed as not required or “secondary”, how do you overcome this bias?
  • What "ensures" testability?
  • How do you respond to “testability is the tester’s problem”? Especially combined with a reluctance to insert test-hooks to a product because “It’s not the product, and we don’t want to have test-only features that will increase our overhead”.
  • What steps do you take to align expectations on what all the “-ilities” you’re talking about, are?
  • How would you prioritize Testability?
I think I will come back and look at these at the end of this 30 Days challenge and see if I have developed answers for them :).

The Press Darlings: a #30DaysOfTesting Testability Entry

Quick question. Do you find it easier to deal with self-referential data or with data that is associated with something personable and memorable? How do you help to make both possible? See below for some of my techniques and comment if you feel there are better ways to do this. Anyway, more "30 Days of Testability" chewiness follows :).

Also, I've been reminiscing lately with Adam and the Ants, a beloved New Romantic post-punk band of my teenage years. They only released three full albums and an EP but they were a lot of fun. Adam Ant also went solo later but truth be told, I prefer his earlier stuff. Be that as it may...


Find out how test data is populated in your system. How could it be improved? You can watch Techniques for Generating and Managing Test Data by Omose Ogala for some ideas to get you started.


This is an area that I spend a lot of time dealing with. I think I'm on safe ground describing these details as it's not a huge secret what Socialtext uses. First, it helps to understand what Socialtext is in a general sense and then in a more abstract sense. In its most basic form, Socialtext is a collaboration platform. It makes it possible to work on a lot of stuff. At its core is a wiki. A wiki is a way of editing text rapidly. To that end, Socialtext uses an editor called CKEditor, which is used in a variety of applications (it's open source). On top of that, we leverage a lot of details about the documents created with the wiki (text and spreadsheet) so that we can share that information. That information is displayed in a variety of ways, most notably through assignable modules we call widgets. Those widgets can be created and combined in a variety of ways to make Dashboards at varying levels (personal, group, or account level). Those dashboard widgets can contain the content of a document or documents, or they can be composed of meta-data from the documents (such as who uses what, who has commented on what, who has revised something in the system, etc.).

At the basic level, everything is associated with an account, so often the most effective method to load test data (as well as to protect it and to use it from a known starting point) is to import an account that is already set up with the information we want to use. I actually have several of these and each is set up to help me deal with a variety of issues. I have accounts created for Localization, Responsive Design, Accessibility & Inclusive Design, and Large Customer simulation. Additionally, I have data that deals with the components of our system so that I don't have to constantly reconfigure those elements (that includes text examples, HTML and Markup formatting, videos, user details, language preferences, etc.). The key here is that I try to limit the use of test data that tries to be all things to all circumstances. While it can be helpful to include a lot of details in one place, it can also complicate the situation in that there is "too much of a good thing".

Another way that I try to keep test data useful and fresh is to pick the methods that best generate the data I use and help me keep track of everything in a noticeable way. One of those methods is that I have general and specific scenarios. When I do tests with large numbers of users, I generate that data with a tool called "Fake Name Generator". This has been my go-to tool for more than a decade and it provides both individual details I can call up one at a time to use, and bulk downloads with tens or even hundreds of thousands of users (the system limits you to 100,000 users for any given request, but over time, it is possible to generate several hundred thousand or even millions of users).

Still, there are times when I want to look at the way that data relates to other data in a more personal way. There are several methods for this, but the one I enjoy is taking my favorite manga or anime series, collecting the characters, populating dossiers for each of the people in the cast, and then creating accounts for those people. The reason? I know those stories, so if I see people that shouldn't be "mixing" I can immediately identify that. The downside is that not every member of my team is familiar with these stories, so what's obvious to me may invite a variety of questions (that and the fact that my user database ends up being overwhelmingly Japanese names rendered in Romaji ;) ).

So what can we do better? I think there's a way that we can make data that is self-referential, less niche-specific, and more easily relatable to a broader audience. I think Fake Name Generator can help with that, but it also requires a bit of pruning from time to time to make sure that the richness of the data doesn't itself cause problems or allow bugs to "hide in plain sight". To that end, meaningful personas that the whole team can understand, vote on, and share would be a major plus.
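One way I could see us moving in that direction is keeping a small, seeded persona generator in the test repo, so everyone gets the same recognizable cast without needing to know my favorite manga. A minimal sketch in Python follows; the fields and the CSV output are assumptions about what an account import might want, not what Socialtext's import actually consumes.

```python
# Sketch: a small, shared persona registry the whole team can read, vote on,
# and extend. Names and fields are invented for illustration.
import csv
import random

FIRST = ["Avery", "Jordan", "Priya", "Mateo", "Yuki", "Sade", "Lars", "Noor"]
LAST = ["Okafor", "Tanaka", "Silva", "Novak", "Haddad", "Kim", "Mbeki", "Ruiz"]
ROLES = ["author", "reviewer", "admin", "read-only guest"]

def build_personas(count, seed=42):
    rng = random.Random(seed)          # seeded so every run yields the same people
    personas = []
    for i in range(count):
        first, last = rng.choice(FIRST), rng.choice(LAST)
        personas.append({
            "username": f"{first.lower()}.{last.lower()}{i}",
            "display_name": f"{first} {last}",
            "role": rng.choice(ROLES),
            "locale": rng.choice(["en_US", "ja_JP", "de_DE", "fr_CA"]),
        })
    return personas

def write_personas(path="personas.csv", count=50):
    personas = build_personas(count)
    with open(path, "w", newline="") as out:
        writer = csv.DictWriter(out, fieldnames=personas[0].keys())
        writer.writeheader()
        writer.writerows(personas)

if __name__ == "__main__":
    write_personas()
```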


Decomposing Composers: a #30DaysOfTesting Testability Entry

Wow, it's almost the weekend and then next week will be STP-Con. More to the point, by the close of Sunday, I will be out of days in March. Ugh, why did I wait so long to get to these?

Frequent readers, please don't chime in. I know that you know exactly why ;).

Anyway, best get a move on, these things aren't going to write themselves, after all. More "30 Days of Testability".

Decomposability is an important part of testability. Complete our circuit breakers testing exercise over on The Club!

For the full description of this, head over to the Circuit Breakers exercise over on the Ministry of Testing's site. In it, there is a recommendation to read two additional resources; the general ideas behind them are these:

The general idea behind a circuit breaker is that new code can be introduced and tested without removing older code. Think of it as a parallel circuit with a trigger. We want to use the new code, but what if something goes wrong? Do we just want to accept the failure or do we want to be able to revert to the older code in that instance? If we want to use the older code, then that's where the circuit breaker comes into play. If it's thrown, the older code is run instead of the newer code.

The idea behind the science experiment is that there is an A/B test performed. The new code is run, old code is run and the results are compared. If all goes well, the old code will be marked for deletion or review for being removed. If there are any issues, then the new code will be flagged and the programmers can review it.

Choose an area of an application of your choice to decompose. Identify where you might add circuit breakers to this system. Share your reasoning on the Club.

We have been in the midst of a Responsive UI redesign for the past few months. To that end, there is a lot of new feature code that controls how certain elements are going to be displayed. In the event that the code in question isn't present, those elements (we call them widgets in our product parlance) will likely not display correctly. For this purpose, it would be helpful to have a way to see whether a widget being applied will appear correctly in the legacy vs. responsive UI. Sure, we could load each widget and see, but wouldn't it make more sense to create a circuit breaker that detects that the widget in question doesn't have the code block necessary for it to appear in the responsive UI and thus alerts the user (or the test) that that particular widget won't be usable yet?

So why would this be helpful? Basically, it comes down to what we want to spend time on in a given sprint, without having to retool everything so that tests that pass with one widget fail with another simply because the code for the responsive UI hasn't been implemented yet. Since we have dozens of widgets, we could either make an A/B comparison ("hey, I see that this widget you are looking to load doesn't have the block of code that indicates it will work in Responsive UI, let's flag that. It's not a failure but it's something we want to have an indicator that work is still required for.") or put in a circuit breaker ("Whoa! This widget doesn't have the code for responsive, so load the legacy widget.").
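As a toy illustration of that second option (invented names only, this is nothing like our actual rendering code), the breaker really is just "prefer the new path, fall back to legacy, and leave a trail a test can assert on":

```python
# Sketch of the "circuit breaker" idea for widget rendering: try the
# responsive renderer, and if the widget hasn't been ported yet (or the
# render blows up), fall back to the legacy renderer and record that fact.
def render_widget(widget, responsive_renderers, legacy_renderers, log):
    responsive = responsive_renderers.get(widget)
    if responsive is None:
        log.append(f"{widget}: no responsive implementation yet, using legacy")
        return legacy_renderers[widget]()
    try:
        return responsive()
    except Exception as err:          # breaker trips: fall back, don't fail the page
        log.append(f"{widget}: responsive render failed ({err}), using legacy")
        return legacy_renderers[widget]()

if __name__ == "__main__":
    log = []
    legacy = {"activity_feed": lambda: "<div>legacy feed</div>",
              "profile_card": lambda: "<div>legacy card</div>"}
    responsive = {"profile_card": lambda: "<div class='resp'>card</div>"}
    for name in legacy:
        render_widget(name, responsive, legacy, log)
    print("\n".join(log))   # shows which widgets still need responsive work
```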

Both are interesting approaches. I shall play with these some more going forward :).

Thursday, March 28, 2019

Inclusive Meta Paradox Frameworks: A Little Shameless Self Promotion

I realize I'm terrible at promoting myself and the things that I'm doing. Having said that, I do want to encourage everyone to see what I'm up to and with that, I'm sharing a podcast I recorded with Mark Tomlinson for STPRadio.

Listen to "STPCON Spring 2019 Michael Larsen on Inclusive Meta Paradox Frameworks" on Spreaker.

You know the old saying "When the going gets weird, the weird turn pro"? Well, if you don't, you do now :). Seriously, I love this title. Thank you, Mark, this is great.

Also, for those of you who are intimately familiar with my editing style on "The Testing Show", you may think that I am always smooth and flawless in my delivery, without any wasted breaths. Yep, it's true, no one on The Testing Show breathes... kidding, but now that I've planted that little seed in your head, I'll bet the next time you listen to an episode you'll be subconsciously dwelling on that ;). My point is, Mark keeps it real and whatever was said is there as it was said, in real time, so if you are curious as to how I really sound when I'm interviewed, here's your chance.

In this podcast, I talk about my workshop around "building a framework from scratch" (and yes after I finish this presentation I am going to start unpacking it and posting it here) as well as my talk on Accessibility and Inclusive Design and how they can be used to help Future Proof software.

If you will be at STPCon and you will be in my presentations, here's a taste of what to expect. If not, well, you get that anyway just by listening. Have fun and if you like the podcast, tell a friend about it, please.

Wednesday, March 27, 2019

Read Between the Lines: a #30DaysOfTesting Testability Entry

I feel like I'm on a roll here. Let's see if we can keep this 30 Days of Testing going :).


Get access to your source control and find the active branches for your application. Has anything changed that you didn’t consider?

Our software is hosted on our own GitLab server, so there are a variety of branches that we test with. We have been focusing on specific Sprint branches, which makes it easy to see what was introduced in any given sprint. Since we have been focusing on a Responsive redesign of our product, there has been a significant push in changing the way that components are displayed on a variety of devices and user agents. Without getting into too many details, yes, there are a variety of updates that, looking at the actual code commits, are in areas that I am testing but include changes affecting areas I hadn't specifically considered looking at.

The cool thing here is that I can see what the specific changes are and what components the changes will affect. With this knowledge, I can see if there are avenues I can use to do additional testing. Also, when I notice that some areas have been changed and IDs have been modified or added, it gives me additional areas to consider and see if there are similar areas that could be modified. Sorry to say that I can't really go into any additional depth because, as the usual sly comment goes, "I could tell you, but then I'd have to kill you!" No, that's not true, but really, if I told you what changes are being made I could certainly endanger my employment and, yeah, I'm not doing that.

Walking in Your Footsteps: a #30DaysOfTesting Testability Entry

OK, this gets me to a third of the way through the challenge. Fortunately, this is a relatively easy entry as I get to talk about other people doing great stuff and I'm glad to oblige. With that, let's keep this 30 Days of Testability Party going, shall we?

Follow and share three people on LinkedIn/Twitter who regularly talk about testability.

Sweet, I get to brag up some people. Always fun.

1. Alan Richardson (Alan's LinkedIn, Twitter: @EvilTester)

If you read my previous entry about sharing a video, this name should be familiar. He also has a pretty extensive blog and a lot of learning resources via his own books and via YouTube.

2. Maria Kedemo (Maria's LinkedIn, Twitter: @mariakedemo)

Maria covers a lot of ground and has excellent content that she frequently shares. As to the topic at hand, check out her "Testability Awakens" article she wrote for Testing Trapeze.

3. Jim Hazen (Jim's LinkedIn, Twitter: @JimHazen4u)

Jim has a book called "Before The Code: First Steps to Automation in Testing" that's worth a look. I highly suggest you give it a read.



That Voodoo That You Do: a #30DaysOfTesting Testability Entry

Hopefully, you are having some fun with me as I keep this going. This past week was a bit much leading into my daughter performing in High School Musical (her last high school drama production), so there was a lot of activity around that. As a result, I'm going to be a bit aggressive with the remaining days of March so I can rightly finish the 30 Days of Testability challenge (or at least get as close to finishing on time as I possibly can).

With that...

In your next story kick off session ask, ‘how are we going to test this?’ Share the test ideas and techniques that are suggested.

This was a fun experiment in that I did get to do this for a specific feature that was getting a Responsive update. There were a variety of features that hadn't been looked at as thoroughly in recent efforts (even before the responsive updates) so I got to ask this question quite a bit. The good news is my Engineering team is totally cool with me asking these kinds of questions. We did some brainstorming and I made some suggestions of ways we could better examine how the application is working.

- Some ideas revolve around adding calls to the API so that I can query and set values without having to fire up a browser and look at a bunch of things on the screen (see the sketch after this list).

- Some ideas revolve around actual configuration enhancements to the application so that I can tweak various things while I'm setting up a test environment.

- Some ideas revolve around making sure that elements are named well (whenever possible) and that duplication is minimal. Additionally, we should do anything we can to avoid dynamically generating elements. If that's not possible, let's at least make sure that we have a mechanism in place to easily discover those values and utilize them.
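To picture the first of those ideas, here's a minimal sketch of querying and setting a value over the API instead of driving a browser. The endpoint paths, account id, and token are hypothetical, purely for illustration; our real API doesn't look like this.

```python
# Sketch: flip and verify an application setting over HTTP instead of
# clicking through the UI. Endpoints and token are hypothetical.
import requests

BASE = "https://test-appliance.example.com/api"
HEADERS = {"Authorization": "Bearer <test-token>"}

def get_setting(account_id, key):
    resp = requests.get(f"{BASE}/accounts/{account_id}/settings/{key}",
                        headers=HEADERS, timeout=10)
    resp.raise_for_status()
    return resp.json()["value"]

def set_setting(account_id, key, value):
    resp = requests.put(f"{BASE}/accounts/{account_id}/settings/{key}",
                        json={"value": value}, headers=HEADERS, timeout=10)
    resp.raise_for_status()

# e.g. flip a flag before a test run, then confirm it took:
# set_setting(42, "responsive_ui_enabled", True)
# assert get_setting(42, "responsive_ui_enabled") is True
```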

The biggest benefit I find with these conversations is that the development team and I can get on the same page much more quickly. In addition, it also allows me to see a bit of what's going on under the hood and better understand the app a layer at a time. That's a good outcome for everyone, if you ask me :).

A Touch of Evil: a #30DaysOfTesting Testability Entry

Do any of you have any idea how long I have waited to be able to use the ballad from Judas Priest's "Painkiller" in a blog title? Way too long, that's how long ;). With much thanks to Alan Richardson (i.e. @EvilTester) for so very often having just what I need right when I need it.


Share a video about testability with your testing peers at your company.



What is Testability vs Automatability? How to improve your Software Testing.
There are some very cool concepts in this video and hey, there's a supporting blog post to go with it. Yay Alan!!!

So what are the takeaways here?

Just because you can automate something doesn't necessarily mean it's testable. Automatability means that something else can interact with it. Just because that interaction is possible doesn't necessarily mean that it's going to be a good experience for a user.

Alan's TL;DR drives this point home very effectively: "Testability is for humans. Automatability (Automatizability) is for applications."

Testability can add a number of options to a program and can help to make a system more enjoyable for a user. It may help with making an application more automatizable (great word, Alan, I'm adding that one to my lexicon) but it doesn't guarantee that it will.

In any event, the video and accompanying blog post are excellent. Check them out :).

Is it in the Playbook?: a #30DaysOfTesting Testability Entry

Here's another entry for the "30 Days of Testing" Testability challenge. Let's keep this ball rolling.

Find out if your application has an operational manual or runbook? How can you test this?

Put simply, yes, it does. More to the point, I've been one of a few people who have actually updated this documentation with new feature information as well as release notes. To that end, I absolutely have familiarity with what goes in it, how to put content in there, and how to look for details about how our software works and what options are available to me.

Having said that, our software has a fairly high level of complexity and thus our documentation needs to reflect that. While I will not say that I look at every single help page, I do look at it from a macro level as I'm the one who builds those help files.

What I do try to do is look at the top level details and traverse them the way I imagine that an everyday user of our product might. What could this tell me?

First and foremost, it tells me what we are telling our customers about how they should be doing something. That's an important avenue to look at and consider while I am testing, because that's how we intend our customers to complete their workflows. How often do I do this and determine that I have a different or better way of performing a task? Surprisingly often, as I have come to see. Of course, part of that is the fact that I know the administrator interface and many times using that is *way* more effective than performing the in-app methods. However, since our customers do not have access to that level of interaction (well, most don't), I need to remind myself to either go with the published methods or make a case that, maybe, our backdoor methods need to be considered as a way to let our customers do things, too.

To borrow from Red and Overly Sarcastic Productions... "So... yeah!"

Frame By Frame: A #30DaysOfTesting Testability Entry

I'm going to have to get aggressive if I'm going to make the end-of-month deadline. Therefore, I'm going to try to be more specific with these posts and discuss them in a way that will allow me to be more targeted and to post more entries. It's becoming a matter of pride now! Anyway, more "Thirty Days of Testability".

Explore the output of your application's analytics. How can they help guide your testing?

I'll take a two-pronged approach with this. I'll talk a bit about my company's product and a bit about what I use on my own TESTHEAD blog.

With my company, we have two applications we use and monitor for analytics. One is more for overall HTTP traffic and demographics (such as what pages get hit the most, what browsers are used the most, what times of day people most use the application, and how many parallel connections we are maintaining at any given time). That helps me define what I should prioritize in my testing as well as what level of load and other parameters I might want to consider.

The second tool we use is based on feature analytics, as in "what features in our product are our customers actually using?" We've had a few times in the past where we had a request to get something implemented and that implementation was everything to getting the deal. That often meant we would maintain and keep a feature running which was important for a small set of users but meant a big part of our business. Sometimes, though, an organization would demand something, only for us to discover later that their adoption rate of that feature was low to non-existent. That often meant there would be follow-up discussions. Those discussions would help us decide whether the feature was worth keeping. Perhaps we weren't making a compelling enough story for why it would be important for the organization wanting to use it. Alternately, we also often decided that the feature had such low adoption that turning it off would have little overall effect on the use of the product. We would then gracefully deprecate that feature.

In short, getting familiar with your product's analytics can tell you a lot about what is being used but also what isn't.

Friday, March 15, 2019

I'm Gonna' Get Close To You - #30DaysOfTesting Testability

Since I don't want to put too many posts in one place, I'll use my catch-up time to mention some other things going on in the TESTHEAD world while I do this. I don't want to be too gratuitous and self-promoting. Also, thanks to Queensryche for having a song all about monitoring to save my song title titles (say that ten times fast ;) ). OK, their version is about stalking but why quibble ;).

First off, there's a new episode of The Testing Show up over at Qualitest and on Apple Podcasts. This month Matt and I are talking with Lisa Crispin of mabl and Jessica Ingrassellino of SalesForce.org about Advanced Agile and DevOps and how those things are both similar and different. I think we did a good interview this month. Well, I almost always think we do good interviews but I especially enjoyed doing this one so I hope you will go and give it a listen.

Anyway, on with the "Thirty Days of Testability". As you might notice, today is the fifteenth. This is entry five.  Catchup in full effect. You have been warned ;).

What monitoring system is used for your application? Do the alerts it has configured reflect what you test?

Again, we are going to be considering Socialtext as it is currently implemented as a standalone product, because that's all I can really talk about. Before this week, I had only a partial understanding of what we actually do for this, with some holes in that knowledge. I'm fortunate in that I have a couple of great Ops people associated with our team. I should also mention that Socialtext can be deployed in two ways: a hosted SaaS option and a local installation. We leave the monitoring of local installations to the small percentage of our customers who prefer that route. The majority of our customers use our SaaS option, so we host their servers. To that end, we use the following (I'm pretty sure I'm not spilling any trade secrets here, so this should be OK. If this post changes dramatically between when I post it and tomorrow, well, I'll have learned differently ;) ). Anyway:

For monitoring the systems themselves (CPU, disk space, HTTP, HTTPS), we use a tool called Nagios.

For monitoring site uptime, HTTPS lookups, and time to respond, we use an external app called Alertra.

In addition to those two tools, we also have a variety of hand-rolled scripts that let us scan all of our servers, looking at specific aspects of each Socialtext instance and its services to see if anything needs attention. Examples include confirming the IP address, confirming that the hostname as viewed by the public is accessible, verifying that certain key services are running (replication, cron, mail, ntp, our scheduler, our search implementation), and checking what version each particular server is running.
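
I can't share the real scripts, but the general shape is nothing exotic. The sketch below (hostnames and port assignments are placeholders, not our real servers, and the real checks look at Socialtext-specific services) shows the basic idea of sweeping a list of servers and flagging anything that isn't answering:

    # Hypothetical sketch of a fleet sweep: check that key ports answer on each host.
    # Host names and port assignments below are placeholders, not real servers.
    import socket

    HOSTS = ["app01.example.com", "app02.example.com"]
    CHECKS = {"http": 80, "https": 443, "search": 9200, "mail": 25}

    def port_open(host, port, timeout=3):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    if __name__ == "__main__":
        for host in HOSTS:
            down = [name for name, port in CHECKS.items() if not port_open(host, port)]
            status = "OK" if not down else "NEEDS ATTENTION: " + ", ".join(down)
            print(f"{host}: {status}")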

The second part of the question deserves a legitimate answer, and that answer is both "yes" and "no". Yes, in that some of the alerts map to what we test, but no, in that there are a lot of areas we don't actively test as consistently as we should. The chat with our Ops team was definitely enlightening and has given me some ideas of what I can do to improve on that front. What are those things? Well, other than verifying that we are actively doing things that affect and trigger those alerts, I will have to ask that you respect the fact that I am now veering into trade secret territory, and I kinda' like the idea of keeping my job ;).

Who You Gonna' Call? #30DaysOfTesting Testability

Wow, time flies when you are doing actual work and trying to get your talk and workshop done and rehearsed for STPCon (seriously, if you are coming to STPCon in Northern California the first week of April, come join my talk and workshop, or if I'm not your cup of tea, at least say "hi" ;) ).

Anyway, on with the "Thirty Days of Testability". As you might notice, today is the fifteenth. This is entry four. Yeah, I'm going to be playing catch-up. You have been warned ;).


Do you know what your top three customer impacting issues are? How could you find out?

This is surprisingly easy for my immediate team. It's less so for my extended team. I have no idea who is coming in at what point here, so I'll say this for those who are new.


Currently, I work for Socialtext. Socialtext used to be a standalone product, and for a number of customers, it still is. However, back in 2012, Socialtext was acquired by a company called PeopleFluent. PeopleFluent specializes in HR tools, as their name might indicate, and is a much larger company by comparison (Socialtext as a standalone group is all of ten people). Last year, PeopleFluent was acquired by Learning Technologies Group (LTG), headquartered in the UK with offices all over the world. Thus, when I have to think about my top three customer impacts, I have to ask clarifying questions. Are we talking about Socialtext? PeopleFluent? LTG? Interestingly enough, since Socialtext is the base platform that many of the PeopleFluent and LTG products run on and interact with, it's entirely possible that Socialtext will not even enter into the top issues of anyone outside of Socialtext, while at other times a Socialtext issue can be the number one issue for everyone. Still with me? Excellent :).

So to keep things simple, I'll focus on Socialtext standalone and how I would determine what the biggest issues are for us. The simple answer is that I can reach out to our secret agent in the field... OK, it's nowhere near that cool. We don't really have a secret agent, but we do have a great customer engagement engineer, and frankly, a lot of the time that's just as good. I can count on one hand the number of times I have announced an update on our staging server (read: our personal production server) and not heard a reply from this person of "hey, what's new on staging?" They make it their responsibility to be current on every single feature and every option available in the product. They also make a lot of sample accounts and customizations to push the edges of what the product can actually do. If there is any question as to what is a pain point or an issue with a customer, any customer, they are my first point of contact. Sure, we have a CRM and a bug database, but the majority of the time, if I really want to see what is happening and what's really important, I know who I'm going to call... or bring up in a chat message. I mean, come on, this is 2019 after all ;).

Thursday, March 7, 2019

Sponsoring a "New" Book? #30DaysOfTesting Testability

A couple of observations from when you start looking at book titles related to niche topics in software testing: first, they can be hard to find; second, they can be mighty pricey. "What is cost when it comes to learning?" True, but there are a variety of factors one has to consider, such as, "OK, I really don't have close to $100 to drop on a specialty book at this exact moment."

Having said that, occasionally one finds interesting things in interesting places, and sometimes those things are still in the process of percolating. It is with one such guide that I have chosen to make my mark for "Day Three" of this challenge... yes, I'm writing about it on "Day Seven", so let's not quibble here.


Begin reading a book related to testability and share your learnings by Day 30.

To this end, I've decided to help fund a book in the process of being written (or at least to be written for Leanpub distribution). Said book?



Team Guide to Software Testability


Learn practical insights on how testability can help bring team members together to observe, understand and control customer needs, ensuring fitness and predictability of deliveries.

Now, I will say up front that I will probably not be able to provide a 100% complete read of this book, because the book is literally still being written as I purchase it. However, I will be more than happy to review what has been written so far and post my findings and takeaways by the end of March. Who knows, perhaps more of the book will be delivered by that time and I'll be able to offer some more details when that happens.
This is what the book says it will ultimately include:
Table of Contents
  • Foreword
  • Introduction
  • Why is testability important
  • What does hard to test feel like
  • What does testable feel like
  • What leads to testability being neglected
  • What is covered in this book
  • How to use this book
  • Feedback and suggestions
  • 1. Use a testability inception deck to visualize current team and system state and create an environment for improvement
  • 2. Adopt testability mapping to expose common smells of hard to test architectures
    • 2.1 Gathering data on poor architectural testability to detect systemic problems
    • 2.2 Low testability architectures contribute to slow feedback and deficient decision making
    • 2.3 Identify the symptoms of poor architectural testability
    • 2.4 Exercise: Measure the impact of testing smells on your architectural testability
    • 2.5 Understand how testable architecture can impact your team’s testing efforts
    • 2.6 Summary
  • 3. Use risk and incident data to remedy architectural design problems which inhibit feedback from testing
  • 4. Adopt ephemeral development environments to diversify testing techniques early and create shorter feedback loops
  • 5. Utilize events and metrics to model risks in production for continuous improvement of your test strategy
  • 6. Adopt incident postmortems to maintain a testability focus as part of your team’s continuous improvement strategy
  • Terminology
  • References
  • About the authors
  • Notes
It seems like a good place to start, and I, for one, like knowing that I'm helping to fund progress on books I'd like to see written. Win-win!!!