Friday, January 31, 2014

The State of Software Testing: A Follow-up and My Commentary

For those who may remember, back in December 2013, I encouraged as many testers as possible to take part in the "State of Testing" survey, sponsored by Tea Time With Testers and collated and curated by Joel Montvelisky. Well, that survey has been completed, and for those interested in seeing all of the results, you can download it from here.

This is, of course, a self-selecting group, so the views expressed may or may not be indicative of the broader software testing world, but it does represent the views of those who responded, including me (ETA: or at least it intends to; see James' comment below). Now that the survey is public, I'm going to share my answers (with some qualifying statements where relevant) and examine how they map to the overall report.

First, some caveats. Any survey that attempts to boil things down into data points will "lose something in the rinse cycle". There were a lot of questions where "well, sometimes" and "hmmm, not so often, but yes, I do that from time to time" colored the answers. My clear takeaway from all of this is that I am not "just a software tester". I do a lot of additional things as well. I program (for some definition of programming). I write. I lead. I maintain and build infrastructure. I talk with and advise customers. I build software. I deploy releases. I hack. I do a little marketing here and there. I sell, sometimes. I play detective, journalist, and anthropologist. Much of this will not show up in this survey, and my guess is that a lot of you who answered do much the same, and some of that may not be reflected either.

First of all, where do I rate in the hierarchy? For years, I was a lone gun, so had you asked me in 2012, I would have said Tester, Test Manager and Test Architect all in one. Today, I am part of a team of testers (most of us with two- and three-decade-long track records). I'm definitely a senior, but I'm at peer level with just about everyone on my team. From time to time we bring in interns and junior team members whom I get to mentor, but much of the time, it's just us. While we have our different approaches and attitudes, I'm confident in saying we balance each other well. Sometimes I'm the lead, sometimes I'm led. For us, it works.

Our test team, at this moment, has five people. One is our Test Director, three are Senior level Software Testers (me included), and one is a contractor whose sole responsibility is writing test automation. Our Test Director's title is mostly ceremonial; the four of us all work together to divvy up stories and utilize our expertise, as well as share that expertise with others to broaden our abilities. We do have "personal preference" silos. Our director likes doing a lot of the automation and rapid response stuff. One of our testers has a special knack for mobile testing. Another tester has a great feel for the security side of things. I tend to be the first in line for the architectural and back end stories. During crunch time, it's not uncommon to see the story queue align with our personal preference silos; hey, we know what we are good at ;). However, we do take the time to cross train, and all of us are capable, and becoming more so each day, of venturing into other avenues.

Our Engineering team, including us testers, is fourteen people at this moment. That works out to roughly one tester for every two programmers, the richest tester-to-programmer ratio I've had to date in any organization I've worked at. Of course, for several years, I was the only tester in organizations with ten to fifteen programmers. This has helped us considerably: each software tester typically has two stories in play at any given time, possibly three if we count self-directed spikes for learning or infrastructure improvements.

Like most of the respondents, we have an Agile-like team (we bend the "rules" a little here and there, but as far as the core principles of the Agile Manifesto, I think we do pretty well). We have both a co-located and a distributed presence, so being able to communicate quickly is an imperative. We do a lot with shared screens, softphones on our computers, and a set of IRC channels that sees a lot of traffic. We use our own product as our primary platform for doing as much of our business as possible. If it can't be done in Socialtext, we either find a way to do it there, or seriously consider not doing it at all. Our IRC server is the key exception. That's so we have a means to communicate and stay productive if the worst case scenario happens and we lose our work environment (hey, it pays to be paranoid ;) ).

Each of us on the team wears different hats when we approach our software testing jobs. We are involved very early with requirements gathering (we practice the Three Amigos approach to story design and development). We all take turns, along with the rest of the engineering team, with core company functions. Each tester takes a week as build master and deployment specialist for our operations environment. Each of us manages a pool of development servers. Each of us is versed in the inner workings of Jenkins, our Continuous Integration server, and each of us, to varying degrees, writes or enhances test automation, utilizes testing tools from a variety of initiatives, and does our best to be a "jack of all trades" while still holding on to our own "personal preference" silos.

We use a broad variety of testing techniques in our team. Exploratory Testing is championed and supported. Our developers use TDD in their design approach. They occasionally perform pair programming, though not exclusively. I actively encourage pair testing, and frequently coordinate with the programmers to work alongside them. Usually this is during the active testing phase, but at times during the programming stage, where I act as navigator and ask a lot of "what if" questions. In short, we are encouraged to use our faculties to the best of our abilities, and to provide essential artifacts, but not become slaves to the process, which I greatly appreciate.

We do two stand-up meetings each day. The first is the Engineering team stand-up, and then we have a dedicated Q.A. team stand-up, led by our Test Director, or one of us if the Test Director is not available. In these meetings, we gauge our own progress as a QA organization, and we look for ways we can improve and up our collective game. Oftentimes that means cross-training, and also working as a team on individual time-sensitive and critical stories.

Our test documentation lives inside the stories we create. Each acceptance criterion is spelled out in the story; our programmers check off when they have finished coding to it, we check off when we have finished testing it, and we use a Notes field on each item to describe findings or add additional details. The goal is to have the ability to show what we completed, communicate our findings, and be able to (in as few steps as possible) provide the insight and context necessary for the programmers to fix issues.
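To make that concrete, here's a rough sketch of the shape this takes. This is purely illustrative Python, not Socialtext's actual data model; the class and field names are my own stand-ins for the story page, the per-criterion checkoffs, and the Notes field:

```python
# Illustrative model (NOT Socialtext's actual schema) of acceptance
# criteria living inside a story, with per-item dev/QA checkoffs and notes.
from dataclasses import dataclass, field


@dataclass
class Criterion:
    text: str
    dev_done: bool = False   # programmer checks off when coded
    qa_done: bool = False    # tester checks off when tested
    notes: str = ""          # findings / context to help fix issues


@dataclass
class Story:
    title: str
    criteria: list = field(default_factory=list)

    def ready_to_ship(self) -> bool:
        # A story is done only when every criterion is coded AND tested.
        return all(c.dev_done and c.qa_done for c in self.criteria)


story = Story("Rename a wiki page", [
    Criterion("Renaming updates all inbound links"),
    Criterion("Old title redirects to new title"),
])
story.criteria[0].dev_done = True
story.criteria[0].qa_done = True
story.criteria[0].notes = "Verified links across workspaces; no broken anchors."
print(story.ready_to_ship())  # False: the second criterion is still open
```

The point of the structure is the last line: a glance at the story tells anyone whether work remains, and the notes travel with the exact criterion they describe.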

We have a vigorous and active automation suite. It's in many ways a home-brewed process. These automation tests run the gamut from unit tests to full workflow functional tests, and everything in between. Our product is actually used to write our automation, store our automation, and is called by our testing framework to run our automation. We get very meta in our automated tests, and it's been that way for many years. We don't expect every tester to be a programmer, but it certainly helps. At this point in time, all of our testers do some level of automation test creation and maintenance. As to the level of our automation, we have a mantra that a story is not finished unless it has both unit tests to cover the defined functionality and QA based automated tests to exercise the workflow. I will not say we have 100% automated coverage, but we have a high percentage of all of our workflows and features automated. 85-90% would be a reasonable guess.
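For readers who haven't seen the unit-versus-workflow distinction side by side, here's a toy sketch. This is not our framework (ours lives inside Socialtext itself); the in-memory `Wiki` class and `slugify` helper are invented for illustration, in plain pytest-style Python:

```python
# Toy sketch of the unit-test-to-workflow-test spectrum.
# "Wiki" is an invented in-memory stand-in, not a real product API.

class Wiki:
    """Minimal in-memory wiki: just enough to drive a workflow test."""
    def __init__(self):
        self.pages = {}

    def save(self, title, body):
        self.pages[title] = body

    def rename(self, old, new):
        self.pages[new] = self.pages.pop(old)


def slugify(title):
    """Unit-level target: one small pure function."""
    return title.strip().lower().replace(" ", "_")


def test_slugify_unit():
    # Unit test: exercises a single function in isolation.
    assert slugify("  Release Notes ") == "release_notes"


def test_rename_workflow():
    # Workflow test: drives a multi-step user scenario end to end.
    wiki = Wiki()
    wiki.save("Draft", "work in progress")
    wiki.rename("Draft", "Release Notes")
    assert "Draft" not in wiki.pages
    assert wiki.pages["Release Notes"] == "work in progress"


if __name__ == "__main__":
    test_slugify_unit()
    test_rename_workflow()
    print("all tests passed")
```

Our mantra maps onto this directly: the first style covers the defined functionality, the second exercises the workflow, and a story needs both before we call it finished.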

Again, we have one contractor whose sole responsibility is to create automated tests, and the rest of us augment those efforts. Most of our automation is aimed at a large regression test suite, and our automated tests are treated like source code, just as much as the actual program code. If Jenkins fails on an automated test QA has written, then the build fails, and the programmers need to fix the reason for the failure. If the test is seen as flaky, it's our responsibility as testers (and creators of the automated tests) to fix that flaky test. We also have a broad suite of tests that help us with exploratory testing. Their purpose is to bring us into interesting areas of the code after performing a variety of state changes, and to let us, as active testers, hop off and see what we can find. We often refer to these tests as our "QA Taxi Service".
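The "taxi" idea is simple enough to sketch in a few lines. Again, this is an illustrative mock-up, not our actual tooling; the `Workspace` class and the step names in the route are hypothetical:

```python
# Hypothetical "QA Taxi Service" sketch: automation drives the scripted
# state changes, then stops at a named checkpoint so the human tester
# can hop off and explore from there. All names here are invented.

class Workspace:
    def __init__(self):
        self.log = []  # record of state changes applied so far

    def apply(self, step):
        self.log.append(step)


def taxi_to(checkpoint, route):
    """Replay scripted state changes, stopping once the checkpoint
    step has been applied; the resulting state is handed to a human."""
    ws = Workspace()
    for step in route:
        ws.apply(step)
        if step == checkpoint:
            break
    return ws


route = ["create_workspace", "invite_users",
         "upload_attachments", "start_edit_conflict"]

# Ride the taxi partway: the last two setup steps have NOT run,
# leaving the tester at an interesting, reproducible starting state.
ws = taxi_to("upload_attachments", route)
print(ws.log)
```

The value is that the tedious setup is repeatable and fast, while everything after the drop-off point is live, human, exploratory testing.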

Our process of managing stories, bugs, feature requests, etc. is again unique to Socialtext. All of our reporting is performed within our product. We have created a custom Kanban board to run inside of Socialtext, and all artifacts for stories reside inside the Kanban board (which itself, of course, lives inside Socialtext). We have engineered it so that our stories can reference individual pages, charts, SocialCalc sheets (our own spreadsheet program that resides in Socialtext), pictures, videos, etc. We make the information available to all as quickly and efficiently as possible. We take what we learn and create how-to guides for everyone to use and share. These guides get updated regularly. Everyone on the team has a responsibility to "green" the how-to guides, and to make sure that everyone knows what has changed, what has been learned, and how to use that information to program better and test better.

So that's how my team looks compared to the survey. How about me?

I'm on the high end for experience as far as the survey participants go, but I'm middle of the road as far as my immediate team is concerned. Our Test Director has been a tester for three decades plus. The testers on our core in-office team are all roughly the same age and have roughly the same years of experience (two decades being the average among us, give or take a few years individually). Our test automation contractor has roughly a decade of experience. Though we occasionally get interns and other junior staff, for the most part we're a pretty seasoned team, and that's rather cool.

As to continuous learning, I use a variety of approaches to learn. Testing blogs, newsgroups, Twitter, various "social" groups that are formed with other people (Miagi-do, Weekend Testing, Meet-ups, etc.) all play into my approach, as well as active blogging of what I learn. My attitude is to learn by whatever means is available.

Looking into the future, I see that there are a lot of areas that I personally want to focus on. I want to get more involved in testing for security. Specifically, I want to get a better practical, nuts and bolts understanding of what that entails. I see a need to boost my performance chops. That means going beyond running a tool that simulates load. My guess is that I'll be going back and doing a lot of re-listening to PerfBytes in the coming weeks and months ;). Automation is perpetually "there", and while every year I say I'm going to get more involved, I finally have a team, and a scope of projects, where that's more than just wishful thinking. More than anything else, I want to see what I can do to find the bottlenecks in what I personally do, and figure out how to minimize them, if not completely eliminate them. I also want to explore ways that I can eliminate waste from the processes I already do, even if they are processes and methods that work pretty well.

As to job security, tomorrow is always subject to change (that whole "past performance is not an indicator of future results"), but I finally feel I'm at a place, and a level of involvement in the testing community, where my potential to find future jobs (should such a thing be necessary) is looking better now than at any other point in my career.

So there you go, the TESTHEAD "State of the Software Tester" for January 2014. Have a look at the report (posted here again for convenience ;) ) and ask yourself "where do I stand?". More important, ask yourself how you can strengthen your own "state", and after you figure that out, work on it. Then reach out and help someone else (or a lot of someone elses) get even better and go even farther than they dreamed possible.
