Thursday, September 1, 2016

This Week on #TheTestingShow: Real Work vs Bureaucratic Silliness

I have come to realize that I am terrible in the shameless self-promotion department. I have been actively involved for the past several months in producing and packaging "The Testing Show", but I've not been very good about talking about it here. Usually, I feel like I'm repeating myself, or rehashing what I already talked about on the episode in question.

When Matt takes the show on the road, however, I do not have that same issue. This most recent episode of The Testing Show, titled "Real Work vs. Bureaucratic Silliness", was recorded at Agile 2016 in Atlanta. As such, Matt was the only regular panelist there, but he put together a group consisting of Emma Armstrong, Dan Ashby, Claire Moss, and Tim Ottinger. For starters, this episode has the best title of any podcast we've yet produced for The Testing Show (I wish I had come up with it, but it's Tim's). I think it's safe to say all of us have our share of work we like to do, or at least work that is challenging and fulfilling enough that we look forward to doing it. We also all have our share of "busywork" that is inflicted on us for reasons we can't quite comprehend. Tim puts these two forces on a continuum with, you guessed it, Real Work on one side and Bureaucratic Silliness on the other.

I think there's an additional distinction that needs to be made here, and that's recognizing when some task, behavior, or checkpoint actually is an occupational necessity, and when it is truly Bureaucratic Silliness (BS). In my everyday world, let's just say this tends to grade on a curve. Most of the time, BS comes about because of a legitimate issue at one point in time. Case in point: some years ago, in spite of an extensive automation suite and a robust Continuous Integration and active deployment policy, we noticed some odd things slipping through and showing up as bugs. Many of these were in areas we had not anticipated, and some of them just plain required a human being to look them over. Thus, we developed a release burndown, which is fairly open-ended; it has a few notes about areas we should consider looking at, but doesn't go into great detail about what specifically to do. The burndown is non-negotiable, we do it each release, but what we cover is flexible. We realized some time back that it didn't make sense to check certain areas over and over again, especially if no disruption had occurred around those features. If we have automated tests that already exercise an area, that likewise makes it a candidate to skip the eyeballing every time. If a feature touches a lot of areas, or we have done a wholesale update of key libraries, then yes, the burndown becomes more extensive and eyeballing certain areas matters more, but not every release, every single time.
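
To make that triage a little more concrete, here's a small, hypothetical Python sketch of the decision we apply to each area. The names and structure are mine for illustration only; our real burndown is a shared document and human judgment, not a script.

# Hypothetical sketch of release burndown triage. All names here (Area,
# needs_manual_check, the sample areas) are made up for illustration.

from dataclasses import dataclass

@dataclass
class Area:
    name: str
    touched_this_release: bool    # did any change land near this feature?
    has_automated_coverage: bool  # do existing automated tests exercise it?

def needs_manual_check(area: Area, wholesale_library_update: bool) -> bool:
    """Decide whether an area should be eyeballed by a human this release."""
    if wholesale_library_update:
        # A broad dependency bump can disturb anything, so widen the net.
        return True
    if not area.touched_this_release and area.has_automated_coverage:
        # Undisturbed and already exercised by automation: skip the eyeballing.
        return False
    return True

areas = [
    Area("billing", touched_this_release=True, has_automated_coverage=True),
    Area("reports", touched_this_release=False, has_automated_coverage=True),
    Area("admin", touched_this_release=False, has_automated_coverage=False),
]

burndown = [a.name for a in areas if needs_manual_check(a, wholesale_library_update=False)]
print(burndown)  # ['billing', 'admin'] -- 'reports' drops off until something disturbs it

The point of the sketch is simply that the list shrinks or grows based on what actually got disturbed, rather than staying fixed forever.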

Usually, when BS creeps into the system, it's because of fear. Fear of being caught in a compromising situation again. Fear of being yelled at. Fear of messing something up. We cover our butts. It's natural. Doing it once or twice, or until the area settles down and we figure out the parameters of possible issues, makes sense. But when we forget to revisit what we are doing, and certain tasks just become enshrined, we miss out on truly interesting areas while we slog through BS to the point that we would rather pull our intestines out with a fork than spend one more minute doing this particular task (yes, that's a Weird Al Yankovic reference ;) ).

Anyway, were I part of the panel, that's part of what I might have brought up for discussion. If you want to hear what the group actually talked about, well, stop reading here and go listen to The Testing Show :).