Thursday, October 27, 2016

Adventures in Tautology - What Makes a Tester a Tester?

The latest episode of The Testing Show has been released. Some interesting thoughts and comments have been spinning through my head since we recorded this episode. For a number of organizations, the "tester" as we have long identified them is going away. By that I mean the dedicated person who focuses on testing as a specific discipline, who gets into the guts of an application, who spends the bulk of their time actually testing the product, independent of any other responsibility. For other organizations, testers are a huge part of their infrastructure, as essential and irreplaceable as a plumber or an electrician. Why is there a disconnect? Is there a disconnect?

First, a little about my history. I can safely say I have worked for three companies that specifically needed dedicated testers. That's not to say the others didn't benefit from having testers, but I'm talking about how those three were physically structured and staffed, and what they made. Cisco was my first experience in the world of software development, but also hardware development and maintenance. That's where I saw firsthand the different definitions of what a tester was.

First, there were the testers on the Engineering side of the house, those of us who worked on IOS (Internetwork Operating System, not to be confused with the much later appearing iOS from Apple) and on the microcode that resided on system boards. These teams had dedicated testers who spent a lot of time working with early automation frameworks, or even just running commands via 'tip' to get access to the consoles of these devices. Quite a few of us were also involved with physically building up test labs, configuring the hardware, bringing in diagnostic tools, and running wide-ranging experiments. In short, we did things that the developers just plain did not have the time to do, at a level that provided a lot of feedback to further development and design, as well as to hunt down problems.

Aside from that, there was also the Manufacturing side of testing, which was a very different animal. For the first nine months that I worked with Cisco as a contractor, I was dotted-line associated with both Engineering and Manufacturing, and I sat in on meetings with both groups. It was here that I really saw the difference. Manufacturing testers were often better described as "rework specialists", in that they tested, but they also rerouted and resoldered boards so that they could be redeployed. I confess, I often felt out of my league with this group. They knew so much more about the bare-metal issues than I ever could, and they could identify the quality issues they had to address within seconds of looking at a particular board. I still smile when I think of going to talk to my friends Winnie, Jim, or Azhar, among several others, and their quick ability to look at something and say "oh, yeah, I can tell you what's up with that!" That was a skill, a craft, that honestly only a handful of the software engineers possessed, and it was inspiring. I would ultimately work on the Engineering side of the company, and I would often suggest that, if we really wanted to augment our test teams, we needed these people over in Manufacturing to join us. Ultimately, several of them did, and yes, our test teams benefited immensely from them being there.

The other two companies that required testers were Synaptics and Konami, for different reasons. Synaptics, because its products were a blending of software and hardware (and truth be told, I think I found more issues with product tensile strength or composite construction than I did with software). Konami, because of the factors that are so subjective when it comes to gameplay outside of "correct programming" (and the all-important "physics tests", which, frankly, I don't wish on anybody). Oh, and in my case, they wanted to release a singing game, so they needed someone who could test and sing really well. Truly one of the most serendipitous jobs of my life :).

As for all of the other companies I have worked for, I'd be hard pressed to say they "required" dedicated testers, but again, I think they were well served to have them. Over time, though, as systems moved on to regular builds, incremental development and design, smaller and more frequent releases, and quicker responses to customer feedback, I can see why it would be easy to say "testing is a role and not a full-time profession, and we can do the testing with a smaller team or even with a single person" (which many times was me). Often my testing role would be used, but I would be asked to do other things as well, such as provide second-tier support, perform training, work on automation frameworks or, often, just look at the maintenance tasks within an organization. Side note: if you have not heard the Freakonomics episode "In Praise of Maintenance", may I suggest you do so, as it has excellent commentary that is quite relevant to this discussion.

Since I left Cisco, I have generally worked for smaller organizations. Those organizations have at times been subsidiaries of bigger companies, but usually the group I work with is the entity I interact with, so for all practical purposes, they are my company. That means I typically work with teams of around a dozen developers, rarely more, and most of the time I've been the lone tester, or have worked with one or two others at most. Most of the time, we are not just testers doing just testing. We cover a variety of other jobs and needs, and frankly, in smaller companies, that's to be expected. It's not uncommon for a tester to also become a de facto systems administrator, or a triage support person, or a build master. We still test, of course, but we also encourage others to test. I'd say it's not so much that testers are going away, but that the role of testing is being seen as necessary in many places and not encompassed by a single person.

Ultimately, what we come to is that testing as a discipline and as a craft is an essential part of software development. I still hold to the truism that training in testing discipline and philosophy is valuable, and those who pursue it will be uniquely positioned to benefit from it. I'm also saying "don't be surprised if you find that your job responsibilities are not just testing, or that you may not ultimately be called a tester at the end of the day". At this point in time, my job title as defined by HR is "Senior Quality Assurance Engineer". Time agrees with the Senior part. Having to finagle and discover workarounds begrudgingly lets me consider myself an Engineer (I've never really been comfortable with that title, because it means something specific in other industries whose requirements I have no chance of meeting), and Quality Assurance is my bag, even if the words are a little tortured. In short, I work in ways that help to encourage quality. I'm on board with that. I'm also on board with the fact that I often bring many other skills to the table that organizations can use, and that those skills can be leveraged to provide value for my company. Many of those skills have been informed by testing, but are not in and of themselves "testing".

To come back to the beginning, I think all of us test and all of us are testers, but there is a benefit to having a group that pays attention to the testing discipline more directly than others might. I'm happy to be part of that group, but I also understand that that alone will not make me valuable to an organization. Being a good tester is important, but if your desire is to work for smaller companies, do not be surprised if you are asked, "what else have you got?"

Thursday, October 13, 2016

Start Making Sense with Sensemaking - a #TheTestingShow Follow-up

One of the primary reasons that my blog is not as frequently updated as in the past is that I have been putting time into producing The Testing Show. Granted, I could do a quick edit, post the audio and be done with it, but we as a team decided we wanted to aim for a show that would flow well, be on point, and also have a transcript of the conversation. At first, I farmed out the transcripts, but I realized that there was a fair amount of industry-specific material we would talk about that I would have to go back and correct or update, and then I'd have to put together the show notes as well, with references and markers. Realizing this, I decided it made sense to just focus on the transcript while I was making the audio edits, so I do that as well.

Translation: I spend a lot of time writing transcripts and that cuts into my blogging. My goal is to make sure that The Testing Show is as complete as possible, as on point as possible, and that the words you hear and read are coherent and stand up to repeated listenings. Also, since it's a primary activity that I do within our little testing community, I really need to do a better job highlighting it, so that's what this post is meant to do. Consider it a little shameless self-promotion with a couple of additional production insights, if you will ;).

Occasionally, I get a podcast that tests my abilities more than others, and this week's episode proved to be one of those. We try our best to get the best audio files we can, but sometimes, due to recording live at a conference, or trying to capture a trans-Atlantic call, we have to deal with audio that is not crystal clear. Often we get background noise that can't be isolated, at least not effectively. We sometimes get audio with varying levels between speakers (one is loud, the other is soft, and leveling them means boosting the quiet speaker's track, which amplifies its line noise as well). This time, it was the fact that the audio stream would just drop out mid-sentence, and we'd either have to repeat several times, or we'd lose words at random places. Because of that, this is a more compact show than normal, and that was by necessity. It was also a challenge to put together a transcript; I had to listen several times to make sure I was hearing what I thought I was hearing, and frankly, in some spots, I may still have gotten it wrong.
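For the curious, here's a minimal sketch of that leveling trade-off using the pydub library. This is not our actual production toolchain, and the file names are illustrative; it's just meant to show why raising a quiet speaker raises the noise floor right along with the voice.

    # Illustrative only: matching two speakers' loudness with pydub.
    # Boosting the quiet track raises its hiss along with the speech.
    from pydub import AudioSegment

    host = AudioSegment.from_file("host.wav")    # hypothetical file names
    guest = AudioSegment.from_file("guest.wav")

    # dBFS is the segment's average loudness relative to full scale.
    gain_needed = host.dBFS - guest.dBFS

    # apply_gain shifts everything in the track: voice AND noise floor.
    guest_leveled = guest.apply_gain(gain_needed)
    guest_leveled.export("guest_leveled.wav", format="wav")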

With that, I want to say that this was an interesting re-framing of the testing challenge. Dave Snowden is a philosopher, writer, and principal creator of the Cynefin framework. "Cynefin" is a Welsh word that means "haunt" or "abode". In other words, it's the idea that there are things surrounding you all the time that can give you clues as to what's going on, but you won't notice them unless you "live in" them. There's a lot more to the framework than that, and Dave Snowden talks quite a bit about what it is and how it's been applied to various disciplines. Anna Royzman also joined the call and discussed her involvement in using Cynefin, and what it might mean for software testers who want to apply the framework and approach to their testing. A caveat: this is a framework that has been used in a variety of places, from government to immigration to counter-intelligence to software development. Testing is a new frontier, so to speak, so much of this is still to be determined and very much in the "alpha" stage. Anyway, if you'd like to know more, please go have a listen.

Sunday, October 9, 2016

The Humans In the Machine - Talking Machine Learning

This weekend was one of the more interesting Weekend Testing Americas sessions I've hosted. Hurricane Matthew was making itself known on Saturday; people were dealing with getting in and out of a broad section of the Southeastern United States, as well as checking whether or not their homes were OK. Under those circumstances, I can understand that getting together to talk testing may not have been a high priority. We had several people ask to attend, but by the time it started, there were just two of us, Anna Royzman and myself. Anna and I decided "hey, we're here, Anna's never done a Weekend Testing session before, let's make the most of it", and so we did :).

Our topic this go-around was a chance to look at a new feature of LoseIt called SnapIt. The purpose of SnapIt is to take pictures of food items and, based on what the app thinks the picture is, select the food item in question and get a macronutrient breakdown. This is a new feature, so I anticipated that there might well be some gaps in the database, or that we might see some interesting tags appear. We were not disappointed. In many of the pictures, well-known food items were easy to identify (apples, bananas, etc.), and some were a little less so (a small pear variety with a darkish green skin was flagged as Guacamole, which isn't really too far a stretch, since I could see it interpreting the pear as a small avocado).

SnapIt struggled a little more with complex and packaged foods, but in those cases, if the item had a bar code, reading the bar code would usually deliver the information we needed, so SnapIt was less important in that setting. Still, it was interesting to see what it flagged things like granola bars or shelled walnuts as.
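LoseIt hasn't published how SnapIt works under the hood, but for anyone who hasn't played with image classification, here is a minimal sketch of the general technique, using a stock ImageNet-trained ResNet from torchvision as a stand-in (the model, labels, and thresholds here are my assumptions, not anything from the LoseIt team). Looking at the top few guesses and their confidences is a handy lens for testers, too: the pear-as-guacamole call above is exactly the kind of near-miss that shows up in a top-k list.

    # A sketch of generic image classification, NOT SnapIt's actual code.
    # Requires: pip install torch torchvision pillow
    from PIL import Image
    import torch
    from torchvision import models, transforms

    weights = models.ResNet50_Weights.DEFAULT
    model = models.resnet50(weights=weights)
    model.eval()

    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    def top_guesses(path, k=3):
        """Return the model's top-k (label, confidence) guesses for a photo."""
        batch = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        with torch.no_grad():
            probs = torch.softmax(model(batch), dim=1)[0]
        conf, idx = probs.topk(k)
        labels = weights.meta["categories"]
        return [(labels[i], round(c, 3)) for i, c in zip(idx.tolist(), conf.tolist())]

    # e.g. top_guesses("pear.jpg") might plausibly come back as
    # [("Granny Smith", 0.41), ("fig", 0.18), ("lemon", 0.09)]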

During the session, we did discover one interesting bug. On my iPhone, if a user takes two pictures, discards them, and tries to take a third picture, the camera button on the screen appears as a half circle; the bottom of the button is missing. Exit the camera and open it again, and the camera button appears complete. Shoot two pictures and throw them both out, and you will get the half button on the third try.
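To make the repro concrete, here is a sketch of how those steps could be scripted with Appium's Python client. Every identifier below (the accessibility IDs, the capabilities, the bundle ID) is a hypothetical placeholder for illustration; you'd need to inspect the real app to fill them in.

    # Hypothetical Appium sketch of the half-button repro; element IDs
    # and capabilities are placeholders, not LoseIt's real identifiers.
    from appium import webdriver

    caps = {
        "platformName": "iOS",
        "deviceName": "iPhone",            # placeholder device
        "bundleId": "com.example.loseit",  # placeholder bundle ID
        "automationName": "XCUITest",
    }
    driver = webdriver.Remote("http://localhost:4723/wd/hub", caps)

    driver.find_element_by_accessibility_id("SnapIt").click()
    for _ in range(2):
        driver.find_element_by_accessibility_id("Shutter").click()
        driver.find_element_by_accessibility_id("Discard").click()

    # On the third attempt the shutter button draws as a half circle;
    # grab a screenshot so a human (or an image diff) can confirm it,
    # since a rendering glitch won't show up as a missing element.
    driver.save_screenshot("third_attempt_shutter.png")
    driver.quit()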

Outside of the actual testing of SnapIt, we had a pretty good discussion of machine learning in general, and of the idea that many of the algorithms used for these processes are pretty good but can often have unintended consequences. The past few weeks, I've been listening to a number of podcasts that have featured Carina C. Zona and her talk "Consequences of an Insightful Algorithm" (talk and slides). She has appeared on both the Code Newbie podcast and the Ruby Rogues podcast, and both treatments made me want to explore this topic further. One idea from Carina's presentation that stood out to me: math and algorithms are objective conceptually, but their implementations hardly ever are, because it's people who create them, and we create algorithms with our prejudices, biases, and fallacies intact. In short, we do not see our algorithms for what they are, we see them for who we are (a paraphrase of Anais Nin, but you get the point, I hope ;) ).
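Here's a toy illustration of that point (my own example, not one from Carina's talk): train a perfectly "objective" classifier on data where one group is barely represented, and the headline accuracy looks great while that group is effectively invisible to the model.

    # Toy demo: class imbalance makes a model that ignores the minority
    # class look highly "accurate". All numbers are synthetic.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import recall_score

    rng = np.random.default_rng(42)
    X = rng.normal(size=(1000, 3))              # features with no real signal
    y = (rng.random(1000) < 0.05).astype(int)   # minority class: ~5% of labels

    clf = LogisticRegression().fit(X, y)
    print("accuracy:", clf.score(X, y))                         # ~0.95, looks great
    print("minority recall:", recall_score(y, clf.predict(X)))  # ~0.0, invisible

The math of logistic regression is exactly as objective as advertised; it's the skewed data we handed it that produces the skewed result, which is why accuracy alone can be a dangerously reassuring metric.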

I'd encourage anyone who wants a better understanding of the potential dangers of relying too heavily on machine learning, as well as the human aspects we need to bring to both coding and testing those algorithms, to check out Carina's talk. For those who want to see some of the terrain that Anna and I riffed on, please feel free to read the chat transcript of the WTA session.

Friday, October 7, 2016

In That Moment of Terror, Evolution Occurs

There has been much of interest in my day job as of late. Much to keep me busy, much to ponder and consider, and plenty that has me wondering "where do we go from here?"

In the course of the past two weeks, we've received the news of two, frankly, heavy departures from my company. One of them is for an amazing reason. Our senior software developer, Audrey Tang, has left us to become a Minister without Portfolio in Taiwan's Executive Yuan. Seriously, how does a software company compete with that? The second is our Vice President of Engineering, who has been with the company for ten years and has the deepest institutional knowledge of anyone here. With these two departures, I've come to realize that I am now one of the few senior members of the team. In short, whether my role or job title reflects it, I'm now one of the "sages" of my company. Only four other people have longer tenure, and that's measured in months, not years.

To be honest, some people are fearful. What does this mean? Where do we go? Can we do what's expected? Those are completely legitimate questions, and I find it interesting that a number of people are asking me for my "lay of the land" in this situation. I'm not going to talk "out of school", so to speak, but I am going to share some general thoughts that I think are relevant to anyone in any work capacity, and they match my general philosophy overall.

This is a time of uncertainty, sure. How long will it take for us to replace these two people? I'll argue that they can't be "replaced". We can hire people, or promote people, to fill their roles, but we will never "replace" the drive, genius, or quirks that made them effective. My answer is "don't try". No, I don't mean give up; I mean don't try to replace them. Instead, forge ahead with new personalities, new ideas, new modes of genius, and, dare I say it, insert yourself into the conversation or situation if it makes sense. First, who's going to stop you, and second, they are probably elated you want to help carry the load.

Evolution doesn't come at times of health, well-being, peace, and comfort. It comes at times of crisis, or under threat of extinction. We don't learn when things are going well; we learn when things are going haywire and we need to solve real problems. Over the past few years, I've determined the best course of action is "We seem to be having some trouble here. How can I help?" I came into my current company as a tester, with testing my sole focus. Upon the death of a co-worker and mentor, I took over their role as Release Manager, and became more familiar with our code base than I likely would have otherwise. I didn't get promoted into that position. I saw there was a hole, and no one there to do that job, so I decided to figure out how to do it. No one said "No, you can't do that, you have to go through channels". Well, that's not entirely true; I did have to convince a few people that I really had learned enough about what needed to be done to get access to the necessary resources, and in that process, I did basically declare "I'm Socialtext's Release Manager" when there was no one official to back me up. What happened? The de facto became official. I became Release Manager because I declared I would be, and under the circumstances, there was no one in a position to really object.

Today, I see similar opportunities. If I ever wanted to declare myself the head of DevOps, or to declare that I am now a supporting software developer, or even to help chart architecture decisions, that time is now. However, I will not really be able to do that unless I am also willing to put in the time necessary to show I have both the skills and the willingness to do it. I'll not pretend that I'll be promoted to VP of Engineering. That's a bit beyond my skill set or desire, and we've already got a new VP of Engineering, but they will feel like a fish out of water for a while. My plan is to do what I've found to be most helpful, which is to say "you seem to be struggling with some challenges... how can I help?"