Thursday, October 17, 2013

Stop Following Test Scripts and Think: 99 Ways Workshop #91

The Software Testing Club recently put out an eBook called "99 Things You Can Do to Become a Better Tester". Some of the suggestions are really general and vague. Some are remarkably specific.


My goal for the next few weeks is to take the "99 Things" book and see if I can put my own personal spin on each of the suggestions, and make a personal workshop out of each one.


Suggestion #91: Stop following test scripts and think - Stephen Blower


One of the great laments of my earlier software testing career was the fact that we had regimented test plans that had to be explicitly spelled out, supposedly followed to the letter, and repeated with each new version of the software. These were large undertakings, and they would often result in several rounds of review and complaints of "this is not detailed enough".


I would go into our document management system and see other test plans, and very often I would copy their format or their approach, which was, effectively, to take a requirement from the specification and add words like "confirm", "validate", "ensure" or "determine"; in other words, turn a requirement into a "test case" with the least amount of effort possible. When I did this, I got my test plans approved fairly quickly.


My lament over this is the fact that, even though we did a lot of documentation, we rarely followed these tests in this manner, and most of the interesting things we found were not actually found by following these tests as defined. Don't get me wrong, there is a time and a place for having a checklist. When I was working for a game developer in the early 00s, the Technical Requirements Checklist (TRC) was mandatory, and the required test cases were extremely specific (for example, a title screen cannot display for longer than 60 seconds without transitioning to a cut-scene video). Sure, in those cases, those checks are important; they need to meet the set requirements, and they have to be confirmed. Fortunately (or unfortunately, depending on your point of view) most software doesn't have that level of specificity. It needs to meet a wide variety of conditions, and most of them simply will not fit into a nice step-by-step recipe that can be followed every time and still find interesting things.


Workshop #91: Take a current "scripted" test plan that you may have, and pull out several key sentences that inform what the test should do. Create a mission and charters based on those sentences. From there, set a time limit (say, 30 minutes) and explore the application using those charters as your guide.
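

If it helps to make that concrete, here is a rough sketch (in Python, just because it's a handy way to write things down) of what a mission and a handful of charters might look like once you've pulled them out of a scripted plan. The requirement sentence, the feature and the wording below are all invented for illustration; pull your own from whatever plan you have lying around.

```python
# A minimal sketch of "mission plus charters", with a 30-minute time box per charter.
# Everything named here is hypothetical and only meant to show the shape of the idea.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Charter:
    statement: str                 # "Explore X with Y to discover Z"
    time_box_minutes: int = 30
    notes: List[str] = field(default_factory=list)
    questions: List[str] = field(default_factory=list)

# Requirement sentence pulled from a (made-up) scripted plan:
# "The system shall allow an administrator to deactivate a user account."
mission = "Learn how account deactivation behaves for different user roles."

charters = [
    Charter("Explore deactivating accounts as an admin to discover what happens to active sessions."),
    Charter("Explore deactivation with non-admin roles to discover whether permissions are enforced."),
    Charter("Explore reactivating a deactivated account to discover what data survives the round trip."),
]

print(f"Mission: {mission}")
for c in charters:
    print(f"[{c.time_box_minutes} min] {c.statement}")
```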


This is going to look like an advertisement for Session Based Test Management. That's because it is. Based on the writing of James and Jon Bach, as well as tools like Rapid Reporter and others, I have become a big fan of this approach and consider it a valuable alternative to scripted, ironclad test cases. In truth, I don't care how many test steps you have devised. What I care about is that the requirements for a given story have been examined, explored and considered, and that we, as testers, can say "yes, we look good here" or "no, we have issues we need to examine".


In my current assignment at Socialtext, we use a story template that provides acceptance criteria, and those criteria can be brief or voluminous, depending on the feature and the scope. We also use a Kanban system and practice what is called "one piece flow" when it comes to stories. To this end, each bit of acceptance criteria becomes a charter, and how I test it is left up to me or another tester. Given this approach, I will typically do the following...


I create data sets that are "meaningful" to me and can easily be interpreted by other members of my team should I pass them along for others to use. I make them frequently, and I structure them around stories or ideas I'm already familiar with. Currently, I maintain a group of details that originated in the manga/anime series "Attack on Titan" (Shingeki no Kyojin). Why would I use such a construct? Because I know every character, I know where they are from, what their "motivations" are, and where I would expect to see them. If someone in this "meta-verse" shows up somewhere I don't expect to see them, that cues me in on areas I need to look at more closely. I love using casts with intertwining relationships. To that effect, I have data sets built around "Neon Genesis Evangelion", "Fullmetal Alchemist", "Ghost in the Shell" and the aforementioned "Attack on Titan".
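

For those who like things spelled out, here is a very rough sketch of what one of these data sets might look like and how I use it. The groupings below are simplified stand-ins rather than a faithful character guide; the point is the "expected versus observed" check, not the trivia.

```python
# A bare-bones sketch of a persona data set built from a cast I know well.
# The "observed" data would really come from the application under test;
# here it is hard-coded just to show the check.

expected = {
    "Eren Yeager":     "Survey Corps",
    "Mikasa Ackerman": "Survey Corps",
    "Armin Arlert":    "Survey Corps",
    "Annie Leonhart":  "Military Police",
}

def surprises(observed):
    """Return the names that showed up somewhere I don't expect them.

    observed: dict of name -> group as reported by the application.
    Each surprise is a cue to look more closely at that area.
    """
    return [name for name, group in observed.items()
            if expected.get(name) not in (None, group)]

# Pretend the application reported these placements:
observed_in_app = {
    "Eren Yeager":    "Survey Corps",
    "Annie Leonhart": "Survey Corps",   # not where I expect her: dig here
}

print(surprises(observed_in_app))  # -> ['Annie Leonhart']
```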


I load this data and, instead of just reading the requirements dry, I ask "what would a particular character do in this situation?" This takes persona information to levels that might not have been intended, but I find it very helpful, since I can get closer to putting some sense of a personality and back story to my users, even if the back story in this case may be really out there.


The charter is the guiding principle, and with it, so is the clock. I want to be focused on a particular area, and I want to see what I can find in just that time period. Sometimes I find very little, and sometimes I get all sorts of interesting little areas to check out. For me, having lots of questions at the end of a session is a great feeling, because it means I can spin out more charters and more sessions. If I finish a session with nothing else to consider, I tend to be worried. It means I'm either being way too specific, or I'm disengaging my brain.


Using a simple note-taking system, I try to track where I've been; if I want to be particularly carefree, I'll use a screen-capture program so that I can go back and review where I went. Barring that, I use a note-taking tool like Rapid Reporter, so that I can talk through what I am actually doing and think of other avenues I want to look at. Yeah, I know, it sounds like I'm writing the test cases as I do them, right? Well, yes! Exactly right, but there is a difference. Instead of predetermining the paths I'm going to follow, I write down the areas where I feel prompted to poke around, rather than forcing myself to follow a pre-determined script. The benefit of this approach is that I can go back and have a great record of where I've been, what I considered, and what turned out to be dead ends. Often this turns out to be more detailed, and to cover more ground, than if I had tried to spell out all the test cases I could think of up front.
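

To show the spirit of it (and not Rapid Reporter's actual feature set, which does far more), here is a bare-bones sketch of the kind of timestamped session notes I'm describing. The file name and note categories are placeholders; use whatever labels make sense to you.

```python
# A tiny stand-in for a session note-taker: append timestamped lines as you test,
# so you can reconstruct where you went, what you wondered about, and what you found.

from datetime import datetime
from pathlib import Path

SESSION_LOG = Path("session_notes.txt")  # hypothetical file name

def note(kind, text):
    """Append a timestamped note; kind might be SETUP, NOTE, QUESTION or BUG."""
    stamp = datetime.now().strftime("%H:%M:%S")
    with SESSION_LOG.open("a", encoding="utf-8") as log:
        log.write(f"{stamp} {kind}: {text}\n")

note("SETUP", "Charter: explore account deactivation as an admin.")
note("NOTE", "Deactivated a user while they had an open session.")
note("QUESTION", "What happens to their unsaved edits? Follow up next session.")
note("BUG", "Deactivated user can still post via the API. Needs a report.")
```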


Bottom Line:


Whatever approach you use, however you make it or apply it, the goal is not to follow my recommendations (but hey, if you like them, give them a try). The real goal is to see how you can guide your ability to learn about the application, and how that learning can inform current and future testing. You may find that your management doesn't approve of this approach at first, so my recommendation is: don't tell them. Just do it. If it works for you, and you can get better testing, find more interesting problems, and be more engaged and focused, trust me, they will come to you and ask "hey, what are you doing?!" They won't be asking accusatorially; they'll be genuinely wondering what you are doing and why you are finding what you do.

