Monday, April 23, 2012
The Pen Game: Software Testing Encapsulated
While I was at STAR East in Orlando last week, I had a chance to spend a fair amount of time with fellow testers outside of track sessions and after conference hours. In fact, I don't think I got to bed before 2:00 a.m. Eastern once the entire time I was there. With an admission like that, you'd be forgiven for thinking that this was a hard-partying bunch, but that isn't the case. Much of our time was spent talking about philosophy, about the variety of practices out there that could be considered "good", and about improving the ways we talk about testing. It was during one of these late-night conversations that I was introduced to "The Pen Test".
The Pen Test is a simple game in which a "presenter" or product owner holds up a pen and asks a question over and over. There are only two possible answers: Yes or No. The tester's goal is to figure out what makes the answer Yes, and what makes the answer No.
If you think I'm going to give you the answer, you don't know me very well (LOL!). If you want to play the game, find someone who knows it and ask them to walk you through it. The game itself isn't the point of the post.
What I found interesting about the game was that it helped me explain software testing, and the actions we perform subconsciously, in a much simpler way. We start out with a program (being presented the pen). We are shown behavior (the words and actions used to display the pen), and based on that behavior, we need to determine what "state" the program is in. Sometimes we can guess and do very well, but there's a danger in guessing and getting it right many times: we create a mental model that may not be accurate, and when something goes wrong, we are at a loss as to why.
I had exactly this experience when the game was presented to me. By luck rather than skill, I was able to guess the correct answer nine straight times. It was on the tenth try that I got it wrong, and then I had to regroup and figure out all over again what the criteria for Yes/No could be.
For this game to be successful, the testers need to weed out as many non-essential aspects of the game as they can. There are many aspects that we can observe, and these aspects may or may not have any bearing on the trigger that makes the answer Yes or No. The effective tester uses every tool at their disposal to eliminate as many of these options as possible. We create a hypothesis, and then we test the hypothesis. If it holds up, we continue pressing, but if it doesn't, we should discard the model we have created (or at least the assumptions that underlie it) and try something different. Often, through this process, we are able to notice subtle differences, or eliminate things entirely from consideration.
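That hypothesize-test-discard loop can be sketched in code. This is a minimal illustration, not the real game: the hidden rule here ("cap_on") and every observable attribute are invented stand-ins, since the post deliberately keeps the actual Pen Game trigger secret.

```python
# Sketch of the hypothesis-elimination loop: keep only the hypotheses
# that agree with every Yes/No answer observed so far.
# NOTE: "cap_on" is a made-up stand-in rule, NOT the real Pen Game answer.

HIDDEN_RULE = lambda obs: obs["cap_on"]  # the presenter's secret criterion

# Observable aspects of each "presentation" of the pen.
observations = [
    {"cap_on": True,  "left_hand": True,  "pen_color": "blue"},
    {"cap_on": False, "left_hand": True,  "pen_color": "blue"},
    {"cap_on": True,  "left_hand": False, "pen_color": "red"},
    {"cap_on": False, "left_hand": False, "pen_color": "red"},
]

# The tester's candidate hypotheses about what makes the answer Yes.
hypotheses = {
    "cap is on":         lambda obs: obs["cap_on"],
    "held in left hand": lambda obs: obs["left_hand"],
    "pen is blue":       lambda obs: obs["pen_color"] == "blue",
}

for obs in observations:
    answer = HIDDEN_RULE(obs)  # the presenter's Yes/No for this round
    # Discard any hypothesis the new evidence falsifies.
    hypotheses = {name: h for name, h in hypotheses.items()
                  if h(obs) == answer}

print(sorted(hypotheses))  # -> ['cap is on']
```

Each round of observation prunes the hypothesis set, which is exactly the "another variable we can remove" dynamic the game teaches: surviving hypotheses are consistent with all the evidence, but only further rounds (or deliberately varied trials) can narrow them to one.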
During the game, the product owner is able to answer any of our questions, provided the question isn't "write down what you say" or "tell me the answer". This is much like black-box testing: we have to determine the answers based on the behavior of the application. Through this process, we try out a number of different heuristics. They may work, they may not, but each time we hit a dead end, we have another variable that we can remove, and that, over time, gets us closer to a solution.
I had several conversations over the course of the week and tried this out with a number of different people. The ability to talk through what worked and what didn't helped greatly in explaining the way we make assumptions, try out models and test their effectiveness, discard theories that don't work, and keep honing our process until we nail down the exact item(s) that make for a Yes/No situation. The challenge, of course, is that once a game becomes well known, it loses its effectiveness, because we focus in on the answer. This game, at least for right now, requires a lot of questions and inquiry to solve, so I think it's worth using as an exercise for the time being. If it gets too well known, I'll look for others. That's one thing I know I never have to worry about: there are a lot of games that can be applied to software testing, so many that any given tester is unlikely to have seen or worked through all of them.