Showing posts with label games. Show all posts

Monday, October 14, 2019

The Release Day Crime Scene - a #PNSQC2019 Live Blog


Bill Opsal and I have been in talks over the past year or so about putting together a simulation game that we jokingly called "CSI: Release Day". Think Clue but for software teams.

The premise is as follows.

There are six roles: Release Manager, Dev Lead, QA Manager, DBA, Project Manager, and Customer.

Each role has three key pieces of "mandated transparency". Depending on the scenario being run, you can say anything you like about your role, but if you are asked a question whose answer is one of your pieces of transparency, you have to give it. If your piece of transparency also indicates that you are the guilty party, you likewise have to say so.

The goal is to have the Crime Scene Investigation team question the people with their roles and see if they can ferret out the problem and the reason why it caused the problem.

All in the name of truth, justice, and the releasable way.

Bill and I enjoyed talking about this over the past few months leading up to the conference, so it was fun to finally, FINALLY get to present it. On the whole, it seems like a lot of people enjoyed it, and it also seems like the game has legs, so it's entirely possible we may expand on this and make it a little more concrete for a future conference :).

Thursday, May 23, 2013

Gaming and Skills Acquisition

Over the past couple of years, I've been involved in learning about, as well as developing, ways to teach software testing to others. One idea that is appearing in many social interactions, from the trivial to the fairly high-stakes, is called "gameification". If this term is new to you, it's the idea that, by adding elements of gaming to the skills we are working on, and sharing those details, we can both inspire ourselves and inspire others by our performance. Also, there's something that's just kinda cool about going somewhere, pointing to a screen and saying "see that high score? Yeah, that's MINE!"

The question that constantly comes back to me, though, is just how much gameification, or gaming itself, actually lends to skill acquisition. To experiment with this, I decided to resurrect an old friend and some old memories. Some background: I played some video games in my youth and teen years. We owned an Atari 2600 console, and had several of the 8-bit games available at the time. I did not, however, follow on to the "Golden Age of Gaming" the way my brother and younger sisters' generation did. My own gaming experience wouldn't be brought out of hibernation again until 2003, when I went to work for Konami. It was a prime time to start looking at video games once again, since my kids were just becoming old enough to be aware of things like Pokemon, Yu-Gi-Oh, etc. With that, I decided to buy a couple of Game Boy Advance handheld consoles and start playing along with them. Shortly afterwards, the PlayStation 2 invaded our house, followed by the Nintendo GameCube, the Nintendo Wii, the PlayStation 3, and subsequent updates to the GBA and Nintendo DS formats (I still have my original cobalt blue DS I bought in 2004; it works perfectly). For grins, I plugged in a favorite game that I spent many hours with a decade ago: "Castlevania: Aria of Sorrow".

One of the things I learned from this game in particular is that there are two types of gamers, when you get right down to it. The first type of gamer is what I call a "finesser". These are the people that learn the special moves, practice their hand/eye coordination, and build their overall speed in moving through games. If I had to use a sports analogy, I would call them "boxers", a la the Muhammad Ali type. These are the type that, to carry the boxer metaphor forward, wear out their opponents, and then through grace and, yes, finesse, finish off the fight. The second type of gamer is what I call the "statmaster". These players know what it takes to beat a given level, and they build the stats necessary. When you hear about people who grind through levels to maximize skills, points, and equipment, or amass money in games to get top-notch gear, it's the statmasters that make up that far end of the spectrum. These are the ones that believe in "victory by overwhelming firepower". Level up to 15, and taking on a Level 10 boss is almost boring. These people, to carry on the boxing metaphor, are what I would call "punchers", a la Mike Tyson in his prime. They don't have a lot of finesse, they don't play elegantly, but they are devastating when they hit, and they hit really hard.

So why does it matter if you are a finesser or a statmaster? Personally, I think it matters a lot. Gameification of objectives needs to consider the fact that people fall on a spectrum: some sit at the far finesser end, some at the far statmaster end, and many anywhere in between. Elevate the stats aspects, and the people who will grind to meet the stats will gravitate towards them. Focus on the finesse aspects, and those who are most adept at those aspects will learn how to finesse the game. Give an experience that is too heavily weighted to one side or the other, and you will alienate or underwhelm the participants who fall to the opposite side.

My question, as I consider activities and goals, is this: how do we actually balance these two aspects? Why does Aria of Sorrow speak more to me than, say, Metroid Fusion, even though both games share a lot of the same anatomy and approach? It's because, at least for me, Metroid Fusion rewards the finessers early on; there's little in the way of grinding to get better. Either you figure out how to win the battles or you do not advance. Aria of Sorrow, on the other hand, lets you level up and build strength if you want to. Granted, it can take a lot of time, but by doing so, you can both get the stats you want and develop the moves necessary to win (though frankly, at a high enough level, the moves aren't even all that important; just hit and hit until the enemy goes down).

Note, the examples I'm using are a bit old now, but I think the dialog still applies. As we think about the skills we want to acquire, and we say to ourselves "make it like a game", there's more to that statement than meets the eye. Not only whether you like to game, but how you like to game, will determine which model and approach works best for your personality and style. As an admitted statmaster, I realize what works for me won't necessarily appeal to the finesser, which means I either have to consider two different methods and models, or try to find a happy medium between the two. How about you? Do you consider your gaming style when you engage in these types of things? Does the gaming metaphor work for you when you try to learn new things?


Monday, April 23, 2012

The Pen Game: Software Testing Encapsulated


While I was at STAR East in Orlando last week, I had a chance to spend a fair amount of time with fellow testers outside of track sessions or after conference hours. In fact, I don't think I got to bed before 2:00 a.m. Eastern once the entire time I was there. With an admission like that, you'd be forgiven for thinking that this was a hard-partying bunch, but that isn't the case. In fact, much of our time was spent talking about philosophy, about the variety of practices out there that could be considered "good", and about improving the ways that we talk about testing. It was during one of these late-night sessions that I was introduced to "The Pen Test".

The Pen Test is a simple game, where a "presenter" or product owner holds up a pen and repeats some questions. There are only two possible answers; Yes or No. The "tester's" goal is to figure out what makes the answer Yes, and what makes the answer No.

If you think I'm going to give you the answer, you don't know me very well (LOL!). If you want to play the game, find someone who knows it and ask them to walk you through it. The game itself isn't the point of the post.

What I found interesting about the game was the fact that it helped me explain software testing, and the actions we subconsciously perform, in a much easier way. We start out with a program (being presented the pen). We are shown behavior (the words and actions used to display the pen), and based on that behavior, we need to determine what "state" the program is in. Sometimes we can guess, and do very well, but there's a danger in guessing and getting it right many times. We create a mental model that may not be accurate, and when something goes wrong, we are at a loss as to why.

I had this experience when this was presented to me. Not by skill, but by luck, I was able to guess the correct answer nine straight times. It was on the tenth try that I got it wrong, and then had to regroup and figure out again what the criteria for Yes/No could be.

For this game to be successful, the testers need to weed out as many non-essential aspects of the game as they can. There are many aspects that we can see, and these aspects may or may not have any bearing on the trigger that makes the answer Yes or No. The effective tester uses every tool at their disposal to weed out as many of these options as possible. We create a hypothesis, and then we test the hypothesis. If it holds up, we continue pressing, but if it doesn't, we should discard the model we have created (or at least the assumptions that underlie it) and try something different. Often, through this process, we are able to notice subtle differences, or eliminate things entirely from consideration.

During the game, the product owner is able to answer any of our questions, provided the question isn't "write down what you say" or "tell me the answer". This is much like a black-box level of testing. We have to determine the answers based on the behavior of the application. Through this process, we try out a number of different heuristics. They may work, they may not, but each time we hit a dead end, we have another variable that we can remove, and that, over time, gets us closer to a solution.

I had several conversations with people over the course of the week, and tried this out with a number of different people. The ability to have a conversation about what worked and what didn't helped greatly in explaining the way that we make assumptions, try out models and test their ability to be effective, discard theories that don't work, and keep honing our process until we nail down the correct item(s) that make for a Yes/No situation. The challenge, of course, is that once a game gets well known, it loses its effectiveness, because we focus in on the answer. This game, at least for right now, requires a lot of questions and inquiry to answer, so I think it's worth using as an exercise for the time being. If it gets too well known, I'll look for others. That's one thing I know I never have to worry about: there are a lot of games that can be applied to software testing, so many that any given tester is unlikely to have seen or worked through all of them.

Tuesday, December 20, 2011

The Cult of SET?!


I think it was last year around September. Matt was interviewing Goranka Bjedov and Dan Downing for TWiST #10, and in the closing minutes, when Matt made a joke about playing Blackjack, Dan mentioned, "I don't, but I play SET!" I thought, "huh, that sounds interesting", and then let it drift out of my consciousness. However, it wouldn't be too many months later that I'd be with a gathering of other testers, and they'd mention the game of SET as well. SET? Why am I hearing that name again? A few more months later, at another tester gathering, as we're all talking, out come cards with interesting symbols, and a group of testers gathers around them. What is this strange game, I wonder? Sure enough (as if you hadn't already figured out), there it was again: this mysterious game of SET, and a group of testers all actively playing it, often for hours.

At CAST in 2011, after the first conference night's activities, we all came back to the hotel, and sure enough, not ten minutes passed before a group of testers had busted out the SET cards and started playing for hours. What was it about this game that seemed to be so ingrained into my fellow testers' consciousness? More to the point... why had I not heard of it before?

I think it's safe to say that a lot of test community folklore had gone by me unnoticed for years, because until 2009, I wasn't really in any way directly engaged with the broader testing community. As I came to be more aware of it, I started to understand more of the things that testers do, including playing various card and dice games, both to help pass the time and, more importantly, to help testers recognize patterns and issues in places they otherwise might not see them. It's in this context that I think SET fits brilliantly.

OK, I have a feeling I'm not the only one late to the party on this, so I am willing to bet there may be a couple of you out there thinking to yourselves "OK, Mr. Testhead, what is SET?!" Put simply, it's a card game where any number of players get together, and the dealer lays out twelve cards in a grid four cards long by three cards deep... like this:

You may notice that the cards have various things in common:

1. There are three colors used (red, green and purple).
2. There are three shapes used (diamonds, ovals, and squiggles).
3. There are three shadings used (empty, lined, and solid).
4. There are three counts for shapes (one, two and three).

Three cards make a set if, for each of a card's features (shape, color, shading, and count), all three cards match or all three differ. As an example, one of the sets you can make is with the three single-diamond cards. They are all the same count (one), they are all the same shape (diamond), they are all a different color (red, green and purple), and they are all a different shading (solid, lined and empty). Every feature has to match across all three cards, or differ across all three, for the cards to be a valid set.
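If it helps to see that rule pinned down precisely, here's a minimal sketch of the "all match or all differ" check in Python. The `Card` representation and feature names are my own illustration, not anything official from the game:

```python
from collections import namedtuple

# A hypothetical card representation: each feature takes one of three values.
Card = namedtuple("Card", ["color", "shape", "shading", "count"])

def is_set(a, b, c):
    """Three cards form a set if, for every feature, the three values
    are either all the same or all different."""
    for feature in Card._fields:
        values = {getattr(a, feature), getattr(b, feature), getattr(c, feature)}
        if len(values) == 2:  # exactly two match, one differs -> not a set
            return False
    return True

# The three single-diamond cards from the example above:
r = Card("red", "diamond", "solid", 1)
g = Card("green", "diamond", "lined", 1)
p = Card("purple", "diamond", "empty", 1)
print(is_set(r, g, p))  # True
```

The key observation is that a feature spoils a set only when exactly two of the three values agree, which is why checking the size of the value set is enough.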

So what's the point? The point is that, with these twelve cards, you try to see how many sets you can find before the other players do. The way that we've played it at tester gatherings has been if you find a set, you take those cards out of play and you hold them, and new cards are dealt. The player that finds the most sets when the cards are exhausted wins the game.

There are a variety of different rules that can be used in SET, but they are all variations on the same theme: find as many SETs as you can. You can also play the game by yourself, and there are a number of web sites that feature frequently updated puzzles. In the single-player version of the game, the most common approach I have seen is that twelve cards are dealt, and among those twelve cards there are six sets. Your job is to find them (and often a timer is kept to see how long it takes you). Based on the above graphic... can you find all six sets?
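For the curious, finding every set in a twelve-card deal is a small brute-force search: twelve cards yield 220 possible three-card combinations to check. Here's a quick sketch, with a numeric card encoding that is my own invention rather than anything from the published game:

```python
from itertools import combinations, product
import random

# Represent each card as a 4-tuple of features (color, shape, shading,
# count), each feature taking a value in {0, 1, 2}.
deck = list(product(range(3), repeat=4))   # the full 81-card deck
deal = random.sample(deck, 12)             # a random twelve-card layout

def is_set(a, b, c):
    # For each feature, three values drawn from {0, 1, 2} sum to a
    # multiple of 3 exactly when all three are equal or all different.
    return all((x + y + z) % 3 == 0 for x, y, z in zip(a, b, c))

sets_found = [trio for trio in combinations(deal, 3) if is_set(*trio)]
print(len(sets_found), "sets in this deal")
```

A randomly dealt dozen contains fewer than three sets on average, which is presumably why the puzzle sites construct their layouts to guarantee exactly six.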


I think that the draw of this game specifically for testers is the fact that it's a quick game to get up to speed with and understand, but more to the point, it helps to remind us that patterns are all around us, and things that we often think we see, we don't really see unless someone else sees them, or we spend the time and trial and error our way around them. This is a great reminder to us that there are many ways we can sharpen our problem solving skills and also sharpen our own perceptions.  The point was brought home to me because, last week, one of my co-workers, as a "secret snow person" gift, bought me my own SET card deck. Now I can jump in and be a part of any tester gathering, too, and I can also practice on my own and see if I can sharpen my own perception and ability to pattern match.

If you'd like to play online, here's a good version of the game. It resets after you find each series of 6 sets, so you can play as little or as much as you want to :).