Tuesday, October 29, 2013

Scenes from the Inaugural Bay Area Software Testers Meetup

Sorry I'm late, but I was playing co-host for this event, and frankly it was just not possible to live blog and host at the same time.

The Climate Corp orb knows all :).
Last night, a couple dozen testers gathered at Climate Corporation, had beer and wine (and some soft drinks), shared a lot of conversations, and made new friends. To fill this out a bit more, the Bay Area Software Testers held its inaugural meetup last night. As I've said in previous posts, I'm excited to see this group form and come together. It's been a long time coming, but we're here at last.


To celebrate the first event, we chose to focus the evening around "99 Second Lightning Talks", in honor of Rosie Sherry, she of Software Testing Club and Ministry of Testing fame (aka the original publisher of the "99 Things You Can Do to Become a Better Tester" eBook).


To not keep anyone in suspense, I did not make the 99 seconds (and I proved to be the only one who didn't, so my "perfect record" of "just on time" delivery of lightning talks is now shattered. Ah well, it had to end eventually ;) ). I covered the fact that, of the 99 Things eBook suggestions, only about 10% of the topics were technical. The other 90% were non-technical, skills that many people could develop without a major investment in technical skills, which I think makes for an interesting comparison. We see all of this talk around tools and technical skills, but the community seems to feel otherwise. Worth investigating, I think.


One of several 99 second lightning talks.
Curtis Stuehrenberg gave a talk about the fallacies that exist when we look at and deal with testing: just because something happened at the same time doesn't mean it was caused by that (post hoc ergo propter hoc), confirmation bias (we find what we are looking for and ignore everything else), sunk cost (just because we bought something, we think we have to continue to buy into it to justify the expense, when the expense has already been paid; it's done), regression patterns and the false leads they can give us, etc. Pretty good for 99 seconds :).


Another talk, from Fred Stevens-Smith, was based around "building QA as a Service", either in using the concept or in helping build the process. Fred is part of RainforestQA, which is developing a model where tests are written in "plain English", and then the service releases the tests out to crowd-sourced testers (sounds intriguing, definitely something I'd like to explore as a Weekend Testing topic :) ). For those who want to see or know more, check out rainforestqa.com or +RainforestQA on G+ and @rainforestqa on Twitter.

Eric Proegler talked about two recent conferences (WOPR and STP-CON) and how the subject of technical debt came to be a recurring theme in both of them. His (highly rhetorical, I might add ;) ) question was: is it only the old dinosaur traditional companies dealing with this, or are these hip, new Agile development initiatives immune from technical debt? That elicited a chuckle, but brought home an interesting point: if we are not actively working to pay down technical debt or prevent it from piling up, we are accumulating it, and we all had better give some consideration as to how we are going to deal with it.

Another talk on crowd-sourced testing also focused on QA as a service. This whole "QA as a Service" thing may actually have legs, and it was interesting to see two takes on how organizations are making this work.

Our final talk was about how we view negative testing and the way that we talk about it. Using the simple example of a light switch, the speaker asked how we confirm that there is both a passing test and a failing test, and whether we neglect to treat "the light turns off" as a negative test worth checking. It was a fun way to wrap up the talks.

A spirited discussion about metrics ensued.
From there we went into a discussion of "Metrics, or what do we really mean when we are asked for metrics?" This was prompted by Josh Meier, and we approached it from a Lean Coffee perspective and gave it ten minutes as a discussion topic, to see if we wanted to carry it forward from there (turned into a fifteen minute discussion, but with lots of great input and a hasty mind map from yours truly put up on the board).

We broke off into smaller groups, and I had some interesting chats with some new friends about how we approach automation, and whether or not it made sense to do so from the perspectives that are so common (UI testing vs. API testing, which makes more sense and why).

...and resulted in this mind map,
along with some good suggestions.
Finally, at the end of the night, a group of us who could stay out a bit longer on a "school night" made our way to Thirsty Bear to continue the conversation over beer and tapas.


Overall, I would say that the first BAST meetup was a success. To everyone I met last night, thank you for coming out and helping us get this off the ground; we hope we made a memorable first impression. Now, of course, comes the next challenge... following up from here.


Thirsty Bear for a post Climate Corp wrap up.
One thing Curtis and I agree on is that we don't want this to be strictly about testing, and we certainly want to keep this group tool agnostic. Wait, then what will this group be about? We hope to make it about ways that we as software testers can add value, and approach our space with a broader rather than a narrower vision.


To that end, we are looking to get speakers from multiple perspectives and disciplines, such as marketing, finance, operations, sales, customer support, as well as software development and software testing. We will, of course, also look to share ideas and structure discussions around good testing practices and using them in the right contexts.

To close this, I want to say thanks to Curtis for taking the bull by the horns and getting BAST started, to Climate Corporation for providing us a great space to hold our first meeting, to the Association for Software Testing for helping seed this initiative with a grant so that we could hold this event and also help hold future events, and most importantly, thank you to all of the participants that came out on a Monday night to talk testing and hang out together. Curtis and I have long hoped to create a space where testers from around the Bay Area could get together and share ideas and knowledge from a number of different perspectives, and not have to haul down to Silicon Valley to do it. It's a great first step, and we look forward to many more to come.

Saturday, October 26, 2013

Try Always to See What Happens When You Violate the Rules: 99 Ways Workshop: Epilogue

The Software Testing Club recently put out an eBook called "99 Things You Can Do to Become a Better Tester". Some of them are really general and vague. Some of them are remarkably specific. My goal for the next few weeks is to take the "99 Things" book and see if I can put my own personal spin on each of them, and make a personal workshop out of each of the suggestions. 


Suggestion #100: Try always what happens when you violate the rules. So here's the second sentence also. - Teemu Vesala


The original eBook ended with a number 100. Why? Just because. It was meant to be a little poke in the ribs to say "ha, see, we put in 100, just because we wanted to". In other words, expect the unexpected.


Epilogue: Where I've been, and what I've learned


This has been a wild ride, to tell the truth. I started a bit anxiously, then felt like I got into a bit of a groove, then I decided to go for it and put out two posts a day where I could. For some of the entries, it was easy to come up with examples. For others, it felt like pulling teeth to make examples that were coherent and understandable. Still, as long as I kept pushing out two posts a day, I felt confident I could keep pushing out two posts a day. Then CAST (the conference of the Association for Software Testing) came, I had to travel, go and participate in the conference, and I decided to put down this project for a week to focus on other things. I figured it wouldn't be too hard to pick it back up again. Wow, was I wrong.


As I've said in other items I've written, the habit becomes the driver. When we create a habit and put it into practice, it becomes easier to keep the habit. When we put down the process, even if it's just for a week, it's so much harder to pick it back up again.


September and October also coincided with an increase in writing and a focus on preparing for the talks I will be delivering at Oredev in November. I found myself fighting with, and trying to make time for, finishing this project. One stretch of working through the Excel example, and condensing it down so that I could put it into a blog post in a way that was coherent, took me almost two weeks to pull together.


One of the funny things I noticed as I was writing these examples: I would find myself talking to people in other contexts, and as I was talking with them, I had to stop and think "wait, is that something I wrote about?" If the answer was yes, I would go back and see if I agreed with what I wrote originally, or if I would want to modify what I had said. If I was discussing something that wasn't in the examples thus far, I would make notes and say "hey, that conversation would be great as a topic for number 78. Don't forget it!" 


Mostly, it feels really good to know that I made it from start to finish. Seeing re-tweets and favorites on Twitter, plus likes on Facebook and the comments to the blog posts themselves, shows me that a lot of you enjoyed following along with this. More to the point, this project has doubled my daily blog traffic. Now, of course, I feel a little concerned… will all those readers drop away now that this project is finished? Will they stick around to see what I have planned next? What do I have planned next?


I do have a new project, but it's going to take me longer to do. Noah Sussman has posted on his blog what he feels would make a great "Table of Contents" for a potential book. The working title is "How to Become a More Technical Tester". I became intrigued, said I would be happy to be a "case study" for that Table of Contents, and asked if he would be OK with me working through the examples and reporting on them. He said that would be awesome, and thus, I feel it necessary to dive in and do that next.


So, did I just throw out a "bold boast" immediately after completing another one?! Didn't I learn anything from this experience? The answer is yes. What I learned most is that creativity strikes and skill grows when we actively work them. Without that up-front work, they stagnate and become harder to draw out. Therefore, I would rather "keep busy" and make more "bold boasts" so that I can keep that energy flowing. This will, however, be a more involved process. I am not going to make any promises as to how much I can update or how frequently. This may take a few months, it may take a year. It's hard to say on the surface. I do know that I want to give it appropriate attention and do the content justice. Who knows, maybe Noah will be willing to consider me a collaborator for this book… but I'm getting ahead of myself again ;).


My overall goal for this project was to do more than I figured I ever would if I just read the list and said "hey, those are cool". Here's hoping that my example will encourage you to likewise reach inside and find something you can use. Your outcome may be remarkably similar to mine, or entirely different. If you do decide to take any of them on, please blog about them (and please, leave a comment on my blog so I can see what you have written).


Now, however, it's time to close this project, at least for the time being. Time will tell if we've seen the last of me on this (hint: probably not ;) ).

Question the Veracity of 1-98, and Their Validity in Your Context: 99 Ways Workshop #99

The Software Testing Club recently put out an eBook called "99 Things You Can Do to Become a Better Tester". Some of them are really general and vague. Some of them are remarkably specific.


My goal for the next few weeks is to take the "99 Things" book and see if I can put my own personal spin on each of them, and make a personal workshop out of each of the suggestions. 


Suggestion #99: Question the veracity of 1-98, and their validity in your context - Kinofrost


Heh, I think it's somewhat appropriate that we close out this project (and land on #99) with a need to talk about context directly. Yes, I admit it, I consider myself an adherent and a practitioner who believes in and tries to follow the context driven principles (below in the workshop, btw ;) ). Too often we talk about context driven testing as though "it depends" solves all the problems. I'm going to do my best to not do that here. Instead, I want to give you some reasons why being aware of the context can better inform your testing than not being aware or following a map to the letter.


Workshop #99: Take some time to apply the values and principles of context-driven testing, and call on them when determining if anything from these past 98 suggestions actually makes sense to use on what you are working on right now.


First, let's start with what are considered to be the guiding principles of context-driven testing (these are from context-driven-testing.com).


- The value of any practice depends on its context.

- There are good practices in context, but there are no best practices.

- People, working together, are the most important part of any project’s context.

- Projects unfold over time in ways that are often not predictable.

- The product is a solution. If the problem isn’t solved, the product doesn’t work.

- Good software testing is a challenging intellectual process.

- Only through judgment and skill, exercised cooperatively throughout the entire project, are we able to do the right things at the right times to effectively test our products.


The most likely comparison you will hear is something at the polar opposite ends of the spectrum, i.e. the difference between testing a medical device like a pacemaker or the control software for the space shuttle vs. a video game app for an iPhone. On the surface, this should feel obvious. The scope of a project like a medical device and the repercussions of failure are huge. The outcome is literally life or death for some people. An inexpensive video app or game hardly warrants comparison.


With that in mind, let's try something a little more direct in comparison. What level of testing should go into the actual control software for a pacemaker vs. a monitoring application for that pacemaker that resides on a computer? The pat answer doesn't work as well any longer, but there is still a question here that's not trivial. Are there differences in testing? The answer is yes. The pacemaker controller itself would still go through much greater levels of scrutiny than the monitoring software would. In the event of system failure, the monitoring system can be rebooted or the program turned on or off, with no effect at all on the pacemaker itself. If the monitoring software did cause the pacemaker to malfunction, that would not only be seen as catastrophic, it would also be seen as intrusive (and inappropriate).

This opens up different vistas, and again begs the question "how do we test each of these systems?". The first aspect is that the pacemaker is a very limited device. It has a very specific and essential function. There's less to potentially go wrong, but its core responsibility has to be tested. In this case, the product absolutely has to work, or has to work at an astonishingly high level of comfort for those who will be using it. For them, this is not a math theorem; it is literally a matter of life or death. The monitoring software is just that. It monitors the actions of the pacemaker, and while that's still important, it's of a far secondary level of importance compared to the actual device.
  

This brings us back to our past 98 examples. The advice I've given may work fine for your project(s), but in some cases, it may not be wise to use the approaches I gave you. That is to be expected. I can't pretend to know every variable you will need to deal with, and you may well say "well, that may be fine for your project, but my manager expects us to do…" and yes, there you go, that's exactly why we tend to not spell things out in black and white in context-driven testing commentary. We need to look at our project first, our stakeholders next, and the needs of the project after that. If we are planning our testing strategy without first taking those three things into account, we are missing the whole point of context-driven testing.


Bottom Line:

In this last statement, I'm going to borrow from my own website's "What it's all about" section. In it, I share a quote from Seth Godin that says "Please stop waiting for a map. We reward those who draw maps, not those who follow them." In this final post, I want to make sure that that is the takeaway this whole project gives. It would be so easy to just look at the 99 Things, apply the ideas to our work, and be done with it. I've strived to bring my own world view and my own context, and to write my own map in these posts. I may have succeeded, I may not have, but if there's any one thing I want to ask of anyone who has followed along, it's to not follow any of these ideas too closely.


If these workshop ideas feel uncomfortable for what you are doing, don't get frustrated. Instead, focus on why they feel uncomfortable. What is different in your case? What could be modified? What approaches should be dropped altogether? It's entirely possible that there are better ways to do any and all of these suggestions than what I have spelled out here. I encourage you to find out for yourselves. I've drawn a map, but it may not be the best map for you. If it's not, please, sit down and draw your own map. The testing world is waiting to see where you will take it.

Test What Matters: 99 Ways Workshop #98


The Software Testing Club recently put out an eBook called "99 Things You Can Do to Become a Better Tester". Some of them are really general and vague. Some of them are remarkably specific.


My goal for the next few weeks is to take the "99 Things" book and see if I can put my own personal spin on each of them, and make a personal workshop out of each of the suggestions. 


Suggestion #98: Test what matters - Rosie Sherry

Ahhh, it all comes down to that, doesn't it? It's simple, elegant, really easy to understand, and yet, try as we might, it's so very hard to actually do (no, seriously, it is). See, what matters is tremendously subjective. Who are we talking about? Are we talking about our end users? Our management team? Our co-workers? Our shareholders? 


We'd love to believe that each and every one of those groups is aligned in purpose and intention, that they would all want the same things, and that what matters to one matters to all. Sadly, that's not true. Thus, to make sense of what matters, we first have to make a solid determination as to "who" matters.


Workshop #98: Get a feel for the five biggest customers that your organization wants to keep happy. Hint: they may not be end users of your product. Once you find them, get to the heart of the matter and discover what really matters to them (ask informally if you can't get direct answers from the ones who call the shots). Then structure your testing regimen to focus on what matters to those people (hint: they will not always be aligned).


A famous phrase that many of us have heard over the years is "Quality is value to someone who matters" (thank you Jerry Weinberg). Therefore, what matters is what we can identify that is important to the person or people that matter. Those people can shift, and they can have significantly different goals. Therefore, what should we do? Do we skip around from person to person and find out what matters most to them, and make sure that we deliver it to them? We could, but I would also hazard that it would make us look schizophrenic, and quite possibly untrustworthy. Quality is, indeed, "value to someone who matters" and the "who matters" part can be hard to pick out at times.


Therefore, rather than a fragmented and manic rush to figure out what is most valuable to any one person at any given time, I'd much prefer to go at it from another route, which is to provide information that will help those people that matter make the best decisions they can. I'm not there to "check off the list that makes my manager happy" or "work on the story that makes the director of development look good" or "deliver under cost or ahead of schedule so that we can maximize sales ahead of the upcoming holiday season". Those things are all valuable, and they are all, in their sphere, important. If I choose to chase any one of those, I will be doing a disservice to everyone else who matters. So what should we do?


It comes down to what I feel is the fundamental thing that testers do, and it's not find bugs, or prove that software is "fit for use". Instead, it's to provide information about the state of the product in ways that are meaningful, and to let those in other positions in the organization make the best decisions that they can based on what we have aggregated, analyzed and synthesized. In the end, the development team really doesn't care how many test cases I ran if I didn't find the issue that is most embarrassing to them. The CEO doesn't care that I was meticulous and covered multiple testing scenarios if, when they stand up and give the demo to customers, the program crashes. The customer doesn't care how many features were delivered if the one that they actually care about still doesn't work. 


We need to be more focused than that, and we need to contribute more than just working in our predefined box and testing what we are told to test. If we are information providers, then we need to be bold and brave enough to provide information. Even when it isn't convenient. Even when it may embarrass some people. Even if it may mean we have to announce a delay. If we try to please all entities, we will end up pleasing none of them. If we are honest, and show integrity, we may still not make a whole lot of people happy, but we will do one thing for certain… we will be providing the key information for all parties to make the best decision possible. If we truly believe that is what matters, then that is what we need to deliver: the kind of information that helps make an informed decision.

  
Bottom Line:

So much of what we do is laced with politics, cronyism, and what I often refer to as a "perverse reward system" that tends to honor the short term benefits over long term health. If we focus too much on the short term goals, we can win many battles, but ultimately lose the war. We can paint ourselves into a corner, and have no way to get out without causing a mess. Pick whatever metaphor you want to, but realize that what will please one person may royally irritate someone else. Quality works the same way, and playing sides will ultimately win you few friends. Instead, pledge to make the story, the whole story, the most important thing that you can deliver. By doing so, you can make sure that you are delivering something of real value, and value that will last. Ultimately, that is what really matters, so go forth and do likewise :).

Friday, October 25, 2013

You Won’t Catch All the Bugs, and Not all the Bugs You Raise Will Get Fixed: 99 Ways Workshop #96 & 97

The Software Testing Club recently put out an eBook called "99 Things You Can Do to Become a Better Tester". Some of them are really general and vague. Some of them are remarkably specific.


My goal for the next few weeks is to take the "99 Things" book and see if I can put my own personal spin on each of them, and make a personal workshop out of each of the suggestions. 


Suggestion #96: Be prepared, you won’t catch all the bugs, but keep trying - Mauri Edo
Suggestion #97: Be prepared, all the bugs you raise won’t get fixed - Rosie Sherry


This is really two sides of the same coin, so it pays to focus on them together. In the world of testing, nothing rolls downhill faster than "blame". If everything goes great, the programmers are brilliant. If there are problems in the field, then the testers are marched in and demands made to know why we didn't find "that bug". Sound familiar? I'm willing to bet it does, and if it doesn't sound familiar, then count yourself very lucky. 


This comes down to the fact that, often, an unrealistic expectation is made of software testers. We are seen as the superhuman element that will magically make everything better because we will stop all problems from getting out. Show of hands, has that ever happened for anyone reading this? (crickets, crickets) … yeah, that's what I thought.


No, this isn't going to be about being a better shield, or making a better strategy. Yes, this is going to be about advocacy, but maybe not in the way that we've discussed previously. In short, it's time for a different discussion with your entire organization around what "quality" actually is and who is responsible for it.

Workshop #96 & #97: Focus on ways to get the organization to discuss where quality happens and where it doesn't. Try to encourage an escape from the last minute tester heroics, and instead focus on a culture where quality is an attribute endorsed and focused on from day one. Get used to the idea that the bug that makes its way out to the public is equally the fault of the programmer(s) who put it there, as it is the testers(s) who didn't find it. Focus on maximizing the focus of quality around the areas that matter the most to the organization and the customers. Lobby to be the voice of that customer if its not coming through loud and clear already.

Put simply, even if we were to be able to catch every single bug that could be found, there would not be enough time in the days we had to fix every single one of them (and I promise, the number of possible bugs is way higher than even a rough estimate could give). The fact of the matter is, bugs are subjective. Critical crash bugs are easy to hit home. Hopefully, those are very few and far between if the programmers are using appropriate steps to write tests for their code, use build servers that take advantage of Continuous Integration, and practice proper versioning. 


There are a lot of ways that a team can take steps to make for better and more stable code very early in the development process. Contrary to popular belief, this will not negate the need for testers, but it will help to make sure that testers focus on issues that are more interesting than install errors or items that should be caught in classic smoke tests.


Automation helps a lot with repetitive tasks, or with areas that require configuration and setup, but remember, automated tests are mostly checks to make sure a state has been achieved; they are less likely to help determine if something in the system is "right" or "wrong". Automated tasks don't make judgment calls. They look at quantifiable aspects and, based on values, determine whether one thing should happen or another. That's it. Real human beings have to make decisions based on the outcomes, so don't think that a lot of automated "testing" will make you less necessary. It will just take care of the areas that a machine can sort through. Things that require greater cognitive ability will not be handled by computers. That's a blessing, and a curse.


Many issues are going to be state specific; running automated tests may or may not trigger errors to surface, or at least, they may not do so in a way that will make sense. Randomizing tests and keeping them atomic can help with running tests in a random order, but that doesn't mean you will hit the state that appears when the 7,543rd configuration of that value on a system is set, or when the 324th concurrent connection is made, or when the access logs record over 1 million unique hits in a 12 hour period. The point here is, you will not find everything, and you will not think up every potential scenario. You just won't! To believe you can is foolish, and to believe anyone else can is wishful thinking on steroids.
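
To make the "randomized and atomic" idea concrete, here is a tiny sketch (not any particular framework's feature) of shuffling a set of independent checks so that ordering dependencies get a chance to surface; the check functions are hypothetical placeholders for your own atomic tests.

```python
# Run atomic checks in a random order; record the seed so a failing order
# can be replayed. The checks below are hypothetical stand-ins.
import random

def check_login():  pass   # each check sets up and tears down its own state,
def check_upload(): pass   # so in theory the order should not matter...
def check_logout(): pass   # ...and shuffling helps prove (or disprove) that.

checks = [check_login, check_upload, check_logout]

seed = random.randrange(10**6)
random.seed(seed)
random.shuffle(checks)

print(f"running {len(checks)} checks with seed {seed}")
for check in checks:
    print(f"  {check.__name__}")
    check()
```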


Instead, let's have a different discussion.

- What are ways that we can identify testing processes that can be done as early as possible?
- Can we test the requirements?
- Can we create tests for the initial code that is developed (yes, I am a fan of TDD, ATDD and BDD processes)?
- Can we determine quickly if we have introduced an instability (CI servers like Jenkins do a pretty good job of this, I must say)?
- Can we create environments that will help us parallelize our tests so we know more quickly if we have created an instability (oh, cloud virtualization, you really can be amazing at times)?
- Can we create a battery of repetitive and data driven checks that will help us see if we have an end to end problem (the answer is yes, but likely not on the cheap; it will take real effort, time and coding chops to pull off, and it will need to be maintained. A small sketch follows this list)?
- Can we follow along and put our eyes into areas we might not think to go on our own in interesting states (yes, we can create scripts that allow us to do exactly this, they are referred to as "taxis" or "slideshows", but again, they take time and effort to produce)?
- Can we set up sessions where we can create specific charters for exploration (the answer is yes, absolutely we can)?
- Are there numerous "ilities" we can look at (such as usability, accessibility, connect-ability, secure-ability)?
- Can we consider load, performance, security, negative, environmental, and other aspects that frequently get the short end of things?
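
As a concrete illustration of the "data driven checks" item above, here is a minimal sketch using pytest's parametrize feature. The parse_price function and the data values are toy stand-ins; point a check like this at whatever end-to-end entry point fits your product, and grow the data table as new cases turn up.

```python
# A minimal data-driven check: one test function, many data rows.
import pytest

def parse_price(raw: str) -> int:
    """Toy stand-in: convert a price string like '19.99' into integer cents."""
    dollars, _, cents = raw.partition(".")
    return int(dollars) * 100 + int((cents + "00")[:2])

PRICE_CASES = [
    ("19.99", 1999),           # typical value
    ("0.00", 0),               # boundary: zero
    ("7", 700),                # no decimal point at all
    ("1000000.01", 100000001), # large value
]

@pytest.mark.parametrize("raw,expected_cents", PRICE_CASES)
def test_parse_price(raw, expected_cents):
    assert parse_price(raw) == expected_cents
```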


Even with all of that, and even with the most dedicated, mindful, enthusiastic, exploratory minded testers that you can find, we still won't ferret out everything. Having said that, if we actually do focus on all these things early on, and we actually do involve the entire development team, then I think we will be amazed at what we do find and how we deal with them. It will take a team firing on all cylinders, and it will also take focus and determination, a willingness to work through what will likely be frustrating setbacks, lots of discoveries and a reality that, no matter how hard we try, we can't fix all issues and still remain viable in the market. 


We have to pick and choose, we have to be cautious in what we take on and what we promise, and we have to realize that time and money slide in opposite directions. We can save time by spending money, and we can save money by spending time. In both circumstances, opportunity will not sit still, and we have to do all we can to somehow hit a moving target. We can advocate for what we feel is important, sure, but we can't "have it all". No one can. No one ever has. We have to make tradeoffs all the time, and sometimes, we have to know which areas are "good enough" or which areas we can punt on and fight another day.


Bottom Line:


No matter how hard we try, no matter how much we put into it, we will not find everything that needs to be found, and we will never fix everything that needs to be fixed. We will have plenty to keep us busy and focused even with those realities, so the best suggestion I can make is "make the issues we find count, and maximize the odds that they will be seen as important". Use the methods I suggested many posts back as relates to RIMGEA, and try to see if many small issues might add up to one really big issue. Beyond that, like Mauri said at the beginning, just keep trying, and just keep getting better.



BAST Meet-Up To Go Live Monday, October 28, 2013

For those who read this and are residents here in the San Francisco Bay Area, and are software testers specifically, this post is for you. If you don't live in the Bay Area, but know software testers (or otherwise interested parties) that do live in the Bay Area, then this post is for you, too!

Curtis Stuehrenberg and I have gone back and forth over the past couple of years to see if we could find ways to engage the broader Bay Area software testing community, and to that end, we created the Bay Area Software Testers (BAST) group on LinkedIn. Now, thanks to a fortuitous visit from Software Testing Club's Rosie Sherry to San Francisco (and a prod by Josh Meier via Twitter that we Bay Area folks should do something while Rosie was in town), Curtis and I decided it would be fun to hold a Meetup while she was here. More to the point, we decided that this Meetup would be a great opportunity to launch the BAST Meetup group. So that's what we are doing :).

First things first. Curtis works at The Climate Corporation, and they have graciously offered to provide the space for this Meetup. They have also stated that they would spring for dinner and drinks if we could get a large enough RSVP list. Yes, I am appealing to all TESTHEAD readers! If you are in the San Francisco Bay Area and can come out to our first Meetup, we would love to have you there. If you know software testers in the Bay Area, please get this notice to them to join the BAST Meetup group, and more to the point, come out and join us.

Details for the event are below, please RSVP on the Meet-Up site.



Software Testing Club in San Francisco
Monday, October 28, 2013, 6:00 PM

The Climate Corporation
201 3rd Street #1100, San Francisco, CA (cross street Howard)


Rosie Sherry, the founder of London's wildly popular and now infamous Software Testing Club, is in town! It's last minute but we thought this might be the perfect opportunity to officially launch the Bay Area Software Testers (BAST) Meet-Up group. Either myself or Michael Larsen (http://www.mkltesthead.com) or both will be giving a lightning talk on a subject to be announced at the event.

If we get enough RSVPs by the morning of the event, our hosts have graciously agreed to kick in for beer and wine. If we get a bunch of people showing up I've also heard dinner might be provided as well! If that doesn't work out we're planning on adjourning next door to the Thirsty Bear for drinks and socializing.

Michael assures me they have been warned (and yes, they have been :) ).

Thursday, October 24, 2013

Try to Find Problems and Victims: 99 Ways Workshop #95

The Software Testing Club recently put out an eBook called "99 Things You Can Do to Become a Better Tester". Some of them are really general and vague. Some of them are remarkably specific.


My goal for the next few weeks is to take the "99 Things" book and see if I can put my own personal spin on each of them, and make a personal workshop out of each of the suggestions. 


Suggestion #95: Do not try to find errors or bugs - try to find problems and victims. Testing is more than checking. - Thomas Lattner

Wow, that's a provocative statement, and it's one I agree with 100%. One of the things that happens a lot is that we end up couching problems in safe words like "issues" and "glitches". We don't talk about "problems" or "bugs" or "catastrophic failures" because they might have the potential to hurt someone's feelings. We are encouraged to report dispassionately, to stick with facts, and to couch words in ways that are not inflammatory.


For the long term survival of our jobs and sanity, that may well be appropriate. However, when we test, drop the polite shtick. As Steve Martin put it many years ago, "Comedy is Not Pretty". Neither is testing, if it's done right. We don't have to speak to other people like murderous pirates, but we sure had better treat the code we hope to test that way, otherwise we will miss something important.



Workshop #95: Exercise the product you are testing with an air of "what's the worst thing I could do here?" Think of a person that could be harmed, and how. Do your absolutely diabolical worst to see if you could expose the most sensitive aspects of their data and exploit it. Then write it up (in whatever dispassionate verbiage you choose) and lay it on the table. See who stands up and reacts.

Yeah, these are getting a little more tricky to write here at the end, I will admit it. I already feel like I've said this several times in other workshops, but I'll say it again here. We do our best persuading when we can make the bugs we find personal, when the programmers and product team can most directly empathize with the pain. Therefore, set yourself up so you can really bring the pain (or give it your all trying).

- Create a persona.
- Make it as detailed and as data rich as you can.
- Give this person a back story, and as much "dirt" as you would want to keep hidden.
- Then do everything you can to expose that dirt (or have one of your teammates try to do it).

Some of you are saying I'm taking "bug hunting" and creating an inversion. In a way, yes, that's exactly what I am doing. I'm approaching the application from the angle that I want to learn all I can about that person, and I want to do so against whatever restraints I can configure. The more restraints, the more aggressively I want to try to overcome them.

Can I determine a password? 
Can I send HTTP requests that will return raw data to me in clear text? 
Can I get to their credit card information? (A rough sketch of a check along these lines follows below.) 
Can I order things on their behalf? 
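
To show what hunting for that kind of exposure might look like, here is a rough sketch using Python and the requests library. The endpoint URL, the persona "secrets", and the lack of any real authentication handling are all hypothetical; substitute your application's actual pages and the data you seeded for your persona.

```python
# Does the app ever hand persona data back over plain HTTP, or echo it
# unredacted in a response body? A crude first-pass check, not a full audit.
import requests

PERSONA_SECRETS = ["4111111111111111", "hunter2", "123-45-6789"]  # card, password, SSN

def leaked_secrets(url, cookies=None):
    resp = requests.get(url, cookies=cookies, allow_redirects=True, timeout=10)
    leaks = []
    if resp.url.startswith("http://"):          # ended up on an unencrypted page
        leaks.append("served over plain HTTP")
    for secret in PERSONA_SECRETS:
        if secret in resp.text:                 # secret visible in the raw response
            leaks.append(f"response body contains {secret!r}")
    return leaks

print(leaked_secrets("http://example.test/account/profile"))  # hypothetical endpoint
```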

One of the oft heard phrases is "well, no user would do that". They are right; no normal user we have ever envisioned, the kind who would be friendly to our product, would aim to do such things. We're not testing for nice people. We are testing to help us thwart truly rotten, obnoxious and dangerous people. If we can put a human face and emotion to the issues we find, we will get much more attention than if we have some abstract, corner-case-feeling bug. Think about it, what's going to draw your attention more, someone saying: 

"in this obscure case, where I entered in the same password 700 times, I was able to throw an exception and leave the machine in a bad state" 

or 

"hey, check this out! Running this looped bash script and cURL, I was able to clobber the machine, get to the database prompt, and I now have the credit card information of all our customers. Yee Haw, who wants a Harley?!!" 

Bottom Line:

Sometimes it's a little too easy to get into the mindset of focusing on the features we are testing, while neglecting the risks that might reside around those features (and yes, that's a point I made in another post a few days ago). For many, though, as Roland Orzabal* so aptly put it in "Goodnight Song", "Nothing ever changes unless there's some pain." It's up to us to help our customers see the potential pain, and prove that it is possible. Not an abstract pain, but one that has a person's name, face and anguish behind it. The more you can make the hair stand on end for those discussing your findings, the better :).


* Tears For Fears, Elemental, 1993


Wednesday, October 23, 2013

Understand Your Domain (and Your Competitors) Fully: 99 Ways Workshop #94

The Software Testing Club recently put out an eBook called "99 Things You Can Do to Become a Better Tester". Some of them are really general and vague. Some of them are remarkably specific.


My goal for the next few weeks is to take the "99 Things" book and see if I can put my own personal spin on each of them, and make a personal workshop out of each of the suggestions. 


Suggestion #94: Understand your domain fully and as importantly your competitors - Stephen Blower


There's no question that having knowledge of your domain is critical if you want to be successful in whatever job you are doing. I used to believe that I was capable enough to test anything. Yes, I suffered from hubris when I was younger, but indeed, I believed that all I'd have to do was apply some testing maxims and I'd be able to take my testing knowledge anywhere. In superficial cases, that's true. Domain analysis and equivalence class partitioning are the same everywhere, but how you use them, when you use them, and whether they will actually tell you something relevant depends on the domain and your understanding of it. Put more bluntly, I don't care how great a game tester you are; if you want to apply at Genentech and work on their biomedical engineering initiatives, you had better know something about the world of pharmaceuticals, chemistry and biomedical engineering.


That covers the domain, sort of, but what do we do about understanding our competitors? Ahhh, now we can talk about something fun and entertaining, at least, if you do it the right way. 


Workshop #94: Round up some of the primary applications in your area of work and expertise: your own products and those of your competitors (demo or trial software is fine). Create a matrix of applications, and then set up the criteria you would like to compare. Walk through all of the applications with an eye towards dispassionately seeing how they stack up against each other (i.e. an application "shoot-out"). Gather your findings and analyze them. Determine which applications come out on top in the various categories and write up executive summaries. Share these findings with your development/product team.


I'll be frank, Competitive Analysis is fun, if you approach it with the right mindset. 


First and foremost, you have to make one simple commitment… check your personal biases, passions, and loyalties at the door. If you want to do competitive analysis, you have to be prepared for the fact that your product may not perform well. 


Second, you need to construct workflows that are representative of the way that your company's and your competitors' products interact with their users. Get creative with this, and see how many of these workflows you can create. Work through ways that you can quantitatively and qualitatively represent the interactions. 


Some examples can use benchmarking tools, or can use your automation tool of choice to run the systems through hundreds or thousands of looped tests. How did your app do? How did the competitor's app do? Can you record the details in such a way that, when you declare the winners and losers, you can do it based just on the numbers, and not on your knowledge of what's been entered? Note: this works best when a teammate assigns random identifiers to the platforms, so that you, as the evaluator, don't know which product is which. 

Quantitative reviews are fairly simple; if the specs and numbers are faster for one app vs. the other, it'll be there in the numbers. Qualitative reviews are more tricky. How do you rate user experience? What feels good vs. not so good? In qualitative reviews, language needs to be precise, and it needs to be consistent. Since you are comparing many different examples, you want to make sure that the level of your review, and the language you are using, is applied fairly. Collect the data from these activities in a database or, if you want to be old school, use a spreadsheet. See my earlier post about dashboards; competitive analysis is a fun place to play around with dashboards, because you are looking at lots of interesting data points that can be fiddled with.
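
For the "collect the data" step, here is one possible sketch, nothing more than an illustration with made-up apps, criteria, weights, and scores, of how an apps-by-criteria matrix could be rolled up into a simple weighted summary. Timing criteria are treated as "lower is better" and normalized before weighting.

```python
# Hypothetical competitive analysis matrix: apps x criteria, weighted roll-up.
WEIGHTS = {"import_speed_s": 0.4, "export_speed_s": 0.3, "ux_score_1_to_5": 0.3}
LOWER_IS_BETTER = {"import_speed_s", "export_speed_s"}

RESULTS = {
    "OurApp":      {"import_speed_s": 12.1, "export_speed_s": 8.4, "ux_score_1_to_5": 4},
    "CompetitorA": {"import_speed_s": 9.8,  "export_speed_s": 9.9, "ux_score_1_to_5": 3},
    "CompetitorB": {"import_speed_s": 15.0, "export_speed_s": 7.2, "ux_score_1_to_5": 5},
}

def normalized(value, column, lower_is_better):
    lo, hi = min(column), max(column)
    if hi == lo:
        return 1.0
    score = (value - lo) / (hi - lo)
    return 1.0 - score if lower_is_better else score

def summarize(results, weights):
    totals = {}
    for app, scores in results.items():
        total = 0.0
        for criterion, weight in weights.items():
            column = [r[criterion] for r in results.values()]
            total += weight * normalized(scores[criterion], column,
                                         criterion in LOWER_IS_BETTER)
        totals[app] = round(total, 3)
    return totals

print(summarize(RESULTS, WEIGHTS))  # higher total = stronger overall showing
```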

Doing competitive analysis is a great way to do a little bit of sleuthing and play detective with competitors' products, but it has a marvelous knock-on effect. Looking at a whole range of products in a similar product space will quickly help you become a domain expert, not just on the products, but on the business space as well. Back when I worked at Connectix, I did a lot of testing on all of the available virtualization options at the time, and by the time I got through with doing that, I had learned a tremendous amount about how virtualization was being used and who was using it for what. The more examples and workflows you compare, and the more products you cover, the deeper the domain knowledge you get to develop. It's cool how that works :).



Bottom Line:

Learning your domain is important. Learning how your competitors handle your domain is equally important. Going through the process of studying up on your competitors, how they do what they do, with a critical eye for what you can improve in the testing of your own products/applications, will give you an edge over testers who don't do this. Oh, and did I mention that it can be a lot of fun performing these product and workflow shoot-outs? Seriously, it is :).


Tuesday, October 22, 2013

Understand Your Business and Customer Needs: 99 Ways Workshop #93

The Software Testing Club recently put out an eBook called "99 Things You Can Do to Become a Better Tester". Some of them are really general and vague. Some of them are remarkably specific.


My goal for the next few weeks is to take the "99 Things" book and see if I can put my own personal spin on each of them, and make a personal workshop out of each of the suggestions. 


Suggestion #93: Understand your business and customer needs, not just the requirements - Mike Hendry


I was so tempted to put this into a combined post with the last one, because a lot of the suggestions would be the same. However, a funny thing happened on the way to throwing in the towel and saying "OK, this could use the same advice as the one that I just wrote". I participated in a conversation on Twitter with another tester who was frustrated with the fact that "meaningless metrics" were getting their team nowhere. While they were still being asked for measurements, there was a chance that management might be open to another conversation. In the process of that conversation, we came down to a piece of advice that I realized answered this suggestion pretty well. I love when things like that happen :).


Workshop #93: Sit down with your management/product team and have a frank discussion about risk. What are the real risks that your product faces? Get specific. Map out as many areas as you can think of. Identify key areas that would be embarrassing, debilitating, or even "life threatening" for the organization. Develop a testing strategy that focuses on one thing: mitigating those risks.


Most organizations couch testing in terms of coverage or bugs found. It's a metric, and it may even be a meaningful metric, but very often, it's not. "Coverage" is vague. What do we mean? Are we talking about statement coverage? Branch coverage? Usually, the term "test coverage" is used, and again, probe deeper and see if that word means what the speaker thinks it means. If someone asks whether "all possible test cases have been identified", we've got problems. At this point, it would be helpful to instruct and show that complete, exhaustive testing of all scenarios is impossible (the space of possible tests is, for practical purposes, infinite).
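
To make the statement vs. branch coverage distinction concrete, here is a small, hypothetical example. A single test that uses a matching discount code executes every statement in the function (100% statement coverage), yet the branch where the code does not match is never taken, so that behavior goes untested.

```python
def apply_discount(total, code):
    if code == "SAVE10":
        total = total * 0.9
    return round(total, 2)

def test_discount_applied():
    # Executes every statement above, so statement coverage reports 100%...
    assert apply_discount(100.0, "SAVE10") == 90.0

# ...but the "code does not match" branch is never exercised. A branch-aware
# coverage tool (e.g. coverage.py run with --branch) would flag it, and a
# second check is needed to cover it:
def test_no_discount_for_unknown_code():
    assert apply_discount(100.0, "BOGUS") == 100.0
```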


In most of the places I have worked, "risk" has not been directly assessed, and there are valid reasons for this. True and honest risk assessments are hard to do. They are subjective. Risk to who? Risk in what way? Under what circumstances would something we consider to be low risk become high risk?


Risk is not a sure thing. It's the probability that something could go wrong. The more something is used, the higher the probability that something will go wrong. Not all risks are weighted the same. Some risks are trivial and easily shrugged off (typos and cosmetic errors) because the "damage" is minor. Other risks are much more important, because the potential for damage is very high (an iFrame can open you up to cross-site scripting attacks). Risks are identifiable. They may or may not happen, but you have a handle on how they could happen. 


Here's a process you can use. The next time that you sit down for a story workshop (or whatever you refer to an initial exploration of a new feature idea and implementation), take the time to ask the following questions:


- What would be the downside if we didn't deliver this feature?
- What would be potential problems that would prevent us from implementing this feature?
- What other areas of the code will this feature associate with?
- In what capacity could we run into big trouble if something isn't configured or coded correctly?
- What are the performance implications? Could this new feature cause a big spike in requests?
- Is there a way that this new feature could be exploited, and cause damage to our product or customers?


Yes, I see some of you yawning out there. This is a blinding flash of the obvious, right? We all do this already when we design tests. I used to think the same thing… until I realized how much we were missing at the end of a project, when we opened it up to a much larger pool of participants. Then we saw issues that we hadn't really considered become big time bombs. We all started working on interrupts to fix and close the loop on areas that were now bigger issues. It's not that we hadn't tested (we had, and we did lots of testing), but we had placed too much of our focus on areas that were lower risk, and not enough on areas that were higher risk. Our test strategy had been feature based, instead of risk based.


A risk analysis should consider a variety of areas, such as:


- defects in the features themselves and customer reaction to those defects
- performance of the feature under high load and many concurrent users
- overall usability of the feature and the user experience
- how difficult will it be to make changes or adapt this feature based on feedback
- what could happen if this feature were to leak information that could allow a hacker to exploit it.


Each of these areas is a risk, but they do not all carry the same weight. At any given time, these risks can change in weight based on the feature, platform and audience. Security may be a middle-weighted issue for an internal-only app, but much more heavily weighted for a public facing app. Performance is always a risk, but have we considered particular times and volumes (healthcare.gov being a perfect recent example)?


Additionally, identify features that are critical to the success of a project, their visibility to users, frequency of use, and whether or not there are multiple ways to accomplish a task or only one way. The more an audience depends on a feature, the greater the risk, and the more focused the testing needs to be. Likewise, if the item in question is infrequently used, is not visible to a large audience, and there are other avenues or workarounds, then the risk is lower, and the need for voluminous tests lessened. 


Ideally, we would spend most of our time in areas that are high risk, and very little time in areas with little to no risk. Making that small shift in thinking can radically alter the landscape of your test cases, test coverage and focus.
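
One lightweight way to make that shift visible is to score each feature or area on likelihood, impact, and how heavily it is used, then sort by the product. The features and 1-to-5 ratings below are hypothetical; the real value is in the conversation that produces the numbers, not the numbers themselves.

```python
# A minimal risk-scoring sketch: likelihood x impact x usage, highest first.
# Features and ratings (1 = low, 5 = high) are hypothetical examples.
features = [
    {"name": "checkout payment flow", "likelihood": 3, "impact": 5, "usage": 5},
    {"name": "password reset",        "likelihood": 2, "impact": 4, "usage": 3},
    {"name": "profile avatar upload", "likelihood": 3, "impact": 2, "usage": 2},
    {"name": "admin CSV export",      "likelihood": 4, "impact": 3, "usage": 1},
]

for f in features:
    f["risk"] = f["likelihood"] * f["impact"] * f["usage"]

for f in sorted(features, key=lambda f: f["risk"], reverse=True):
    print(f"{f['risk']:>4}  {f['name']}")   # spend testing time from the top down
```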


Also, we need to watch out for another insidious risk factor that can change this equation and balance, and that's time pressure. If we have to compress a schedule, we radically alter the risk profile. Issues that are relatively low risk given enough time to test become much higher risk when there is time pressure and a "death march" mentality to getting the product out the door.


Bottom Line:

Everyone's risks are going to be different. Every organization has different fears that keep them up at night. Making a check list to consider every potential risk would be pointless, but a framework that allows us to examine the risks that are most relevant to our product owners, company and customers will help us to set priorities that are relevant, and place our efforts where they will have the most potential impact. 


Again, risk isn't black and white. It might happen. We might do something that could cause us damage down the road. We might be performing lots of "cover our butt" tests that, really, have a very low likelihood of actually happening in the wild, while missing important areas that have a much higher chance of occurring. Shift the conversation. Move away from "how many tests have we performed" to "how many risks have we mitigated, and are we focused on mitigating the right risks?"


Friday, October 18, 2013

Hang Out With Developers, Designers, Managers: 99 Ways Workshop #92


The Software Testing Club recently put out an eBook called "99 Things You Can Do to Become a Better Tester". Some of them are really general and vague. Some of them are remarkably specific.


My goal for the next few weeks is to take the "99 Things" book and see if I can put my own personal spin on each of them, and make a personal workshop out of each of the suggestions. 


Suggestion #92: Hang out with developers, designers, managers - Rosie Sherry


This is a great piece of advice in general. We learn a great deal about our role as testers when we have a better understanding of what other people do in their organizations and roles. What matters to them, what pressures they face, the issues that are unique to their spheres, all of these can give us a better view and frame of mind to help us approach how we test. 


Beyond that, though, is the potential for knowledge transfer and skill acquisition. Ever wonder why certain tags and attributes are used on the web site you test? You could look up those tags, or you could ask your front end developers why they choose to use what they do. Ever wonder why certain features seem to outweigh others, but you can't understand the rhyme or reason as to why? Talk with the project and product managers and see what they are dealing with. If you are interested in learning a new programming language, you could read up online about it, or you could go attend a Meet-Up related to that language, meet some people who attend, and ask them what makes it a good choice (or not, believe me, programmers are happy to share both the good and bad about what they deal with).


Workshop #92: Make a plan to do several "gemba walks". Each of them should be in a different place in your organization, or in a different aspect of what you want to learn more about. Attend team meetings, talk to individuals, join and attend meet-ups related to the skills and areas (UX, programming, management, lean coffee, etc.). Try to see how what you learn from these experiences can add to your ability to test more effectively.


As I mentioned in a previous entry, "gemba" is a Japanese term that means "where the action is". Therefore, the best chance for you to learn what developers, designers and managers know, need, fear, and aspire to is to go where they actually do their work. Some basic ways to do this are as follows:


Pair with a programmer regularly if the opportunity arises, or make opportunities that are beneficial to both you and the programmer. One great way to do this is to see if you can focus on a story together in the early stages of it being programmed, or to collaborate on automated tests you have decided need to be created.


If you want to see what is important to project managers, ask to get on their calendars as they are looking to scope out new features that have been requested. Offer to sit with them and brainstorm approaches and ideas, letting them lead the way while you ask them questions. As they are scoping out ideas for features, you can also make suggestions that will help with testability, or give some suggestions that will help broaden (or narrow) the focus of a feature so that it can be more easily implemented, described, and yes, tested :).


Find out who the managers are in various capacities in your organization, and make the point that you would like to learn more from them as to what is important for organizational success. This doesn't have to be a formal meeting, it could be a lunch break, or a visit to a local pub after work, or in some other capacity. The point is, get to know what these managers are doing, and why they need what they do to be successful. Be open and honest about your intentions. Also, be persistent, and be the one to make the invitation, and do it regularly. You may notice at first that "oh thanks, but I'm too busy" is often the first response. That's OK, but follow up and ask again, and again. Usually, the response will be "Oh, OK, you're serious about this. Um, sure, how about Friday for lunch?" 


Barring any of these (it's possible that you work remotely or in a way that you can't directly access the people you'd like to talk to), go the Meet-Up route. I am blessed to live in an area that has an embarrassment of riches when it comes to structured meet-ups of every imaginable type. Sure, getting together with my team of developers is effective, but if none of them are working with the language I'm interested in learning more about, then going external is necessary. Same goes if I want to learn more about Lean Startups, mobile applications, User Experience, or software management. Another benefit to the Meet-Up approach is that you can see what other people value in their organizations, and see how other people tackle problems that may inform the ones that you have. 


Bottom Line:


We can learn a lot by reading, experimenting, and trying things out on our own, but ultimately, our success depends on the organization as a whole being successful. The better we understand the unique challenges other groups face, the better equipped we will be to target our testing to the areas that carry the greatest risk (i.e. those areas that each group feels are the greatest risks to them). By getting a feel for and an understanding of what each group does, actively tries to accomplish, and yes, even secretly or not so secretly fears, it is possible to focus our attention and energies on the areas that really matter to a broad and diverse set of stakeholders.

Thursday, October 17, 2013

Stop Following Test Scripts and Think: 99 Ways Workshop #91

The Software Testing Club recently put out an eBook called "99 Things You Can Do to Become a Better Tester". Some of them are really general and vague. Some of them are remarkably specific.


My goal for the next few weeks is to take the "99 Things" book and see if I can put my own personal spin on each of them, and make a personal workshop out of each of the suggestions. 


Suggestion #91: Stop following test scripts and think - Stephen Blower


One of my great laments of my earlier software testing career was the fact that we had regimented test plans that had to be explicitly spelled out, supposedly followed to the letter, and repeated with each new version of the software. These were large undertakings, and they often would result in several rounds of review and complaints of "this is not detailed enough".


I would go into our document management system and see other test plans, and very often, I would copy their format or their approach, which was, effectively, to take a requirement from the specification and add words like "confirm", "validate", "ensure", or "determine"; in other words, to turn a requirement into a "test case" with the least amount of effort possible. When I did this, I got my test plans approved fairly quickly.


My lament over this is the fact that, even though we did a lot of documentation, we rarely followed these tests in this manner, and most of the interesting things we found were not actually found by following these as defined tests. Don't get me wrong, there is a time and a place for having a checklist. When I was working for a game developer in the early 00s, the concept of the Technical Resource Checklist (TRC) was mandatory, and the test cases needed were extremely specific (such as: a title screen cannot display for longer than 60 seconds without transitioning to a cut-scene video). Sure, in those cases, those checks are important, they need to meet the set requirements, and they have to be confirmed. Fortunately (or unfortunately, depending on your point of view) most software doesn't have that level of specificity. It needs to meet a wide variety of conditions, and most of them will simply not fit into a nice step-by-step recipe that can be followed every time and still turn up interesting things.


Workshop #91: Take a current "scripted" test plan that you may have, and pull out several key sentences that describe what the test should do. Create a mission and charters based on those sentences. From there, set a time limit (say, 30 minutes) and explore the application using those charters as guidelines.


This is going to look like an advertisement for Session Based Test Management. That's because it is. Based on the writing of James and Jon Bach, as well as tools like Rapid Reporter and others, I have become a big fan of this approach and consider it a valuable alternative to scripted, ironclad test cases. In truth, I don't care how many test steps you have devised. What I care about is that the requirements for a given story have been examined, explored, and considered, and that we, as testers, can say "yes, we look good here" or "no, we have issues we need to examine".


In my current assignment at Socialtext, we use a story template that provides acceptance criteria, and those criteria can be brief or voluminous, depending on the feature and the scope. We also use a Kanban system and practice what is called "one piece flow" when it comes to stories. To this end, every acceptance criterion becomes a charter, and how I test it is left up to me or another tester. Given this approach, I will typically do the following...


I create data sets that are "meaningful" to me, and that other members of my team can easily interpret should I pass them along. I make them frequently, and I structure them around stories or ideas I'm already familiar with. Currently, I maintain a group of details that originated in the manga/anime series "Attack on Titan" (Shingeki no Kyojin). Why would I use such a construct? Because I know every character, I know where they are from, what their "motivations" are, and where I would expect to see them. If someone in this "meta-verse" shows up somewhere I don't expect to see them, that cues me in on areas I need to look at more closely. I love using casts with intertwining relationships. To that end, I have data sets built around "Neon Genesis Evangelion", "Fullmetal Alchemist", "Ghost in the Shell" and the aforementioned "Attack on Titan".
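

For anyone who wants to try the idea with their own cast, here is a minimal sketch (in Python, purely as an illustration; the field names and the "personas.csv" file name are mine, not anything my team actually uses) of capturing a small persona-style data set as a CSV file that can then be loaded into the application under test:


import csv

# A tiny persona-style data set. The characters are from "Attack on Titan";
# the fields (name, role, expected_branch) are hypothetical placeholders --
# swap in whatever attributes your application actually cares about.
personas = [
    {"name": "Eren Yeager",     "role": "Scout",     "expected_branch": "Survey Corps"},
    {"name": "Mikasa Ackerman", "role": "Scout",     "expected_branch": "Survey Corps"},
    {"name": "Armin Arlert",    "role": "Tactician", "expected_branch": "Survey Corps"},
    {"name": "Annie Leonhart",  "role": "Cadet",     "expected_branch": "Military Police"},
]

with open("personas.csv", "w", newline="") as fh:
    writer = csv.DictWriter(fh, fieldnames=["name", "role", "expected_branch"])
    writer.writeheader()
    writer.writerows(personas)


The payoff comes later: if "Annie Leonhart" ever shows up under "Survey Corps" in the application, I know immediately that something is off, because I know the cast.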


I load this data, and instead of just reading the requirements dry, I ask "what would a particular character do in this situation?" This takes persona information to levels that might not have been intended, but I find it very helpful, since I can get closer to putting some sense of a personality and back story to my users, even if the back story in this case may be really out there.


The charter is the guiding principle, and with it, so is the clock. I want to be focused on a particular area, and I want to see what I can find in just that time period. Sometimes I find very little, and sometimes I get all sorts of interesting little areas to check out. For me, having lots of questions at the end of a session is a great feeling, because it means I can spin out more charters and more sessions. If I finish a session with nothing else to consider, I tend to be worried. It means I'm either being way too specific, or I'm disengaging my brain.


Using a simple note-taking system, I try to track where I've been, or if I want to be particularly carefree, I'll use a screen-capture program so that I can go back and review where I went. Barring that, I use a note-taking tool like Rapid Reporter, so that I can talk through what I am actually doing and think of other avenues I want to look at. Yeah, I know, it sounds like I'm writing the test cases as I go, right? Well, yes! Exactly right, but there is a difference. Instead of predetermining the paths I'm going to follow, I write down areas where I feel prompted to poke around, without forcing myself to follow a pre-determined path. The benefit of this approach is that I can go back and have a great record of where I've been, what I considered, and what turned out to be dead ends. Often, this turns out to be more detailed and to cover more ground than if I had tried to spell out all the test cases I could think of up front.
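

Rapid Reporter is the tool I actually reach for, but if you want to see how little machinery the basic idea requires, here is a bare-bones stand-in (Python, with a made-up file name and tags) that just appends timestamped, tagged notes to a text file so the session leaves a reviewable trail:


from datetime import datetime

def note(text, tag="NOTE", path="session-notes.txt"):
    """Append one timestamped, tagged line to the session log."""
    stamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    with open(path, "a") as fh:
        fh.write(f"{stamp} [{tag}] {text}\n")

# Example of what a few minutes of a session might produce.
note("Explore login error handling using the Titan personas", tag="CHARTER")
note("Mikasa's profile page drops her expected group -- investigate", tag="BUG?")
note("Password reset flow untouched this session", tag="FOLLOW-UP")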


Bottom Line:


Whatever approach you use, however you make it or apply it, the goal is not to follow my recommendations (but hey, if you like them, give them a try). The real goal is to see how you can guide your own learning about the application, and how that learning can inform current and future testing. You may find that your management doesn't approve of this approach at first, so my recommendation is, don't tell them. Just do it. If it works for you, and you can get better testing, find more interesting problems, and be more engaged and focused, trust me, they will come to you and ask "hey, what are you doing?!" They won't be asking accusatorially; they'll be genuinely wondering what you are doing and why you are finding what you do.


What Does Endorsing Me Mean to You?

This has been an interesting experiment. For those who don't know what I am referring to, LinkedIn has an option where people can "endorse" you for skills or attributes that you choose to post. I've given a few of these out to people I know or have interacted with, and I've received quite a few as well.

What's been interesting is to see exactly what I have been endorsed for. The top endorsement? Test Automation. Seems rational on the surface, except for the fact that, if you actually read my blog or my articles, you don't see a lot of "expertise" in those posts. Mostly, what you see is a fair amount of frustration, a lot of questions, and some true confessions about how I struggle with programming. Based on this, are people endorsing my abilities with test automation, or are they endorsing the fact that I am honest about the trade-offs between what test automation promises and what it can actually deliver?

Other areas that have me baffled are the tools I get mentioned for. Selenium is a tool I actively use, and I attend and report on both the San Francisco and San Jose Selenium Meetup groups' sessions, so that one makes sense. Getting endorsements for HP Quality Center or QTP doesn't make any sense at all (true confession time: I have never seen a single screen of either application in my entire testing career).

I get what the option is there for; it's a way to give kudos to people without having to go into depth. It's LinkedIn's version of the "Like" button. It's easy to click, and yes, it does look really cool when you can see a wall of icons next to a skill, or even a 99+ next to a particular item. I am grateful to those who have posted these, and I appreciate your willingness to support and back me, but it leaves me with a question... why that skill? What is it about that particular one that makes you willing to click the button that says "Does Michael Larsen know about Agile Methodologies?", or one of the other ones? Example: I spend a lot of time and advocacy on teaching and coaching, and I actively write, a lot, about those areas, yet they sit way down the list, while skills I have far less to show for are rated much higher.

This brings me back to what I try to do with these listings. I want to have some level of interaction with you on the topic in question. That interaction could be an email thread, a Twitter run, a collaboration on an article, participating in a Weekend Testing event or a BBST class. The venue doesn't matter, but generally, I need to have witnessed your skills. If I can't come up with a concrete place where I've seen you demonstrate said skill, I can't in good conscience endorse you for it. I'd ask that you do the same. Again, I appreciate the good vibes from all, but part of me really wants to know why you feel I deserve that thumbs up. If you can do that, then please, feel free to vote me up on any skill you want to. Likewise, if you see my picture next to a skill you've listed, know that that is what I am doing for you :).

Wednesday, October 16, 2013

Become an Excel Power User: 99 Ways Workshop #90

The Software Testing Club recently put out an eBook called "99 Things You Can Do to Become a Better Tester". Some of them are really general and vague. Some of them are remarkably specific.


My goal for the next few weeks is to take the "99 Things" book and see if I can put my own personal spin on each of them, and make a personal workshop out of each of the suggestions. 


Suggestion #90: Become an Excel power user. Functions, logic and conditional formatting can all be used as powerful analysis and test tools.


This has actually proven to be the hardest "workshop" to create, mainly because (true confession time) I have not used Excel as a "testing tool" for over three years. Before that, I used Excel religiously for storing all of my test cases and ticking off pass/fail details.


Since I started working in "Agile" shops, the traditional "test case in Excel" model has ceased to be part of my daily activities. The closest I get these days to using Excel in any testing related capacity is when I use Rapid Reporter and save the output of the tests.


Having said that, I realize that there are many people who still work in environments where Excel is used frequently to construct and track test case execution. Barring anything else, it's a system that is portable, and the skills developed can be used anywhere, provided Excel (or another spreadsheet application) is the tool being used. While it's a bit "old school" now, one of my favorite uses of Excel is to create a low-tech "dashboard" for a given project. That, however, also requires a bit of creativity and manipulation, which is not easy to cover in a 1000 word blog post.


Additionally, just to make life more interesting, my PC bricked in the process of working on this (oh, you lovely "BOOTMGR is missing" failure, you pick the best times to happen, don't you :p?).


Workshop #90: Use Microsoft Excel to create a low-tech dashboard to help you see progress, and moreover, print out a visual summary for your group/team meetings. Take advantage of the formatting options available to help keep the message focused on the data and the analysis, rather than on features or eye candy.


Dashboards were all the rage a few years ago, and being able to make them in Excel seemed to be a special kind of "Holy Grail". While they don't seem to be as hip and happening today (much has moved onto the web or into various plug-ins for tools like Pivotal or Jira), there's still a benefit to learning what it takes to make even a simple low-tech dashboard, because the skills needed to create a simple dashboard will also help considerably when it comes to learning analysis aspects of Excel (read: the ones that software testers may find helpful/interesting).


Dashboards tend to be three layers deep. The three layers are Data, Analysis, and Presentation.

The data layer is where you store whatever it is you are interested in analyzing. This is your raw info. It can be output from a number of test runs, a log file, or some other output stored in an easy-to-list manner (the .csv (comma separated values) format is ubiquitous and easy to set up). Since every organization will have some unique aspects to what it wants to keep track of, it won't make much sense to define something ironclad, but for the sake of a very simplistic example, let's go with a few fields:


TestNumber, Date/Time, BranchID, Platform, Browser, TestDescription, DevComplete, TestComplete, Comment



The biggest challenge is that most of the test data we would store, unless we are looking for things that are specifically numeric, doesn't actually map to raw numbers. Still, there are a lot of things we can look at and quantify, such as date/time, DevComplete and TestComplete.


Where I currently work, we typically don't look at PASS/FAIL counts; we look at acceptance criteria and when something is "done" or "accepted". Regardless of what you want to call it, a dashboard works much better when it has something to count based on a particular date and time. Also, formatting the data in a way that is easy to read and reference is vital. Flat data files or a tabular dataset are great for this purpose, and can easily be imported (and, more to the point, programmatically created ;) ).
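

As a sketch of what "programmatically created" might look like (Python again, with a hypothetical "results.csv" file name and made-up sample values; your harness will have its own), each test run can simply append a row using the fields listed above:


import csv
import os
from datetime import datetime

FIELDS = ["TestNumber", "Date/Time", "BranchID", "Platform", "Browser",
          "TestDescription", "DevComplete", "TestComplete", "Comment"]

def log_result(row, path="results.csv"):
    """Append one test-run record, writing the header row if the file is new."""
    new_file = not os.path.exists(path)
    with open(path, "a", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow(row)

# Hypothetical example row; substitute whatever your test harness emits.
log_result({
    "TestNumber": 101,
    "Date/Time": datetime.now().strftime("%Y-%m-%d %H:%M"),
    "BranchID": "feature/login-rework",
    "Platform": "OS X 10.8",
    "Browser": "Firefox 24",
    "TestDescription": "Login page meets acceptance criterion #3",
    "DevComplete": "Y",
    "TestComplete": "N",
    "Comment": "Layout breaks at 800px width",
})


The resulting file imports cleanly into an Excel data sheet (or a Google spreadsheet), which is all the data layer really needs.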


The second layer is analysis, and this is where formulas, macros and bits of VBA code tend to get called in to help shape the data you have entered or collected. By creating a second sheet for analysis, you can create areas that perform multiple calculations, such as:


- How many test cases were completed on a given date? 
- How long has a particular story been in flight? 
- Are there tell-tale signs of a section or a particular acceptance test that has been open for an extended period? 


A single report won't tell you this, but having a space where multiple calculations can be formatted and processed makes this much easier to handle. Note: this page can have a lot of calculations happening; it just depends on what you are looking to analyze. Here's where pivot tables can be extremely valuable, giving you lots of "what if" scenarios to model and experiment with.


There are a variety of quick formulas you can use to do these calculations: SUM, SUMIF, COUNT, COUNTIF, AVERAGE, VLOOKUP. By pointing them at a section of your data sheet, you can create smaller areas of analysis, and you can make as many of them as you want or need. What's the real benefit of these "analysis tables"? Other than being able to gather up a lot of information in a number of different ways, you can also condense down the information that is really important to you. By turning them into report or pivot tables, you can make them expandable as you add more data, and the biggest benefit… the charts and pretty stuff you display on the front (presentation) page will be pulled from the data in these analysis tables.
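

In Excel itself these roll-ups are COUNTIF/SUMIF formulas or a pivot table over the data sheet; for anyone who wants to prototype the same questions outside the spreadsheet first, here is a rough Python equivalent that reads the hypothetical results.csv from the earlier sketch and counts completed tests per day and per browser:


import csv
from collections import Counter

completed_per_day = Counter()
completed_per_browser = Counter()

with open("results.csv", newline="") as fh:
    for row in csv.DictReader(fh):
        if row["TestComplete"].strip().upper() == "Y":
            day = row["Date/Time"][:10]            # keep just the YYYY-MM-DD part
            completed_per_day[day] += 1
            completed_per_browser[row["Browser"]] += 1

for day, count in sorted(completed_per_day.items()):
    print(f"{day}: {count} tests completed")
for browser, count in completed_per_browser.most_common():
    print(f"{browser}: {count} tests completed")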


The third layer is presentation, and this is where the charts, graphs, gauges, colors and any variety of other things you would want to display can be seen. The recommendation here is to be spartan; display only what you need to tell your story.


While there are a lot of ways to add eye candy and make it look all spiffy and cool, focus on what will make your case and share the information you most want to emphasize. One of the best suggestions I've received: if you can orient the presentation layer so that it can be printed on a single piece of standard 8 1/2" x 11" paper, then you've probably got it right.
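

Excel's own charting is the natural fit here, but as a stand-in to show how little the presentation layer really needs, this sketch (matplotlib, reusing the completed_per_day totals from the analysis example above) produces one spartan bar chart that fits comfortably on a printed page:


import matplotlib.pyplot as plt

days = sorted(completed_per_day)
counts = [completed_per_day[d] for d in days]

fig, ax = plt.subplots(figsize=(8, 4))    # roughly half of a letter-sized page
ax.bar(days, counts)
ax.set_title("Tests completed per day")
ax.set_ylabel("Completed tests")
ax.tick_params(axis="x", rotation=45)
fig.tight_layout()
fig.savefig("dashboard.png", dpi=150)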


Bottom Line:


This is a huge topic. There are a lot of things that can be done, and far more space would be needed than a blog post can cover in one shot. There are several resources out there that can help with learning how to make dashboards effectively; one book that I have and like is "Excel 2007 Dashboards and Reports for Dummies". Don't knock the title, this is actually a very good book with a lot of great recommendations for playing with data and setting up a working data model.


Also, many of the options listed here don't necessarily require that you use Excel. You can do most of the same things in Google's spreadsheet program as well. How basic or fancy you want to get with this is entirely up to you. The point of doing this is to have some opportunity to use Excel (or another spreadsheet app) as what it should be: an analysis application. Use it to examine your data, and tell your story in the way that you choose. Of course, if you decide that you want to take the next step and put this information up on the web, the data model you use here will also work with a database and server-side scripts (but that's another post entirely ;) ).