Wednesday, July 31, 2013

Learn Another (Spoken and Written) Language: 99 Ways Workshop #25

The Software Testing Club recently put out an eBook called "99 Things You Can Do to Become a Better Tester". Some of them are really general and vague. Some of them are remarkably specific.


My goal for the next few weeks is to take the "99 Things" book and see if I can put my own personal spin on each of them, and make a personal workshop out of each of the suggestions.


Suggestion #25: For non-native English speakers: Improve your English. For native English speakers: Learn another language. - Stephan Kämper


When I was growing up, I had the opportunity to take various classes in school that focused on foreign languages. In elementary and middle school, I took a couple of years of Spanish. In high school, I took three years of German. During my adult life, I have, through various media interests, become eager to learn both Japanese and Korean (as spoken and written). 


To tell the truth, while I can hear a lot of these languages around me every day, and through various media experience Spanish, German, Japanese or Korean, and understand a fair bit of what I hear and read, I struggle to speak any of them in a way that doesn't come off sounding ridiculous. 


The reasons why I'm not better with these languages are many, but I believe it comes down to one fundamental issue: we learn, and remember, what we use and directly interact with. My Spanish and German would probably be a lot better today if I had more opportunities (and took advantage of the opportunities that I did have) to use those languages daily: hearing, reading, writing, but most of all actually speaking with other people. The fact is, I grew up in an area where, at the time, there were not many Spanish or German speakers. Even today, while I love watching Anime and K-Drama, the biggest hindrance so far has been having access to only a limited number of fluent speakers to interact with (who, to be fair, would also need to be willing and patient enough to interact with me ;) ).

Language acquisition is easiest when we are young, because we hear a language while our minds are making the mental map of our world. We are able to associate sounds and actions early on, and those become part of our everyday language. Our "Native Tongue" is easiest because it's where all of our formative experiences are associated. Later in life, as we try to learn a new language, we find that we struggle to make the same kinds of connections. I find myself actively translating what I hear, formulating what I want to say in English, then translating it again to say it back to the person I am speaking with. The tighter I can make that feedback loop, the more likely I will be to gain comfort and fluency in that language.


Workshop #25: Commit to Listening to, Reading, Writing and Speaking a Different Language


This workshop will not be easy, and it will not be something that can be accomplished in a short period of time. Anyone can make some progress, but to get genuinely good (i.e. fluent) could take years! While there are some software applications that can help with this (Rosetta Stone, etc.), and of course we could take language classes at a local college, I want to explore some low cost or no cost ways to do this.

The examples below are going to use Japanese because that is the language I'm currently focusing on. Anywhere you see Japanese, replace with the language of your choosing.

1. Find several books in Japanese and English (or online sites if you prefer), a translation dictionary, and some books (or sites) purely in Japanese (I've found that Manga works great for this).
2. A pad of paper and a comfortable pen. This is for me to practice regularly writing out Kana and Kanji characters. I say them out loud as I write them, and I speak out the words that they form as I do.
3. Movies, television shows and audio programs in Japanese (decades of love for Anime helps a lot here. If the option exists to toggle subtitles on or off, even better).
4. Some friends that speak Japanese fluently, and are willing to spend time talking to me. Seriously, this last one is crucial, and I know I have to be really nice to them. My plan is to buy them dinner or take them out for drinks… frequently :).


Using each of these tools, I then spend as much time as I feel comfortable listening to, reading, writing and speaking Japanese. Reading helps me see the flow of the words, and how they relate. Listening to dialogue helps me hear words in context as well as proper pronunciation. Writing things out helps me recognize words as I become more familiar with them (especially true with Kana/Kanji, since they bear no resemblance to my familiar Roman alphabet at all).


While all of these will be helpful, to really make it stick, having access to people who will take the time to talk to me in Japanese will be the biggest factor. Since I'm still on the early part of the learning curve, those people will need to be remarkably patient, and I will need to reward their patience and willingness to put up with me (I am totally serious about buying them dinner from time to time). Consistent speaking and varied conversation are, I feel, the most effective way to really learn a language, to be able to adapt and begin to "think" in that language. 

Bottom Line:


It's not enough to casually read or "get the gist of a language"; I will have to do enough, and be involved enough, that I can genuinely make it a part of my everyday interactions. Barring an opportunity to move to a location where I can be fully immersed in Japanese (I would love to move to Tokyo or Sapporo for a year or two, but that's just not practical, and my wife might strenuously object), the next best option is to utilize various media and interact with real people. All of this will have me regularly reading, writing, hearing and speaking Japanese. I wish you all good luck with the language you choose… and if any of you fall into the camp of wanting to improve your English, I'm happy to help where I can (I'll leave whether or not I'd be an acceptable coach as an exercise for the reader ;) ).

Follow Other Testers on Twitter, Share Ideas & Experiences, Get Feedback & Practice Your Debating Skills: 99 Ways Workshop #23 & #24

The Software Testing Club recently put out an eBook called "99 Things You Can Do to Become a Better Tester". Some of them are really general and vague. Some of them are remarkably specific.


My goal for the next few weeks is to take the "99 Things" book and see if I can put my own personal spin on each of them, and make a personal workshop out of each of the suggestions.


Suggestion #23: Follow other testers on Twitter - Stephan Kämper
Suggestion #24: Addition to Stephan's suggestion: Follow testers on Twitter is a good start but make sure you don't stop there. Twitter is a great way to share ideas and experiences, get feedback and practice your debating skills not to mention getting in contact with other passionate testers. - Erik Brickarp

These two likewise work together, so they are going to be presented as two workshops but with the same general goals. Posting two separate workshops as two separate posts would have me duplicating a lot of stuff.

I'm sure you have heard the jokes over the years. Twitter is where people go to talk about what they had for breakfast, or to post pictures of their latest stop at some trendy bar someplace. Yes, there's plenty of that out there, but there's an even more interesting aspect of Twitter that doesn't get much press. Twitter is (and I am not kidding when I say this) a 24/7 Software Programming and Testing conference.


Workshop #23: Take a Random Walk Through the Programmers and Testers Active on Twitter

For those who are not on Twitter yet, well… do something about that! 

Go to http://www.twitter.com, and set up an account for yourself. 

If you're already on Twitter but don't want to commit your main account to this experiment, create a new account that you will dedicate to software testing.

If you are comfortable blending a personal account into your testing explorations, hey, rock on. 

Twitter allows for a number of ways to find people. You can directly search for people by using their names or their Twitter IDs. 

Anyone who is interested in following me on Twitter is by all means welcome to do so (my Twitter ID is @mkltesthead). I currently Follow about 450 software programmers, software testers, and a handful of miscellaneous accounts that don't necessarily fall into either category.

Note:  My list of people I consider worth following may or may not be an ideal list for you. That's why I'm not going to specifically say "here's a list of twenty-five people you should follow on Twitter". Seriously, what fun would that be? 

Actually, if you want a short list based around software testers to get started with, Matt Heusser already compiled one, and he has 29 recommendations.

Workshop #24: Follow and start communicating with programmers and testers on topics that matter to you

Twitter is more than just posting blog update announcements and sharing links. Yes, many of us do a lot of that. I do a lot of that. I also engage with other programmers and testers, and communicate with them on topics that I find interesting. Sometimes I join in on debates between people I know and trust in the programming and testing spheres. Twitter is designed to allow for this easily. Additionally, you may find that these debates will introduce you to additional interesting people, and in turn, you may find value in their "signal". If you do, start following them, and see who they communicate with.

A variety of hashtags related to #testing can be seen, and rather than just give a list of them, I'll say follow various tweets and you'll see them for yourself. Clicking on a hashtag will aggregate the tweets that carry it, and will likewise help put you in touch with (and give you the opportunity to follow) people who are interested in the topics that interest you. Hashtag searches can also be saved, and people can be organized into lists, so you can categorize and prioritize tweets based on what most interests you.
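
If you like to tinker, this kind of aggregation can even be done programmatically. Here is a minimal sketch in Python using the tweepy library, assuming you have registered an app with Twitter and have your own credentials (the quoted key values below are placeholders, and Twitter's API details do change over time, so treat this as a sketch, not gospel):

import tweepy

# Placeholder credentials -- substitute values from your own Twitter developer account.
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth)

# Pull a page of recent tweets that carry the #testing hashtag.
for tweet in api.search(q="#testing", count=25):
    print(tweet.user.screen_name, "::", tweet.text)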

One word of advice: Twitter can quickly become an information firehose, and the intensity of the signal will rise the more people you follow. I personally limit my Twitter focus to what is in front of my face when I'm on at any given time, or I filter on a couple of lists that I have set up or follow, and review those when I have more time. Additionally, there is an option you can use called "Favorite". Use of this option varies from person to person, but I use it most frequently as a way to say "this looks really cool, I want to make sure to read about this later".


Bottom Line:


Twitter is quite the cool place if you are a programmer or a tester. You can learn a lot, you can discover a lot of interesting initiatives and opportunities. You can make some great friends and find collaborators. You can get involved with projects.  All of this, and much more, could be yours, but you'll need to get out there and participate.


Take the time to talk up that "famous programmer or tester", find out what they are thinking, discuss a thorny problem with them, see if you agree or disagree with their approach and suggestions. Politely debate with other testers and programmers (please, politely :) ). You may find that you quickly cultivate an amazing virtual programming and testing conference that runs 24/7 as well. Also, if you want to have a good starter list for Twitter that goes beyond Matt's, feel free to peruse the list of people I'm Following. The vast majority of them are just plain awesome ;).

Tuesday, July 30, 2013

Take the Association for Software Testing "Black Box Software Testing" Course(s): 99 Ways Workshop #22

The Software Testing Club recently put out an eBook called "99 Things You Can Do to Become a Better Tester". Some of them are really general and vague. Some of them are remarkably specific.


My goal for the next few weeks is to take the "99 Things" book and see if I can put my own personal spin on each of them, and make a personal workshop out of each of the suggestions.


Suggestion #22: Take the Association for Software Testing "Black Box Software Testing" course(s) - Stephan Kämper


Standard Disclaimer time... I have very much a self-serving interest in having people take the Black Box Software Testing (BBST) classes. For starters, I am (at the time of this writing) the Chair of the Education Special Interest Group with the Association for Software Testing. Long story short, I'm the administrative head of this initiative. Would I love to have everyone who reads this become members of AST and pay to take the classes? Absolutely! Is that going to happen? Highly unlikely! 

I really do think they are incredibly valuable, and I also think they are well worth the money charged for them. However, for some, money and time are genuine issues, and many want to know exactly what they are getting involved in before they plunk down a chunk of change to take a class.

Having said all that, I'll also make something else clear… every concept, every lecture, every study note, and every exercise and reading that is part of the BBST series of classes can be accessed for free, on your own time, at your own pace, as much or as little as you want. If that sounds interesting, then please see the workshop details below:


Workshop #22: Read through and practice all of the materials available at http://testingeducation.org/BBST/

I will make no bones about it. The BBST courses are tough. They ask a lot of participants. They cover a lot of ground in a short amount of time. Were I to make any one suggestion to anyone looking to take an actual BBST class as we offer it, I would say "spend some time reviewing all of the material at the BBST site. Watch all of the lectures. Read all of the readings. Consider all of the projects and labs. Practice all of them." Why? Because you will understand the level of thinking and involvement that goes into the courses.


What will you not get with the BBST site? You will not get instructors who will coach and guide you. You will not get the quizzes or final exam questions to consider. Most important, you will not get a group of participants to interact with, share your experiences with, and receive feedback from. In truth, that is what you are paying for when you sign on to take a BBST course through AST (or through anyone else). 

If you just want to peruse the materials and see what this "context-driven testing" stuff is all about, and the idea of taking a formal class doesn't appeal to you, that's fine. Read through the materials and learn that way. If interaction and sharing are high on your priority list, then having read through these materials first will give you a huge boost when the time comes to actually take the class. It's very likely that the individual experiences of the other participants will add considerably to what you've learned, but having the baseline understanding first will, undoubtedly, make the experience easier to follow and complete in the time allotted for each class.


Bottom Line:


I think the BBST classes have great value, and I am happy to have been a participant and an Instructor for all of them, and look forward to doing so for a long time. Other instructors feel the same way, and we are here and ready to teach if you want to join us. Even if you don't want a live class, please take the time to check out the materials and read what has been compiled over the years. 

If all you do is go through the materials on your own, and thoughtfully consider each of the lectures, lessons, labs, and readings, and take the time to work through each of the sections, I promise you will walk away with some great skills and a sense of better understanding where the context-driven approach of testing comes from. If you do choose to join us for the actual classes, then interactions with the other participants and their experiences (and yeah, a little boost from some live instructors) will help you learn a great deal more.

Book Review: The "A" Word

It's been a while since I've done a book review here, and I figured this might be a nice time to insert a small pause between the rash of "99 Things" posts that are coming, and will continue to be posted for quite a while yet.

The last time I reviewed a book by Alan Page, it was "How We Test Software at Microsoft". That "review" actually turned into a full synopsis of the book, over multiple posts, and Alan joked that I had provided "the most exhaustive and complete book review of all time". He also said that, should he write another book in the future, I could have a copy free of charge. Unfortunately, or fortunately, depending on how you look at it, I couldn't take him up on that, since the proceeds for this particular book are going to Cancer Research, and I've lost too many friends to Cancer. Therefore, I guess I'll have to wait for his next book to be published to accept that freebie. I gladly paid for this one.

So what's this new book of Alan's I'm talking about? It's called "The "A" Word: Under the Covers of Test Automation". As you might guess, the central theme of the book is... "automation". Not how to do automation. Not tools used in automation. Much higher, conceptual discussions about automation. Where are we doing it right? Where are we missing the mark? Why do so many test automation projects and initiatives fail? In short, this is a collection of short essays, most of which are already available on Alan's blog. The benefit of having them here is that they are structured to flow into one another progressively. Alan is aiming this book at those who are interested in having a discussion about what, how, and why we think about automation the way we do, and he shares his own experiences and philosophy in that regard.

One of the key themes that will be obvious in just the first few chapters is that test automation is abused and misused. Automation is not a time saver, and automation does not replace manual testing. If this is how we have been conditioned to think about automation, and we believe those two statements, Alan wants to make clear that we can, and must, do better than that.

Two thoughts that stick out in the first few pages, and beautifully encapsulate Alan's position, are as follows:



Humans fail when they don’t use automation to solve problems impossible or impractical for manual efforts.

Automation fails when it tries to do or verify something that’s more suited for a human evaluation.


Alan makes a very good case that automation should be used (needs to be used) to take care of "the boring parts", meaning the repetitive steps. If you were just to make a simple script to encapsulate five or so keyboard commands, you could get on with doing real work that matters instead of wasting time on needless repetition. The problem is that, for many of us, that mindset carries over to all of our automation efforts, and really, we can do better.
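
To make that concrete, here is a minimal sketch, in Python, of the kind of "boring parts" script Alan is describing. Every path and command in it is hypothetical; the point is simply that five repetitive manual steps collapse into one command:

import os
import shutil
import subprocess

# Hypothetical one-step reset of a test environment.
WORK_DIR = os.path.expanduser("~/testruns/scratch")

shutil.rmtree(WORK_DIR, ignore_errors=True)              # 1. clear old results
os.makedirs(WORK_DIR)                                    # 2. recreate the workspace
shutil.copy("fixtures/seed_data.db", WORK_DIR)           # 3. seed known test data
subprocess.check_call(["./app_under_test", "--reset-config"])  # 4. reset the app's config
subprocess.Popen(["./app_under_test", "--log-dir", WORK_DIR])  # 5. relaunch it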

One of my favorite chapters is "It's (probably) a Design Problem", and here Alan makes a great case as to why so many automation initiatives fail. This section is focused on why GUI automation is often a bad idea (note, often, not always) and lays out the case for where most of us fall short. While this is aimed at the shortcomings of GUI testing, the advice here is excellent for any automation project.

Alan's blogging style comes through in many of these posts. If you've ever heard Alan speak, every chapter rings with his voice and his mannerisms. It makes every section feel authentic, relevant and honest. He pokes fun at bad practices. He pokes fun at himself, but when he's in earnest, he's sharp, direct, and focused. He pulls no punches, and doesn't couch things in soft terms. For a direct example of this, check out "LOL - UR AUTOMASHUN SUCKZ!" No, seriously, do not skip this one. It's wonderful advice about how to get to test automation that, well, does not suck! As he puts it quite nicely in "Exploring Test Automation":


"...test design is far more holistic than thinking through a few sets of user tasks. You need to ask, “what’s really going on here?”, and “what do we really need to know?”. Automating a bunch of user tasks rarely answers those questions."
[...]
When I think of test automation, I don’t think of automating user tasks. I think, “How can I use the power of a computer to answer the testing questions I have?”, “How can I use the power of a computer to help me discover what I don’t know I don’t know?”, and “What scenarios do I need to investigate where using a computer is the only practical solution?”.



Alan makes a great case for the fact that test design is the most important part of all this, and the book focuses a lot on test design and understanding the real questions we want to have answered. Running manual tests, getting bored, then getting the computer to run our steps may be helpful in certain cases. Automation is wonderful for setting up environments, and for covering those areas that we know are, 100% of the time, a royal time suck if we don't use it. Using that as the basis for our test design, though, will leave us sorely lacking in tests that provide us any useful information, or help us learn anything new or interesting. Regression testing is fine, but there's so much more we can do, and should do, and we will not succeed unless we put some actual thought and effort into up-front test design.


As of this writing, the book is fairly brief. It's a total of 58 pages cover to cover. Do not think that means this is a "small" book. The information inside this volume is strong, focused and timeless. I like his take on things. I like where he comes from with his advice. I like that he is real and that he doesn't sugarcoat things. Test automation done well is HARD. Test design done well is HARD. For those who need to get a better handle on how to do it, you would do yourself a great service by getting The "A" Word and reading it cover to cover. As an added benefit, it's just straight up a fun read. When we're talking about software test automation... seriously, that's saying a lot!


Attend or Speak at Software Testing and Programming Conferences: 99 Ways Workshop #21

The Software Testing Club recently put out an eBook called "99 Things You Can Do to Become a Better Tester". Some of them are really general and vague. Some of them are remarkably specific.


My goal for the next few weeks is to take the "99 Things" book and see if I can put my own personal spin on each of them, and make a personal workshop out of each of the suggestions.


Suggestion #21: Attend or even better speak at Software Testing and Programming conferences (there are free/low cost ones that provide great value) - Stephan Kämper


This may well be the best kept secret out there, but yes, if you want to get into various software testing conferences (and developer conferences, too), there are numerous ways to get involved and participate so that you can attend either for free, or as close to free as possible. Most conferences rely on a cadre of volunteers to run. They need a quantity of paying attendees so that the conference breaks even or sees a profit, but that typically does not prevent conference organizers from looking to volunteers to help them make the conference a success. The key, though, is to be able to offer a talent or a skill that will make you a good candidate for acceptance as a volunteer. From my own experiences, I'm offering some ideas below.

Workshop #21: What Can I Do to Be a "Volunteer"?


Put simply, there are lots of avenues and opportunities where even basic skills can help tremendously. From my own experience I have:

- arrived early to stuff bags and have badges ready to go
- manned the front desk and given out badges and bags to attendees
- offered to be a "track coordinator" and introduce speakers, manage questions & answers, and collect surveys/questionnaires
- helped out with the A/V needs of a conference, whether it be doing live sound, recording audio, or video taping sessions
- offered to live blog or otherwise promote the conference (this has worked for me because I have an established blog and a reputation for posting live blog updates, but it's definitely worth asking if such a thing would be worthwhile to a particular venue)
- offered to do interviews and convert them into podcasts or available audio content for the conference organizers
- arranged several months in advance to assist with web content creation and management for the conference web site
- offered to do systems administration or other chores that the conference needs (registration, front end development & testing, content uploads, etc.)
- offered to review papers and presentations from speakers. This is a huge service, and one that is very often needed

and the final recommendation… offer to speak.

I saved that one for last for a specific reason; it's often the most difficult of the volunteering opportunities to fulfill. 

At a variety of conferences, getting picked to speak is a big deal, and there are more rejections than acceptances. Does that mean don't try? Of course not, but it does mean give some consideration as to where you are looking to speak. If you have never spoken at an International testing or programming conference, it might be hard to get an acceptance as a first time speaker. Often, regional or local conferences are a better bet for first time speakers. 

My first opportunity to speak at a conference came from the fact that I had established a reputation doing something first. My first two "conference talks" were both related to Weekend Testing, and both came about because I was asked to speak about my experiences facilitating sessions. The first opportunity came at CAST 2011, and would have been followed by speaking at PNSQC 2011, but a broken leg prevented that from happening. My original paper, though, was published by PNSQC, and that publication helped lay the groundwork for my presenting it at STAR EAST 2012 (along with a friend who read the paper, said "dang it, this needs to be presented", and championed me to the conference organizer). 

Those experiences made it possible for me to present papers at other conferences, to receive consideration from other conference committees, and to make additional presentations. In short, volunteer efforts get you known, those efforts help you develop experience reports you can share, and that sharing (and positive reviews) opens up other avenues to speak and present. 


Bottom Line:


Conferences are great opportunities to learn and interact, but they are also great opportunities to share your own experiences and develop a broader community. If you have the time and the energy to volunteer at the local or regional level, do so. Express your interest early, and offer in areas that are not glamorous. Show that you are reliable and want to be engaged. That enthusiasm is remembered, and you will find that you'll be on a short list of contacts the next time around. 



Speaking opportunities are often available to those willing to share what they have learned and the initiatives they are involved in. Topics do not need to be "The Next Great Revolution in Software Testing" and you do not need to be "The Greatest Rock Star Tester in the World" to be asked to speak. Everyday people have compelling stories and Lessons Learned that can help the industry. Be one of those voices, and be willing to work up to being one of those speakers… oh, and don't be shocked when you are offered more and more opportunities to participate. The testing world is small, and it's always surprising how many people know each other. Show that you are ready, willing and able to work to make a difference, and you'll be contacted for lots of opportunities, I can almost guarantee it :).

Monday, July 29, 2013

Learn From Other Testers' Mistakes, but Learn From Your Mistakes First: 99 Ways Workshop #19 and #20

The Software Testing Club recently put out an eBook called "99 Things You Can Do to Become a Better Tester". Some of them are really general and vague. Some of them are remarkably specific.


My goal for the next few weeks is to take the "99 Things" book and see if I can put my own personal spin on each of them, and make a personal workshop out of each of the suggestions.


Suggestion #19: Learn From Other Tester Mistakes - Mantas
Suggestion #20: True, but learn from your mistakes first :-) - Mauri Edo

There's no way that I can make the point that I want to without duplicating effort (well, I suppose I can, but I don't really want to). Therefore, #19 and #20 are going to be the same recommendation, but from two different perspectives. In both cases, I'm going to suggest the same thing.

The fact is, testers are human. We make mistakes. Note the emphasis: "we make mistakes". That goes for beginners and veterans. Long time testers are just as prone to fatigue, boredom, misunderstanding and impatience as anyone. However, we have an opportunity to do two things that will help us get better and learn how to be better focused going forward.

Workshop #19: Perform regular "Bug and Story Autopsies"

The bug database of any company is often filled with carcasses of unclaimed and unloved bugs. Think of it as a morgue where the remains of a deceased individual have never been claimed. At least with the stories and bugs that have been worked, resolved and closed, they have received proper burials. While there are plenty of opportunities to go there and see what could have been done better, a much better and more fertile place is the "morgue" or "the icebox" or whatever your organization calls the myriad list of bugs that have not risen to the level of "important enough to do something about".

Why do I suggest starting here? Because as testers, the most visible aspect of what we do is the bug report. We may not like that fact. We may wish that a broader interpretation of what we do is considered. Nevertheless, to most people, bugs make the tester, and bugs break the tester. That's how many people see our role and our value. Therefore, if we really want to "learn from our mistakes", the lowest hanging fruit is to be found in the bug database, and any bug that does not rise to the level of "we need to work on this".

Please understand, there are a lot of reasons why bugs do not get worked on. It could be a time limit. It could be a resource limit. It could be "out of the scope" of the development team's mission and goals. All of those are legitimate, and there will be bugs that fall into those categories. Instead, I'm asking you to focus on the bugs that have been filed and haven't been picked up that you really cannot put into those categories. If you can see the value and validity of them, yet don't understand why they haven't been picked up, it's possible that you have a chance to learn from and improve on the past. 

As in the advice given in Workshop #17, "Be Critical but Do Not Criticize", it's possible that we could do a much better job with selling the bug. Bugs need marketing campaigns, too, and for a campaign to be successful, there needs to be an emphasis on key areas. In the previous example, I suggested using the RIMGEA method to make sure that the bug gets the best shot at being seen for what it is, and what the potential impact can be. That advice will also work here, too.

Additionally, take the time to triage these areas regularly. If possible, have the test team look through this bug database and see which "top ten neglected issues" stick out. Winnow these issues down to see why the issue isn't being addressed. As you identify these "neglected" issues, see if the marketing campaign for these bugs is effective. Look over the marketing copy (i.e. how the bug is written). If you can improve it, do so. As you do this, report on the ten most underrated issues in the bug database at a weekly team meeting, and see if you can make some movement on getting these issues addressed. You won't be able to win all of the time, but you may be surprised at how many issues get second looks when we clarify the details and help others see why the issues are important.
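
If your tracker lets you export or query its data directly, even a tiny script can surface candidates for that "top ten neglected issues" review. Here is a minimal sketch in Python, assuming a SQLite export with a table and columns I have invented purely for illustration:

import sqlite3

# Hypothetical schema: bugs(id, title, status, last_updated)
conn = sqlite3.connect("bugtracker_export.db")
rows = conn.execute("""
    SELECT id, title, last_updated
    FROM bugs
    WHERE status = 'open'
    ORDER BY last_updated ASC  -- stalest bugs first
    LIMIT 10
""")
for bug_id, title, last_updated in rows:
    print(bug_id, last_updated, title)
conn.close()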

Workshop #20: If you haven't already, start a blog

For me personally, this has been a great way to examine my own mistakes and consider ways that I could do better. Too many people consider a blog a "mouthpiece of authority", and therefore think that to have a blog, you need to speak as though you are an expert. That's ridiculous. One of the most valuable kinds of writing for potential readers is "Lessons Learned": examples of where things didn't go well, a course was changed or a potential solution tried, and the outcome of those experiments. The fact is, most of us don't write about amazing successes. Usually, we write about areas that frustrate us, or confuse us, and how we have endeavored to clear up the confusion or errors. A lot of my blog posts are, on the surface, embarrassing. They show more of my deficiencies than they do my strengths, and that's wonderful! 

By sharing the errors of our ways, and providing a context for them and solutions that have worked for us, we show others that there are other ways of doing things. We show that a fair amount of trial and error is often necessary to accomplish certain goals. This also shows others out there that we are actively engaged and genuinely considering other options, and not just blindly doing the same thing or following the "best practice". Have you found yourself following a best practice and discovering that it's either not helpful or that it's actually detrimental to what you are doing? Other testers deserve to know about that! Many of my most successful blog posts have not been where I said "hey, I have this cool new idea". Usually it's "hey, I tried doing something, and it totally blew up on me, but here's what I discovered in the process." Maybe it's the tragic nature of humankind, or we just like seeing when others take their lumps, but there it is.

Writing a blog also helps out in another key way… it gets you in the habit of seeing if your ideas make sense to others. The feedback you receive through various avenues (rebuttal blog posts, comments, tweets and retweets, shares on Facebook, Tumblr, LinkedIn and Reddit, etc.) can all help you develop your ideas and write them better in the future. This has certainly been the case for me. A blog also acts as my "institutional memory" of things that I once believed to be true, but that on further investigation proved otherwise. That's important to know, and it's important to review where you have been and what you have subsequently learned, and to update and revise with new posts that compare to where you used to be. 

Key admonition: under no circumstances should you edit older posts to reflect a current mode of thinking. Typos and bad grammar fixes are fine. Wholesale changes of opinion or ideas should be handled in separate posts, and supplemented with a "Here's what I thought in 2010. Here's what I think in 2013. Here's why I think this update is needed."

Bottom Line:


Perfection does not exist, in software or anywhere else. We all make mistakes, and we all have the opportunity to learn from one another and get better at what we do. Even when something seems like an ironclad rule or principle, contexts can appear that will challenge the validity or soundness of those principles. Seek to understand where those areas might be. Improve on issues where you can. Learn from issues where you can. Most importantly, share what you learn with others. Be willing to be one who makes "break-throughs" and "breaks-with", no matter how embarrassing the journey. On one hand, you will learn a lot along the way. On the other, many other testers will be enlightened, and quite possibly entertained, by your travelogue. 

Create a MindMap: 99 Ways Workshop #18

The Software Testing Club recently put out an eBook called "99 Things You Can Do to Become a Better Tester". Some of them are really general and vague. Some of them are remarkably specific.


My goal for the next few weeks is to take the "99 Things" book and see if I can put my own personal spin on each of them, and make a personal workshop out of each of the suggestions.


Suggestion #18: Create a MindMap - Rosie Sherry


Here's a specific skill that can be worked on and achieved through a number of different means. A Mind Map (or mind mapping) is a physical way to capture a mental model and help organize thoughts and ideas.

Image courtesy the Wikipedia article on Mind Mapping


The idea behind a mind map is that you start from a central word or concept, and you branch out into words or concepts that associate with that initial element. Each of these nodes can, additionally, have sub-nodes that you can associate additional ideas and words with (and so on, and so on and so on).

Mind maps can be done on paper, on a white board, or with any drawing application on a computer or a mobile device. There are also dedicated mind mapping applications available. A benefit of these dedicated tools is that the user can expand or collapse branches and sub-branches, focusing on the areas that are most important at the given moment. One example of a free tool that is available for mind mapping (and the one that I use) is XMind. I'm referencing it for convenience's sake. There are plenty of others out there. Do a search for "Mind Mapping tools" and you will see what I mean.

Workshop #18: Practice using Mind Maps when approaching concepts or challenges


To get in the habit of using mind maps, the best way to do it is to start small. Think of something you would like to organize information around.

Start with a central idea or topic.

Example: create a central node around the term HICCUPPS

Then consider the various areas that you can branch off that idea. 

History
Image
Claims
Comparable products
User expectations
Product
Purpose
Statutes

From there, look at each node/branch. 

Could you add details for each of the terms?

What sub-branches could you create based on those terms?

Do some of the sub-branches relate to others? If so, make a representation that shows that they can be inter-related (usually by drawing a dotted line from one to another).

By the time you finish, you will have a relatively compact model with some key ideas in a small space. Yes, I'm deliberately not drawing an example map here, because I want each of you reading this to try it out for yourself, and try out ways that work best for you.

One benefit of using a software tool that is designed for making mind maps is that you can make truly large maps that can capture a lot of information in a single place. Having the ability to collapse or expand nodes makes even large concepts manageable. As an example of a "Big Map", I took James Bach's Heuristic Test Strategy Model and made it into a single mind map, so that I could use it with a variety of testing situations. For those interested in playing around with it, it's at http://www.xmind.net/m/WhWe/


Bottom Line:


Mind maps can be an effective way of organizing thoughts, making connections between ideas, and putting ideas down quickly. They allow you to communicate a lot of information without taking a lot of space or words to do it. They can be as simple or as complex as you want to make them. The key is to get in the habit of working with them and using them to capture and convey your ideas.

Sunday, July 28, 2013

Be Critical But do Not Criticize: 99 Ways Workshop #17

The Software Testing Club recently put out an eBook called "99 Things You Can Do to Become a Better Tester". Some of them are really general and vague. Some of them are remarkably specific.


My goal for the next few weeks is to take the "99 Things" book and see if I can put my own personal spin on each of them, and make a personal workshop out of each of the suggestions.


Suggestion #17: Be critical but do not criticize - Kim Knup


As testers, we have to be the ones to point out that something is broken, inoperable, or isn't working the way it is intended to. It's important work, but it often makes us the "downer" of the group. We (software testers) seem to always be the ones bearing bad news and telling people that "their baby is ugly". OK, that's a bit extreme, but the fact is, we are hired to point out when things don't work the way they were designed to.


While I'm not going to say "never criticize" or "just be nice and positive all the time", it is entirely possible to give status updates, share findings, and discuss bugs without being cruel or obnoxious.


Workshop #17: Think of RIMGEA when discussing issues or problems


This comes straight out of the BBST Bug Advocacy class. As a software tester, my mission is "to find and share information effectively so that stakeholders can make appropriate decisions". By getting out of the business of deciding if something I see is "right" or "wrong" and presenting it as "here is what I see, what do you think we should do about it?", I change the tone of the entire conversation. I'm sharing findings, not assigning blame.

The information we provide to our stakeholders can be improved by using RIMGEA. It's a mnemonic that stands for:

Replicate: Make sure that any issue I am trying to report is something that I can recreate. Use a count if it's something easily reproducible (5/5 times) or if it's something that will only happen a small percentage of the time (1/5 times, for example).

Isolate: Can I get to the minimum number of steps to show how or where a problem occurs?

Maximize: What are ways that I can show that the issue in question has a particular scope? Is it minor, or does it relate to something much larger? Can I flesh out the issue so that the full impact can be seen?

Generalize: Does the issue in question occur only in a tiny, isolated spot, or can it occur in a broader and more general way?

Externalize: Can I describe the failure in terms of its impact on the people who matter (users, customers, the business), rather than in terms of my own testing activity? (The BBST Bug Advocacy materials include this step as the "E" in the mnemonic.)

"And say it clearly and dispassionately": Here's where "be critical but do not criticize" really comes into play. State the facts. State the steps to get to the problem. State the issue you are seeing. State what you would expect to see. State what heuristics you are using to come to the conclusion that the issue is an issue. Leave personal feelings or comments out of the conversation entirely.

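To make the contrast concrete, here is a hypothetical example (every detail below is invented for illustration). "The login page is broken again" assigns blame and carries almost no information. Compare that with:

Summary: Login fails for valid accounts when the username contains a period (reproduced 5/5 attempts)
Steps: 1. Go to the login page. 2. Enter "first.last" with a known-good password. 3. Click "Sign In".
Actual result: The error "Invalid username or password" is displayed.
Expected result: The user is logged in. Usernames with periods are accepted at registration, so the failure is inconsistent with the product's own claims.

Same bug, but the second version gives facts, steps, actual versus expected behavior, and the heuristic that flags the issue, with no editorializing.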

Bottom Line:



It can be all too easy to say "hey, why did you introduce this bug into the system?" Flip it around and think about how it feels each time you have heard someone say to you "hey, how come you didn't find that bug?" The fact is, programmers and testers do not want to release bad software. Both groups work together to release the best product possible. Approach reporting issues from the perspective that "we are all in this together" and practice the skills within RIMGEA. This will go a long way toward ensuring that testers are seen as positive contributors and not antagonists.

Understanding Mental Modeling, Scientific Method and Design of Experiments: 99 Ways Workshop #16

The Software Testing Club recently put out an eBook called "99 Things You Can Do to Become a Better Tester". Some of them are really general and vague. Some of them are remarkably specific.


My goal for the next few weeks is to take the "99 Things" book and see if I can put my own personal spin on each of them, and make a personal workshop out of each of the suggestions.


Suggestion #16: Also get an understanding of mental modeling, the scientific method and design of experiments - Kinofrost


When we think of things in systems, and try to figure out how those systems interact, we create a mental model to help us interpret what we see. It's not complete, and it's not perfect, but it gives us the ability to reason about how things work, or how they might work. Often, taking the next step and diagramming how the systems work and interdepend helps us expand or refine the mental model that we already have.  


The Scientific Method is part of the process of refining the mental model that I create, and it has some fundamental aspects that need to be followed to be effective. The Scientific Method requires the person using it to characterize a situation and make a prediction about it (I make a mental model of what I think should happen). That prediction is called a hypothesis. As I develop that hypothesis, I consider the factors that might confirm it, as well as those that might refute it. Based on those factors, I then create an experiment in which the hypothesis is challenged. By performing the experiment and analyzing the data I receive, I can determine if the hypothesis holds, if it is false, or if more experimentation is required. The data and experiment are shared with others to review and consider, to verify that the experiment performed was valid and repeatable.


Workshop #16: Create a Software Test that Utilizes the Scientific Method


Since we are looking at this from a software testing perspective, I would suggest that the tester consider an issue and ask "how could I model my tests in a manner that is consistent with, and uses, the scientific method?" 


Often, I use hunches or my own personal feelings when I determine that something is correct or incorrect. This points back to my previous post about understanding and using heuristics as oracles. There are a variety of situations where I have a strong confirmation. If I run a spreadsheet calculation where two and two returns a result of five, the history of math tells me that is wrong. There are also situations where I have a weaker confirmation. I just don't like the way something looks, or I don't think other people will like the way something looks. In situations that are aesthetic, the scientific method may not offer me much. Fortunately, there are plenty of situations where I'll have more concrete data to consider.


Search is interesting since there are a number of "rules" that can vary from application to application, as well as how it is implemented. Let's use Search as a model to set up an experiment that utilizes the steps of the scientific method.


Note: this may be harder to control with larger applications (Google, DuckDuckGo, etc.),  so I'll consider this within the domain of a dedicated application.


First, let's think of a variety of search terms (they can be words, phrases, wildcard searches, etc.). Let's make a hypothesis, which could be as simple as "I have 30 documents that have the word 'JMeter' in them. If I enter the term 'JMeter', I should be able to see those 30 documents".


Next, think of ways to prove the hypothesis is false. As a software tester, I prefer looking to see if I can disprove before I prove. If I'm too fixated on evaluating if something is true (that it "works"), I may subconsciously focus on just those areas where I confirm what I want to see. In this case, I would want to try to see if I could look for additional items in those 30 documents, perhaps using additional words or word fragments to see if I can locate specific JMeter documents, and exclude others.


I then run experiments in a variety of ways to see if the term I have entered will return something that does not correspond to the "desired result". If I get entries that do not conform to the search criteria, I write them down.
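
Here is what that experiment might look like as a minimal sketch in Python. The search_documents function and the document corpus are stand-ins I have invented; the structure is the point: a falsifiable hypothesis, checked in both directions (everything expected comes back, and nothing unexpected does):

# Hypothetical stand-in for the application's search feature.
def search_documents(term, corpus):
    return [doc for doc in corpus if term.lower() in doc["text"].lower()]

corpus = (
    [{"name": "doc%02d" % i, "text": "Notes on JMeter load testing"} for i in range(30)] +
    [{"name": "other%02d" % i, "text": "Unrelated meeting minutes"} for i in range(5)]
)

results = search_documents("JMeter", corpus)

# Hypothesis: exactly the 30 JMeter documents are returned.
assert len(results) == 30, "Expected 30 matches, got %d" % len(results)
# Falsification check: no returned document lacks the search term.
for doc in results:
    assert "jmeter" in doc["text"].lower(), "Unexpected match: %s" % doc["name"]
print("The hypothesis survives this experiment (which is not the same as proving it).")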


Finally, I share my data and consult with the developers to determine if the search code is wrong, if my search criteria are wrong, or if there are any "unseen elements" that may be affecting my results. When we conduct experiments on a complex system such as a web application, there may be a variety of variables we have not considered, or may not even know about. Perhaps the search engine is configured in a way that we have not taken into account. As I learn of these additional variables, I add them to my mental model of the system, reformulate my experiments, make new hypotheses, and try again.


This process is repeated and refined (including changes to code and repeating tests) until I and all involved parties decide that the application in question is doing what it is "supposed to do". 


Bottom Line:



The scientific method, and structuring tests around it, is a way that we can be more specific and more exacting with what we are testing. In areas of look and feel, it may not be very helpful. In areas of data accuracy, calculations, returning expected results, applying security rules, etc. it is a very consistent methodology. Using it can help make sure that I am testing what I intend to test, and communicating my results in a way that is sound and will stand up to scrutiny.

Saturday, July 27, 2013

Get an Understanding of Systems Thinking: 99 Ways Workshop #15

The Software Testing Club recently put out an eBook called "99 Things You Can Do to Become a Better Tester". Some of them are really general and vague. Some of them are remarkably specific.


My goal for the next few weeks is to take the "99 Things" book and see if I can put my own personal spin on each of them, and make a personal workshop out of each of the suggestions.


Suggestion #15: Get an Understanding of Systems Thinking - Martin Huckle

Most large scale problems in the world do not exist in isolation. When a solution is applied to one area, it may have an adverse effect somewhere else. Understanding how problems have ripple effects in other areas is the basis for systems thinking, and for the ability to look at issues as they relate to broader areas.


Systems thinking, in its simplest sense, is trying to look at issues and situations as part of an interconnected group. A "simple" example would be to consider traffic patterns in a city. How could the traffic patterns in a city be improved? Applying isolated solutions to individual areas (providing more buses on city streets, increasing the number of trains that run into the city) may yield a benefit for those specific instances, but have spill-over effects and cause problems in other areas, potentially making those areas worse. Only by thinking of all the aspects that affect traffic can solutions that will have a cumulative effect be considered, studied and applied. 


Workshop #15: Examine Systems in Software And Outside of Software


To keep matters simple, here are a few exercises that can help get you into the mood of thinking about systems:


1. Look up the classic board game "Mouse Trap". Sit down and consider all the parts of the game, and the variety of systems that connect together to make viable solutions.


2. Consider a web application. Based on what you can see from just the user interface, try to think about all of the moving parts that make the application work. Create a state diagram that shows how the various systems (pages, screens, buttons, navigation links, etc.) fit together. See how complete you can make the map based on the visible elements alone.


3. Using the same application, view the page source, and see if you can determine the libraries and attributes that help make the pages appear the way they do. Make a diagram of what the system could be (some areas, such as back end scripts and database topology, may not be accessible, but do the best you can to identify all of the underlying systems that make the application work). A sketch of one way to start this step appears after this list.

4. Taking the software example above, if you have access to the application and code base, try to complete the exercise and diagram. See which areas interconnect, which scripts call other scripts, which database elements are inserted, updated and deleted, etc. Try to make as complete a diagram as possible of the entire application and all elements that make it work (include machines, disks, networks, etc.).


5. To go into bigger systems, and broader ideas, Gerald Weinberg has written a book about General Systems Thinking that would be a wonderful addition to this group, specifically because Weinberg's writing style helps make a lot of these ideas more fun than they would be if you were to tackle them from other examples (seriously, this is a topic where a light touch and some humor help considerably :) ). 
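
For step 3 above, here is one way to start: a minimal sketch in Python, using the freely available requests and BeautifulSoup libraries, that lists the external scripts and stylesheets a page pulls in. The URL is a placeholder:

import requests
from bs4 import BeautifulSoup

# Placeholder URL -- point this at the application you are studying.
html = requests.get("http://example.com/app").text
soup = BeautifulSoup(html, "html.parser")

# External JavaScript files hint at the libraries the page depends on.
for script in soup.find_all("script", src=True):
    print("script:", script["src"])

# Stylesheets hint at the UI frameworks in use.
for link in soup.find_all("link", rel="stylesheet"):
    print("stylesheet:", link.get("href"))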


Bottom Line:



Systems thinking is a whole discipline and covers some interesting problems, not just in the software world, but in areas such as ecology, economics, politics, demographics, population, ethics, etc. Seeing issues as interconnected, with effects on other components, helps to shift the way that we think about problems. 


In the software world, few applications have issues that are independent and isolated (beyond cosmetic spelling/typo problems). Get in the habit of considering the components that make up software systems, from the code at the functional and class level up to the services and network dependencies, so that issues can be seen in the context of the overall systems where they reside. By seeing these inter-relationships and dependencies, the trade-offs of proposed solutions can be examined with regard to their effects on each other. 

Keep Your Eye on the Ball (The End Goal): 99 Ways Workshop #14

The Software Testing Club recently put out an eBook called "99 Things You Can Do to Become a Better Tester". Some of them are really general and vague. Some of them are remarkably specific.


My goal for the next few weeks is to take the "99 Things" book and see if I can put my own personal spin on each of them, and make a personal workshop out of each of the suggestions.


Suggestion #14: Keep Your Eye on the Ball (The End Goal) - Kate Paulk


What is the point of software testing? Why do we do it? In the simplest sense, as I've said previously, software testing is about providing information about the state of a product or project. It provides guidance to all parties in an organization as to the quality of an application and its functionality. It illuminates risks to value. It highlights areas that may need to be improved. All of these require a variety of skills and attributes, many of which would go well beyond a specific "do this one thing and you'll get better" approach.


For this section, I'm going to focus on the statement "Keep your eye on the ball" as it pertains to software testing in general. Very often, we find ourselves looking at a piece of functionality. As we are asking questions and exploring, we can veer off into directions that, while interesting, can easily get us off track. Don't get me wrong, exploration is part of the job. It's often the most fun part of software testing. Keeping a focus on the areas that matter, at least for that given time, is a skill that all testers should develop.


Workshop #14: Session Based Test Management (SBTM) and Thread Based Test Management (TBTM)


Many testers have heard the term "Session Based Test Management" (SBTM) at one point or another. If you have participated in Weekend Testing, you have seen firsthand how SBTM is applied. SBTM is a structured and focused session of exploration. The goal is to make sure that you know where you plan to go, and record what you do along the way. Jonathan Bach has written an excellent overview of this approach. 

The main idea behind SBTM is that you set a time to focus on something specific within an application. The time is pre-determined. I prefer sessions to be short, roughly an hour in total duration, though they can be longer or shorter as needed. I also prefer to create very specific missions and charters for these sessions. A mission is the overall goal of the session, and the charter(s) are specific tasks I want to accomplish within the scope of that mission. The more specific the mission and charters, the better my chances of successfully completing them within the given time frame. 

Taking notes or capturing what I am doing is important. This can range anywhere from just having a simple text editing application open in the corner while I test, to using a dedicated SBTM tool like Rapid Reporter to gather my approach and findings, to actually recording a screen cast of my testing that captures both my exploration and my commentary on what I am doing. How the session is structured is up to you, but structure the session, take notes on your exploration and your findings, and be sure that you can explain what you did and what you found along the way.
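
If you want something even lighter than a dedicated tool, a bare-bones session note-taker is only a few lines of code. Here is a minimal sketch using nothing but the Python standard library (the file name and note format are my own invention, loosely in the spirit of tools like Rapid Reporter):

import datetime

# Append timestamped notes to a session log until you type 'quit'.
SESSION_FILE = "session_notes.txt"

with open(SESSION_FILE, "a") as log:
    log.write("CHARTER: %s\n" % input("Charter for this session? "))
    while True:
        note = input("note> ")
        if note.strip().lower() == "quit":
            break
        log.write("%s  %s\n" % (datetime.datetime.now().isoformat(), note))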

What's this "Thread Based Test Management" (TBTM) thing I am referring to? It likewise has its roots in SBTM, but think of it as a particular tangent you might find yourself following while testing. Have you found yourself talking with someone and, as you discussed a particular area, it led you down a path you might not have considered at the outset? Sometimes those paths are short, and they provide good information on the main topic. Sometimes those paths are much longer, and they seem to take you away from the main point of the discussion, but they are still valuable. What do you do when you find yourself exploring a path that fits your mission or charter, but opens up into areas that you might consider too broad or too long for your session? Do you terminate it? Do you go back to the main mission and charters? Using TBTM, you note that you are following a thread, and you see where it takes you. Again, Jonathan Bach has written about his own experiences with using TBTM here.

On a personal level, I like to use the TBTM approach and apply the "rule of three" when it comes to specific threads. If I find myself taking more than three steps away from the main mission and charter I am working on, I note that this may be a thread worth exploring later, but I try not to "rabbit hole" on a given area if I see it's taking me out of the area I've scoped for that particular session. Rabbit holes do, however, offer some great exploration opportunities, so we don't want to forget them. Three or more steps away from the main mission and charter is a good indication to me that the thread probably deserves its own session, its own mission, its own charter and dedicated time to focus on it.
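
If it helps to picture the bookkeeping, here is a tiny, hypothetical sketch of that "rule of three" as code. The ThreadTracker class and its names are my own illustration, not a feature of any SBTM/TBTM tool:

# A hypothetical "rule of three" tracker: count steps away from the
# charter, and park a thread for its own session once it hits the limit.
class ThreadTracker:
    def __init__(self, max_steps=3):
        self.max_steps = max_steps
        self.steps_away = 0
        self.parked = []              # threads saved for a future session

    def step_off_charter(self, description):
        """Note one more step away from the mission; park at the limit."""
        self.steps_away += 1
        if self.steps_away >= self.max_steps:
            self.parked.append(description)
            self.steps_away = 0
            return "park it: deserves its own session"
        return "keep following"

tracker = ThreadTracker()
print(tracker.step_off_charter("Odd redirect on cancel"))            # keep following
print(tracker.step_off_charter("Cancel loses form state"))           # keep following
print(tracker.step_off_charter("Form-state bug appears elsewhere"))  # park it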

ADDENDUM: This was written some time ago and the state of the tooling world has changed a bit. I was asked if I would include the following list of updated tools. So if you are seeing this post in 2022 or later, first off, thank you for reviewing my earlier work :) but also, here's a listing of 10 Additional Tools that you can use to support your Exploratory Testing efforts (happy to see Rapid Reporter is among the list :) ).

Bottom Line:


Both Session Based Test Management and Thread Based Test Management offer opportunities to structure our explorations, and to reference how we got to key areas in applications so that we can communicate what we found, how we found it, and why it matters to the stakeholders. Above all, they provide a way to keep focus while we are testing. Exploratory testing is often seen as messy and unstructured, a lark that happens to sometimes find something interesting. Those of us who use SBTM and TBTM approaches know that we can provide a great deal of concrete evidence of where we have tested, what we have learned, and how that learning can benefit the team that is developing an application.

The key part of Exploratory Testing is that we learn along the way, and we adapt and focus our efforts based on what we learn. Rather than formulating our plan up front and slavishly following a list of scripts and to-do items, we loosely develop missions and charters, and then try out our ideas based on them. By keeping track of the threads that we follow, we can determine which areas might require follow-up and additional testing.

Friday, July 26, 2013

Understand the Business Model and Business Challenges: 99 Ways Workshop #13

The Software Testing Club recently put out an eBook called "99 Things You Can Do to Become a Better Tester". Some of them are really general and vague. Some of them are remarkably specific.


My goal for the next few weeks is to take the "99 Things" book and see if I can put my own personal spin on each of them, and make a personal workshop out of each of the suggestions.


Suggestion #13: Understand the business model and business challenges/context before determining the testing challenges. It's not all clicks and buttons. - Mohan Panguluri


Over the years, I have worked for a variety of companies and with a variety of organizations. Each has had different goals and needs, and each has been part of a different space and industry. During my software testing career, I have worked for a very large networking company (and during my tenure watched it go from a relatively small networking company to one of the world's biggest), a company that made virtualization software, a company that specialized in touch devices and peripherals, a video game publisher, an immigration law software application developer, a television aggregator, and a maker of social collaboration tools.

Were I to have approached my testing in each of those places in a "cookie cutter" way, there is no way I would have been successful across the board (it's likely I might not have been successful at all). Each company made decisions that were unique to its environment, and each worked to woo new customers as well as to keep current customers happy. Taking the time to get to know those customers and what they want is always a good investment. It also helps guide you to the areas that are really critical.



Workshop #13: Take a "Gemba Walk" with Someone in Sales and Support

For those not familiar with the term, Gemba is a Japanese word that means "the real place". It is used in the sense of "the scene of the crime", "live from this spot", or "the place of interest", but all of these point to the same thing: here's where the action is. If you really want to understand the business model and what's important, you need to go where the action is. In most companies, the "action" is in Sales and Support. They deal with both sides of the business. First, the sales people are the ones selling. They have the ear of both current and potential customers, and they know very well what is bringing people to part with their money, as well as what is needed to keep them happy and active customers. Second, the customer support people know intimately well what is causing customers discomfort. They also know which issues are tolerable, and which ones are deal breakers.

Taking a "Gemba Walk" means to go and explore what's going on in these critical areas. See if you can spend part of a day with the people in sales (preferably multiple people, if possible). Be there to observe, listen in on calls, take notes, and see where your learning takes you. Don't go in with set expectations or an idea of what you want to discover. Additionally, spend some time as well with the support team (multiple people, if possible). Sit in on calls if you can. Take notes of what you experience. Like with Sales, do not go in with predetermined questions or expectations. Be open to what you learn,

After taking these walks, go back and read what your documentation, marketing literature and company website have to say. If the materials you have support what you have seen on your Gemba walk, that's a good place to be. If your marketing and website materials are not reflected in the Gemba walk, I recommend you give priority to what you see and hear on the walk. The marketing and website literature may be idealized. The Gemba Walk will reflect reality.

With the insights these sessions give you, take a look at your overall testing strategy. Do your testing efforts align with the reality of your business model, the *REAL* business model? Do you understand what the real goals and the real risks of your organization are? Based on what you have learned from Sales and Support, what would you want to change and do differently with regard to your testing?


Bottom Line:


Companies tend to put their best foot forward when marketing themselves. They also tend to cherry-pick the feedback they get from their customers and share with the public. They may share a bit more with the development team and with the testing team, but we should not be surprised when they don't go out of their way to share more than the peaks of the highs and lows. Many important and underserved areas go unrecognized because they sit in that indeterminate "middle" area. Those issues in isolation may not rise up to be seen as threats, but when taken together, and examined from a different perspective, patterns may emerge that highlight risks we wouldn't otherwise consider. Just because they don't rise to the level of "urgent" doesn't mean they aren't important.

Sales and Support have vital information, but they may not be sharing it because no one has really expressed interest in understanding it beyond quarterly targets and trouble ticket resolutions. These are both areas where a keen tester could learn a lot, and what they learn could dramatically change where they focus their time and attention.

Learn to Explain: 99 Ways Workshop #12

The Software Testing Club recently put out an eBook called "99 Things You Can Do to Become a Better Tester". Some of them are really general and vague. Some of them are remarkably specific.


My goal for the next few weeks is to take the "99 Things" book and see if I can put my own personal spin on each of them, and make a personal workshop out of each of the suggestions.


Suggestion #12: Learn to explain. - Tony Bruce


Here's another one that dovetails into "advocacy". When I was asked once if I believed there was a "metric" that could help demonstrate if someone was a good tester, I said "look at the ratio of bugs that a tester has reported. Compare them to the number of bugs that were actually fixed. The closer that ratio gets to 1, the better the tester". I can't claim to be the originator of that. I heard it first from Cem Kaner in his BBST materials for Bug Advocacy, but it's as good a jumping off point as any. To get bugs fixed, you have to "sell the story". To sell the story, you have to be able to explain what you are doing, and do so convincingly.
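
To make that ratio concrete with made-up numbers: a tester who reports 50 bugs and sees 45 of them fixed has a ratio of 45/50 = 0.9. A tester who reports 50 bugs and sees only 10 fixed has a ratio of 0.2; either the issues weren't compelling, or the reports didn't sell them. The figures here are hypothetical, and the metric is a conversation starter rather than a verdict, but the direction is the point: persuasive reporting moves the ratio toward 1.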

Additionally, and in a somewhat separate sphere, from time to time we need to teach others what we know, and get them up to speed so that they understand and can readily do what we want to have them do. We need to confirm that they understand what we have taught them, and verify that, were they to go and do that task themselves, they would be successful.


Workshop #12: Use Screencasting and Master the Art of the Demo


If you tell me something is wrong, I may or may not believe you. If you show me something is wrong, I will be much more likely to believe you. If you show me something is wrong, explain to me what should be happening, and can help me see why the behavior isn't what we want to see, you have a really good shot at convincing me, especially if you can do it quickly.


All of these aspects can be rolled into a simple premise. Give quick and convincing demos of the issues you see, the features that should be implemented, or the skills that need to be transferred. People learn most by doing, and they learn best when they can go back and review what's been shown to them.


From my own experiences working with both local and distributed team members, being able to break down steps into specific and easy to understand sequences can make or break a demo. For a demo to succeed, it's important to place things in the proper order, and to talk about them in the proper context. A great way to do this is to get an application that will allow you to capture application steps and play them back. Screencasting software is available from a number of providers, ranging in price from free to around $100. Using screencast software to capture your screen output is a great way to demonstrate issues, provide annotations, and make clear how you were able to get from point A to point Z.


Recording your actions, however, is not enough. To make a demo actually work, try to use the following in your presentations (a sample outline follows the list):


- Imagine that the person you are talking to has no prior understanding of the application you are demonstrating. Could you explain the issue to such a person so that they would understand what is happening?


- Take the time to start from the beginning, and walk through to the point where the application shows the behavior that you want to describe. Take baby steps, and fill in the details as you go. Don't assume that those who are seeing the screencast will know what you are trying to describe. At the same time, don't add commentary to things that don't need it. If you need to press the Start button or launch an application from the dock, don't say that you are doing that. Your actions there are clear. Save the direct commentary for when you start getting into the section of the app that has an issue, or an area that you really want to explain carefully.


- Pause as you make key points. Allow the viewer/listener to take in what you are describing. You may find it worthwhile to reiterate certain things or explain the same thing but using different words. This is often helpful in in-person demo situations as well as for recorded screencasts.


- Provide annotations or create outlines, circles, or any other visual cues that can help you explain what is happening. 


- In a live situation, take the time to assess whether the person watching is following along, is engaged, and understands what you are saying. Ask questions to determine if they really do get what is going on.


- In a video example, make sure to go from start to finish for the necessary situation. Avoid any details or commentary that are not relevant to what needs to be explained. If demonstrating a bug, show the steps to get to the issue, demonstrate the issue, and briefly explain what you would expect to see happen. If you are demonstrating a skill or examples of a workflow, go from start to finish, and be as direct as is practical (remember, people can rewind and rewatch videos).
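
Pulling those points together, here is a skeletal outline for a short bug screencast. The defect described is hypothetical, so substitute your own issue and steps:

1. Open with a one-sentence summary ("Password reset links can be used twice").
2. Show the known starting state (logged out, a named test account).
3. Walk the minimal steps to the issue, narrating only the steps that are not obvious from the screen.
4. Demonstrate the failure, pause, and annotate the relevant part of the screen.
5. State the expected behavior in one sentence, then stop recording.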


Bottom Line:


The ability to explain, and to have that explanation motivate someone else to action, is a great skill to develop. Using a screencasting application can really help sharpen a person's ability to explain their ideas, and to do so in as direct and effective a manner as possible. Realize that, when I am talking about screencasts, they can be as simple as capturing a few seconds of actions to display an exception or a crash, or as elaborate as long, fully annotated, professional-quality talks and demonstrations. In both cases, the goal is the same. Make it so that the recipient of the video or live demo can understand exactly what you want to say, and take the time to make sure that they actually do understand. In a live setting, that will require asking multiple times if they understand, or having them repeat what you are doing on their own. With a recorded video, your steps need to be direct and to the point. If the viewer wants to review it again (or multiple times), they can play a recorded video as often as they want. YouTube doesn't get tired if someone wants to watch something for the tenth time.

Thursday, July 25, 2013

Learn to Question: 99 Ways Workshop #11

The Software Testing Club recently put out an eBook called "99 Things You Can Do to Become a Better Tester". Some of them are really general and vague. Some of them are remarkably specific.


My goal for the next few weeks is to take the "99 Things" book and see if I can put my own personal spin on each of them, and make a personal workshop out of each of the suggestions.


Suggestion #11: Learn to Question. - Tony Bruce


At its most basic and fundamental level, software testing is the process of asking an application a question, receiving a reply, and based on that reply, formulating additional questions. From the answers we receive, we make determinations as to whether aspects of the application are working as they should, or if there are issues. That's all it is, and "that's all" is actually a galaxy of possibilities. We often lament that we don't get the answers that we want. Do we stop to ask ourselves "are we asking the right questions to begin with?"


Workshop #11: Understand How We Reason, and the Value of a Close Read

All right, that's definitely too big a chunk to put into a blog post, but if we were to place any emphasis on making an effort toward "Learning to Question", I would have to start with these two ideas.

Critical Thinking is a slow process. It requires effort. It takes time. We need to resist snap judgments, and we need to see how we can determine a course of action from what we are presented. Typically this starts with applying reasoning to understand our world and the environment that shapes it. To keep the discussion simple, we'll focus on two primary modes of reasoning: inductive and deductive.

Inductive reasoning is where we take specific facts, figures and data points, and based on those values, try to come to a broader conclusion. Inductive reasoning allows for the idea that a conclusion might not be right, even if all the specific premises used to reach the conclusion are correct. Inductively derived conclusions are referred to as strong or weak, and the strength or weakness is based on the probability of the conclusion being true.

By contrast, deductive reasoning works the opposite way. It starts with general premises, applies them to a specific case, and if the premises are true and the logic is valid, then the conclusion must also be true. Deductively derived conclusions are rated on their validity and soundness. It is entirely possible to have a "valid" argument that is not "sound": validity speaks to the logic, soundness to whether the premises are actually true. Arguments that look persuasive but rest on flawed reasoning or false premises are often referred to as "fallacies".
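
To make the distinction concrete, here are two small testing-flavored examples (the scenarios are my own illustrations, not from the eBook):

- Inductive: "The last fifty logins I tested each completed in under two seconds, so login performance is probably acceptable." The premises are all true and the conclusion is strong, but it is not guaranteed; the fifty-first login could still be slow.

- Deductive: "Every page in this application must display a privacy link (per the spec); the checkout page is a page in this application; therefore the checkout page must display a privacy link." If both premises are true, the conclusion must be true. If the spec premise turns out to be false, the argument remains valid but is unsound.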

So what do these skills have to do with asking questions? It's important to understand how we are reaching premises, and what we are doing to come to conclusions. The premises we examine will help us determine the questions we need to ask. Too often, we deal with a paradigm of "here's a question - what is the right answer?" Testers have to work from the opposite perspective. Our product is the answer, perhaps millions of answers. 


How do we shift to asking good questions? I think two things are important: learning how to do a critical read, and understanding what is really needed in a given context.


I will forever remember an answer that I gave in the BBST Foundations class that I took a few years ago. I read the scenario, and I was off like a bolt of lightning. I wrote down what I thought was a brilliant answer. It was detailed, leaving nothing out… and I basically received across-the-board agreement that I "failed" the question. Why? Because the answer I gave didn't answer the call of the question. It answered what I wanted the question to be about, but the actual question that was asked was never answered. I didn't give it a critical read to really understand what the question was asking. Additionally, if we don't give a critical read to the materials that comprise the details of an application, a story, a tool, or a customer inquiry, we might impose our own biases and opinions on what we need to examine, and totally miss what really needs to be looked at.

One of the things I recommend to students in the BBST classes (based entirely on my own chagrin at blowing it so spectacularly) is to take the text that's presented (a spec, a story, a customer inquiry) and break it up so as to isolate and identify the questions (if explicit) or write out the questions (if implicit). From there, go back and look at the details to see if the inquiry makes sense. If I can get a clear idea as to what we should be examining, that gives me the ability to craft questions that will best address the criteria I am interested in. In the case of an inquiry from a customer, I can tease out the supporting statements and the fluff and see if I can isolate what the real question(s) might be.
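
Here's a small, made-up example of that decomposition (the inquiry and all its details are hypothetical):

"We upgraded last week and now exports take forever, plus two of our users say the CSV looks different. Is this expected?"

- Explicit question: is the new behavior expected (by design) or a regression?
- Implicit question: did the upgrade change export performance, and by how much? ("Takes forever" needs quantifying.)
- Implicit question: did the upgrade change the CSV format, and if so, is that change documented?
- Supporting detail worth probing: only two users report the difference, which hints that the issue may be account- or data-specific rather than universal.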


Bottom Line:

Research and read about aspects of logic and reasoning. Read up on inductive and deductive reasoning. Practice using the skills mindfully so that you understand where, in real world use, you are using one or the other. Grade your premises and your conclusions; see if they are strong or weak (if using inductive reasoning) and if they are valid and/or sound (if using deductive reasoning). As you practice these methods, keep a record of the questions that you ask. Write them down or record yourself speaking, but capture the questions. As you work through subsequent examples, see how your questions evolve over time, and see how well they help guide you to answers or other areas worthy of inquiry.


Additionally, get into the habit of finding the call of the questions you are asked. Separate out the non-essential aspects, and make sure that you are actually focusing on the real question, then work to answer it. Undoubtedly, in that process, you will develop more specific, deeper and probing questions. Repeat this process... forever.

Remember it is About People: 99 Ways Workshop #10

The Software Testing Club recently put out an eBook called "99 Things You Can Do to Become a Better Tester". Some of them are really general and vague. Some of them are remarkably specific.


My goal for the next few weeks is to take the "99 Things" book and see if I can put my own personal spin on each of them, and make a personal workshop out of each of the suggestions.


Suggestion #10: Remember it is about people. - Tony Bruce

At some point, regardless of where software is deployed, it is written to serve and benefit people. Sometimes those people are abstract; sometimes they are present and immediately specific. In any case, no matter how deeply a piece of software is embedded in a system, or nested inside middleware that maybe a half dozen human beings will ever see, real people benefit or suffer. In trivial cases, we may be talking about a momentary frustration. In extreme cases, we could be talking about fatalities. In all cases, human beings use software as a means to their various ends. Ultimately, every feature benefits someone, and every bug has the potential to harm someone. Making this connection makes the concept of Bug Advocacy a lot more understandable, even if the term feels awkward or legalistic. All testers are advocates for customers, which is a fancy way of saying "we are the voice for real people". In many cases, we may be their only voice.


Workshop #10: Flex Your Advocacy Muscles

How can we practice advocacy? The first step is to consider any software application you interact with. Remove yourself from the equation. Who is that application for? What's important to those people? Can you visualize who those people are? Can you visualize what they will do with your application? What is their definition of a useful and successful interaction with your product? What is a genuine pain point for these people? How are they impacted by a bug that you find?


These are not arbitrary questions. These are legitimate things to consider, since the impact on one person could mirror the experience of thousands, or even millions, of people. By recognizing the potential impact of an issue, we can better make our case as to whether or not a fix is worthwhile to pursue. The fact is, not every problem can be fixed. There is a finite amount of development time, energy and resources. The development team will focus their attention on areas they are convinced pose too great a risk to not fix. How can you make the case that an issue rises to the level of attention?


One of the ways that I do this is to tie any issue I find to an easy-to-recognize "Oracle". An oracle is a principle or mechanism that software testers use to determine whether a test has passed or failed. Oracles range from strong ones, such as the rules of mathematics or grammar, official specification documents, and rules of law, to weaker ones, such as personal preferences and gut feelings about what we believe customers will want to see. Additionally, there are a number of heuristics that we can use to strengthen our case that something we have reported could be bigger than others might believe. Some issues are slam dunks. Crashes, exceptions, etc. are usually not too difficult to argue the merits of fixing. Other areas, such as look and feel, or User Experience, are often harder to lobby for since the issues are subjective.


A famous set of "Oracle Heuristics" used in software testing is the "HICCUPPS" mnemonic. James Bach first identified these areas (Michael Bolton wrote the article that references them) as shorthand to say "this is an issue because what we are seeing is inconsistent with [History|Image|Comparable products|Claims|Users Expectations|Product|Purpose|Statutes]. Note: since the original article was published, Michel Bolton has updated and added some additional items we can use as well:


Inconsistent with [Familiarity|Explainability|World] extends the mnemonic to "FEW HICCUPPS".


Sometimes, multiple Heuristic Oracles (i.e. multiple letters of the FEW HICCUPPS mnemonic) can be applied. Generally speaking, the more letters you can apply to an issue (the more places where the issue is inconsistent), the better your case that it deserves attention.


Think about it.


"This application's buttons are strangely colored, it looks bad"


"The proximity of these two colors for background and buttons will make it difficult to differentiate for a user that is color blind. Since we advertise our high marks for accessibility, this color combination is inconsistent with our own claims, user expectations and potentially could disqualify us with some customers because of legal statutes"


Which one of these is more likely to get your attention?


Bottom Line:

Ultimately, all software is for and about people. Suggestion #1 says to know your customers, and this suggestion aims to make that connection even more personal. Internalize that connection, put yourself in your customer's shoes, and ask yourself "what's the risk if our customers come across this issue I'm seeing?"


Now go and advocate likewise.