Friday, April 30, 2010

Does Being Part of a Community Make You a Better Tester?

“You will be the same person in five years as you are today except for the people you meet and the books you read.” – Charles “Tremendous” Jones

For most of the time I have been involved in testing, I have basically just done what I needed to do. It’s not that I don’t enjoy testing, I do, but often it seems like I have just gotten into a mode of firefighting, doing everything I can to cover my bases and put out the flames. It’s only been in the last year that I finally sat down, took a look at what I was doing, and asked myself, “Am I going where I really want to go? Do I even really know where I want to go?”
To borrow a little bit of Stephen Covey, this state is what he refers to as being in “the thick of thin things”. My director and I discussed this challenge, and he put it in a great perspective: when we create or test code, it’s way too easy to get caught up in the tasks of “getting it done” and “getting it out the door”; people tend to lose track of where they ultimately want to go.


When I was taking stock of what I wanted to do and trying to determine where I wanted to go next, I couldn’t help but compare my path in testing to my paths through Scouting and Snowboarding (to anyone who reads these posts in the future, you will just have to get used to the fact that Scouting and Snowboarding analogies will appear very often in the things that I write. Outside of my family and my church, Scouting, Snowboarding, and Software Testing are the things that most occupy my time, attention and passions).


When I first started as a Scouting leader, back in 1993, I did it because it was something my church asked me to do. I was willing to do it, but I did it because I was asked. At the time, I didn’t have kids, and while I enjoyed working with the program, I didn’t really go out of my way to do all that I could do. I felt I did a pretty good job, enough to keep me active for 6 years, but I was just one of many scout leaders, and I didn’t really stick out. This changed when my son was old enough to become a Tiger Cub himself, back in 2002. Once my son became a scout, my entire approach to Scouting changed. Now I wanted to make sure that I did all of the essential trainings, and I wanted to attend roundtable so I would know what was happening in the council. Through this, I met other adult leaders, developed friendships with them, learned from them, and ultimately started teaching and interacting with others. Through this, my game as a scout leader got better and better, to the point where I now participate as a staffer for various scouting activities throughout the council, as well as staffing the top-level training we offer to Scout leaders, a program called Wood Badge.


Back when I first started snowboarding, in 1994, I had fun going up with friends, just tooling around and riding wherever I wanted to, hanging out here and there. It was fun, and I got to be a pretty good rider, but it wasn’t until 1998, when my company decided to field a competitive snowboard team, that I felt something come alive in me. It was that series of races, alongside a broader community of snowboarders, many of them far better than me, that gave me a drive to improve. Through that, I was introduced to the USASA, and to the South Lake Tahoe Snowboard Series, which frankly produces some of the greatest snowboarders in the world. I competed for a number of years in the Master’s division, and made some of the greatest connections with some of the greatest people I’ve ever known. My ability as a snowboarder honestly tripled over the course of seven years. While I would never consider myself an Olympic caliber competitor (hah, not even close!), I did have some success with racing and freestyle events, and I won a fair share of medals competing against a great group of peers over many years.


What was the driver in these cases? Was I a different person? Nope, same guy, but something fundamental had changed. The people I was interacting with were the ones encouraging me to up my game. Their enthusiasm helped me develop enthusiasm. Their ability helped me develop my ability, and their passion helped develop my passion.

So what does this have to do with testing? A lot, actually! Many of us working in an organization may find that we are just going through the motions; we’re just “doing stuff” because it’s what we need to do. We get in, we find out the most pressing thing that day, we take care of it, and we move on. What happens when we work with other people who are in the same mindset? Where does one go to get inspired? One goes to the community and finds inspiration. When you are hard pressed to find a mentor in your current situation, you need to look elsewhere for that mentor. Sometimes it will be a co-worker, sometimes a manager or senior staffer at your current company. Other times it may well mean reaching out to people you worked with previously. Many times, I have found mentors just by looking on the web and trying to find information. I was introduced to Randy Rice because I was curious to see if there was a software testing podcast or two out there. Turns out Randy had made 20 of them, and I’ve listened to each and every one of them, some of them several times. Through these podcasts, I’ve also learned about and found writings and podcasts from many other contributors. Through looking at Software Test and Performance magazine, I discovered a lot of other testers who write, speak, and present on testing topics. Getting familiar with these people and their writing and teaching, I became inspired to look at my own testing and my approach to it.


In short, the testing community, just like the scouting community and the snowboarding community, has encouraged me to up my game more than any other incentive. Knowing that people I admire and respect in my industry have actually taken the time to look at what I’m doing and offer encouragement, that’s huge to me! It’s given me the drive and the desire to learn more, do more, and go farther with testing than I have to date. To reference the quote at the top of this article, I firmly believe that the people you interact with will determine how far you go and what will inspire you. If your current team isn’t inspiring you to up your game, seek out those who will encourage you to do so. Note, I am not saying that you need to leave your company or get a different job to up your game, but you may need to reach out to others to find those who will inspire you. The cool thing is, you may find that your enthusiasm will carry over to others, too, and with time, you may well be the one who inspires. I’ve had people in the Scouting world tell me that. I’ve had snowboarders tell me that. Frankly, nothing would be cooler than to someday get a letter or a comment from a tester somewhere that says “thank you for your example, insights, encouragement and enthusiasm… you changed my game!” It truly is my hope to get to the point where I can do exactly that.

Thursday, April 29, 2010

Thinking "Hard" to Test "Soft"

Recently I had the opportunity to discuss some creative testing challenges with another tester, and this brought to mind some of the more interesting and creative test tools I have used over the years, as well as some of the techniques that we worked on to make them possible.


Back in my previous life at a computer peripheral company, I worked as a lone gun tester for a product management team. My goal was to work with both hardware and software to test engineering prototypes and other pieces of the puzzle. Since many of the items I checked were hardware devices, this gave me the opportunity to test elements that many software testers may not often think about. When I was looking at touch pads, more than just the pad and its circuitry and driver had to be taken into consideration. We had to look at things like adhesive, semi-conductive coverings, and how to test the devices under adverse conditions. Three items that became my friends in testing were the oven, the freezer and the ESD gun.


When you test hardware, you actually have to create real-life harnesses and bezels to hold devices in place. One of the most fun aspects of working at the computer peripheral company was the time I spent in the shop. We would often need to make plastic and brass bezels to hold devices, or to create test fixtures for certain requirements. At one time, we had to work on a way to apply weight and force to a navigation stick (the kind that looks like a pencil eraser in the middle of the keyboard). This device required the ability to test for shear force at a specific weight, and as such, required a unique testing apparatus. It was fun to sit down with a program like SolidWorks and design a literal piece of hardware shaped like a hollowed-out octagon, with a slot cut into each side and a mounting point in the back. This allowed us to test eight axis points, each at 45-degree increments, apply direct pressure at measurable points, and retrieve data on the device’s readings at those specific weights.
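(For the curious, here is a rough Python sketch of the geometry that fixture exercised. The test weight and the gram-to-newton conversion below are illustrative assumptions, not the actual spec we tested against; the point is simply how eight slots at 45-degree increments decompose an applied lateral load.)

```python
import math

# Illustrative only: the eight 45-degree test positions of the octagon fixture,
# and the x/y components of a lateral load applied at each one.
APPLIED_GRAMS = 150            # hypothetical test weight, not the real spec
GRAMS_TO_NEWTONS = 0.00980665  # 1 gram-force in newtons

for slot in range(8):
    angle_deg = slot * 45
    angle_rad = math.radians(angle_deg)
    force_n = APPLIED_GRAMS * GRAMS_TO_NEWTONS
    fx = force_n * math.cos(angle_rad)   # component along the keyboard's x axis
    fy = force_n * math.sin(angle_rad)   # component along the keyboard's y axis
    print(f"slot {slot}: {angle_deg:3d} deg  Fx={fx:+.3f} N  Fy={fy:+.3f} N")
```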


One of the most interesting devices we had was the ESD table and gun. This rig allowed the user to lower an electrical element toward an object, and when it came close enough to metal to discharge, it would fire off the charge that had been stored (controlled by a computer program). I found it interesting to note that the human body, under certain circumstances, is able to generate anywhere from 8 kV during normal activity to upwards of 25 kV (this is the static electricity build-up that can be so damaging to electronics, and thus the reason we test for it). The harness that worked the gun was a simple air compressor: when the user wanted to lower the gun, a foot pedal was pressed, and the apparatus lowered to the height set by the user. Taking the foot off the pedal caused the air chamber to fill with air, and the pressure raised the gun back to its previous position.


To simulate force, we often created apparatuses that would allow us to swing weights toward objects, sometimes small, sometimes fairly heavy. I had the benefit of a lot of creative types at this company who were more than willing to give me their time and ideas to make some of the most interesting contraptions. One of the devices was a “finger” that would rotate at random along the surface of a touch pad, simulating different weights and taking readings from the pad under adverse conditions (extreme heat, extreme cold, sand swirling around in the chamber, etc.). It was fun seeing what these people would come up with.


Today in software testing, while I do not necessarily have to test for the weight it takes to render a device malformed to the point where it no longer works properly, that level of “creative destructiveness” is still needed to effectively and thoroughly test software. Each day that I work, I try to ask myself, “what would be the equivalent of that ‘swinging hammer test’, that ‘octagon shear test’, or that ‘finger in the freezing, windy sandstorm’?” With software, everything is abstract and takes place in code, so much of the “physical apparatus” isn’t actually used, but it still helps to think about these rigs from time to time, as they give me paradigms to compare against and parallels I might come up with. If you haven’t used hardware testing techniques before, do a little reading on some industrial testing systems to see how companies have solved some of these challenges. If you have worked with hardware, look for the comparisons and see where a little bit of “shop work” might come in handy on your next software project. You may be surprised at what you find.
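As a purely illustrative example of that mindset, here is a small Python sketch of a software “swinging hammer”: it batters a stand-in function with progressively heavier and uglier inputs and notes whether each blow is survived. The target function and the inputs are assumptions made for the sake of the sketch; the real value comes from choosing blows that match your own application.

```python
import time

def target(text):
    """Stand-in for the code under test; replace with the real thing."""
    return text.strip().lower().split()

# Progressively "heavier" inputs -- the software equivalent of swinging a bigger hammer.
hammer_blows = [
    "",                           # nothing at all
    " " * 10_000,                 # all whitespace
    "a" * 1_000_000,              # one enormous token
    "word " * 500_000,            # huge but well-formed input
    "\x00\xff\ufffd mixed junk",  # control characters and odd unicode
]

for blow in hammer_blows:
    start = time.perf_counter()
    try:
        target(blow)
        status = "survived"
    except Exception as exc:      # a crash here is the software "dent"
        status = f"failed: {exc!r}"
    elapsed = time.perf_counter() - start
    print(f"{len(blow):>9} chars  {elapsed:7.4f}s  {status}")
```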

Wednesday, April 28, 2010

Wednesday Book Review: Software Testing: Fundamental Principles and Essential Knowledge

Through the years, I have come across a number of books that I have used and valued. These may be new books, or they may be older ones. Each Wednesday, I will review a book that I personally feel would be worthwhile to testers.


Suppose you were talking with someone about to go into the testing field, and they said, “I want to get something that would really help me get to the heart of what I need to know, quickly and efficiently.” Given that, what would you recommend? I think the book “Software Testing: Fundamental Principles and Essential Knowledge” by James McCaffrey would be a good first start.


First, this book is very thin. I don’t say that to criticize, but to extol a virtue. This book is focused clearly and distinctly on what the author considers the fundamental and the essential. It is terse, and covers a number of concepts directly. Each concept builds on the previous ones, and each chapter takes the reader from basic understanding into actual testing scenarios, with a review question and exercise at the end of each section. The answers to these review sections are given in the back of the book. 105 pages is all there is from the Introduction page to the last review question answer. So what gets covered in 105 pages?


Chapter 1: Software Testing and Test Cases. This section runs the tester through a whirlwind tour of test cases and test suites, and how test cases are put together and what they are meant to cover.

Chapter 2: Fundamental Mathematical Techniques. Combinatorics, Statistics, and some details about cryptography are covered, as well as a section on Pair-Wise testing (which made me laugh a little considering my blog post of yesterday :) ).

Chapter 3: Project Management Concepts. A quick tour through the Software Development Life Cycle, Test Documentation, writing of bug reports, and reproducing issues dominates this section.


Chapter 4: Core Principles. Various methods of software testing are discussed, including manual vs. automated, black/white box testing, performance, stress, configuration and localization testing.


Chapter 5: Essential Knowledge. This is a collection of miscellaneous test areas, including Character encoding, XML, databases, HTTP protocol, and networking testing and troubleshooting.


Chapter 6: Units, Modules and Components. This section talks about details associated with Dynamic link libraries, module testing, some details about unit testing, the Windows System Registry (noticing a bias here? Yep, this book is heavy on Windows testing), and creating stub and mock tests.


Answers to Review Questions. A section at the back of the book gives the answers to the review questions in each of the sections.

No question, this book takes the approach of grabbing the reader and diving in. There’s little in the way of a gentle learning curve or illustrations of the points. The details here are focused and meant to help the reader get right into the thick of things. The good news is that the book is not designed to be read straight through from front to back. The areas can be read in any order, and each section ends with a review question to check how well you have retained the ideas.

Disclaimer: this book is associated with a certification program offered by Volt Information Services, Inc. and as such, the questions are written as though they are part of a practice test for a test certification. Some of the answers to the questions did have me scratching my head in a few places (not that the answers were wrong, just that they seemed to be strange choices based on the material presented).

Bottom Line:
If you are a beginner tester, or if you want to learn more about basic principles of software testing, this would be a good first book, but I wouldn’t recommend it as the only book a tester gets. While it covers many areas, there are plenty of others that get little to no coverage. Supplementing this book with the test fundamentals found in Kaner, Falk and Nguyen’s “Testing Computer Software” would give a beginning tester a great foundation to build on.

Tuesday, April 27, 2010

The Dialect of Testing


Yesterday I had an interesting experience. I was talking with a co-worker who has a friend who is a recruiter. The recruiter was looking at a number of resumes she had received for testers, and she was trying to determine whether the people in question would be a good fit for the job. My co-worker asked if I could quickly review the job description and offer any suggestions for narrowing down the list. I was able to help somewhat, but I also noticed that the original description was vague. It asked for people with UI experience, but it did not spell out whether that meant experience developing and designing user interfaces, or experience testing them. Likewise, it asked for familiarity with test scripts. I explained that, with a description this vague, they could be looking at resumes from an entirely black box tester who writes literal test scripts (enter value A into input B, expect C=Pass, else FAIL) to test user interfaces, or they could be looking at a white box tester who understands user interface development and can therefore write unit tests to exercise functions and procedures. I gave him some feedback to pass along to the recruiter: be specific about the technologies, methods, and language used to describe testing, because, to quote Inigo Montoya from "The Princess Bride"... "you keep using that word... I do not think it means what you think it means!"
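To make the distinction a bit more concrete, here is a hedged sketch of both flavors side by side in Python. The form fields, prices, and function are all invented for illustration; the point is only that the first is a literal, step-by-step script someone follows against the UI, while the second is code exercising code.

```python
import unittest

# Flavor 1: a literal, black-box style test script -- steps a person (or a
# thin runner) follows against the UI. Field names and values are invented.
UI_TEST_SCRIPT = [
    {"step": 1, "action": "enter", "field": "quantity", "value": "3"},
    {"step": 2, "action": "click", "field": "calculate", "value": None},
    {"step": 3, "action": "verify", "field": "total", "expect": "29.97"},
]

# Flavor 2: a white-box unit test against the function behind that UI.
def order_total(quantity, unit_price=9.99):
    return round(quantity * unit_price, 2)

class OrderTotalTests(unittest.TestCase):
    def test_three_items(self):
        self.assertEqual(order_total(3), 29.97)

    def test_zero_items(self):
        self.assertEqual(order_total(0), 0.0)

if __name__ == "__main__":
    unittest.main()
```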

This has been something I’ve started to notice more and more. Developers and testers tend to think that they speak the same language, but there are many examples where testing phrases and concepts that are well understood by testers are either less understood or totally foreign to others. As an example, many testers are familiar with the concept of “pairwise testing”, where the tester creates a matrix of test options that covers every discrete pair of parameter values, as opposed to testing every combination of parameters exhaustively. The phrase “pairwise testing”, however, seems to be one of those “test dialect” statements; when I have spoken with software developers and stated I was going to use pairwise testing to help bring down the total number of test cases, I have received a few blank stares and an inquiry along the lines of “pairwise testing? What’s that?”. When I describe the process, I often get a comment back like “oh, you are referring to Combinatorial Software Testing”. Well, yes and no: pairwise testing is a method of combinatorial software testing, but it is not in and of itself all of combinatorial testing. It’s not an exhaustive process, but rather a way to identify the pairs of interactions that are most relevant and, ideally, the most beneficial for spotting issues.
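Here is a small Python sketch (with made-up parameter names) that shows the difference in scale: the full Cartesian product of three parameters with three values each is 27 combinations, while a pairwise selection of 9 cases still covers every value pair, which the script verifies.

```python
from itertools import combinations, product

# Three made-up configuration parameters, three values each.
params = {
    "browser": ["Firefox", "IE", "Chrome"],
    "os":      ["Windows", "Mac", "Linux"],
    "locale":  ["en", "de", "ja"],
}

exhaustive = list(product(*params.values()))
print(f"Exhaustive combinations: {len(exhaustive)}")      # 3 * 3 * 3 = 27

# A 9-row pairwise (all-pairs) selection; the index triples form an L9
# orthogonal array, so every value pair across any two parameters appears.
l9 = [(0, 0, 0), (0, 1, 1), (0, 2, 2),
      (1, 0, 1), (1, 1, 2), (1, 2, 0),
      (2, 0, 2), (2, 1, 0), (2, 2, 1)]
names = list(params)
pairwise_cases = [tuple(params[names[c]][v] for c, v in enumerate(row)) for row in l9]
print(f"Pairwise test cases:     {len(pairwise_cases)}")  # 9

# Verify: every pair of values for every pair of parameters is covered.
for c1, c2 in combinations(range(len(names)), 2):
    needed = set(product(params[names[c1]], params[names[c2]]))
    covered = {(case[c1], case[c2]) for case in pairwise_cases}
    assert needed <= covered, f"missed pairs for {names[c1]}/{names[c2]}"
print("All parameter-value pairs covered with 9 of 27 cases.")
```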

Another testing technique that seems to have gotten a few heads scratching is when I mention “fuzzing” or “fuzz testing”. The idea behind fuzz testing is that a user (be it a live human, an application program, or a test script) provides unexpected or random data to a program’s inputs. If the program produces an error that corresponds to the input and appropriately flags it as invalid, that’s a pass, whereas performing these steps and causing the program to crash, or present an error that doesn’t make any sense, would be a fail. Again, when I’ve talked to software developers and brought up the notion of “fuzz testing”, they have looked at me like I’ve spoken a foreign language. When I’ve explained the process, again, I’ve been offered a corollary that developers use in their everyday speech (“syntax verification” has been a frequently used term; I’m not sure if that’s representative, but it’s what I’ve heard).
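For anyone who wants to see the idea in motion, here is a minimal Python sketch. It uses json.loads purely as a stand-in for “the input handler under test”; a clean, documented rejection counts as a pass, while any other kind of blow-up is exactly the sort of thing fuzzing is meant to flush out.

```python
import json
import random
import string

# A crude fuzz loop: throw random garbage at a parser and check that it
# either succeeds or fails *cleanly* with the documented error type.
random.seed(42)

def random_garbage(max_len=40):
    alphabet = string.printable
    return "".join(random.choice(alphabet) for _ in range(random.randint(0, max_len)))

failures = 0
for _ in range(1000):
    blob = random_garbage()
    try:
        json.loads(blob)                 # valid JSON by accident: fine
    except json.JSONDecodeError:
        pass                             # rejected with the expected error: pass
    except Exception as exc:             # anything else is the kind of bug fuzzing hunts
        failures += 1
        print(f"Unexpected {type(exc).__name__} on input {blob!r}")

print(f"Done: {failures} unexpected failures out of 1000 random inputs.")
```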

So what’s my point? Do testers have a distinct dialect? If so, how did we get here? And now that we are here, what should we do going forward? Also, have you noticed this in your own interactions? How many out there have had these challenges, and what has been your experience with clearing up the communication gaps?

Saturday, April 24, 2010

What Can You Do With an Hour a Day?

Does it feel like you are running to stand still? I feel like this on many days, especially when I have multiple projects to track, and goals that I want to accomplish but never seem to get any traction on. This isn’t a reality that just testers deal with. Almost everyone faces this dilemma, and much of the time, the “urgent” wins out over the “important”. Yet we all still want to make positive movement and develop and grow, right? So how can we do that, realistically?


I have been having a little bit of fun with my son the past month or so. When Final Fantasy XIII came out, we both decided to set up profiles on our PlayStation 3 and encourage each other through the game. I knew I wouldn’t have the time to play that my son would have, but I decided on a simple strategy: I would set aside an hour a day to work through and play the game. This is in contrast to the fits and starts of how I play most games. Sometimes I would put in a four-to-five-hour session, getting totally engrossed until I was exhausted, and then other things would take me away from the game for several days, and then I would pour in another long session. The net result is that, while I enjoyed the games and I enjoyed finishing them, I felt emotionally spent by the time the games were over, and I always felt somewhat uncomfortable with the amount of time it would take me to “get back into the groove” of the game.


It’s been a month since FF-XIII came out, and generally I have held to my one-hour-a-day approach. I’m a little shy of 30 hours into the game, and I have noticed something. While I still have challenges to overcome and strategies to work through and figure out, almost none of the frustration I had with previous games around “finding the groove” has been apparent. Each day I get a tiny bit better and I move a little bit farther, but for the most part, I make steady progress and I am genuinely enjoying this game more than previous installments. Sure, the game looks great, and it’s fun to play and get bonuses and level up and all that, but the biggest difference is that I don’t miss a beat; each day I make a little bit of progress, and I retain the knowledge and skills needed to keep making steady progress.


I’ve been thinking about this metaphor because I have one major challenge that I never seem to get any real traction on at work, and that is automation. Through playing FF-XIII this past month, it’s dawned on me what my problem has been. I’ve been waiting for that golden moment when I can have a marathon “automation session”, but testing needs and projects have gotten in the way of my getting it done. So what happens? I keep trying to get the regular testing stuff done so I can clear a big swath of time to do automation work. Since I’m a lone gun tester, I do not have another person I can turn to and say “hey, can you work on this while I do automation work?” So the automation gets put off until I can devote a big chunk of time to it, which frankly is exhausting, and then I have to get back to the regular testing efforts. Wash, rinse, repeat.


So what’s the point of this article? Simple… I hereby pledge to set up each work day so that there is one hour of dedicated automated testing time. I realize that might not sound like much, but it is way better than trying to find time after everything else gets finished. Scheduling it as a daily appointment forces me to clear the deck for that time, and communicate that fact to others, so that I can make some progress in an area that desperately needs it. From there, I keep a calendar and mark off the days that I was able to successfully complete objectives during that period of time. My goals are to start small, study up on frameworks and implement them, to utilize open source tools like Selenium and JMeter to get a better grip on how to develop automated tests, and to spend more time with my current automated test tool, which is TestComplete.
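For what it’s worth, a first hour might produce something as small as the sketch below, which uses the Selenium WebDriver Python bindings. The URL, field names, and expected page title are placeholders I’ve invented; swap in whatever your application actually exposes.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# A first "hour a day" automation step: open a page, poke a few elements,
# assert one thing. The URL and locators below are placeholders -- point
# them at whatever application you are actually testing.
driver = webdriver.Firefox()
try:
    driver.get("https://example.com/login")                            # placeholder URL
    driver.find_element(By.NAME, "username").send_keys("testuser")     # placeholder field
    driver.find_element(By.NAME, "password").send_keys("not-a-real-password")
    driver.find_element(By.NAME, "submit").click()
    assert "Dashboard" in driver.title, f"unexpected title: {driver.title!r}"
    print("Login smoke test passed.")
finally:
    driver.quit()
```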


Why just an hour a day? Because I know that, even under the most adverse of testing environments, I can manage to focus an hour of time on this. If I can get more, then of course I will take advantage of that, but making sure that one hour a day gets dedicated to this purpose translates to a minimum of 5 hours per week, 22.5 hours per month, and 250+ hours per year. What’s more, that dedicated time may not move the ball very far very fast, but it will move it forward, and that forward momentum is what’s often needed to gain acceleration and start getting better results. Plus it doesn’t require as much ramp-up time to get into the groove as the marathon stretches separated by long stretches of inactivity do.


I realize this is advanced common sense (or perhaps fundamental common sense) but occasionally, we need to be reminded that the important things need to be given time, even if that means some urgent things get pushed back or ignored for a little while. Here’s wishing you all a productive hour a day, and the determination to mark it, keep to your schedule, and stick with it.

Friday, April 23, 2010

Stepping Away to Step Back In

A few years ago, I made up my mind that I wanted to branch out of testing. I felt it was an endeavor that was poorly understood, and one that many people had too many misconceptions about. Because of that, I felt it might be time, after several years’ experience, to see what other avenues I could venture into.


My first example of this was to try Application Engineering. However, I found out that if you asked ten different people what an Application Engineer does, you would get ten different answers. Some felt it was a fancy way of saying “tech support”, while others had visions of a process expert who retooled production lines and made sure that the processes were accurate and efficient. One thing became clear: the nebulous nature of the role made it a difficult fit for me.


After I went back to school in 2003, I went to work as a tester again (hey, back to what I know), but after a short period of time actively testing, I was asked to do support while new projects percolated up. Several months later, I was still doing support while most of the testers were put onto other projects. I inquired as to why I wasn’t being given another testing project (had I done something wrong?). The answer was no, I hadn’t done anything wrong; in fact, I was doing something very right. They felt that they could get any number of people to test, but getting someone who could do quality technical support consistently was like striking gold! The fact that I could test and do good technical support put me in an interesting position, one where I could be very effective and really get to understand the User Experience issues that many of the testers and project managers were either not addressing or hadn’t figured out how to address.


When I finished school, there was a lull in the Q.A. jobs being offered at the time, but quite a few opportunities in technical support. Since I’d spent the previous two years sharing time between test and tech support, I figured I’d give it a go. My time as a dedicated support rep would be short lived, though, because as soon as I came into the organization, the development team was sharing their frustrations that they had little in the way of a dedicated Quality Assurance group, and I opened my mouth to mention that, hey, I’m a tester (or I was, at any rate). This resulted in a shared arrangement for a year, where I split my time 50/50 between support and testing, with the net result that I was asked to climb back into the QA arena full time again.


It’s been three years since that time, and I am happy to be back in the QA saddle again. What changed? When I look back at where I was in 2000, I had done several years of testing, but there was little structure or understanding behind it. I did it because it was needed. After my forays into Application Engineering and Technical Support, I had a much better appreciation of the value of the User Experience, and a real feel for what customers actually wanted their products to do. This allowed me to set a whole new expectation for testing, and that new expectation energized me and motivated me to focus on what the customer would see, making that the paradigm that drove my testing. It went a long way toward bringing me back to my love of testing and gave a new purpose to my efforts.


Sometimes stepping away from what you are doing is the perfect thing to help you get clarity on what you really want to do and why. Here’s hoping that, if anyone else wants to do this, it doesn’t take them seven years to get that clarity. Sometimes I wonder where I could be today and what I could be doing had I had that epiphany back in 2000, but the good news is, I’ve had it, and it’s what drives me today. We’ll see what tomorrow brings :).

Thursday, April 22, 2010

Does Simple == Reliable?


In the wake of the issues surrounding the Toyota and Lexus recalls, I have been thinking about cars a lot lately. Specifically, I’ve been thinking about the cars I’ve owned over the years and the various challenges they all have posed (or not posed). During my life, I have owned cars from the ’70s, ’80s, ’90s and ’00s. Some have been gigantic vehicles (my 1971 Ford LTD Wagon had an engine that would put to shame most full-sized trucks of today), and some have been a bit smaller (my 1975 AMC Hornet was an early example of what would come to be known as the “economy car”). I’ve owned what were considered top-shelf cars (my 1978 Saab GLE, a true boutique car with seriously finicky boutique needs for maintenance) and genuinely bare-bones vehicles (my 1990 Ford Escort Pony had no power anything in it, and was purchased for $6,000 brand new).


Looking back on all of these vehicles, one thing stood out. All things being equal (equal maintenance, equal attention to wear and tear and the replacement of parts), the vehicle with the best longevity and the best overall performance (not speed or handling, but cost to maintain over its lifetime) was the simplest one I owned: my 1990 Ford Escort Pony. It has, to date, gone farther and less expensively than all of the others. In fact, my brother and I drove that car relentlessly for ten years, averaging 100 miles per day between us, and finally retired it after it had logged just shy of 300,000 miles (for those who are familiar with American cars prior to the 1990s, you will understand why this astonished both of us over the years).


My brother, who is a pretty adept auto mechanic, told me why he felt the Escort did so well for so long. At the time, I was scraping together whatever money I could and couldn’t afford an expensive car, so I opted to buy the absolute base model off the lot: manual transmission, no power windows, no A/C, no power steering, no power brakes (heck, it didn’t even have a radio when I first bought it, though I did put one in after the fact). All of this added up to the fact that the moving parts and electronic connections were limited to the bare essentials. In short, little went wrong because there was so little that could go wrong.


I’ve often thought about this when I’ve approached testing projects and testing methodologies. Many who have heard me talk about testing have heard me reference my “trusty old pony” as a testing metaphor. For those who haven’t heard it, here it is… just as the Ford Escort Pony had so little that could go wrong, one should make tests that are, for the most part, as simple as required to test the feature and perform the task. Just enough detail to accomplish the task successfully, but no more. Some people have criticized me for this in the past, asking for “more details” and “more explanation” when creating test plans or adding test cases (oftentimes just for the sake of adding test cases and no other reason). Past experience has shown me that the added documentation was rarely used, and that the added details often didn’t result in better test cases. More maintenance was required to keep the documents and tests up to date, absolutely, but there was not a direct relation to better testing or better performance. The process ran well when things went well, but there was a heavy re-do charge when things had to change (it reminds me a lot of my 1978 Saab; a dream of a car when it ran, but very expensive to fix, and it seemed to need to be fixed an awful lot).


For those who find themselves doing a lot of document maintenance, test case maintenance, and rework because of tests that promise much but don’t really seem to deliver, consider my “trusty old pony”, and try to see how much you can do with simple, focused plans and tests. I believe the time you save in maintenance will translate to more and better testing, all things being equal. Happy driving :).

Wednesday, April 21, 2010

Wednesday Book Review: Linchpin

Through the years, I have come across a number of books that I have used and valued. These may be new books, or they may be older ones. Each Wednesday, I will review a book that I personally feel would be worthwhile to testers.

Some books allow a person to pick up a nugget of interest. Some books are of practical interest and have much that a person can apply. Every once in a while, a book comes along that is a “game changer”. For me, “Linchpin” is that game changer. Seth Godin has a knack for talking about marketing, for finding the unique in the world of the mundane, and for getting that uniqueness out to those who are seeking it. In “Linchpin”, Godin steps away from the idea of marketing products, and instead focuses the attention on the ultimate marketable product… ourselves!

So what is it about Linchpin that makes me consider it a game changer? It comes from an idea that Godin presents early in the book: the notion that the “we’ll take care of you” work bargain, and the sense of security that used to come with it, is gone forever. Globalization, changing and increasingly sophisticated technology, and a changing population and focus of energies have made the old model obsolete. The old model was based on the factory being the center of life for so many, a life that rewarded conformity, sameness and reliable repetition of labor. Just like cogs in a machine, we as workers were trained to be cogs as well. Those days are over.

Instead, Godin makes the case that the truly indispensable people in a company, or in any job, are the ones who create value beyond their role as a cog in a machine. They offer more, and their value is not just tied to the value of their labor. Instead, their value is derived from the fact that they more closely resemble artists than drones. What Godin calls for is a return to the “old ways”, where the development of the artist, and applying one’s art in the workplace, is the differentiator. It’s the artist, Godin exclaims, who acts as the “psychic glue” that makes a project, a team, or a company really work. Godin calls the people who are able to achieve this level of performance the “linchpins” of an organization.

For those unfamiliar with what a linchpin is, it is a small device used to prevent a wheel or other rotating part from sliding off the axle it is riding on. Remove the linchpin from the wheel, or break the linchpin, and the wheel will come off. Godin compares the person who learns how to bring their craft to the level of an artist to one who has the hallmarks of being a linchpin for their organization. He also makes the point that becoming this person requires much work and sacrifice on the part of anyone who wishes to reach that level. Linchpins work hard and they add value over time, to the point where, if a company were restructuring, the linchpins would not be the first to go. In fact, they would be some of the last. The capital of the artist, along with raw talent, is the outpouring of “emotional labor”, and giving the best of one’s self and efforts through the hard work of emotional labor is really the only reason people are paid. It’s what is truly expected of people.

Godin also makes the point that there are two big things that get in the way of doing extraordinary things. Those two things are what Godin calls “the Lizard Brain” and “the Resistance”. Summarizing these two concepts would take more space than a single blog post could do them justice, but suffice it to say that both are huge obstacles to our true potential. The Lizard Brain is the idea that there is a part of the human experience that is instinctive, that tries to keep us safe, that works to keep us from doing anything we would perceive as dangerous. The Lizard Brain is well adapted to keeping us from being eaten by a predator or getting us to run away from a situation where we might die (it’s what can trigger the adrenaline rush that lets us perform superhuman feats when we are in actual danger). However, the Lizard Brain also gets in the way in emotional situations. The things that make us emotionally uncomfortable will also trigger the Lizard Brain to jump into action. Along with the Lizard Brain, the Resistance also manifests itself in these situations, where our perceptions of what is happening feed the reaction of the Lizard Brain. Each feeds the other, and both work against us doing our best work, or doing those things that may be necessary and essential but scare us internally because we know there is a significant potential for failure. Doing the best work sometimes means losing the script, chucking the manual, and going freestyle. Let’s face it, living without a map can be scary. Godin states that not only is living without a map an important skill, it’s essential to surviving in the 21st-century marketplace.

External Video: Seth Godin describes the Lizard Brain

Godin makes two main points for readers to grab onto as they apply the efforts described in Linchpin. The first is the idea of expending “emotional labor”. This is one of the key areas that Godin encourages us all to learn how to do (for obvious reasons, he cannot give us all a roadmap for how to do this effectively. If it could be broken down into a simple scripted formula, it would cease to be art; it would render the art into another cog that could be easily replicated and replaced). Some reviewers have criticized Linchpin because Godin does not go into the details of how to apply the ideas. I would argue that it is not possible to do that, and that the real way to learn how to become an artist in your sphere is to actually practice art in your sphere. The second point is the idea of “giving gifts” of your art. Too often, we think in terms of strict reciprocity: I give you this and you give me that. In fact, most people tend to think about the workplace this way. I do my work, you give me money to do it; if you want my ideas, pay me first. Godin makes the point that this is backwards. When we give of our ideas up front, we set the stage for being rewarded down the line. As we build our personal brands, the act of giving more of ourselves and our talents will actually develop more of a following for our personal art, and that process will allow us to develop the very things that matter to us and that will matter to others. I will confess it was this very logic that prompted me to start TESTHEAD. It’s a way for me to practice my art and test what I know, or think that I know, and it gives me an arena to give away my insights and to receive feedback on them, both good and bad (or, more to the point, to see how they are being received by others).

Bottom Line:

For any person who has ever been downsized from a job and wondered “what could I have done differently”, for any person who has thought “what could I do to become indispensable in my chosen field or endeavor”, or even those who think “I want to become absolutely awesome at something, whatever that something is”, Linchpin is a book to put high on your list. The prose is engaging, and Godin keeps your attention. You learn many things that you don’t necessarily want to face, but if each of us did face those issues and really applied the recommendations in this book, wow, imagine what each of our work worlds would be like. Is the path from cog to artist, from disposable to linchpin, an easy one? Absolutely not, but it is definitely a worthwhile journey, and it’s one I’ve committed to taking. Likewise, when I get those feelings and that little voice in my head telling me I shouldn’t bother, that I’m not good enough to do this, that I don’t have any right to go where I want to go, I can now recognize the Lizard Brain and the Resistance for what they really are, and now that they have names, I can call them out and overcome their hold on my best efforts. For those who are looking to do the same, get Linchpin and decide how you will strive to become the artist you want to be.

Tuesday, April 20, 2010

Three Pillars of a Tech Job Interview

I felt compelled to revisit this due to the number of people I personally know who are currently looking for work and are in transition at the moment (a friend of mine uses the term “under-utilized”). For just about everyone who finds themselves in this situation, there is going to be, at some point, the sit-down interview. While this varies from place to place, there are generally three areas that interviewers look at, and they will make or break an applicant. Those areas are the technical, the motivational and the interpersonal. While they may not be termed exactly this way, bet that interviewers are weighing these aspects whenever they talk to an applicant, and the ability to perform highly in each of these areas will determine who gets a second-round interview or an offer, and who does not.


I first became aware of these areas when I was working at Cisco Systems in the early 1990s. During that time, Cisco was growing hand over fist, doubling in size or more every year. Yet even with that growth, they had a solid process for interviewing people. What I found interesting was that, many times, even if someone was not as technically skilled as other candidates, there was an uncanny ability to pick, most of the time, those people who would excel and grow over time, and be in positions to be solid leaders within a couple of years. I found that the emphasis on the three aspects of technical, motivational and interpersonal was the key to that success.

Technical: This is self-explanatory. For most jobs, you have to have the technical skills to do the job. What’s not so obvious is that the technical skills are not always as cut and dried as many would think. Yes, if you are going to be a database administrator, you need to understand SQL and how to manipulate it. If you are hoping to get a programming job, you had better know how to program. In some cases, experience in a particular language or with a particular tool is important, especially when the company has a specific need for that ability right then and there. What’s not so well known is that many companies, especially smaller ones, are early in the process of developing and codifying the standards they will use and the tools they will utilize, and their skill set needs are more fluid. If you are well versed in Perl and the shop in question is looking for expertise in Ruby, as long as the immediate need isn’t urgent, it is possible to present your experience in Perl as a selling point and show that you could grow into the Ruby skills required to be effective. One area that gets short shrift and can doom you is your cover letter and your resume, and your ability to sort words, spell words and write effectively. Yes, this is a technical skill, and one that undermines many candidates. It’s also not uncommon for companies to have a written test to see if you understand key grammar concepts, spelling, and sorting order for entries. Candidates have been bounced at this point and not considered for follow-up, even when their technical skills were excellent.

Motivational: I often phrased this as the “what gets you up in the morning” aspect of an interview. The motivational area is a lot fuzzier than technical ability, which can be discussed and ascertained directly. Motivation is also an easy thing to lie about, but not for long, because people’s true motivations do show through if probed long enough. Motivational factors can also be colored significantly by what is going on in someone’s personal life. A person’s financial situation, home life, relationships with family and friends, and social situations all come into play when probing the motivational aspects of a candidate. Most people who interview want a job, so the most basic motivation (being employed) is almost a given. However, there are other aspects that, when taken into consideration, change the way a person interviews. When someone is in financial distress, even if they make no mention of it, it shows in the way they sit and the way they speak. When this is not an issue, people interview differently. Dave Ramsey talks about this on his radio program when he discusses financial situations with people and getting out of debt; he was the first to give me this idea. The way he puts it is that there is a “smell of fear” around those who have issues straining on them, whether financial, familial or social. When you have addressed these areas and they are not sources of stress in your life, you interview differently, your entire countenance changes, and what’s more, you are no longer interviewing from the standpoint of “I need this job” but rather “I would really like this job”. The difference is tremendous, and being able to speak to where your motivation comes from makes a tremendous difference. The less stress from other areas, the more you can focus on the enjoyment and challenge the position entails, and yes, it will show.

Interpersonal: This is the classic “can they play well with others” area. While the idea that everyone who comes into the workplace will be a social dynamo is unrealistic, there is a definite need to weigh how a person interacts with others on the team. Cisco in the early ’90s did this with group interviews, where two or three people sat to talk with a candidate. It was not uncommon for an entire team to interview someone over the course of two or three sessions, and each person was encouraged to be themselves, albeit in a professional manner (no one would intentionally go in and try to push people’s buttons, for instance, though occasionally that is exactly what happened). This interview technique has great potential to put someone at ease, because it’s much more conversational, and more people translates to a more casual conversation. It was common to do this part over lunch, where we would go to a local deli and just sit down and talk about the things that interested us. Many times, work would not enter into the conversation at all. By gauging what interested people in their personal lives and what their hobbies were, and many times finding shared interests, we really brought people out and had them freely communicating in a way they might not in a regular interview setting. One word of warning, though: the interpersonal part, when a person has been set at ease, does have the danger of veering the candidate into a potential trap. If you are too “free and easy”, you run the risk of offering too much information or putting your foot in your mouth. I remember one applicant who was out with us at lunch, and he started getting rather crude with his comments about previous employers and offering gossip about where he had previously worked. This was seen by many in the group as unprofessional, and it cost him an offer. While these situations are rare, remember that keeping a professional demeanor is important, and there is no room for gossip in an interview. Love of video games, sure, but telling on co-workers should be avoided at all costs.


Even if your interviews are structured differently, the mix of these three aspects will be important in differentiating you from other candidates. Being strong technically but showing unfavorably in the motivational or interpersonal areas may cost you a gig. Alternately, you could rock the motivational and interpersonal areas but be lacking in the technical area, and that too will cost you the gig. I will say, however, that if you are slightly lower in the technical arena compared to others, your ability to shine in the other areas can give you the nod over a slightly more technical candidate. So to all my friends who are looking for work or who are considering new vistas, keep these three areas in mind when you interview. You may be surprised at which element makes or breaks a prospect.

Monday, April 19, 2010

Nor-Cal Scout Jamboree and Stress Testing in Real Life

This past weekend, I joined about 20,000 other participants in the Boy Scouts of America as we descended upon the Alameda County Fairgrounds for what was to be the Northern California Jamboree celebrating the 100th Anniversary of Scouting in the United States. It promised to be a fun event with a lot of booths, activities, food and participation. My Troop and Lodge participated in the Pow Wow event to showcase Native American dancing and crafts.

From the time my group arrived on Saturday (which was going to be “the big day” of the event), we saw that there was a definite mismatch between service and need. Certain things that were meant to help the situation ended up doing the opposite. Below are some cases in point.

Pre-Registration vs. Paying at the Gate: In most cases, one would think there would be a benefit to registering in advance and having payment made early. Likewise, paying the day of the event would be a much longer process, or so one would think. Interestingly enough, one of the boys who decided to come the day of the event walked up to the ticket booth and purchased his ticket within 5 minutes, at a rate of $5 more than the early registrants paid. Those who registered in advance had to check in at will call, where there were only two tables and a handful of volunteers processing the 18,000+ attendees claiming their pre-paid packages. The net result was that people had to wait upwards of an hour to get their pre-registered, paid-for items. Needless to say, there were quite a few people who would have gladly bought the on-site tickets at that point, even if it meant paying $5.00 more per person.

Activities for 20,000 People: The lines for events were long, and in many cases, those looking to participate had a tight time limit on the things they could do. Most of the participants had to make choices about what they could do with the limited time and forego other options. This is normal, but it wasn’t as though some things had long lines and others had very few; most areas were overloaded with participants and did not have enough volunteers or staff to man them.

An Hour Plus to Get Food: I was tied to an area for much of the day, so I didn’t get an opportunity to go get lunch. One of my assistant leaders offered to go get some food for me, which I gratefully accepted. Over an hour later, my assistant arrived with the food. I asked if he had been waiting that whole time, and the answer was “yes”. There weren’t enough food stalls at the site to cover the demand, and the vendors were the ones under contract to the fairgrounds. Each of us received a $5 voucher for lunch, but almost nothing could be had for $5; everyone had to pony up extra cash even to get a small meal item. I later found out that the hour-plus wait was on the low end of the spectrum. Another one of the adult leaders for my Troop waited over two hours in line to get food.

I highlight these things not to poke at the people who put this event together, nor am I saying we had a lousy time. For most of us, we had a great time and enjoyed the day immensely. What I wanted to point out was that, even with everything in place, and all plans made, many of the best laid plans can suffer when the system gets overloaded.

There has to be a tie-in to testing here, right? You bet there is. Often, those of us who are testers are given clear ideas and objectives as to what we need to do, and we are enthusiastic to meet them. Then, when the day comes to test, we discover that we have to cover twice the load that was anticipated with no additional resources. What happens? Things bog down. Tasks take longer, coverage gets spottier, and the chances for dissatisfaction grow larger. At the Jamboree, many people complained that the service and availability were poor, when really what happened was that there was a disconnect between the expected number of attendees and how many actually showed up. No question about it, those who were there performed herculean feats to do what they needed to do, yet it still wasn’t enough. We face the same situations in our everyday testing lives. To this end, we owe it to our stakeholders to be as up front and honest about our capabilities as we can be. Even if we do think we can cover what we’ve been presented, overestimating the effort and having to scale back is better than underestimating and then having to deal with a situation where we cannot possibly meet the demand.
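A rough back-of-the-envelope model shows why “twice the load” hurts far more than twice as much. Treating a food stall (or a test team) as a simple single-server queue, the average time in the system is 1 / (capacity − arrival rate), so waits grow explosively as demand approaches capacity. The numbers below are invented purely for illustration:

```python
# Back-of-the-envelope only: an M/M/1 queue's average time in the system is
# 1 / (mu - lambda) when arrivals (lambda) stay below capacity (mu).
def avg_minutes_in_line(arrivals_per_hour, capacity_per_hour):
    if arrivals_per_hour >= capacity_per_hour:
        return float("inf")        # demand at or above capacity: the line only grows
    return 60.0 / (capacity_per_hour - arrivals_per_hour)

capacity = 120                     # hypothetical: meals the stalls can serve per hour
for arrivals in (60, 100, 110, 118, 125):
    wait = avg_minutes_in_line(arrivals, capacity)
    print(f"{arrivals:>3} arrivals/hr vs {capacity} capacity -> ~{wait:.0f} min average")
```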

The Nor-Cal Jamboree was a fun time, and I was glad that I went, but I saw many areas where the process could have been improved, and in some cases dramatically, had there been more communication about the potential capacity. While I don’t expect our area to have an event of this size again anytime soon, I certainly hope that we all can learn from this and plan for more capacity next time around… and if they need a gadfly to test the scenarios and ask the annoying questions, hey, I’m happy to assist :).

Sunday, April 18, 2010

What's In Your Candy Bag?

Ever since I started teaching my son about looking at things critically and how to spot issues with them (or potential issues; sometimes things just work as designed, even though we may disagree with that assessment :) ), I've been amused when he comes up to me and asks, "Hey Dad, is this wrong?"

Sometimes it will be something on a site like Facebook or Gaia, where he will notice that an action does not do what it's supposed to do (and the subsequent frustrations that follow along with that), or he'll be reading an article or a book and notice misspellings he hadn't noticed before, but today he pointed out something to me that just had me laughing.

At church, he is in a class with other teenage kids, some of whom are rowdy at times, so as a way to keep order in the class, the teachers offer treats to kids who come prepared, participate, and do the reading assigned during the previous week. This week's treat was two bags of M&Ms (one plain, the other peanut). My boy noticed on the bag that there was a number to call if there were any issues or anything out of the ordinary with the product. He brought home both bags and gleefully showed me that, indeed, there was a problem. He counted out the M&M's inside each bag and found that the two bags had three and four candies, respectively, that didn't have the M stamped on them. He then told me he was going to call the company and let them know about it.

I chuckled, but he reminded me of a valuable lesson. To practice testing and finding answers, we have to be willing to look in multiple areas, and oftentimes find those things that are mundane and wouldn't normally rise to our attention. I will confess that I would not have thought to take out each candy and look to see if the label was there, but I smiled that my son thought to do that. Perhaps this was spawned by curiosity, perhaps boredom, but it shows that we can be made aware of little issues in almost everything, and looking at them and for them helps us develop our skills as testers. What was also great was to see the look in my son's eyes as he explained what he noticed, and how excited he was when he told me he was going to contact the candy company about it. I told him that, in many ways, this is similar to the experience of discovery and the satisfaction that a software tester feels when they find an issue in code that they are testing, and when they write up a bug report.

I'm interested in hearing about the results of the phone call he makes to the company (he couldn't do it today as it was Sunday and they were closed). Whether or not he gets any type of a reply, I congratulated him on discovering and reporting his first bug. I never guessed it would be inside of a candy bag :).

Wednesday, April 14, 2010

Wednesday Book Review: Surviving the Top Ten Challenges of Software Testing


Through the years, I have come across a number of books that I have used and valued. These may be new books, or they may be older ones. Each Wednesday, I will review a book that I personally feel would be worthwhile to testers.

For many people, there is a book that gives them an “a-Ha!” moment, one where they can see that, hey, this book was written just for me. Well, OK, there’s really no such thing as a book written just for a single person, but I will say that this particular book has been one of the most specifically helpful to me.

William Perry and Randy Rice wrote the book “Surviving the Top Ten Challenges of Software Testing: a People Oriented Approach” back in 1997. Quite a few changes in software technology and testing practices have happened since then, so there are a few areas that are not covered. An updated version would be helpful (and Randy Rice says he is working on an updated version of this book, so I’m definitely interested in seeing the changes when it becomes available), but even after 13 years, this book holds up well and sounds very much like the environments many testers work in today, and many of the aspects described in this book are every bit as relevant in 2010 as they were in 1997.

In “Surviving the Top Ten Challenges”, Perry and Rice take on the area that I consider to be the most challenging when it comes to testing: interacting with other people and groups within an organization. They tackle the issues that they saw time and again when they would consult with companies and help them solve problems related to software quality, systems, and yes, testing. As they state early in chapter one, testing is “challenging enough in and of itself, but when coupled with office politics and interpersonal conflict, it can task a tester’s mental health”.

The book uses a unique format: the ten challenges each get a chapter to themselves, and each chapter is formatted the same way. This allows readers to focus on the challenge that most concerns them (not all ten challenges may be facing each tester at a given time, but it’s a pretty good bet that at least one of them will prove relevant here and now).

Here's an outline of how each chapter is set up:

- Overview
- State of the Practice (a worst case scenario situation that describes the issue for illustration purposes)
- Impact on Testing
- Solutions to the Challenge
- Solution Impediments
- Guidelines for Success
- Plan of Action

Below, in descending order, are the original ten challenges as presented. Randy has since said that, were he to write this book today, he would reorder some, and perhaps remove some and add new ones that have come to the fore in the past 13 years, but overall, I think these are still representative of many organizations (your mileage may of course vary). I’ve also added my comments on each of the challenges and how I think they are still relevant today.

10. Getting Trained in Testing (not so difficult today, especially with the proliferation of information available through Wikipedia and other sources, but it still requires a fair amount of self-initiative. Getting buy-in from management to cover formal test training and pay for it is still an issue in many organizations).

9. Building Relationships with Developers (this is very important, and is still a source of friction in many organizations. Working in a non-adversarial manner is vital to the success of any long-term involvement in Quality Assurance and Testing).

8. Testing Without Tools (today, the explosion of open source software testing tools makes this less of an issue, but the adoption of tools or standardizing on tools is still an issue. By the way, for those looking for tools, Randy maintains an excellent list of "Cheap and Free Test Tools" at his Rice Consulting site).

7. Explaining Testing To Managers (I’ve found that I haven’t had to explain the need for testing, but I’ve definitely had to explain what levels of testing are needed, and occasionally to talk someone off a ledge when they thought the situation was hopeless, showing them there were solutions we could use that were not going to require everyone to become ascetic monks for six months).

6. Communicating With Customers and Users (if I were to order these based on 2010 realities, I think that this would be the number 1 challenge, and the one that many organizations find the most difficult in this day and age. Amazing that with increased availability of communications tools, we are still having conversations about how best to communicate, but there we are).

5. Making Time For Testing (an issue then and an issue now, coupled with having enough dedicated testers and resources).

4. Testing What’s Thrown Over the Wall (there are still quite a few shops out there that practice a strict waterfall methodology, so this is still an issue for many, but the adoption of Agile methods by many companies and work groups has made this less prevalent in the last decade. In some companies I’ve worked for, though, development took place off site and code was literally delivered via FedEx, so there’s still the reality of “over the wall” testing happening. Deal accordingly).

3. Hitting a Moving Target (this situation is still prevalent, and still an all too common occurrence).

2. Fighting a Lose-Lose Situation (the lose-lose situation Perry and Rice are referring to is the classic conundrum “find all the issues” coupled with “don’t delay the schedule”. It’s a Catch-22, much of the time. The best solution is to make sure that you do not present yourself as the “guardian of the gate” but rather the “teller of the story” so that those who are the stakeholders can decide what they want to do. This way, developers and project managers are the ones making the choice to go ahead, not the testers).

1. Having to Say No (really, being the bearers of bad news; remember, test doesn’t have any greater veto power on a release than any other stakeholder or key player, but we have an eye towards the issues that would potentially really irritate a customer, and we have to report that reality, do or die. Of course, how it is reported is key. We are on the same team as the developers and the project managers, and we all have the same goal: to release a high quality product to our users. When we approach the conversation with respect and tact, but also with a full accounting of all testing efforts and our honest interpretation of all the data, we can all come to a conclusion, and if testing is overridden, we have done what we had to do).


There is also an additional chapter at the end of the book, “Plan of Action to Improve Testing”. In this section, there are some concrete plans that the reader can draw on when they want to take on any of the challenges mentioned in the previous chapters, as well as ways to foster change in your organization, encourage that these areas get addressed, and reward those who take the initiative and follow through with the desired changes.


Bottom Line:

The book is 13 years old now, and some of the challenges presented are not the same as those faced by organizations today, but people are people: every one of these areas is a potential challenge, and each of the suggested remedies can very likely be implemented to positive effect in any organization. Again, not every one of the situations will be relevant to everyone, but each situation will likely be relevant to someone in a testing organization somewhere. Even if just one of these challenges is met, addressed and dealt with, this book would be worth the read. As it is, it’s one that will be a permanent part of the “people skills” section of my library.

Tuesday, April 13, 2010

Aikido and the Goals of Certification

I think it is only fair to state that, at the current time, I do not hold any tester certifications. Please realize that what I am about to say may ruffle feathers, some may have counterarguments, and there will be those who will say that I don’t know of what I speak because, hey, I don’t have a certification, so how would I know how valuable they are? The short answer is this: I have nothing but my own observations and my own experiences to speak to this, but I will offer that I am perfectly willing to admit that I don’t know what I don’t know.

Having said that, I will give a simple comparison to a sport I participated in for a while. What seems like a lifetime ago now, I trained at a dojo and practiced Aikido. I enjoyed it for a time and learned a lot about the techniques necessary for handling falls, disarming opponents, and performing and receiving throws. The interesting thing about Aikido is that you really cannot progress unless you show your ability, and there is a group review of your actual skill, in real time, under adverse conditions (the higher you rank, the more difficult the tests become). In short, to move out of the ranks of the “Kyu” (the student ranks) and become a “Dan” (the master ranks), you have to really show that you know your stuff and prove it over and over again.

This brings me back to my current impression of certifications (and notice the words I chose to use, this is my opinion colored by my own experiences or lack thereof)… most of the Q.A. certifications that are mentioned and are bandied about focus on taking a multiple choice test. If you pass the multiple choice test, you are certified. In my view, this is like going into the middle of a dojo and testing for a rank by answering questions. While it may be great to have the book knowledge as to how to disarm an opponent with a knife and pin them, would you feel confident in this student’s abilities if all they did was answer a question about how they would disarm someone? No. Would you feel confident if you physically witnessed them disarm several opponents armed with knives? Yes!

In my mind, this is the big thing that is missing from most of the certification paths I have seen to date. There is a lot of emphasis on passing a multiple choice test, but little emphasis on solving real world problems, proving that you are actually able to do the work listed on the exam, or showing that you genuinely possess the skills required to test effectively. The other issue I have is that, just like in an actual real world confrontation, some of the best practitioners of Aikido may not be the best at articulating each and every step, but my goodness they are whirlwinds on the mat and on the street! This is because they are instinctive, and their training has been less about intellectual explanation and more about the raw “doing”! Don’t get me wrong, both skills are important, especially if one of your goals is to teach, but if I had to choose between the person who could analytically explain the situation yet lacked the physical speed or dexterity to handle it quickly and effectively, and the person less apt to explain but able to quickly overcome an opponent, give me the latter any day. Likewise, in the certification sphere, I think it’s safe to say there are a number of testers out there (maybe a few of us have had experiences with them) who can pass a test, but in real world crunch situations may not perform as well as the paper says they can.

Now, again, it’s not my intention to poke at certifications, especially since I’ve already admitted that I don’t currently hold any (and for that matter, I only went through a couple of Kyu ranks in Aikido before life and its myriad distractions took me away from my studies, so feel free to take that for what it’s worth, too), but I have watched a number of the higher ranking exams and witnessed the awarding of a Dan rank, and I can say with surety that that was a person I would love to have with me in a rough neighborhood. When she walked into the dojo she had the appropriate trappings, including the black Hakama and belt that officially showed she was a Dan level Aiki (what we stateside would call a “black belt”), but she more than demonstrated that she deserved that Dan ranking by her movements on the mat.

Likewise, I am looking for a similar way to show my skills, or to develop them (as I say in the sub-text of this blog, this has to do with the mis-education and re-education of a software tester and others who may be dealing with similar challenges). Were there a nationally or internationally recognized certification that truly did such a thing, I would be greatly receptive to studying and sitting for such an examination. It’s very likely that I would learn a lot about what I did not know, and that, metaphorically speaking, I might be able to explain some situations but not fend them off in real practice, or be able to fend well but explain poorly.

Kinda’ like real life.

Wednesday, April 7, 2010

Training the Tiger to Test

Over the years, I’ve often wondered if there was something about the way my brain worked that just wasn’t the same as everyone else’s. I don’t mean that I feel I’m any more or less smart than other people; I’d say I’m of fairly average intellect, and I’m OK with that (of course, always willing to improve on it where I can). What I mean is that, for years, I would amaze people with my ability to remember some of the most arcane things (historical events, timelines and pivotal moments I could recite back with freakish detail), but when I tried to do the same in my math or science classes… nothing. Well, OK, not nothing, but nowhere near as clearly or as quickly as I could with history or trivia or other subjects. I listen to podcasts related to software quality and software development, and while I have all of the historical details down (such as what drove Dr. Deming to be embraced by Japan in the 1950’s and why we didn’t get to the meat of his proposals until the 1980’s, as well as his 14 Points), and I remember almost word for word many of Scott Hanselman’s Hanselminutes podcasts, when I sit down to try to code something from scratch, a lot of the time, it’s just {static}….

This is not to say that I cannot do the tasks that I need to do. I can code and perform complex tasks when I need to, but for me, many times, the “when I need to” deal becomes more the focus. I confess that I don’t have the passion for programming that some others do, where they dive in and play with code because it’s fun. However, I love to test, and mess around with things, and I find it great fun to poke and prod things to see why they don’t work or why they do. Testing concepts I have no problems with (or very few, I should say. Again, it’s what I do every day).

I’ve often struggled with why I cannot get my brain to do what I want it to do right when I want it to do it. As I was pondering this, two books were recommended to me: “Linchpin” by Seth Godin and “Secrets of a Buccaneer Scholar” by James Bach. Both added some interesting insights, some of which I’ve heard variations of before, but never in quite the way that these two books present them. In “Linchpin”, Godin spends a lot of time talking about “the Lizard Brain” and “the Resistance”, and how both tend to sabotage our daily efforts to excel and develop. This was great, as it gave me a good way to look at what I feel is causing my “learning blocks”, and to see that many of them could be dealt with once I understood what they were and why they were there. I realized that I retained those things that I found comfortable or fun to learn. Linear knowledge, like historical narratives or short stories, or even full novels, I was able to keep together and remember a great deal about, in some cases years later. I’ve picked up half-read novels that I hadn’t looked at for close to a decade, read one paragraph, and had the entire story preceding it jump back into my memory as vividly as if I’d just read it. I do well with this type of learning because it’s comfortable for me.

Science, languages, recursion, mathematics… those are not comfortable topics for me, and it’s that very discomfort that gives both the Lizard Brain and the Resistance so much power. The Lizard Brain doesn’t want to be challenged; it seeks safety and comfort. The Resistance is the acknowledgement of the real or perceived external stimuli that feed the Lizard Brain, and if left unchecked, the two simultaneously grow and become more powerful. When I sit down to try to write a new program or work with a new application, I now recognize when the Resistance starts to set in, and I can better deal with it and circumvent the negative aspects. Likewise, I can calm the Lizard Brain so that the more evolved brain can work with the situation.

In “Secrets of a Buccaneer Scholar”, Bach shares that he also had challenges learning certain things, and that the way the school system was structured stifled and irritated him, to the point where he dropped out and pursued his own education. He developed his own mental map and approach to learning so as to be most effective; a full breakdown of that deserves its own post, or better yet, a visit to Bach’s Buccaneer Scholar website, where you can see James’ own thoughts on the subject. I found many of the comments in his book to be very insightful and interesting, but it was during an interview segment promoting the book (Hanselminutes, Episode 187) that he said something that really hit home with me. He said he realized at some point that his mind was like a Tiger: it was willing to play and be engaged and even do focused work, but if it got pushed the wrong way, it would run off into the forest, and he’d have to go chase it, or wait for it to return and do his bidding! In short, he couldn’t just force his mind to learn; he had to learn how best to engage his mind at the times it wanted to be engaged! Hurrah!!! Finally, someone who has had the same challenges I have had nailed a metaphor that helped me make sense of my own reality.

A number of years ago, I read a book called Peak Learning, and in that book there were exercises that helped the reader determine when their ability to learn was at its “peak” (hence the title). Through the processes laid out in the book, I realized that I had two excellent three to four hour periods of the day when I was “on fire” and could tackle just about anything, and other times of the day when I was at less than peak potential. What was interesting was discovering when those times of the day were. For some reason, my brain works best early in the morning and in the late afternoon. More specifically, my brain is best suited for creative work and hard analytical work (two elements not normally associated with one another) between the hours of 5:00 AM and 9:00 AM and between 3:00 PM and 7:00 PM.

Realize, these times are not exact, and it’s not as though I turn into a non-thinking lump during the other times of the day, but it helped me to realize that, if I wanted to learn things at the optimum time or do something really difficult, those two time periods were my best bet. Wow, wouldn’t it be great to schedule my work day so that I could harness both of those times to their optimum benefit? Well, unfortunately, most companies would frown upon my working from 5:00 AM to 9:00 AM, taking an extended lunch until 3:00 PM and then picking up until 7:00 PM (my family wouldn’t be too thrilled with me either (LOL!)). But this gave me a powerful tool. By realizing that my early mornings could be harnessed for “hard think”, I decided that that would be my study time, and my time to program, tweak and play with ideas I struggled with. Likewise, the later afternoons would be time to do some of the more challenging testing that requires my entire focused attention (exploratory testing, persona based testing, etc.). During the other times, I would focus on the activities that didn’t require such focus and that allowed me time to read, write, report and gather information.

This approach has two benefits. It allows me to take advantage of the times when the Tiger is cooperative, and it lets me structure my time so that the busywork of testing gets done when my head is in the right place for those activities. Unfortunately, life doesn’t always behave that elegantly or allow us to do this all the time, but when I have the opportunity to set my days up like this, I astound myself at how much I can get done, and how much I can learn, even on the subjects that normally freak me out. You may find your mileage varies considerably from mine, but if you find that you have a Tiger for a brain, consider doing something like this as well. The results may surprise you.

Monday, April 5, 2010

Walking Through the Land Mines

So many of the really interesting times I’ve had in testing have come from looking at areas that no one considers, or that have been around for years. There’s a sense that, even though the area has been combed multiple times, there’s an issue (or a bunch of issues) that someone has missed. A phrase I like to use when describing a walk through old application details is "Walking Through the Land Mines". Depending on where you are in a project, you may find that you have different feelings about taking this particular journey.

For me, early in a project, it is fun to walk in the odd areas, because triggering a land mine can be fairly easy, and the repercussions will often be laughed about, quickly fixed, and then we all move on to other things. However, when you get down to the wire and you are at the end of a project, a walk through a minefield will not get you the same results. Over the years, I have had reactions ranging from suggestions to look at other areas to absolute admonitions of “do not consider that area part of the scope of testing for any reason” and “any issues found will not be fixed”. Translation: enter the minefield at your own peril, and don’t expect to be rescued if you step on something.

The reason land mines work as a metaphor for software issues is that land mines do tremendous damage in the heat of a conflict, but the real tragedies frequently happen years and even decades afterward. A fascinating book by Donovan Webster called “Aftermath” describes the conditions of battlefields after wars have ceased, and the job of “sappers” who remove land mines, sometimes decades after the battles were fought. To borrow a little from the Amazon.com review… “each year more than 5 million new land mines are laid, and only 100,000 are cleared; a new mine costs $3, but removing one costs between $200 and $1,000”. Does that ring familiar to any testers out there? Doesn’t it sound strangely similar to the notion of an issue found early costing $1 to fix, but the same issue found in the field costing as much as $1,000 to fix?

So what’s a tester to do? Do we honestly just consider some areas of code off limits? Do we continue in the false security that code that hasn’t caused a problem in years should not be poked too much, because we might set off a chain reaction we might regret? While I understand the desire to leave well enough alone, honestly, I think we do a disservice when we do that. While we may not have the power or authority to fix everything, we at least owe our stakeholders (in this case, the development team) an honest exploration of all areas. Yes, that means we need to take a walk through the land mines, and do so in a way that may raise some scary issues. In this case, though, I am not outright advocating a "fix it or else" approach (when I do this walk, I go it alone, and set up my environments in such a way as to allow me to get back to a safe state if something goes horribly wrong). What I can do, however, is make a map of the trouble spots that I do discover.
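For the curious, here is a minimal sketch in Python of what that "safe state plus map" setup might look like; the directory names, fields, and example entry are hypothetical choices made purely for illustration, not a prescription for any particular project:

# A minimal, hypothetical sketch: snapshot the test environment before a
# "land mine walk", log any trouble spots to a simple map, then restore.
import csv
import shutil
from datetime import datetime
from pathlib import Path

TEST_DATA = Path("test_env/data")        # assumed test environment data directory
SNAPSHOTS = Path("test_env/snapshots")   # where pre-walk copies are kept
MINE_MAP = Path("land_mine_map.csv")     # the running map of trouble spots

def snapshot_environment():
    """Copy the test data aside so the walk can be undone."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    target = SNAPSHOTS / f"before-walk-{stamp}"
    shutil.copytree(TEST_DATA, target)
    return target

def restore_environment(snapshot):
    """Put the environment back the way it was before the session."""
    shutil.rmtree(TEST_DATA)
    shutil.copytree(snapshot, TEST_DATA)

def log_land_mine(area, trigger, severity, notes):
    """Append a discovered trouble spot to the map for the development team."""
    first_entry = not MINE_MAP.exists()
    with MINE_MAP.open("a", newline="") as handle:
        writer = csv.writer(handle)
        if first_entry:
            writer.writerow(["found_on", "area", "trigger", "severity", "notes"])
        writer.writerow([datetime.now().isoformat(timespec="seconds"),
                         area, trigger, severity, notes])

# An example session (the specifics are made up):
# snap = snapshot_environment()
# ... exploratory walk through the old corners of the application ...
# log_land_mine("legacy import module", "CSV file with embedded quotes",
#               "high", "import silently drops the row")
# restore_environment(snap)

The point is not the particular tools; it’s that the walk is reversible and that every mine found ends up on a map somebody else can actually read.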

I strongly recommend doing these explorations very early in the testing cycle. Take the application and poke and prod it in ways that may be totally foreign or unique to you (or possibly call in a colleague who doesn’t have as much experience with the application and let them tweak it in unconventional ways). The net result, hopefully, is that you will discover some land mines. Often, these land mines are the very ones that were set years earlier, and the developer who wrote the code may have long since moved on to other areas or another company (note: I'm not trying to disparage developers with the phrase "land mine" or say that they are destructive or the enemy. Far from it, but the gotchas hidden in even some of the best designed code are, indeed, potential hazards that deserve to be explored before someone else innocently finds them). Doing this early in the cycle gives the development and management team ample time to consider whether or not they will take the extra steps necessary to do a full sweep of the area the tester has uncovered. Sometimes the land mines are clustered together, and fixing one area takes care of a bunch of hidden problems with limited repercussions elsewhere. However, there are times when setting off to fix one issue causes a chain reaction, and other land mines then become much more visible.

Many times there are no easy answers in these scenarios, but at the end of the day, we who test need to learn not to fear the repercussions of finding land mines. We need to fear a customer stumbling into the minefield more than we fear stumbling in ourselves. We can arm ourselves and isolate these issues before they have the potential to become a dissatisfied customer… and a dissatisfied customer who vents on the Internet can be *way* worse than a software land mine. By doing our best to make sure that our development teams understand these risks, and by spelling out the map as clearly as we can, they can proceed with the clearest information possible and make informed decisions. Should those informed decisions include steering clear of the land mines, well, we just have to accept that… and perhaps remind them later of where the mines are, when there may be more focus on clearing the no man’s land once again.