Monday, October 31, 2011

The Tyranny of Unintended Conformity?


I just finished the Instructor's Course for AST (in an unusual twist of fate, I can now say that I have completed the last step I should have done to let me actually be a Lead Instructor, even though I've already been one for quite some time... hey, things happen ;) ).


Nevertheless, I found it to be a very worthwhile experience, because I had a chance along with several other Instructor candidates (including quite a few I've taught over the past year plus) to reconsider how I "teach" what I know. Many of the things that I do and recommend, I've discovered, have unintended consequences, and I had a chance to see one of them first hand.


Here's an interesting thought. When does a "good suggestion" become a call to conformance? When you repeat it enough. I hadn't realized that something I did as a suggestion to students could actually be used as a blunt instrument and help enforce laziness. What could this be, you might ask? My suggestion that people use the question in their answers. How? By structuring their answer around the question. The benefit? In my opinion, it makes it easier to see if the question is actually being answered. It also saves me having to jump to another screen to review the question. It's just something that I figured would save people time and effort.


How can this go wrong? When people start using an "unintended metric" to penalize people who don't do it. Nope, that was not my intention at all, but I was shown, convincingly, that it does happen. When people look for a lazy out, this is a great way to do it. When they cannot effectively answer or offer feedback for the merits of an answer, they will instead hack points away because the answer "didn't match the requirement of including the question in the answer"... a "requirement" that never existed in the first place!


Do we find ourselves at times making up requirements that don't exist, merely because we think they would be good? I'll reiterate: I think the idea I suggested is a good one. It can be genuinely helpful in developing an answer and getting the thoughts down in a format that is direct, answers the call of the question, and stays on target and topic. That would be my preference, but I have to draw the line at requiring people to do it the way I would prefer when their way is every bit as good (or maybe even better).


Having a guideline to help with structure and ideas, I think, is a good thing. Having rigid formalism just for the sake of formalism is not my intent, but I can see how asking for something too often will instill the idea that there is a requirement, and then everyone clings to it as though it were gospel. I'm going to look to doing better on that front going forward.


How about you? Do you find things that you think are potentially good ideas becoming rigid rules you never intended?

Exercise 1: A Good First Program: Learn Ruby The Hard Way: Practicum


The whole point of the first exercise (Exercise 0, as befits a computer-centric topic; all arrays start at 0 :) ) was to install the gedit text editor, figure out how to work with it, run some sample commands in irb, and navigate around the file system and directories. Again, for those of us who were using systems back in the early 80's or earlier, this interaction was the only way to deal with a computer (graphical interfaces would come later). It's good for people to get familiar with these tools because the book is focused on using these tools specifically, and not relying on anything that gives additional help or adds any wrappers.

The first real coding exercise is a simple one, and it deals with one object and the parameters for that object. This program has the user enter a number of "puts" statements. They may all look the same, but some are just a little bit different. Let's take a closer look at the exercise "by the book" (note, all pictures will enlarge if you click on them).

- Type the following into a single file named ex1.rb. This is important as Ruby works best with files ending in .rb.

- Then in Terminal run the file by typing:

ruby ex1.rb



What You Should See

$ ruby ex1.rb
Hello World!
Hello Again
I like typing this.
This is fun.
Yay! Printing.
I'd much rather you 'not'.
I "said" do not touch this.
$
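For reference, here is what the contents of ex1.rb look like, reconstructed from the expected output above (treat this as a sketch; the exact file in the book may differ in minor details):

```ruby
# ex1.rb -- reconstructed from the expected output shown above;
# Zed's original file may differ slightly.
puts "Hello World!"
puts "Hello Again"
puts "I like typing this."
puts "This is fun."
puts "Yay! Printing."
puts "I'd much rather you 'not'."
puts 'I "said" do not touch this.'
```

Note the quoting on the last two lines: double quotes can wrap a string containing single quotes, and single quotes can wrap a string containing double quotes, without any escaping.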
[yep, I have a match :)]

If you have an error, it will look like this:

$ ruby ex1.rb
ex1.rb:4: syntax error, unexpected tCONSTANT, expecting $end
puts "This is fun."
          ^
[no error this time around]

Extra Credit

The exercises have "extra credit" scenarios, and yes, as I am trying to do these by the book, I will print my extra credit ideas (and yes, you can snicker at them all you want to, or write back to me how lame my approaches are, it's totally cool :) ).

- Make your script print another line.


- Make your script print only one of the lines.




- Put a '#' (octothorpe) character at the beginning of a line. What did it do? Try to find out what this character does.

[See the next to last image and I've added a bunch of octothorpe characters explaining what they do and when they were put in and why. Long story short, they prevent a line from being interpreted by the ruby interpreter. They are good for entering in comments or stopping certain lines from being run.]
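As a quick sketch of how the octothorpe behaves (my own example, not taken from the book):

```ruby
# Anything after a '#' is a comment and is ignored by the Ruby interpreter.
puts "This line runs."  # a trailing comment on a line of code is ignored too
# puts "This line is commented out and will not print."
```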

So yeah, pretty master of the obvious kind of stuff, huh? The funny thing is, most people do really well this early in the game because, well, it's basic and it's easy, and they get bored really quickly. The early going is very simple. Misleadingly so. It's kind of like skiing. Learning how to snowplow is quick and basic; you can learn how to do it in a few minutes and ski the bunny hill with no problems. All feels perfectly normal, until you get on the intermediate slope and realize "oh wow, this suddenly got a lot tougher!" Coding is the same. Don't get complacent in the very early stages; it's deceptively easy, but many important fundamental practices take root here. Pay attention to them and the jump to later programming is not so painful or so abrupt. Work through the simple examples and pay closer attention to why you are being asked to look at these super simple examples.

Sunday, October 30, 2011

Exercise 0: The Setup: Learn Ruby the Hard Way: Practicum


So I have two systems that I can play with on this, Mac and PC. Since the Mac version has some limitations that I can't really get around (I have to keep certain versions of the app at the versions installed, since I'm running an active testing environment in Ruby 1.8), for this challenge I'll be using Windows and using the Ruby Version Manager to install the app.

The first step is to go and get the "gedit" text editor. Why gedit? Because it has little in the way of extra help beyond spelling and language support.

There are some basic tweaks we can make to gedit once we have downloaded the editor, such as setting the tab to 2 spaces, inserting spaces instead of tabs when hitting the tab key, and making it so that the line number is visible. Other than that, there is little in the way of the cool tools that many of the IDE applications have. To tell the truth, I really am starting to like some of the features in RubyMine that I've played with. For these practicum posts, we shall be doing this with the preferred editor (all things being equal).

Here are the steps, straight from the Learn Ruby the Hard Way site:

- In your Terminal program, run irb (Interactive Ruby). You run things in Terminal by just typing their name and hitting RETURN.

- Hit CTRL-Z, then Enter, to get out of irb (actually, it doesn't work this way on my command prompt; typing "exit" or "quit" does exit irb).

- Learn how to make a directory in the Terminal. Search online for help (mkdir LRtHW)

- Learn how to change into a directory in the Terminal. Again search online (cd LRtHW).

- Use your editor to create a file in this directory. Make the file, "Save" or "Save As...", and pick this directory (heh, I edited this blog post in the editor, see below :) ).

- Go back to Terminal using just the keyboard to switch windows. Look it up if you can't figure it out (alt-tab and the arrow keys :) ).

- Back in Terminal, see if you can list the directory to see your newly created file. Search online for how to list a directory. (dir, and there it is :) ).
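As an aside, the same directory operations can also be done from inside Ruby itself, which is a nice early peek at the standard library. This is my own sketch, not part of the book's exercise; the directory name comes from the steps above, and the file name is made up:

```ruby
# Terminal steps from above, done in Ruby instead of the shell.
require 'fileutils'

FileUtils.mkdir_p("LRtHW")                     # like: mkdir LRtHW
Dir.chdir("LRtHW")                             # like: cd LRtHW
File.open("ex0.txt", "w") { |f| f.puts "hi" }  # like: saving a file from the editor
puts Dir.entries(".").sort                     # like: dir (or ls)
```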

Warning: Windows is a big problem for Ruby. Sometimes you install Ruby and one computer will have no problems, and another computer will be missing important features. If you have problems, please visit: http://rubyinstaller.org/ (So far so good!)

Ta-daa!!!

OK, admittedly, that was not a big deal, but for many, just getting the stuff installed can be a challenge if they are not used to packages and gems and running the Ruby Installer. Once it's downloaded, though, you have a lot of flexibility in how you can use the tools. As stated, I'm going to be as literal as I can be, and you will see the literal details as I see them and as I work with them. If I get stumped, I'll say so. If something doesn't work the same way, I'll say so. Tomorrow, some very rudimentary code, done in the spirit and in the system Zed recommends. My pledge is to do this by the book, and therefore, that's what I'm going to do :).

Saturday, October 29, 2011

A New Chapter and a New Challenge

Well, now that it's been announced to the students in the current AST Instructor's Course, I guess there's no reason not to mention it here. While I was in Madison a couple of weeks ago for the AST Board Meeting, one of the orders of business was the need for someone to take over as the Chair of the Education Special Interest Group (EdSIG) within AST. I'd heard of this need for quite some time, but I kept my mouth shut, because, really, what do I know about Education? I'm not a teacher (well, not really), and I certainly don't have any academic credentials, or any fancy title to go with my name. I certainly don't have much in the way of academese in my vocabulary; I'm much more comfortable speaking "dude".

Yet as I kept thinking about the situation and the needs, I realized that I may actually have more of an understanding of these things than I gave myself credit for. I finished my Bachelor's degree, my last two years, entirely online via distance learning courses. I went through 22 online courses of varying quality levels, and went through a total immersion process to get the most out of them. Were they exactly like university classes held on campus? Nope, but they had their own interesting challenges, and I'll dare say I learned quite a bit from all of them. I realized that this experience dovetailed well with the way that AST delivers the BBST classes through online facilitation.

What's more, I was one of the few people who had taught Foundations and Bug Advocacy and was participating in the pilot program of the Test Design class (which starts next week, btw, and yes, I'm excited to be participating in it :) ); plus, I'm scheduled to help teach the March session of the Test Design class. Who else could say they've been teaching all three of the classes (well, OK, there are a couple of others, but I was one of them)?

As I mulled these over in my head, I decided to do something brave or crazy... time will tell on that one :). By the time we reached the end of the discussion, I decided that there needed to be a Chair for the EdSIG... and if no one was going to step up to the plate, well, why shouldn't I throw my hat in the ring? So that's exactly what I did, and for better or worse, the board accepted my offer :).

So over the next few months, I will be learning the ropes of what the EdSIG entails, but most specifically how to administer and run the BBST course series for AST's members and participants. That's a big chunk of where my involvement will be, and due to that, it means my direct involvement as a regular instructor will likely diminish (though I'll still be involved with all of the classes). What this does mean is that there will be a need for more instructors to help teach. We have a fresh cadre of newly minted Certified Instructors for courses, and it's my hope that they will step into the role of helping teach the upcoming classes in 2012. It's also my hope that many of you out there will consider taking the classes if you haven't already, and help me to deliver the best software testing training to be found anywhere on the planet (that's a bold boast, I know, but I happen to believe it :) ).

Why the Hard Way is Easier: Learning Ruby the Hard Way: Practicum

Why the Hard Way is Easier

Zed starts out by explaining that this was the way all programming was done before copy-paste from electronic sources became so common. People took a book, put it next to their keyboard, and physically typed out the examples. To make this successful, you have to type it exactly and get it to run. No shortcuts, no philosophizing, no copy-paste. Zed also makes very clear that this is a foundational book and approach. Will you learn all of the ins and outs of programming by doing this? No, but you will develop some solid foundational chops to build on going forward, and that's really the goal. LRtHW aims to teach three things, and from an Adult Learner's perspective, these are not emphasized enough (I'm sorry, I can't use words like "Andragogy" in a sentence and keep a straight face. Maybe that's my failing). Those three things are:

- Reading and Writing
- Attention to Detail
- Spotting Differences

Reading and Writing

This is the point of physically typing in the sequences of characters. Some of them are weird. Some of them are not common for people to use in their everyday communication. That's why the idea of not copy-pasting is so important. You have to do it with an eye towards repeating every character so you can appreciate why those characters are being used. There is a reason. Physically typing them gives your brain that brief but ever important option to say "hey, why are we using that?"

Attention to Detail

By typing in everything exactly, we will likely not type in exactly what we think we are typing in (still with me on that?). The simple fact is, we are not as careful as we believe we are. Our brains are very helpful; they work to tease meanings out of things we have familiarity with, and when we make a mistake, they are very helpful in working around the error, so much so that we don't really notice if we make a mistake. Fortunately, run time engines and compilers are notoriously picky; they know what they can and can't interpret, and errors help us start to see what we may have missed or typed incorrectly.

Spotting Differences

IDEs have lots of tools to help a programmer spot the difference between two code examples. This book expects users to see them with their own eyes. It's a skill that's important to develop and very helpful in the long run (let's face it, sometimes the IDE isn't even an option when you are debugging a remote server that is exhibiting problems, so it's best not to depend entirely on them).

Do Not Copy-Paste

Again, copy-paste is tempting, but Zed solemnly warns that users will not get the benefits if they just copy-paste the code. The tedium, the time spent typing all of this out, is like building muscles. There's simply no way around putting in the repetitions necessary to fatigue (and thus make stronger) the muscles, whether they be in our hands or our minds. Read, Understand, Apply, Persist, Achieve. It works for the Hardgainers in Bodybuilding; it should work for the Hard-gaining Programmers, too :).

A Note On Practice And Persistence

The first time I stepped on a snowboard, even though I'd ridden skateboards for much of my life, it was a foreign thing, and my body didn't know how to do it. For several hours, I fell repeatedly, I could barely manage to keep upright, and I was nothing like the graceful kids who were trucking past me effortlessly. But I kept at it, and then, at some point, it clicked. I found that magic place where my balance and center of gravity made sense. I was able to do rudimentary turns, and after a few days I started to get to the point where I could handle steeper terrain, some jumps, and other aspects of the sport that just a few days earlier seemed impossible. You don't reach this point without putting in the time.

I've gone through several programming books and dabbled in several languages, so why wasn't I a proficient programmer after all these years? Because of two things. First, I used the languages to do what I needed to do, and then when I had those tools in place, I just used the tools. The net result was that I had lots of projects where I figured out how to do something and then stopped. To get good at something, though, you have to keep learning about it and pursuing it. That will be the difference with me and Ruby. I'm now in a place where daily Ruby needs and efforts are becoming more and more real (and measurable) parts of my work. Therefore, I can't just build a tool and blindly and half-understandingly maintain it. I need to develop the knowledge necessary to build stuff from scratch and then be able to maintain, grow, or restart different projects. The simple hack approach won't cut it going forward, so I have to go for it and make this all work. A bit every day (and a missive writing about it), I hope, will do that :).

Friday, October 28, 2011

Practicum Revisited: Learn Ruby the Hard Way

Books are great, but they tend to linger and hang around as references when you need something. Screencasts are cool, but again, you need to remember to go to them and work on them when you can. Coding is like language study (and since in a way it is linguistics, that should come as no surprise). I remember back in 2009 I went into overdrive to learn how to write in Japanese. I spent each day practicing Kana and Kanji, and while I learned a lot, I had one major drawback... I had no one to converse with on a regular enough basis to make the skills stick. Additionally, I tend to do better when I prepare things as though I'm going to teach them to someone else, or at least have a dialog about the ideas I'm looking at.


My goal is to do something similar, and to that end, I'm going to start another extended project, and invite you all to come along for the ride. Each morning, I will be doing a write up on "Learn Ruby The Hard Way". This is not just a pseudo-journalistic lark... it's actually a hard deliverable I agreed to for my performance review :). There are several goals that I will be working on and that I want to learn and think about, and so I'm going to experiment with a "total immersion" project. Also, this is a bit of a study in "Andragogy", or seeing how well and under what circumstances an adult actually learns. I'm also big on the potential of "high shame goals"... meaning if I don't follow through, I deserve to have the entire TESTHEAD readership bag on me (remember my post this morning about Motivation? This is the equivalent of a double Espresso shot of it (LOL!) ).

Also, just as with the BOOK CLUB posts, I realize that this may not excite everyone, so I will be aiming to do two posts a day, the other posts covering different topics. The games will commence tomorrow morning. For those excited, I hope you will join me for the festivities. For those not excited, well... you have been warned (LOL!).


What Motivates You?!

This morning, Episode #68 of This Week in Software Testing went online, and in many ways, this was one of my favorite episodes to produce. We still deal with the challenges of optimizing Skype for multiple conversations (it's getting better) and it was tough to decide what to keep and what to trim out, but I think the result is worth it.

This show also celebrates the first appearance of Jonathan Bach as a contributor. We've been trying to get Jon on the show for months, but various scheduling issues and other factors have prevented it from happening. We're glad he could join us for this, especially since we cover one of my favorite topics, Motivation.

It's a challenging thing to deal with what motivates people. We often get it wrong. What motivates me may or may not motivate you. Some people are motivated by money. Some are motivated by attention or fame. Some just want to see an idea of theirs take shape. Some want to grow internally and spiritually. Some just want to have fun. Many want varying combinations of all of the preceding.

I discovered that, in a way, my biggest motivation is this... I want to be part of a vanguard movement.

This really doesn't surprise me. It was my primary motivation as a musician, above and beyond just playing and dreaming of being a rock star. I really liked the idea of representing San Francisco and the San Francisco rock scene. More than just wanting to be a rock star, I really wanted to be a rock star "that came from San Francisco". Ultimately that dream fell short of my ultimate and intended goal, but I'll dare say I went a lot farther with it than many people I knew that had similar dreams did. To be fair, many more went much farther than I did, of course, but the motivation was still there, and that motivation defined my involvement.

In Scouting, I've held a lot of leadership positions, ranging from Tiger Cub Den Leader, Cub Scout Den Leader, Cubmaster, Scoutmaster, Venturing Crew Advisor, Explorer Post Advisor, and Order of the Arrow Chapter Advisor, to Assistant District Commissioner (sort of a Board of Directors for the Scouting movement in a given area). Again, much of this came from my desire to be involved, my self-identification as a Scouting leader, and wanting to make a difference in the lives of families and in my community.

In an earlier post I mentioned that I became part of "the movement" that was the burgeoning snowboarding scene in Lake Tahoe during the early part of the 1990's. That was a core part of my identity for many years, and it still is in a lot of ways. While I never had any real thoughts of ever "turning pro", I did align with and aspire to associate myself with the ideals of the snowboarding movement. In fact, one of my first stabs at editorial writing came through this era. I wrote a number of feature articles for an online snowboarding magazine called "Cyberboarder" (don't laugh, this was in 1995, *everything* was cyber this or cyber that... OK, go ahead and laugh, it's cool :) ). I started competing in the late 1990's (as a Masters level competitor, i.e. the over 30 division). I won a few medals, placed in a few slope-style events, and podium'd in several races, actually winning a Gold Medal in a regional event in 2004 in the Giant Slalom. I wrote about my experiences and published them in a series of articles (about 30 of them) called "The Geezer X Chronicles" (you can enjoy my early writings for "Cyberboarder" and "The Geezer X Chronicles" via "The Way Back Machine" if you so choose :) ).

What's my point with these examples? Feeling a kinship with these communities, and actively seeking that kinship, gave me identity. Identity gave me purpose and a mission. Mission gave me drive. Drive helped me produce.

For many testers, I believe that the problem is that there is not quite a sense of belonging, or that many testers are unaware of a broader community. Those who will read this are probably already aware of this broader community. The greater challenge is getting those who are not aware to become aware of it and encourage them to be a part of it.

For me, looking back, the real and true motivator in my life, and the one that has had the most sustaining power, has been kinship with a group of people. Continuous communication and involvement has helped me develop drive and passion, and with that drive and passion, opportunities appear. Other opportunities are spawned by how we respond to the initial opportunities we are offered. If you are wondering how to develop your test mojo, my answer is to find other testers who similarly want to improve. Become a community, or attach yourself to the broader testing community via Twitter, LinkedIn, Forums, Weekend Testing, Associations, whatever. The sooner you become part of a community that you care about and cares about you, the sooner you will kick your own motivation into overdrive. Or at least, that's what did it for me :).

Thursday, October 27, 2011

In Praise of the "Beta" Book

Begin Disclaimer:

I am not receiving any financial backing from, nor do I get any other tangible benefit from Pragmatic Publishing, other than access to their books and an occasional review copy of titles to post on TESTHEAD. In other words, Pragmatic Publishing is not paying me anything to say this, nor am I obligated to say anything positive or negative about Pragmatic Publishing.

End Disclaimer

Something interesting is happening at the moment. Actually, it's been a business model that Pragmatic Publishing has been using for quite a while, and I think it befits its name quite well. I have become a fan and a regular participant in its "Beta Book" program.

What's a Beta Book, you might ask? Well, it's the idea that you can buy a book as it is effectively being written, and have access to the content months in advance of anyone else. Note that the key word here is "buy". When you participate in PragPub's Beta book program, you buy the book for its advertised price. You can also choose the formats you would like to receive the book. You can purchase a print version and the electronic version, or an electronic only version. I so far have opted for the latter, and I am definitely becoming a fan of this approach.

The first reason is that I find the act of technical writing interesting. Let's face it, if I didn't, I wouldn't be writing a (somewhat) technical blog. To this end, I enjoy seeing the creative process as it unfolds. Second, with the PDF option for the beta books, I can receive them quickly, put them into my Dropbox folder, and have access to them anywhere when I'm online (and on my local drives when I'm not).

The real benefit, however, comes with the ability to contact the publisher and author with feedback on the chapters as they are developing. Realize, when you get access to these Beta books, you may well be getting a first beta version that is 50% of the book. Yep, a lot of it may be yet to be completed. In most cases, the first several chapters are completed, so that you can examine the topics from their initial introductory level and work through. Later on, updates will provide errata fixes, new chapters and content, and incorporate feedback from readers and reviewers.

This is really cool for me, as it lets me get my hands on a book, work through the ideas, and see how they play out in real time on my systems, and it gives me an opportunity to ask for clarifications, or to question whether the content as printed is clear. Yes, there are typos. There can be odd page formatting. There are occasionally pictures, charts, and graphs that just say "Coming Soon". You will receive either an email message or a tweet (or both, in my case) whenever a title has been updated. Log into PragPub and download the latest version (always handily numbered so you can tell which version is which).

I know some of you might be itching to ask "OK, Michael, then what books are *YOU* currently reading in Beta format?" I'm glad you asked (and for those who didn't, I'm going to tell you anyway, so you get something for nothing... awesome, huh :)?).

The Cucumber Book

This title is exactly what it implies: a book dedicated to the ins and outs of Cucumber. While "The RSpec Book" dabbles a bit in Cucumber and focuses most of its attention on the underlying plumbing framework that is RSpec, this book puts most of its attention on Cucumber itself and the various technologies it can be implemented with (command line apps, Selenium/WebDriver, Capybara, etc.). The Cucumber Book uses Ruby as its underpinning, which makes me happy because that's the language my company uses and the one I'm spending most of my time learning. It's currently in its 7th beta issue, and has grown substantially since I first received Beta version 1.0.

Technical Blogging

I was drawn to this title because, well, TESTHEAD attempts to be a technical blog. There's a lot in this book that matches what I do, and quite a bit that I could do more of or implement that I haven't yet, but would like to. This title is currently in its 3rd Beta revision.

Build Awesome Command-Line Applications in Ruby

Come on, with a title like that, what's not to love? Actually, I don't really know, as I just picked up the first Beta version of this book yesterday and have yet to get very far in it, but I like the idea of what it's looking to cover. As an old school shell scripter, I appreciate the ability to do things from the command line with little to no extra intervention on my part. Many of the Ruby examples offered in various books are either very rudimentary or they are focused on applications to run on the Web (a la Rails). I like the immediacy of interacting with the command line because you can quickly see if a small script is doing something useful, expand on it little by little, and get very quick feedback. This looks like it will be a nice companion volume to "Everyday Scripting With Ruby", a book I am working my way through as well.

Is the Beta Book approach for everyone? I'd say that the benefits outweigh the disadvantages for me, but for some who want to have everything up front to work with, the holes in the material may be disconcerting. It also means that you may have to wait for the key chapters that are specific to what you want to do at that point, and just wait it out until they are available. For others, that may be a big plus, in that you have the chance to shape a book that may be of great interest to you. I consider myself part of the latter camp. It's also cool in the sense that, when a book hits the street proper, you may have already read, worked through, and reviewed several iterations of it (a boon to would-be book reviewers :) ). In short, I'm happy to be a supporter of this approach to book development, and yes, expect full and proper reviews of all three of these titles when the books are "production" released, as well as any others I sign up for.

If it seems like "Whoa, how was Michael able to read and review a technical book so fast?!", now you know one of my secrets. Guess what, you can take advantage of it, too, if you want. There is no special waiting list; anyone can participate in PragPub's Beta Book program. If you do, let me know how it works for you. Personally, I love it, and I wish more publishers would offer a similar opportunity.

Wednesday, October 26, 2011

The Pains of Prevention

It's about time for a little lighthearted humor and some more of Aaron Scott's "Two Leaf Clover".

Everyone that uses anything has run afoul of Murphy's Law once or twice... or more. The old adage applies: "whatever can go wrong, will go wrong, and at the worst possible moment". Also, Murphy's Law is not contrarian; washing your car in the hopes that it will rain doesn't work.

What can be even more frustrating is when you look into things and try to be proactive, and then realize that sometimes the prevention is worse than the disease (at least that's how it seems on the surface). We had this experience in preparation for our now annual trip to Southern California to spend Thanksgiving with my siblings. Last time, we were anxious because some needed work for the car had been delayed, and we felt anxious about such a long road trip. This time, we decided we would be proactive and get everything checked out on our car well in advance. Granted, with a 10-year-old car that has almost 80K miles on it, we figured that there would be some work needed. We just weren't anticipating $2,000 worth!

Ouch!!!

Of course, there are two ways of looking at this. We could press our luck and run the risk of something happening, which it might not, but hey, it just might. Or we can bite the bullet and invest now to clean up all of the issues and have the peace of mind that we covered our bases.

Granted, $2,000 feels a bit steep, but how much would it cost me if our car broke down somewhere between San Luis Obispo and Long Beach? What would we have to pay for a tow, a delay of game with our family, and then the same potential bill, or maybe even more?

Does this sound a little bit familiar to you testers out there? Wouldn't it be great if we could know ahead of time the areas that were going to be hairy and scary, so we could adequately plan for and invest up front to deal with the potential "road trip"? Well, the answer is, we can, but we have to engage to do it.

If you are in an Agile shop, you don't have any excuse not to do this, you are embedded with the development team and should have the same access to their materials and information. If you don't speak up and get involved early, you deserve the technical debt that's going to come your way. If you are in a more traditional shop, granted, you have a cultural battle to fight, but you can still fight and win. You have to use the same analogy; do we want to do preventative maintenance or get towed and pay the big bucks? When presented that way, many project managers, and development managers will loosen up and say "OK, you have a point, let's get you involved earlier".

There are a lot of things that are out of our hands, and we won't know what hit us until after the bruise shows up, but that doesn't need to be true all the time. No, we can't plan for everything, but we can plan for a lot of things, and many of them will yield big dividends with small outlays of time and attention. By being proactive and focused up front, we may even get enough of a head start to easily keep pace with developer and project management demands... well OK, that may be a pipe dream :).

Tuesday, October 25, 2011

The End (?) Of a Painful Journey


Well, saying it’s "the end" would not be accurate, but it is the end of the major part of it. This is also (I promise :) ), the last entry for what I hope is a long time regarding "the broken leg".

Yesterday I received very happy news… based on the X-rays, my bones were 90% healed, and due to the physical therapy that I had been doing for the past 4 weeks, I had demonstrated enough range of motion that I could do away with “the Boot”. I had never been so happy to hear those words… until I had to put on a pair of shoes and practice walking with them once again.

Yes, I am in the process of relearning how to walk on my own power. No crutches, no cast, and it is humbling. I think I am doing fine and then out of nowhere, my strength gives out in my right leg and suddenly I’m walking with a pronounced limp. The bones are fine; it’s the surrounding musculature that’s the problem. Since it’s been in a cast for 8 weeks, my right leg is significantly atrophied compared to my left leg (my right calf is two inches smaller at the moment than my left calf… and yes, I checked ;) ).

Relearning how to do something that is supposed to be automatic can be really frustrating, but it's also very enlightening. For the tester, it gives us a chance to examine, with fresh eyes, something that is truly automatic for most people. What is the optimal step distance for a given muscle's range of motion, especially when its functionality has been restricted? I'll bet you never thought to ask that question before. I'm asking it regularly. I'm experimenting, getting feedback, and then, from what I learn, changing my approach for further feedback. Again, my body is an exploratory testing laboratory (I know that sounds weird, but work with me here).

I had a chance to see the progress and view the original X-rays, the ones I had only seen when I was in great pain and about to go under general anesthetic for my surgery. It felt good to see them in context and see how much had changed. Yep, for your pleasure and morbid curiosity, I’m including them here:
Here's the original Tibia break

Here's the original Fibula fracture
Here's the plate armor holding it all together

Whether we like it or not, life tends to throw us curve balls. We can choose to dodge them, or we can choose to deal with them head on and learn about ourselves in the process. What have I learned? Well, I’m mortal, and I can break. I don’t have a limitless supply of energy. I can’t ignore gravity. Sometimes, it’s good to let other people do things for you. When you are lying in bed for three weeks, it’s easy to lose 25 pounds. The world will not end if you are not in the driver's seat for everything. Speaking of the driver's seat, it is surprisingly easy to drive an automatic vehicle with your left foot. I decided to not try out my theory on a stick shift ;). Oh, and the most important thing I learned… the world doesn’t wait for you, it just keeps moving. It’s up to each of us to decide when we want to get back in the stream, but ultimately, we have to if we want to be at all effective in our lives. It may take a while to get back in the game, but get back we all must, at some point. I hereby announce I’m back in the game and reporting for training, sir!

Monday, October 24, 2011

Too Close to the Source?

I received an interesting email today from a reader who highlights something that's a challenge for any blogger. The deal is, you never really know how many errors or typos are in a post until you hit the Publish button. At that point, issues that you thought you proof-read, spell checked and read aloud two or three times come to the surface, and leave you with the feeling that "OK, I look dumb for missing that!"

The contributor made a very good point in their observation, however, and that is the fact that, for many of us who are blog writers, we know what we are writing and why we are writing it. We have a filter that often overrides what we are typing, and our brains fill in the blanks for us. Anyone who has been on Facebook for any period of time has probably seen a status update showing how a jumbled mass of words can be "unscrambled" by our brains and interpreted quickly. The fact is, we miss a lot, especially if we do longer entries. My review of Jerry Weinberg's "Perfect Software" was nine pages; this entry is less than one page. Where are the typos more likely to be? In the longer essays, of course, because we fill in the blanks more readily.

Remember how I said that I don't blog to show how good I am at something, but rather, I blog because I know deep down I'm actually pretty bad at it? I also do it to remind myself that sometimes a tester cannot be their own tester when they are reviewing their own code. Not only are they often *way* too critical of what they are doing and overcompensate and over-edit themselves, but they often miss things that are plain as day to someone who casually reads their posts.

One of the challenges and frustrations is that I like to write in a plain text editor, because it's fast for taking down notes. If I import the text into Word to do a spell check, it picks up obvious typos, but it doesn't catch the cases where auto-correct has chosen the wrong word for me. Well, just correct it, right? When I catch it, I do. It's when I don't that it's embarrassing.

Another technique I use is the "read aloud" method. This is when I stop and read through my post as though I were giving it as a talk. Honestly, I find lots of issues when I do this, but even with this technique, I miss things. It covers a lot of ground and helps me find the places where, when I say it aloud, I think "Now wait a minute, that's not right", but every once in a while a word will just slip through and I'll miss it.

They say you never get a second chance to make a first impression, and I have to remind myself that many people who read my blog may be thinking to themselves "wow, a tester who doesn't even proof his posts... LAME!" The truth is, it's not that I don't proof them, it's the things that I miss when I do proof them, which I guess adds up to DOUBLE LAME!!! Be that as it may, the point is, I am oftentimes too close to my subject and I suffer at times from "blue room" syndrome. It's a condition where people who edit audio for extended periods get so caught up in the small fixes and tweaks that they don't even know what they are listening to any longer. The context has been removed. The same is true when we write. We think we're doing thorough review, but we're so down in the soil with the seeds that we're missing the flowers overhead and the weeds right next to us.

None of this is asking anyone to cut me slack. I'm writing this as a reminder to myself, and a warning to others. As an Army of One, I don't really have a reviewer or anyone who can review these on a schedule that's realistic, so I'm left with my own wits and the kindness of strangers. This is my way of saying "if you see me doing something boneheaded, please let me know." I won't take it personally, in fact, you'd be helping me open my eyes to something I've missed. Testers testing testers... it's a beautiful thing :).

The Asymptote of Perfection

This is a bit of an odd entry, but considering the time of year and all it entails, why not :)?

I have an agreement with each of my three kids. One time, I will go all out and I will make them a Halloween costume, from scratch, of just about anything they want to be. Usually this comes from my children's love of Anime and the lack of ready made items being available in the States, or if they are, they are insanely expensive. It also comes from the fact that, as I have jokingly said over the years, "I am a man with a sewing machine and I am NOT afraid to use it!" My son got this treatment when he was in grade school; I made him a full "Soma Cruz" costume from the "Castlevania: Aria of Sorrow" video game.

The reason I make a caveat of only doing it once for each kid is twofold. First, I want to have them appreciate the time and effort it takes to make these things, and ultimately develop the skills to do it themselves, but really, there's a much more down to Earth reason. I get psychotically hung up on "Perfection". It's a battle in my heart of hearts I know I can't win. Still, I give in to it every year, and I consume hours and hours trying to make the best costume I can for each kid. There's not enough time in the day to make them the way I want to, and also, my skill level is additionally limiting, but I do have the benefit of getting a little better each time.

The picture is my younger daughter, and the costume is "Hatsune Miku", which is in and of itself an interesting story, but if you want to know the background of the character and the technology that represents her in the musical sphere, I encourage you to check out her Wikipedia page and the page dedicated to the project that created her, Vocaloid. Anyway, that's not the point of this. The point is that, as I have looked at what I do as I make these costumes for each kid, I discover something on a technical level I can do better, and I discover something about the quest for perfection.

For most costumes, you can put in a little bit of effort and you can piece together something that looks OK. With a little more effort and time, you can make something that looks really good from a distance. With a substantial additional amount of time and effort, you can make something that looks really amazing close up, but that still doesn't quite come up to what you have in mind. From here, one of two things happens. Either you exhaust your technical capabilities and accept that you've done as much as you can, or you put in an extreme amount of time to make something that you think is as close as you are going to get to perfect, and nobody is going to notice the difference.

I know all the corners I had to cut to make my daughter's outfit work; all of the tricks I had to use, and areas I just didn't know enough about or didn't have the equipment to do a top notch job. Having said all that... so what? My daughter loves the outfit, and her friends are floored that she has a one of a kind costume that no one else will have (well, no one else in her immediate circle, I'm sure; if she goes to a cosplay convention, then she'll see outfits that will blow mine out of the water).

My point, though, is that when we aim for perfection, we experience a law of diminishing returns. We can go from mediocre to good with a small amount of planning and effort. We can go from good to great with a lot of effort. We can go from great to spectacular with a tremendous amount of effort. From there, the next step takes Herculean effort, and seriously, no one can tell the difference. The sad fact is most people don't notice much beyond moving from mediocre to good, and really, many people don't even care that you've moved away from mediocre. That's dispiriting, but it's true. Perfection is a noble goal, but sometimes it can get us so worked up, and we can spend so much time trying to achieve it, that we become blinded to the fact that we've long since gone past any benefit that anyone else will notice or appreciate.

This is where I think it's important to step back to the old Cub Scout Motto, which is "Do Your Best!" For some, their best will look like "good", for some, it will look like "spectacular" and for a few, it will go beyond anything that many of us can comprehend or realistically achieve without totally reshaping our lives to aim for it. I believe seeing mastery in the things that we do is valuable and an important step towards anything we want to pursue, but we also have to realize that mastery is a process, and it's a long one. If I want to be a master costume maker, then I better set my sights on making a lot of them. As that's not something I'm realistically going to be able to spend my time doing, I have to accept the fact that my little shortcut tricks and that "good enough to be seen from a distance" really is good enough.


By the way, I have one costume left in me... my older daughter has told me she'll claim it one of these years, but when her time comes, she wants me to make her a full Yuna costume from Final Fantasy X.



Heaven help me (LOL!).

Friday, October 21, 2011

Book Review: Perfect Software and Other Illusions About Testing

There are few members of the testing community who have been as active, for as long, and with as much staying power as Gerald (Jerry) Weinberg. Jerry used to sit in a room with hardware that, at the time, represented about 10% of all the computing power in the world.

Coming from that to our world today, it's no wonder that we may treat computing as ubiquitous, all encompassing, and something we often take for granted.

What's also often taken for granted is that software is tested, or not tested. "Perfect Software and Other Illusions About Testing" tackles many of the myths and the challenges that surround the practice of software testing. Some common questions from the preface of the book:


  • Why do we have to bother testing when it just seems to slow us down?
  • Why can't people just build software right, so it doesn't need testing?
  • Do we have to test everything?
  • Why not just test everything?
  • What is it that makes testing so hard?
  • Why does testing take so long?
  • Is perfect software even possible?
  • Why can't we just accept a few bugs?



So how does "Perfect Software" handle these questions? Let's find out.

Chapter 1: Why Do We Bother Testing?

No program is perfect. The simple fact is, human beings are fallible; we make mistakes. As long as we are human, we will have need of testing and testers. The truth is, we do testing all the time. Think of using one web browser over another. Why do I prefer Chrome over Firefox? Is there a reason? For me, the flow of Chrome is more natural, and the response time and layout feel faster. How did I get that feeling? Through testing. Granted, I didn't write formal test cases and publish a report, but I tested and found what worked for me. Jerry makes the following points:

  • We test because we want to make sure the software satisfies customer constraints.
  • We test because we want to make sure the software does what people want it to do.
  • We test because we want to make sure the product doesn't impose unacceptable costs on the customer.
  • We test to make sure that customers will trust our product.
  • We test to be sure our software doesn't waste our customers' time.

There are lots of other reasons, but all of them make the point that it is the customer we are working for, and if we want the customer to have confidence in our product, there needs to be testing. Also, there will be different levels of testing based on the risk associated with the product in question. The risk of a bug in a web browser, while annoying, is totally different from the risk of a bug in an embedded heart-monitor pacemaker. In the web browser, some time may be lost or a page may not load right. In the pacemaker, a bug might cause someone to die. Both products have risks, but they are hardly comparable. We'll test very differently, and with much greater rigor, on the heart monitor than we will on the web software.

Chapter 2: What Testing Cannot Do

Testers gather information and present it with analysis as to what they have seen and with regard to their understanding of how things should work. That's it! That's testing, plain and simple. Yes, we can do a lot of things to help us get to that point, but ultimately that's what we do. We don't ensure quality. We don't block bugs, we don't fix bugs (well, maybe some of us do if we are good programmers, but then we are not testers at that point; we are software developers). Testing also can't be done in zero time and for free. Gang, testing is a cost center. It's an expense. It always will be. Unless you sell testing services, testing will be an expense towards doing business. It also takes time to process the needed information, and often the information will have repercussions, some of them good, but a lot of times bad. Also, testing is not fixing. Fixing requires development to actually move on the information testers provide. Testing also hits an emotional center. It's easy to think decisions are made with rational thought and deliberation. Often, nothing could be further from the truth. Testing cannot overcome politics, procedures or cronyism, but it might help identify where they are ;). In short, if you are not going to act on the information provided, don't bother to test.

Chapter 3: Why Not Just Test Everything?

Let's be clear: there is no way, with man or machine, that we can "test everything" in an application. From a purely code standpoint, even simple nested loops could have billions of combinations. Exhaustive testing of all functions and all permutations would take thousands of years (do the math, it's true). What's more, the state of a machine is never the same twice, or the same on every machine. There are many parameters that can make a test that passed in certain instances fail in others. Are you sure your testers can find every one of them? Not likely. Also, absence of evidence does not mean evidence of absence, so testers can never make that assumption. The Holy Grail of testing is to find that "just enough" amount of testing that will reveal the "largest amount of bugs possible". Testing is always about getting a sample set and making the sample set tell you as much as possible. If you are methodical, a small sample set may tell you most of what you need to know and may find a lot of bugs, but it may not.
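To put some rough numbers behind the "can't test everything" point, here's a minimal sketch. The dimensions and counts below are invented for illustration (they're not from the book); the only claim is the arithmetic: independent test dimensions multiply, so even a few modest ones explode into a space no team could cover exhaustively.

```python
# Hypothetical, made-up test dimensions for a web application.
# Each entry is the number of distinct values that dimension can take.
test_dimensions = {
    "browser": 5,        # distinct browsers
    "os": 8,             # operating system versions
    "locale": 40,        # supported locales
    "font_size": 10,     # font size settings
    "network_state": 6,  # connectivity conditions
}

# Independent dimensions multiply: the full configuration space is
# the product of the per-dimension counts.
total_combinations = 1
for dimension, count in test_dimensions.items():
    total_combinations *= count

# Five small dimensions already yield 96,000 distinct configurations,
# before we even consider input values, timing, or machine state.
print(total_combinations)  # 96000
```

And that's before multiplying by actual input data, so sampling, as Jerry argues, is the only realistic strategy.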

Chapter 4: What's the Difference Between Testing and Debugging?

Testing is a fuzzy discipline. When we look for new information and try to find areas where the program has issues, that's testing. When we have found an issue and want to isolate it, is that testing? Actually, it's pinpointing, and while it's valuable, it's an additional task above and beyond testing. Determining the significance of an issue is likewise important. Is it testing, too? Is locating where in the code the issue is testing? Is repairing the code and working through scenarios testing? Is troubleshooting to make sure that the areas that need to be covered are covered, is that testing? Once the code is deployed and the testers look at the software in toto, as deployed, to find new and fresh information... OK, now we know that that is testing. How about the rest? The repairing of code and debugging we feel confident in saying are not our area, but the other tasks are murkier. In small organizations, many of these roles are handled by a couple of people, or in some instances, one person. In larger organizations, there's supposed to be an organizational and process hierarchy... and here's where it gets weird. Confusion about the differences between these roles can lead to conflict, resentment, and failed projects. Different tasks require different skills. Lumping them all in with testing distorts planning, estimating, and work focus, and can cause lingering problems.

Chapter 5: Meta-Testing

Meta-information is information about the quality of information. Yeah, go over that a couple of times :). What Jerry's saying is that we can learn a lot about the state of a project by the way that the information itself is treated and valued. Do you test to spec? Yes. Awesome! Where are the specs? Well, we can't find them... you see where this is going, right? There's an issue with a bug database: when it gets more than 14,000 bugs, it starts to slow down to unusable levels. Any consideration as to what development process is generating 14,000 bugs for a project? A tester discovers a problem, but it's not "in their area" of testing, so they don't record it. A tester is diligently checking the scroll bar functionality of a web application, not realizing that the scroll bars are part of the web browser, not the application. Why doesn't the tester know this? Why does the organization let them keep on in their ignorance? A company discovers a lot of bugs one week, and then makes the announcement that the product is almost ready to ship, because "we've found so many bugs there can't be much more". We tested a product with ten users, and it should handle 100 users; take the stats for the ten, multiply by ten, and we should be golden. Testers present information, but the development team and project management team ignore it. And so on. Just by looking at these situations, you can see there's so much more going on than what people think is going on.

Chapter 6: Information Immunity

Information is the key deliverable item of a tester. The problem? It can be seen as threatening. Bugs == issues, and issues == potential embarrassment, missed schedules, possibly bad reviews and reduced revenues. Scary stuff for some, so what happens? We tend to block out the information we don't want to hear. Information Immunity can stop dead anything of value we as testers may be able to provide. So we need to get to the heart of the fears first, and then figure out how to counteract them (later chapters deal with that). There's a survival instinct that comes to the fore when we're about to break one of our rules (those who know of my love for Seth Godin's Linchpin, well, here's where "The Lizard Brain" and "The Resistance" show themselves in full bloom). We get defensive when those rules are at risk of being broken: we will not look smart, we will not execute perfectly, we will not make our deadline. We repress details that would be embarrassing, we get used to the way things are, and we become complicit in going along with the program (bad tester, no donut!).

Chapter 7: How to Deal With Defensive Reactions

People get defensive; it just happens. We also tend to be less than gracious when we are called on it. So we need to use some of our tester skills to help overcome these issues. First, we need to identify what the fear is, as fear is what usually drives defensiveness. From there, thinking critically will help us determine what might be behind "the rest of the story". Then it's time to focus on how to counteract the fear and help the person either overcome it or deal with it.

Chapter 8: What Makes a Good Test?

How do you know that you are testing well, or that your testing is actually being effective? Honestly, you can't tell. That's a dirty little fact, but it's true. There's no way that we can really say "testing is doing well" because we really don't know how many things we are missing or how far away we are from discovering a devastatingly bad issue. We can't find every bug. We can't test every possible scenario. So we have to sample, and that sample set is as good or as bad as our intuition. We really don't know if we did good testing until after the fact. In fact, we may never really know if bugs that were in the product will ever surface, or if they do, they may do so because new hardware and/or system software may be the cause to finally bring it to the fore. Does that invalidate our testing that we once thought was good, but is now "bad"? Perhaps we could enter some intentional bugs, but even then that depends on our knowledge of the "undiscovered, hidden bugs". Doesn't make a lot of sense, though it would certainly be a good exercise to see if your testers actually find them. Also, testers are often judged on the number of bug reports they file. Is that fair? Does that mean they are good testers, or does it mean they have an especially buggy project? One does not necessarily validate the other. In short, while you can't really determine what makes for good testing, it is possible to ferret out practices that lead to or are suggestive of "bad" testing.

Chapter 9: Major Fallacies About Testing

As stated in the last chapter, there's no way to really know if you have done "good" testing, but there are lots of ways to avoid doing "bad" testing. Here are some examples. When you BLAME others, you tend not to see the rest of the potential issues. Stop the BLAME and look critically, and the issues may be both easier to see and easier to manage. If someone tells you that you need to do EXHAUSTIVE TESTING, you'll need to step back and explain/demonstrate the impossibility of such a task (really, lots of people don't get this). Get out of the idea that TESTING PRODUCES QUALITY. It doesn't. It provides information. QUALITY comes from developers fixing the issues that are signs of low quality. Do it enough, and you will have good quality ("for some definition of good quality", and a hat tip to Matt Heusser for that ;) ). By approaching an application through DECOMPOSITION, we think testing the parts will be the same as testing the whole thing. It's not. The parts of a system may work fine by themselves, but the customer will see the whole system, so DECOMPOSITION doesn't buy us anything if we don't test the full system as well. The opposite is also true: by approaching an application through COMPOSITION, we may miss many independent actions of the modules that make up the whole. The ALL TESTING IS TESTING and EVERYTHING IS EQUALLY EASY TO TEST fallacies can also be stumbling blocks to good testing. Unit testing and integration testing are not the same thing; they provide different information. Stress testing and performance testing may sound like the same thing, but they are not. Also, let's please put to bed the ANY IDIOT CAN TEST fallacy: once we get into real, active exploratory testing, where previous tests inform and provide avenues for new tests (some of which were never considered before), the "any idiots" quickly drop off the testing train, leaving the active and inquisitive testers to follow through and complete the job.

Chapter 10: Testing Is More Than Banging Keys

There is more to testing than just banging on the keyboard and running through steps. Even when tests have been automated and they follow the lines they always have, I watch and see what happens, because I can tell when something looks out of place or isn't behaving the way we think it should. Note, I'm not touching any keys; I'm watching what's going across the screen. It's my mind that's doing the testing, not my hands. Jerry describes a concept he calls the White Glove Test. A company had all of their testing standards in a manual in the library, and nowhere else. The dust on top of the manual showed that no one in the organization had touched, much less read, the manual for a very long time. Another good approach, and one that many organizations I've worked with use, is Dog Food Testing, meaning the developers live on the environment they helped create and actively use the products they code for. I lived for years behind a Dog Food Network at Cisco Systems; any change that was going to be rolled out had to bake there for a while first. Very instructive, and often very frustrating, but it had the great effect of helping us see issues in a different light and much quicker than otherwise. Testers need to be tested, too (stay with me here :) ). Sometimes we see things that we want to see, or we are overly critical in our results, so it helps to have another tester evaluate the information a tester has found (think of it as two reporters corroborating a story). Additionally, demonstrating a product while avoiding the areas where an issue might appear is not testing. It may be deft navigation, but it's not providing any new information to inform a decision. Most of these, as you can see, have little to do with banging on keys.

Chapter 11: Information Intake

We are always taking in information, but that information has little benefit if we can't finesse out its meaning and how to actively use it to communicate our findings. We often confuse data for information. They are not the same thing. Data is just the stuff that comes at us. Information is what we tease out of the data. From the testing perspective, there are areas that are ripe for information, but they may well be areas that developers don't want us looking at. Too bad. Mine away. It's also important to have the right tools to mine that data for information (note, they need not be expensive; the test tools that I currently use cost nothing, and oftentimes I fall back on the simplicity of shell scripts).
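The data-versus-information distinction can be shown in just a few lines. Here's a minimal, hypothetical sketch (the log format and component names are invented, and I'm using Python rather than shell for readability): the raw log lines are merely data, while the per-component error tally teased out of them is information a tester could actually report.

```python
from collections import Counter

# Hypothetical raw log lines, in a made-up "LEVEL component: message" format.
# These lines by themselves are just data.
log_lines = [
    "ERROR auth: token expired",
    "INFO  web: request served",
    "ERROR auth: token expired",
    "ERROR db: connection refused",
    "INFO  web: request served",
]

# Tease information out of the data: which components are generating errors,
# and how often? The second whitespace-delimited field is the component name.
errors = Counter(
    line.split()[1].rstrip(":")
    for line in log_lines
    if line.startswith("ERROR")
)

print(errors.most_common())  # [('auth', 2), ('db', 1)]
```

A one-line `grep ERROR log | awk '{print $2}' | sort | uniq -c` does the same job, which is the kind of cheap tooling the paragraph above has in mind.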

Chapter 12: Making Meaning

Jerry starts off with an idea that he calls the Rule of Three... in the context of test data interpretation, "If you can't think of at least three possible interpretations of the test result, you haven't thought enough". We can have several interactions that seem on the surface to say "we have found a few bugs" but by looking more closely and inquiring, we could find several interpretations of what is being seen by the tester and how it relates to the product, and the developers and managers will also suss out their own meanings based on their needs, biases and interpretations. It's also vital to know what you should be expecting before you start interpreting the data you've collected. Even if you don't know what to expect, you can find out what the people that matter want to have it do, and from there, you can then start interpreting the data (note: this is how heuristics work, and why they are so valuable to testers. None are infallible, but they are all useful to a point :) ).

Chapter 13: Determining Significance

When we think of the significance of an issue, there are lots of things that determine what the significance actually is. What is significant to one person may be inconsequential to another, or at least could have a totally different spin. Spin is actually the practice of assigning significance. What might be a devastating bug could be portrayed as a unique feature if talked up in the right way. Significance can be prone to personal agendas, and therefore bias can easily creep in. When we take the time to recognize and filter out as much of the subjective details as we can, and look at things as objectively as possible, then we are able to attach a more appropriate significance to issues, and then determine how we want to proceed and what actions we need to take.

Chapter 14: Making a Response

Many times we chalk up projects not coming to fruition as being the result of bad luck. Yet if we look closely enough, most projects seem to have issues of "bad luck", too, yet those projects shipped. What was the difference? The difference was in how management and the team chose to work with their processes. Management, and the way they respond to issues, may well be the best indicator as to whether a project succeeds or fails. Usually, there is way too optimistic a projection as to how long projects will take. Remember the old joke: "the first 10% of the project will take up 90% of the time. The remaining 90% will take up the remaining 90%". No, that's not a typo, and that's not bad math. That's the fact that sunny testing estimates often take nearly twice as long to complete. More realistic expectations are seen by many as too pessimistic, but they very often prove to be more on the mark than not. Yet we still think the sunny outlook will be the right one "this time".

Chapter 15: Preventing Testing from Growing More Difficult

The great irony when it comes to testing is that as software becomes more ubiquitous, covers more areas of our lives, and is becoming more indispensable, the task of adequate testing is growing ever harder. Projects are growing larger, more complex, and have more features than ever before, and with this complexity, the odds of there being problem areas go way up. How to combat this? Try to keep single areas of functionality as simple as possible. Note, I don't mean bare, I mean as simple as is necessary. This fits in with the Software Craftsmanship movement as well. Simple does not mean weak and incapable; it means doing what is necessary and important with as little clutter and waste as possible. Our testing should also follow suit, and allow us to keep focused on doing good testing. That means having up-to-date tools, having frank discussions about potential problem areas, and doing what we can to not extrapolate results from one area to tell us how another area is doing.

Chapter 16: Testing Without Machinery

The best testing tool there is is the human brain. That's what does the real heavy lifting. While computers can take out some of the tedium, or make short work of lots of data and help to get down to the important bits of data that provide real information, a computer can't make the important decisions. It's the human brain that makes real meaning out of the work that computers do. So often we put too much emphasis and focus on what test automation can do. True, it can do a lot, and it can really be helpful with some of the longer running challenges or the truly repetitive steps, but automated testing can never replace the decision capabilities of a human brain.

Chapter 17: Testing Scams

Testing tools are a big business, and many of them are hyped and sold with a lot of promises and expectations. In most instances, they rarely live up to the hype. Oftentimes, they can be a larger drain on your budget than not having tools at all. Demonstrations are often canned and sidestep many of the bigger challenges. Going from the initial examples and moving on to doing real work is problematic, and almost never as easy as the demonstration suggested. The simple fact is that you never get something for nothing, and if claims for a product seem to be too good to be true, you can bet they are. Also, the likelihood that there is a totally free solution available that is comparable to the expensive tool being offered is quite high, but again, realize that even free, open source tools have their prices, too. Usually, they require time and energy to actually learn how to use.

Chapter 18: Oblivious Scams

It's possible to be scammed without spending a penny (well, directly spending a penny, that is. Indirectly, lots can be spent). We often get lulled into believing that certain actions can help us speed things along or make us more efficient, and oftentimes they just slow us down even more than we were before we started tinkering. Postponing documentation tasks may seem to save us time, but we still have to document them, and the likelihood of doing an accurate job of it gets more difficult the farther away from the issue we get. Wording things ambiguously can get us in big trouble; leaving things open to interpretation may indeed have the wrong interpretation made. Not reporting issues with the mistaken belief that we are "being nice" can come back and bite us later on. Nick Lowe has this one right; sometimes you gotta be "Cruel to be Kind". It can be all too easy to project our own fantasies of what should be into our testing, and therefore, the ones who scam us are ourselves.

In addition, each chapter ends with a "Common Mistakes" section that handily summarizes and also puts into clear focus many of the issues that testers have to deal with, and the ways that organizations (and testers) can help to change the culture towards better software, since we already know that "Perfect Software" does not exist and never will exist.

Bottom Line:

I've always said that I respect and admire Jerry Weinberg because of the many books of his that I have had a chance to read. I adore Jerry for writing this book. As one who has been in the trenches for almost two decades, this book doesn't just speak to me, it screams to me. This has been my reality more times than I have wanted to admit. It also lets me know that I am certainly not alone in my understanding of some of these situations and dilemmas. For those who want to maintain the illusions of testing, and want platitudes that say what you are doing is fine or encourage you to align with "Best Practices", this book is not for you. If, however, you want to see what the reality of software testing is all about, and approach your testing with clear eyes and clear understanding, "Perfect Software..." is a necessity.

Thursday, October 20, 2011

TWiST on Test Coaching

This Week on "This Week in Software Testing" we pick up with Scott Barber, Virginia Reynolds, and Lanette Creamer to talk about test coaching, what coaching is, how it differs from mentoring and training (and we had a varied discussion on that one :) ), and how to do it effectively.


This was an interesting conversation, in the sense that all of us, at some stage or another, offer coaching or mentoring to others. We may not consider it a formal process (and that's part of the discussion) but even if we don't formally enter into an agreement, coaching and mentoring is a regular process in our broader community, whether it be through direct methods like Weekend Testing, BBST or Miagi-do, which we discussed in the podcast, to professional consulting gigs where we come in to work directly with individuals and teams. We also need to look at LinkedIn, the Software Testing Club, Twitter and other avenues that help connect testers to other testers and encourage them to improve their skills and up their game.

Anyway, if you don't want to hear me keep talking about it, feel free to go listen to Episode #67 for yourself :).

Wednesday, October 19, 2011

Blocked on [X]? Then Do [X]!

So today I reach the end of the experiment that is the book club review of "How to Reduce the Cost of Software Testing": 21 posts in 21 days. On top of that, I decided I didn't want to just be a total one-trick pony for three weeks, so I committed to writing an additional blog post each of those 21 days.

An interesting thing happened. Was each of those extra blog posts a winner? Nope, but then blogging is often hit and miss. You never really know which posts will grab people's attention, but you can sometimes guess. At first I feared I'd run out of stuff to talk about, but somehow I was able to come up with something, and often enough somethings to actually get a few days ahead and schedule several posts. It's a testament to Jerry Weinberg's idea of Fieldstone gathering; stuff is all around us, we just have to have the eyes to look and then follow through with the energy to pick the stones up to use them.

I often find it more difficult to write when I've taken a few days away from the blog. Then, unless I am finding myself dealing with a timed topic (a class, a meet-up, a podcast or something that needs to be talked about that day), it can be a real struggle coming up with something to write about. The lesson from the last three weeks is that you don't see stones when you aren't looking for them. Likewise, you don't see topics and ideas for those topics unless you are actually writing.

This also applies to my struggles with coding. I sound like such a broken record here, but again, I need to remind people that I am not writing about coding and tech writing because I'm really good at it, I write about it because in reality I'm rather mediocre or exceptionally bad at it. However, writing about it gives me an avenue to explore it in a different way when my brain is telling me "OK, dude, really, I've kind of had it with the (Cucumber, Ruby, Rspec, JavaScript, CSS3, fill in the blank)". It lets me actually see what I know and it gives me more reasons to keep applying what I am learning, even if that learning is terribly slow.

So yes, your painfully optimistic TESTHEAD friend is suggesting that, to fix what is ailing ya', do more of what is ailing ya' :). Thus, you can expect to see more posts about (Cucumber, Ruby, Rspec, JavaScript, CSS3, fill in the blank) in the near future. Bet on it :).

BOOK CLUB: How to Reduce the Cost of Software Testing (21/21)

For almost a year now, those who follow this blog have heard me talk about *THE BOOK*. When it will be ready, when it will be available, and who worked on it? This book is special, in that it is an anthology. Each essay could be read by itself, or it could be read in the context of the rest of the book. As a contributor, I think it's a great title and a timely one. The point is, I'm already excited about the book, and I'm excited about the premise and the way it all came together. But outside of all that... what does the book say?

Over the next few weeks, I hope I'll be able to answer that, and to do so I'm going back to the BOOK CLUB format I used last year for "How We Test Software at Microsoft". Note, I'm not going to do a full synopsis of each chapter in depth (hey, that's what the book is for ;) ), but I will give my thoughts as relates to each chapter and area. Each individual chapter will be given its own space and entry. This entry covers Appendix D, and it is the final entry in this series.

Appendix D: Cost of Starting up a Test Team by Anne-Marie Charrett


For some organizations, it’s entirely possible that you don’t need a test team. You may have a culture of ownership of quality and your development team may be doing a very good job at being testers. It’s possible that your customer support team may be fulfilling the roles of testers (and I myself can state that many really excellent testers came out of the tech support ranks or spent significant time doing technical support; it helps train them to be customer focused and look at problems from their perspective).


However, what if that’s not enough? What if your development team and your technical support team aren’t able to handle all of the testing needed? What if you have decided you would like to see the quality and performance of your application increase? Are you sure that creating a test team is the solution to your problem?

Some might want to see that processes are followed, that quality issues are addressed. That’s all good, but will adding a test team confirm that the company’s policies are followed? It might, but then again, it might just add another group that doesn’t communicate or use the process. Before testing can be called on to fix a problem, it’s really helpful to determine what the problem actually is.


A cost effective test team is one that meets your organization’s needs. Many times testers are brought in to solve a problem, but what they are addressing isn’t the real problem. Instead, they are addressing a symptom that points to a bigger issue. Why are there quality issues? What’s really the cause of them? Why do some companies need a test team and others seem to do well without them? Testing is not a simple commodity; it can’t just be “plugged in” and left to run. It’s strongly influenced by the culture and beliefs of a given company. Test teams taken from one company and dropped into another will not perform exactly the same (even with all members being the same people). The company itself shapes the test team to its value system over time.

Cem Kaner describes software testing as:

“An empirical technical investigation conducted to provide stakeholders with information about the quality of the product or service under test”


The term “stakeholder” means anyone who cares about the quality of the product. Stakeholders may be in any department (sales and marketing especially). Having these people or entities in mind as you test will inform the testing, what you do, and how you do it.


It’s possible you see the value of a test team, but don’t have the budget for one at the present. It still would be valuable to go through and see what you would need in the way of testing resources, and after investigating the potential benefits of incurring that expense (remember, testing doesn’t actually make money) you may decide that, down the road as the company grows, there is a benefit to developing and growing an internal test team.

“Why do you want a test team?”

A common reason why companies want a test team is they believe the tester will be the enforcer of software quality. That’s a common perception, and frankly, it’s a dangerous one. Michael Bolton, in his talk "Two futures of Software Testing" explains that:


“Although testers are called the quality gatekeepers…
• they don’t have control over the schedule
• they don’t have control over the budget
• they don’t have control over staffing
• they don’t have control over product scope
• they don’t have control over market conditions or contractual obligations”


Testers cannot enforce quality; that’s not their mandate. Even if it is their official mandate, they still cannot practically follow through on it. A test team can *influence* overall quality to improve by doing the following:

• Testers can find bugs and inform developers
• Testers can identify risks that threaten the value of the product
• Testers can highlight visibility and structure issues within the team (they just can't fix them)
• Testers add to the overall knowledge of the system

Setting a realistic expectation for what the test team can and cannot do is essential to their success.


So you’ve decided to take the plunge and create a test team. What will its make-up be? Do you want an in house test team? Do you want an independent contract team that is off-site? How large do you want your test team to be? As I’ve said many times, my test team at my current company is dynamic. It has one dedicated resource (i.e. me) and at times we can call on others to help the process, often other people in our company in different roles, and especially our technical support people. As a dedicated and solo tester, I often sit with the developers and get to see what they see, and understand as much as possible about their environments and challenges, so that I can help them meet their quality objectives.


Another approach is to contract with a company that has an external testing lab. They will be hired to do the testing and to report back on the status of a project. There are benefits to this. The external organization has the equipment, tools, and experience to handle a variety of testing challenges that an in house team might not have. They also can be used when they are needed, and when they are not, they are not part of your permanent payroll. The disadvantage is that they may not have as much familiarity with your organization and your expectations the way an embedded tester might. There is also the cost of time delays with turnaround of results, reporting results, and then following up based on the information provided. This can get to be significant if the external test team is half a world away.


One of the challenges any test team will face is that of the balance between manual and automated testing. A quote I’m fond of (paraphrased and not attributed, sorry) is that “the human mind is brilliant, articulate, inquisitive and slow. The computer is inherently stupid, lacking in any ability to think for itself, but it is very fast. Put together, the abilities of both are limitless”. Manual testing and automated testing (or my much preferred choice of words “computer aided testing”) need to go hand in hand. All manual testing may yield good results, but it may be too slow to be practical. Fully automated testing will be enormously expensive to implement, and taking the human out of the equation may cause you to miss more bugs than you would with manual testing. The point is, your test team will need both.


When does it make sense to build a test team? Ask yourself and the stakeholders the following question:

“Is my company willing to take the risk for shipping the product as is?”

A test team can identify risk, but they can’t prevent it. Developers will need to fix code. Project Managers need to allocate time and resources to fix problems. All parties need to realize that testing is not a panacea; testers can communicate about the state of a product or service, but it’s the development team that ultimately fixes it.

Tuesday, October 18, 2011

When "The Movement" Was All That Mattered


I recall with amusement an interesting day I spent last spring, when the boys in my Scout Troop went up to go have a day up at Diamond Peak, a Ski and Snowboard area up on the North Shore of Lake Tahoe. It was interesting because, as they were all getting together to decide what they wanted to do, I noticed something I never thought I'd see in my lifetime... twelve boys looking at equipment, some boys getting skis, some boys getting snowboards... and none of the boys sniping at each other about what they were using.


This isn't going to make any sense unless I back up a bit. Back in the early 1980's, very few places allowed snowboards. In fact most of us who grew up on skis in the 70's never even saw a snowboard (Burton started making the first branded snowboards in 1977). In the 80s a few of the smaller mountains started to allow them, but seeing them on a big resort mountain was practically unheard of (they were mostly backcountry in those days). That all changed in 1990, when Squaw Valley became one of the first full service big destination mountains in the USA to allow snowboards without restriction. The whole mountain was open to snowboards and snowboarders, and the floodgates of complaints opened, and the culture war started. And with that (and association with people who were inveterate snowboarders), I joined their ranks. I started riding, and after a while, I became obsessed with it. Not just snowboarding but the whole "movement" of snowboarding.


It was almost like I was part of a political faction. Forget Democrat or Republican, I was a "snowboarder", man! Any mountain that was anti-snowboard, I disassociated with. Any mountain that was pro-snowboard, I gave my money and allegiance. I really didn't look to carp or cop attitude to skiers, and a part of me wanted to find a way to just show that we were responsible riders and we could share the same space, but the more derision we received from skiers (and believe me, in the early 90s, we received derision) the more committed we became. At some point towards the end of the 90s, we looked to have won the war. Snowboarders on a typical Tahoe day outnumbered skiers, the gear and the craft of riding had gone through the roof, and it was like we owned the place.


Then a funny thing happened. We grew up. We had kids. We developed careers and other things to focus our attention on. And of course, we taught our kids our love of the sport of snowboarding, but somehow we managed to skip the vitriol, or at least that's how it seemed to my eyes. I'm not saying that there is no animosity between skiers and snowboarders, but among this next generation, choice of equipment just doesn't seem to matter all that much. Maybe it's because skiing liberally borrowed from snowboarding and developed parabolic skis. Maybe it's the fact that more kids are cross discipline, but it just doesn't seem to matter what you have strapped on your feet, as long as you are having a good time on the hill.


Sometimes when I look at the software testing industry, I still feel a bit like I'm in the skier vs snowboarder debate. If you are a context-driven tester, does that make you a snowboarder? If you are an SDET, does that make you a skier? If you are Agile, does that mean you Telemark (and really, check out a Telemark skier, they put all of us regular run of the mill skiers and snowboarders to shame)? Is it worth arguing about it? Does it really matter all that much? Sure I have my preference with regard to context driven testing, but does that mean someone from another discipline is unable to do good work, to contribute to the overall discipline, to basically make their way down the hill? What of us who hybridize our approaches? Is a parabolic skier somehow not as pure? How about a hardboot snowboarder (which I do from time to time, and because of my broken leg and hardware, I may have to do a lot more often, if not permanently)? I think there is benefit to not being rigidly aligned with any of the schools, as there are skills and opportunities that are better suited to each discipline. In my kids' world view, they don't care what you have on your feet, just that you want to get down the hill. Will we ever get there with testing? Time will tell, I guess :).