Tuesday, May 11, 2010

The Devil is in the Details

Last week, I told you why I refer to this blog as “The Mis-Education and Re-Education of a Software Tester”. While it’s a real reflection of my frustration and my vow to do something about it, the blog would be somewhat less than useful if I didn’t tell you specifically what I’m doing about it. Last week, I started the first of what I hope will be many interactions with the Association for Software Testing (AST): I enrolled in and completed the first week of the “Black Box Software Testing: Foundations” course.


This is the course many of you may have seen on the internet, with materials designed by Cem Kaner, James Bach, and others. What’s really cool about what AST does is that, every few weeks, they assemble a group of instructors to teach this class and open it for enrollment. From there, it’s a full university-level course on software testing, with an emphasis on black box testing. Cem provides the lecture materials and recorded lectures, and he also comments on the course as it runs. This is a little geeky, and sure, I'll own that, but actually participating in a course with the man many call the “Godfather of Testing”... yeah, that’s kinda’ cool :)!!!


So what does this have to do with the title of my post? Well, it’s been an interesting week and a half, to say the least. While I was prepared to say “I know there’s a lot I don’t know”, I didn’t realize just how much that really was. One of the skills a software tester needs is the ability to avoid “traps”. Often, we set up expectations and decide what we are going to do based on our interpretation of what the requirements are and what they are meant to accomplish, trusting that if we follow them, we will do good testing. What I discovered on the last couple of “quizzes” is that one of the most important skills a tester can develop is that of being a very critical reader. How do I know this? Because I floundered spectacularly on two quizzes I thought I had nailed. Why did I do so poorly? Because I missed key details in what I was reading.


Without giving too much away (and as a way of encouraging others to take this class if you get the chance; membership in AST is $85 a year and gives you the opportunity to participate in these kinds of learning opportunities for free or at a greatly reduced price, depending on the class), the multiple-choice tests are worded differently from what most people are used to. On most multiple-choice questions, you can read them, eliminate the obvious duds, guess at the rest, and, chances are, you will do well on the quiz or exam. Not so here. The questions are worded so that there are potentially multiple right or wrong answers, and in some cases you have to choose from up to ten different options, only one of which is correct, with many of the numbered items being combinations of the earlier answer choices. This means that guesswork will not help you here. You have to really go through each question and tease out exactly what it says and what it might imply, then look at every single answer and determine whether it’s just one of the choices or a combination of them. I shall confess that this threw me for a loop.


To emphasize this even more fully, one of the questions had us discussing a function where a value was stored in a program, converted to a numeric type, and then manipulated by a second action. We were asked to give a detailed breakdown of what we would do to test the scenario, which I did. It was only after I read the other class members’ answers that I realized, “uh oh, I may have read the requirements wrong”. Sure enough, after everyone had a chance to answer, the instructors offered clarification that verified that, yep, I had missed something, and that something could have led my testing in a totally different direction.
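I won't reproduce the actual course exercise here, but a tiny sketch of the kind of ambiguity involved might look like the following (the requirement wording, function, and behavior are my own hypothetical illustration, not the BBST material):

```python
# Hypothetical requirement (my own example, NOT the course exercise):
# "The program stores the user's input, converts it to an integer,
#  and then a second step adds 10 to the number."
#
# Two testers can read this differently:
#   Reading A: conversion is assumed to succeed, so test the arithmetic.
#   Reading B: conversion is part of the function, so inputs like "3.7",
#              "" and "12abc" are exactly the cases worth testing.

def process(raw: str) -> int:
    value = int(raw)   # Reading B concentrates its tests on this line
    return value + 10  # Reading A concentrates its tests on this line

# Reading A's tests all pass without incident:
assert process("0") == 10
assert process("-5") == 5

# Reading B's tests exercise behavior Reading A never touches:
try:
    process("3.7")     # int() rejects a decimal string
except ValueError:
    # Is raising here the required behavior, or a defect?
    # The requirement as worded doesn't say -- that's the trap.
    pass
```

Both testers believe they have "followed the requirements"; they simply resolved the ambiguity in different directions, which is exactly the sort of divergence the exercise surfaced.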


The instructors then made clear that this exercise, and the resulting confusion, was by design. We as testers often fall into this trap (and I was not the only one that fell into it). We read a requirement, we think we understand fully what it is saying, and then we go off and test it. Upon further review, we realize what we were testing doesn’t seem to be doing what we think it should be doing. After much hand-wringing and possible confusion, we go and talk to the developers about what we are seeing, and what we have done, only to realize that, whoops, we misread the requirements, or the requirements were vague, so we made a decision based on what we perceived the need to be, rather than what the need actually is.


This means the onus is on us as testers to make sure we have enough clarity about what we are doing early enough in the process that we don’t start down paths that will be dead ends, or worse, long roads that take us far away from our goals. To be fair, even with these detours we can uncover information that is vital and relevant to the quality of a product, but really, it would be much better to find out first what we need to focus on, so that we know, for sure, what the stakeholders actually want us doing and what they expect the product to do.


After that, hey, road trips are great :).