Wednesday, March 25, 2015

Register NOW for #CAST2015 in Grand Rapids, Michigan



After some final tweaks, messages sent out to the speakers, presenters, and keynotes, and other behind-the-scenes machinations... we are pleased to announce that registration for AST's tenth annual conference, CAST 2015, to be held August 3-5, 2015 in Grand Rapids, Michigan, is now open!

The pre-conference Tutorials have been published, and these tutorials are being offered by actual practitioners, thought leaders, and experts in testing. All four of them are worth attending, but alas, we only have room for twenty-five attendees in each, so if you want to get in, well, now is the time :)!

The tutorials we are offering this year are:

"Speaking Truth to Power" – Fiona Charles

"Testing Fundamentals for Experienced Testers" – Robert Sabourin

"Mobile App Coverage Using Mind Maps" – Dhanasekar Subramaniam

"'Follow-your-nose' Testing – Questioning Rules and Overturning Convention" – Christin Wiedemann

Of course, there is also the two day conference by itself that you can register for, and yes, we wholeheartedly encourage you to register soon.

If you register by June 5th, you can save up to $400 with our "Early Bird" discount. We limit attendance to 250 attendees, and CAST sold out last year, so register soon to secure your seat.

Additionally, we will be offering webCAST again this year. Really want to attend, but you just can't make it those days? We will be live streaming keynotes, full sessions, and "CAST Live" again this year!
Come join us this summer for our tenth annual conference in downtown Grand Rapids at the beautiful Grand Plaza Hotel August 3-5, or join us online for webCAST. Together, let's start “Moving Testing Forward.”

Monday, March 23, 2015

More Automation Does Not Equal Repaired Testing

Every once in a while, I get an alert from a friend or a colleague to an article or blog post. These posts give me a chance to discuss a broader point that deserves to be made. One of those happened to arrive in my inbox a few days ago. The alert was to a blog post titled Testing is Broken by Philip Howard.

First things first; yes, I disagree with this post, but not for the reasons people might believe.

Testing is indeed "broken" in many places, but I don’t automatically go to the final analysis of this article, which is “it needs far more automation”. What it needs, in my opinion, is better quality automation, the kind that addresses the tedious and the mundane, so that testers can apply their brains to much more interesting challenges. The post talks about how there’s a conflict of interest. Recruiters and vendors are more interested in selling bodies, rather than selling solutions and tools. Therefore they push manual testing and manual testing tools over automated tools. From my vantage point, this has not been the case, but for the sake of discussion, I'll run with it ;).

Philip uses a definition of Manual Testing that I find inadequate. It’s not entirely his fault; he got it from Wikipedia. Manual testing as defined by Wikipedia is "the process of manually testing software for defects. It requires a tester to play the role of an end user and use most or all features of the application to ensure correct behavior. To ensure completeness of testing, the tester often follows a written test plan that leads them through a set of important test cases”.

That’s correct, in a sense, but it leaves out a lot. What it doesn’t mention is that manual testing is a skill that requires active thought, consideration, and discernment, the kind of skill that quantitative measures (which automation handles quite well) cannot capture.

I answered back on a LinkedIn forum post related to this blog entry by making a comparison. I focused on two questions related to accessibility:

1. Can I identify that the wai-aria tag for the element being displayed has the defined string associated with it?

That’s an easy "yes" for automation, because we know what we are looking for. We're looking for an attribute (a wai-aria tag) within a defined item (a stated element), and we know what we expect (a string like “Image Uploaded. Press Next to Continue”). Can this example benefit from improved automation? Absolutely!
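To make the contrast concrete, here is a minimal sketch of question 1 as an automated check. This is plain Ruby with a hypothetical element id and a hand-rolled attribute match standing in for a real browser driver, but the shape of the check is the same: find the element, read the attribute, compare against the expected string.

```ruby
# Return the aria-label value for the element with the given id,
# or nil if no such element exists in the fragment.
def aria_label_for(html, element_id)
  match = html.match(/id="#{Regexp.escape(element_id)}"[^>]*aria-label="([^"]*)"/)
  match && match[1]
end

# A simplified page fragment; the id and message are made up for illustration.
page = '<div id="upload-status" aria-label="Image Uploaded. Press Next to Continue"></div>'

upload_ok = aria_label_for(page, "upload-status") ==
            "Image Uploaded. Press Next to Continue"
```

Everything here is quantifiable, which is exactly why a machine can do it tirelessly on every build.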

2. Can I confirm that the experience for a sight-impaired user of a workflow is on par with the experience of a sighted user?

That’s impossible for automation to handle, at least with the tech we have today. Why? Because this isn’t quantifiable. It’s entirely subjective. To make the conclusion that it is either an acceptable or not acceptable workflow, a real live human has to address and test it. They must use their powers of discernment, and say either “yes, the experience is comparable” or “no, the experience is unacceptable”.

To reiterate, I do believe testing, and testers, need some help. However, more automation for the sake of more automation isn’t going to do it. What’s needed is the ability to leverage the tools available to offload as much of the repetitive busywork as possible and get those quantifiable elements sequenced. Once we have a handle on that, we have a fighting chance to look at the qualitative and interesting problems, the ones we haven’t figured out how to quantify yet. Real human beings will be needed to take on those objectives, so don’t be so quick to be rid of the bodies just yet.

Friday, March 20, 2015

The Case for "Just a Little More" Documentation

The following comment was posted to Twitter yesterday by Aaron Hodder, and I felt compelled not just to retweet it, but to write a follow up about it.

The tweet:



"People that conflate being anti wasteful documentation with anti documentation frustrate me."

I think this is an important area for us to talk about. This is where the pendulum, I believe, has swung too far in both directions during my career. I remember very well the 1990s, and having to use ISO-9001 standards for writing test plans. For those not familiar with this approach, the idea was that you had a functional specification, and that functional specification had an accompanying test plan. In most cases, that test plan mapped exactly to that functional specification. We often referred to this as "the sideways spec”. That was a joking term we used to basically say "we took the spec, and we added words like confirm or verify to each statement." If you think I'm kidding, I assure you I'm not. It wasn't exactly that way, but it was very close. I remember all too well writing a number of detailed test plans, trying to be as specific as possible, only to have them turned back to me with "it's not detailed enough." When I finally figured out that what they really meant was "just copy the spec”, I dutifully followed. It made my employers happy, but it did not result in better testing. In fact, I think it's safe to say it resulted in worse testing, because we rarely followed what we had written. We did what we felt we had to do with the time that we had, and the document existed to cover our butts.

Fast forward now a couple of decades, and we are in the midst of the "Agile age”. In this Agile world, we believe in not providing lots of "needless documentation”. In many environments, this translates to "no documentation" or "no spec” outside of what appears on the Kanban board or Scrum tracker. A lot is left to the imagination. As a tester, this can be a good thing. It allows me the ability to open up different aspects, and go in directions that are not specifically scripted. That's the good part.

The bad part is, because there's sparse documentation, we don't necessarily explore all the potential avenues, because we just don’t know what they are. Often, I create my own checklists and add ideas of where I looked, and in the past I’ve received replies like “You're putting too much in here, you don't need to do that, this is overkill." I think it's important for us to differentiate between avoiding overthinking and overplanning on the one hand, and making sure we give enough information and provide enough details to be successful on the other. It's never going to be perfect. The idea is that we communicate, we talk, we don't just throw documents at each other. We have to make sure that we have communicated enough, and that we really do understand what needs to happen.

I'm a fan of the “Three Amigos” model. Early in the story, preferably at the very beginning, three people come together: the product owner, the programmer, and the tester. That is where these details can be discussed and hammered out. I do not expect a full spec or implementation at this point, but it is important that everybody who comes to this meeting shares their concerns, considerations, and questions. There's a good chance we still might miss a number of things if we don't take the time here to talk out what could happen. Is it possible to overdo it? Perhaps, if we're getting bogged down in minutiae and details, but I don't think it is a bad thing to press for real questions such as “Where is this product going to be used? What are the permutations that we need to consider? What subsystems might this also affect?" If we don't have the answer right then and there, we still have the ability to say “Oh, yeah, that's important. Let's make sure that we document that.”

There's no question, I prefer this more nimble and agile method of developing software to yesteryear's. I would really rather not go back to what I had to do in the 90s. However, even in our trying to be lean, and in our quest for a minimum viable product, let’s be sure we are also communicating effectively about what our products need to be doing. My guess is, the overall quality of what comes out the other end will be much better for the effort.

Thursday, March 19, 2015

It's Not Your Imagination, It's REALLY Broken

One of the more interesting aspects of automation is that it can free you up from doing a lot of repetitive work, and it can check your code to make sure something stupid has not happened. The vast majority of the time, the test runs go by with little fanfare. Perhaps there's an occasional change that causes tests to fail, and they need to be reworked to be in line with the present product. Sometimes there are more numerous failures, and those can often be chalked up to flaky tests or infrastructure woes.

However, there are those days where things look to go terribly wrong, where the information radiator is bleeding red. Your tests have caught something spectacular, something devastating. What I find ironic about this particular situation is not the fact that we jump for joy and say "wow, we sure dodged a bullet there, we found something catastrophic!" Instead, what we usually say is "let's do a debug of this session, because there's no way that this could be right. Nothing fails that spectacularly!"

I used to think the same thing, until I witnessed that very thing happen. It was a typical release cycle for us, stories being worked on as normal, work on a new feature tested, deemed to work as we hoped it would, with a reasonable enough quality to say "good enough to play with others". We merged the changes to the branch, and then we ran the tests. The report showed a failed run. I opened the full report and couldn't believe what I was seeing. More than 50% of the spun up machines were registering red. Did I at first think "whoah, someone must have put in a catastrophically bad piece of code!" No, my first reaction was to say "ugh, what went wrong with our servers?!" This is the danger we face when things just "kind of work" on their own for a long time. We are so used to little hiccups that we know what to do with them. We are totally unprepared when we are faced with a massive failure. In this case, I went through to check all of the failed states of the machines to look for either a network failure or a system failure... only none was to be found. I looked at the test failure statements expecting them to be obvious configuration issues, but they weren't. I took individual tests and ran them in real time and watched the console to see what happened. The screens looked like what we'd expect to see, but we were still failing tests.

After an hour and a half of digging, I had to face a weird fact... someone committed a change that fundamentally broke the application. Whoa! In this continuous integration world I live in now, the fact is, that's not something you see every day. We gathered together to review the output, and as we looked over the details, one of the programmers said "oh, wow, I know what I did!" He then explained that he had made a change to the way that we fetched cached elements, and that change was having a ripple effect on multiple subsystems. In short, it was a real and genuine issue, and it was so big an error that we were willing to disbelieve it before we were able to accept that, yep, this problem was totally real.

As a tester, sometimes I find myself getting tied up in minutiae, and minutiae becomes the modus operandi. We react to what we expect. When a major sinkhole in a program is delivered to us, we are more likely not to trust what we are seeing, because we believe such a thing is just not possible any longer. I'm here to tell you that it does happen, it happens more often than I want to believe, or admit, and I really shouldn't be so surprised that it happens.

If I can make any recommendation to a tester out there: if you are faced with a catastrophic failure, take a little time to see if you can understand what it is, and what causes it. Do your due diligence, of course. Make sure that you're not wasting other people's time, but also realize that, yes, even in our ever so interconnected and streamlined world, it is still possible to introduce a small change that has a monumentally big impact. It's more common than you might think, and very often, really, it's not your imagination. You've hit something big. Move accordingly.

Tuesday, March 10, 2015

TESTHEAD Turns Five Today

With thanks to Tomasi Akimeta
for making me into "TESTHEAD" :)!!!
On March 10, 2010, I stepped forward with the boldest boast I had ever made up to that point. I started a blog about software testing, and in the process, I decided I would try to see if I could say something about the state of software testing, my role in it, and what I had learned through my career. Today, this blog turns five years old. In the USA, were this blog a person, we would say I'm just about done with pre-school, and in the fall, I'd be getting ready to go into Kindergarten ;).

So how has this blog changed over the past five years? For starters, it's been a wonderful learning ground for me. Notice how I said that: a learning ground for me. When I started it, I had intended it to be a teaching ground to others. Yeah, that didn't last long. I realized pretty quickly how little I actually knew, and how much I still had to learn (and am still learning). I've found it to be a great place to "learn in public" and to, in many ways, be a springboard for many opportunities. During its first two years, most of the writing that I did in any capacity showed up here. I could talk about anything I wanted to, so long as it loosely fit into a software testing narrative.

From there, I've been able to explore a bunch of different angles, try out initiatives, and write for other venues, the most recent being a permanent guest blog with IT Knowledge Exchange as one of the Uncharted Waters authors. While I have other places I am writing and posting articles, I still love the fact that I have this blog and that it still exists as an outlet for ideas that may be "not entirely ready for prime time", and I am appreciative of the fact that I have a readership that values that and allows me to experiment openly. Were it not for the many readers of this blog, along with their forwards, shares, retweets, plus-one's and mentions in their own posts, I wouldn't have near the readership I currently have, and I am indeed grateful to all of you who take the time to read what I write on TESTHEAD.

So what's the story behind the picture you see above? My friend Tomasi Akimeta offered to make me some fresh images that I could associate with my site. He asked me what my site represented to me, and how I hoped it would be remembered by others. I laughed and said that I hoped my site would be seen as a place where we could go beyond the expected when it comes to software testing. I'd like to champion thinking rather than received dogma, experimentation rather than following a path by rote, and champion the idea that there is intelligence that goes into testing. He laughed and said "so what you want to do is show that there's a thinking brain behind that crash test dummy?" He was referring to the favicon that's been part of the site for close to five years. I said "Yes, exactly!" He then came back a few weeks later and said "So, something like this?" After I had a good laugh and a smile at the ideas he had, I said "Yes, this is exactly what I would like to have represent this site; the emergence of the human and the brain behind the dummy!"

My thanks to Tomasi for helping me usher in my sixth year with style, and to celebrate the past five with reflection, amusement and a lot of gratitude for all those who regularly read this site. Here's to future days!

Friday, March 6, 2015

Taming Your E-Mail Dragon

Over on Uncharted Waters, I wrote a post about out-of-control E-mail titled "Is Your Killer App Killing You?" That title may be a bit of hyperbole, but there is no question that E-mail can be exhausting, demoralizing, and just really hard to manage.

One area that I think is really needed, and would make E-mail much more effective, is some way to extend messages to automatically start new processes. Some of this can be done at a fairly simple level. Most of the time, though, what ends up happening is that I get an email, or a string of emails, I copy the relevant details, and then I paste them somewhere else (calendar, a wiki, some document, Slack, a blog post, etc.). What is missing, and what I think would be extremely helpful, would be to have ways to register key applications with your email provider, whoever it may be, and then have key commands or right click options that would let you take that message, choose what you want to do with it, and then move to that next action.

Some examples... if you get a message and someone writes that they'd like to get together at 3:00 p.m., having the ability to right there schedule an appointment and lock the details of the message in place seems like it would be simple (note the choice of words, I said it seems it would be simple, I'm not saying it would be easy ;) ). If a message includes a dollar amount, it would be awesome to be able to right click or key command so that I could record the transaction in my financial software or create an invoice (either would be legitimate choices, I'd think).

Another option that I didn't mention in the original piece, but that I have found to be somewhat helpful, is to utilize tools that will allow you to aggregate messages that you can review later. For me, there are three levels of email detail that I find myself dealing with.

1. E-mail I genuinely could not care less about, but that doesn't rise to the level of outright SPAM.

I am unsentimental. Lots of stuff from sites I use regularly comes to my inbox, and I genuinely do not want to see it. My general habit is to delete it without even opening it. If you find yourself doing this over and over again, just unsubscribe and be done with it. If the site in question doesn't give you a clear option for that, then make rules that will delete those messages so you don't have to. So far, I've yet to find myself saying "aww, man, I really wish I'd seen that offer I missed, even though I deleted the previous two hundred that landed in my inbox." Cut them loose and free your mind. It's easy :).

2. Emails with a personal connection that matter enough for me to review and consider, but that I may well not actually do anything with. Still, much of the time, I probably will.

These are the messages I let drop into my inbox, usually to be subject to various filter rules and to get sorted into the buckets I want to deal with, but I want to see them and not let them sit around.

3. That stuff that falls between #1 and #2.

For these messages, I am currently using an app called Unroll.me. It's a pretty basic tool in that it creates a folder in my IMAP account (called Unroll.Me), and any email that I have decided to "roll up" and look at later goes into this app, and this folder. There are some other features the app offers, such as unsubscribing (if the API of the service is set up to do that), including a message in the roll up, or leaving it in your Inbox. Each day, I get a message that tells me what has landed in my roll up, and I can review each item at that point in time.

I will note that this is not a perfect solution. The Unsubscribe works quite well, and the push to Inbox also has no problems. It's the roll up step that requires a slight change in thinking. If you have hundreds of messages each day landing in the roll up, IMO, you're doing it wrong. The problem with having the roll up collect too many messages is that it becomes easy to put off, or deal with another day, which causes the backlog to grow ever larger, and in this case, out of sight definitely means out of mind. To get the best benefit, I'd suggest a daily read and a weekly manage, where you can decide which items should be unsubscribed, which should remain in the roll up, and which should just go straight to your inbox.

In any event, I know that E-mail can suck the joy out of a person, and frankly, that's just no way to live. If you find yourself buried in E-mail, check out the Uncharted Waters article, give Unroll.me a try, or better yet, sound off below with what you use to manage the beast that is out of control email. As I said in the original Uncharted Waters post, I am genuinely interested in ways to tame this monster, so let me know what you do.

Thursday, March 5, 2015

All or Nothing, or Why Ask Then?

This is a bit of a rant, and I apologize to people who are going to read this and wonder what I am going on about. Since I try to tie everything to software testing at some point and in some way, hopefully this will be worth your time.

I have a drug/convenience store near my place of work. I go there semi-regularly to pick up things that I need or just plain want. I'm cyclical when it comes to certain things, but one of my regular purchases is chewing gum. It helps me blunt hunger, and it helps me get into flow when I write, code, or test. I also tend to pick up little things here and there because the store is less than 100 steps from my desk. As is often the case, I get certain deals. I also get asked to take surveys on their site. I do these from time to time because, hey, why not? Maybe my suggestions will help them.

Today, as I was walking out the door, I was given my receipt and the cashier posted the following on it.


Really, I get why they do this. If they can't score a five, then it doesn't matter. You weren't happy, end of story. Still, I can't help but look at this as a form of emotional blackmail. It screams "we need to have you give us a five for everything, so please score us a five!" Hey, if I can do so, I will, but what if you were just shy of a five? What if I was in a hurry, and there were just a few too many people in the store? The experience was a true four, but hey, it was still pretty great. Now that experience is going to count for nothing. Does this encourage me to give more fives? No. What it does is tell me "I no longer have any interest in giving feedback", because unless it is something that says "Yay, we're great!" then it's considered worthless. It's a way to collect kudos, and it discounts all other experiences.

As a software tester, I have often faced this attitude. We tend to be taught that bugs are absolute. If something isn't working right, then you need to file a bug. My question always comes down to "what does 'not working right' actually mean?" There are situations where the way a program behaves is not "perfect", but it's plenty "good enough" for what I need to do. Does it delight me in every way possible? No. Would a little tweak here and there be nice? Sure. By the logic above, either everything has to be 100% flawless (good luck with that), or the experience is fundamentally broken and a bug that needs to be addressed. The problem arises when we realize that "anything less than a five is a bug", and that means the vast majority of interactions with systems are bugs... does that make sense? Even if at a ridiculously overburdened fundamental level it is true, that means that the number of "bugs" in the system is so overwhelming that they will never get fixed. Additionally, if anything less than a five counts as zero, what faith do I have that areas I actually consider to be a one or a two, or even a zero, will actually be considered or addressed? The long term tester and cynic in me knows the answer; they won't be looked at.

To stores out there looking for honest feedback, begging for fives isn't going to get it. You will either get people who will post fives because they want to be nice, or they will avoid the survey entirely. Something tells me this is not the outcome you are after, if quality of experience is really what you want. Again, the cynic in me thinks this is just a way to put numbers to how many people feel you are awesome, and give little to no attention to the other 80% of responses. I hope I'm wrong.

-----

ETA: Michael Bolton points out below that I made a faulty assumption with my closing remark. It was meant as a quip, and not to be taken literally, but he's absolutely right. I anchored on the five, and it made me mentally consider an even distribution of the other four numbers. There absolutely is nothing that says that is the case, it's an assumption I jumped to specifically to make a point. Thanks for the comment, Michael :).

Wednesday, March 4, 2015

Book Review: Ruby Wizardry



Continuing with the “Humble Brainiac Book Bundle” that was offered by NoStarch Press and Humble Bundle, I am sharing books that my daughter and I are exploring as we take on a project of learning how to write code and test software. Part of that process has been to spend time in the "code and see" environment that is Codecademy. If you have done any part of the Ruby track on Codecademy, you are familiar with Eric Weinstein’s work, as he’s the one who wrote and does updates to the Ruby section (as well as the Python, JavaScript, HTML/CSS and PHP modules).

With "Ruby Wizardry", Eric takes his coding skills and puts them into the sphere of helping kids get excited about coding, and in particular, excited about coding in Ruby. Of all the languages my daughter and I are covering, Ruby is the one I’ve had the most familiarity with, as well as the most extended interaction, so I was excited to get into this book and see how it would measure up, both as a primer for kids and for adults. So how does Ruby Wizardry do on that front?

As you might guess from the title and the cover, this book is aimed squarely at kids, and it covers the topics by telling a story of two children in a magical land dealing with a King and a Court that is not entirely what it seems, and next to nothing seems to work. To remedy this, the children of the story have to learn some magic to help make the kingdom work properly. What magic, you ask? Why, the Ruby kind, of course :)!

Chapter 1 basically sets the stage and reminds us why we bought this book in the first place. It introduces us to the language by showing us how to get it ("all adults on deck!”), how to install it, make sure it’s running and write our first "Hello World” program. It also introduces us to irb, our command interpreter, and sets the stage for further creative bouts of genius.

Chapter 2 looks at "The King's Strings" and handles, well, strings. Strings have methods, such as length and reverse, which we call using dot notation ("string".length or 'string'.reverse). Variables are also covered, showing that the syntax that works on raw strings works on assigned variables as well.
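For readers who haven't opened the book yet, the dot-notation methods the chapter describes look like this in irb (the example words here are mine, not the book's):

```ruby
word = "wizardry"

word_length = word.length   # counts the characters => 8
backwards   = word.reverse  # the string, back to front => "yrdraziw"

# The same methods work on a raw string or an assigned variable alike:
raw_length = "strings".length  # => 7
```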

Chapter 3 is all about Pipe Dreams, or put more typically, flow control, with a healthy dose of booleans, string interpolation, and the use of structures like if, elsif, else, and unless to make our Ruby scripts act one way or another based on values in variables and how those values relate to other things (such as being equal, not equal, done along with other things or in spite of other things, etc.).
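A quick sketch of those structures, using a toy example of my own rather than one from the book:

```ruby
# if / elsif / else pick the first branch whose condition is true
def pipe_report(pressure)
  if pressure > 100
    "danger"
  elsif pressure > 50
    "watch closely"
  else
    "all clear"
  end
end

# unless is Ruby's "if not"
def calm?(pressure)
  "calm" unless pressure > 100
end
```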

Chapter 4 introduces us to a monorail, which is a train that travels in an endless loop. Likewise, the chapter is about looping constructs and how we can construct loops (using while as long as something is true and until up to the point something becomes false, along with for to iterate over things like arrays) to do important processes and repeat them as many times as we want, or need. Also, it warns us to not write our code in such a way to make an infinite loop, one that will just keep running, like our monorail if it never stops.
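The three looping constructs from the chapter, in a small sketch of my own:

```ruby
# while repeats as long as its condition is true
laps = 0
laps += 1 while laps < 3

# until repeats up to the point its condition becomes true
fuel = 5
fuel -= 1 until fuel == 0

# for iterates over a collection, so it cannot run forever
stops = []
for station in ["east", "west", "north"]
  stops << station.capitalize
end
```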

Chapter 5 introduces us to the hash, which is a list with two attributes, a key and a value, as well as the ability to add and remove items in arrays with shift, unshift, push, or pop. We also learn how to play with ranges and other methods that give us insights into how elements in both arrays and hashes can be accessed and displayed.
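In code, that looks something like this (loosely themed example data of my own):

```ruby
# Arrays grow and shrink at either end
crew = ["Ruben", "Scarlet"]
crew.push("Haldo")        # add to the back
first = crew.shift        # remove from the front => "Ruben"
crew.unshift("Smalls")    # add to the front
last = crew.pop           # remove from the back => "Haldo"

# A hash pairs each key with a value
ship = { "name" => "S.S. Guppy", "masts" => 2 }
masts = ship["masts"]

# Ranges expand into sequences
letters = ("a".."e").to_a  # => ["a", "b", "c", "d", "e"]
```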

Chapter 6 goes into talking about symbols (which is just another word for name), and the notation needed to access symbols. Generally, the content of something is a string; the name of something is a symbol. Utilizing methods allows us to change values, convert symbols to strings, and other options that allow us to manipulate data and variables.
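A tiny illustration of the string/symbol distinction and the conversions between them:

```ruby
# The name of a thing: a symbol, lightweight and unique
status = :in_progress

# Converting between the two forms
as_string = status.to_s            # => "in_progress"
as_symbol = "in_progress".to_sym   # => :in_progress
```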

Chapter 7 goes into how to define and create our own methods, including the use of splat (*) parameters, which allow us to accept any number of arguments. We can also define methods that take blocks by using "yield".
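A short sketch of both ideas:

```ruby
# The splat parameter gathers any number of arguments into an array
def roll_call(*names)
  names.length
end

# yield hands control to whatever block the caller supplies
def with_fanfare
  "*** " + yield + " ***"
end
```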

Chapter 8 extends us into objects, and how we can create them and use them. We also learn about object IDs to tell them apart, as well as classes, which allow us to make objects with similar attributes. We also see how we can have variables with different levels of focus (Local, Global, Instance and Class) and how we can tell each apart (variable, $variable, @variable and @@variable, respectively).
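All four levels of variable focus can be seen in one small made-up class:

```ruby
$realm = "Ruby Kingdom"     # $global: visible everywhere

class Dragon
  @@count = 0               # @@class variable: shared by every Dragon

  def initialize(name)
    @name = name            # @instance variable: unique to this Dragon
    @@count += 1
  end

  def introduce
    greeting = "I am #{@name} of #{$realm}"  # local: lives only in this method
    greeting
  end

  def self.count
    @@count
  end
end
```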

Chapter 9 focuses on inheritance, which is how Ruby classes can share information with each other. This inheritance option allows us to create subclasses and child classes, and inherit attributes from parent/superclasses.
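A minimal sketch of a child class inheriting from (and overriding) its parent:

```ruby
class Performer
  def bow
    "takes a bow"
  end

  def perform
    "does something vaguely entertaining"
  end
end

class Minstrel < Performer   # Minstrel is a child class of Performer
  def perform                # ...and can override what it inherits
    "sings a ballad"
  end
end
```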

Chapter 10 shows us a horse of a different color, or in this case, modules, which are a bit like classes, but they can't be created using the new method. Modules are somewhat like storage containers that let us organize code when methods, objects, or classes won't do what we want. By using include or extend, we can add a module to existing instances or classes.
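Here's the include/extend distinction in miniature (my own example, not the book's):

```ruby
module Glowing
  def glow
    "#{self.class} glows!"
  end
end

class Lantern
  include Glowing        # every Lantern instance gets glow
end

class Rock; end

pebble = Rock.new
pebble.extend(Glowing)   # only this particular rock glows
```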

Chapter 11 shows us that sometimes the second time's the charm, or more specifically, we start getting the hang of refactoring: using or-equals (||=) to assign variables, using ternary operators for short actions, using case statements instead of multiple if/elsif/else statements, returning boolean values, and most important, removing duplicate code and keeping methods small and manageable.
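A few of those refactoring moves side by side:

```ruby
# ||= assigns only when the variable is nil or false
title = nil
title ||= "stranger"

# A ternary collapses a short if/else onto one line
verdict = (2 + 2 == 4) ? "sane" : "worried"

# case replaces a ladder of if/elsif/else branches
def forecast(sky)
  case sky
  when "grey"  then "rain"
  when "clear" then "sun"
  else "anyone's guess"
  end
end
```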

Chapter 12 shows us the nitty-gritty of dealing with files. Opening, reading from, writing to, closing, and deleting files are all critical if we want to actually save the work we do and the changes we make.
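The whole round trip fits in a few lines (sketched against a temp directory so it cleans up after itself):

```ruby
require "tmpdir"

path = File.join(Dir.tmpdir, "royal_scroll.txt")

File.open(path, "w") { |f| f.write("Hear ye: the plumbing is fixed.") }  # write and close
contents = File.read(path)                                              # read it back
File.delete(path)                                                       # remove the file
```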

Chapter 13 encourages us to follow the WEBrick Road, or how we can use Ruby to read and write data from the Internet to files or back to Internet servers using the open-uri gem, which is a set of files that we can use to make writing programs easier (and there are lots of Ruby gems out there :) ).

Chapter 14 gives some hints as to where you, dear reader, might want to go next. This section includes a list of books, online tutorials, and podcasts, including one of my favorites, the Ruby Rogues podcast. It also includes interactive resources such as Codecademy, Code School, and Ruby Koans, and a quick list of additional topics.

The book ends with two appendices that talk about how to install Ruby on a Mac or a Linux workstation and some of the issues you might encounter in the process.

Bottom Line:

For a "kids' book", there's a lot of meat in here, and to get through all of the examples, you'll need to take some time and see how everything fits together. The story is engaging and provides the context needed for the examples to make sense, and each section provides a review so that you can really see if you get the ideas. While aimed at kids, adults would be well served to follow along as well. Who knows, some wizarding kids might teach you a thing or three about Ruby, and frankly, that's not a bad deal at all!

Thursday, February 26, 2015

Less Eeyore, More Spock

Today's entry is inspired by a post that appeared on the QATestLab blog called "Why Are Software Testers Disliked By Others?" I posted an answer to the original blog, but that answer started me thinking... why does this perception exist? Why do people feel this way about software testing or software testers?

There are a lot of jokes and commentary that go with being a software tester, one of the most common being "if no one is talking about you, you're doing a great job". Historically, Quality Assurance and Software Testing were considered somewhat like plumbing. If we do our jobs and we find the issues and they get resolved, it's like the plumbing that drains our sinks, tubs, showers and toilets. Generally speaking, we give the pipes in our walls next to no thought; that is, until something goes wrong. Once there's a backup, or a rupture, or a leak, suddenly plumbing becomes tremendously present in our thoughts, and usually in a negative way. We have to get this fixed, and now! Likewise, software testers are rarely considered or thought about when things are going smoothly, but we are often looked at when something bad has happened (read this as "an issue has made its way out into the wild, been discovered, and been complained about").

This long-standing perception has bred some defensiveness among software testers, to the point where many feel that they are the "guardians of quality". We are the ones that have to find these horrible problems, and if we don't, it's our heads! It sounds melodramatic, but yes, I've lived this. I've been the person called out on the carpet for missing something fundamental and obvious... except for the fact that it wasn't fundamental or obvious two days before, and interestingly, no one else thought to look for it, either.

We can be forgiven, perhaps, for bringing on ourselves what I like to call an "Eeyore Complex". For those not familiar, Eeyore is a character created by A.A. Milne, and figures in the many stories of "Winnie the Pooh". Eeyore is the perpetual black raincloud, the one who finds the bad in everything. Morose, depressing, and in many ways, cute and amusing from a distance. We love Eeyore because we all have a friend that reminds us of him.

The problem is when we find ourselves actually being Eeyore, and for many years, software testing deliberately put itself in that role. We are the eternal pessimist. The product is broken, and we have to find out why. Please note, I am not actually disagreeing with this; the software is broken. All software is, at a fundamental level. It actually is our job to find out where, and advocate that it be fixed. However, I have to say that this is where the similarities must end. Finding issues and reporting/advocating for them is not in itself a negative behavior, but it will be seen as such if we are the ones who present it that way.

Instead, I'd like to suggest we model ourselves after a different figure, that of Spock. Yes, the Spock of Star Trek. Why? Spock is logical. He is, relatively speaking, emotionless, or at least has tremendous control over his emotions (he's half human, so not devoid of them). Think about any Star Trek episode where Spock evaluates a situation. He examines what he observes, he makes a hypothesis about it, tests the hypothesis to see if it makes sense, and then shares the results. Spock is anything but a black raincloud. He's not a downer. In fact, he just "is". He presents findings and data, and then lets others do with that information what they will. Software testers, that is exactly what we do. Unless we have ship/no-ship decision-making authority, or the ability to change the code to fix what is broken (make no mistake, some of us do), then presenting our findings dispassionately is exactly what we need to be doing.

If people disagree with our findings, that is up to them. We do what we can to make our case, and convince them of the validity of our concerns, but ultimately, the decision to move forward or not is not ours; it belongs to those with the authority to make those decisions. In my work life, discovering this, and actually living by it, has made a world of difference in how I am perceived by my engineering teams, and in my interactions with them. It ceases to be an emotional tug of war. It's not a matter of being liked or disliked. Instead it is a matter of providing information that is either actionable or not actionable. How it is used is ultimately not important, so long as I do my best to make sure it is there and surfaced.

At the end of the day, software testers have work to do that is important, and ultimately needs to be done. We provide visibility into risks and issues. We find ways in which workflows do not work as intended. We notice points that could be painful to our users. None of this is about us as people, or our interactions with others as people. It's about looking at a situation with clear understanding, knowing the objectives we need to meet, and determining if we can actually meet those objectives effectively. Eeyore doesn't know how to do that. Spock does. If you are struggling with the idea that you may not be appreciated, understood, or liked by your colleagues, my recommendation is "less Eeyore, more Spock".

Friday, February 20, 2015

If You Have $20, You Can Have a Standing Desk

I've had a back and forth involvement with standing desks over the years. I've made cheap options, and had expensive options purchased for me. I've made a standing desk out of a treadmill, but it has been best for passive actions. It's great for reading or watching videos, not so great for actually typing, plus it was limiting as to what I could put on it (no multi monitor setup). Additionally, I've not wanted to make a system that would be too permanent, since I like the option of being flexible and moving things around.

I'm back to a standing desk solution for both home and work because I'm once again dealing with recurring back pain. No question, the best motivation for getting back to standing while working is back pain. It's also a nice way to kick-start one's focus and to burn a few more calories each day. I'll give a shout to Ben Greenfield, otherwise known as the "Get-Fit Guy" at quickanddirtytips.com, for a piece of information that made me smile a bit. He recommends a "sit only to eat" philosophy. In other words, with the exception of having to drive or fly, my goal should be to sit only when I eat. All other times, I should aim to stand, kneel, lie down or do anything but sit slumped in a chair. The piece of data that intrigued me was that, by doing this, I'd be able to burn 100 additional calories each day. In a week's time, that's roughly the equivalent of running a 10K.

For those who have access to an IKEA, or can order online, at this moment the IKEA LACK square side table, in black or white, can be purchased for $7.99 each. I chose to do exactly that, and thus, for less than $20, including tax, I have set up this very functional standing desk at work:




The upturned wastebasket is an essential piece of this arrangement. It allows me to shift the weight from leg to leg, and to give different muscles in my back some work, and to rest other areas from time to time. I've also set up a similar arrangement at home, but added an EKBY ALEX shelf (I'll update with a picture when I get home :) ). This gives me a little extra height and some additional storage for small items. The true beauty of this system is that it can be broken down quickly and moved anywhere with very little effort, and is much less expensive than comparable systems I have seen. If you'd like to make something a little more customized, with a pair of shelf brackets and a matching 48" shelf, you can make a keyboard tray, though for me personally, the table height works perfectly.

What I find most beneficial about a standing desk, outside of the relief from back pain, is how focusing it is. When I sit down, it's easy to slip into passive moments and lose track of time reading stuff or just idly looking at things. When I stand, there is no such thing as "passive time"; it really helps me get into the zone and flow of what I need to do. For those looking to do something similar, seriously, this is a great and very inexpensive way to set up a standing desk.