Monday, September 30, 2013

Learn a Scripting Language: 99 Ways Workshop #89

The Software Testing Club recently put out an eBook called "99 Things You Can Do to Become a Better Tester". Some of them are really general and vague. Some of them are remarkably specific.


My goal for the next few weeks is to take the "99 Things" book and see if I can put my own personal spin on each of them, and make a personal workshop out of each of the suggestions. 


Suggestion #89: Learn a scripting language. Use it to automate repetitive tasks or processes. Manipulation of text or data files for example.


I'm going to do something I don't normally do... I'm going to punt. Well, not really. See, the thing is, I wanted to see if I could put together a workshop in a simple blog post that would discuss the nuts and bolts of scripting languages and why you should learn them. 


What I realized, after several attempts at writing this post, was that this is, for real, not a topic that can be summed up in a simple 800-1000 word blog post. In fact, I could barely sum it up in a 50 post blog series on "Learn Ruby the Hard Way". Shell programming and scripting languages of any kind or flavor just take work, persistence, and a lot of continuous, steady practice.


Thus, it's with that in mind that I am going to talk about what you can do, and share some sites and suggestions I have found to be helpful along the way (from someone who, admittedly, isn't much of a programmer, but does what he can with what he knows ;) ).


Workshop #89: Download and install three different scripting languages, your choice. If you are using the bash shell or another variant, consider that one of the three. Find and address ten different challenges in your everyday work life that you can experiment with (data files, log files, web page downloads, etc.). The catch? Try to use each of the languages/approaches to solve the problem. 
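To make this concrete, here's a bash sketch of one hypothetical challenge from that list of ten (the file name and log format are invented for illustration); the idea would then be to re-solve the same problem in your other two languages:

```shell
# Hypothetical everyday task: how many ERROR lines are in a log file?
# The log contents here are made up purely for illustration.
printf 'INFO start\nERROR disk full\nINFO retry\nERROR timeout\n' > sample.log

# One bash-flavored solution: grep -c filters and counts in a single step.
grep -c 'ERROR' sample.log
```

The Ruby and Python versions would each read the file, test each line, and keep a count; comparing the three side by side is exactly the point of the exercise.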


This is what I refer to as the "Secret Ninja Cucumber Scrolls" method. Why? Because the Scrolls were the first resource I saw that made a point of taking a topic (in this case, Selenium and web automation using Cucumber and various tools) and making each example work in three different environments. There were advantages and disadvantages to each approach, but by the end of the process, each environment was able to accomplish the same task. What's more, users who worked through each of the examples came away with a better appreciation of how they could engineer a solution in not just one way, but multiple ways. 


What makes three the magic number? I think it's because there's enough natural variety with three options that it forces us to "think differently". When it's just one option, we do what we have to, and if it works, great. With two options, we have to compare them and ask why one works better than the other. With three systems, we have to focus on what it is we are trying to solve, and see if we really understand what we are working towards, enough so that we can address it in what will likely be three unique ways. 


So with that in mind, realize that the scripting languages I'm suggesting are not necessarily the "best" languages. They are ones I suggest because there are lots of avenues out there to learn them, and frameworks to put into place and experiment with (that, and they happen to be ones I have personal experience with; if others you were expecting to see here aren't here, it's because I either don't know enough about them, or don't use them enough to be able to talk about them with anything resembling half a brain)...


JavaScript: At this stage, just about every site that teaches programming has a section on JavaScript, and frankly, it's the most ubiquitous language on the net, and will likely become even more so in the coming years. More than just JavaScript, there are frameworks that you can learn as well once you cover the basics. If jQuery, Angular, Ember, Backbone or Node are avenues of interest, then JavaScript needs to be a starting point.  


Ruby: Ruby is a stable language that has been around for two decades, has a rabid fan base, and is a platform of choice for many production web sites that use Ruby on Rails. It has a healthy ecosystem of tools and add-on "gems" that can be utilized to make many jobs a lot easier. 


Python: Another language that has been around for quite a while (I remember first hearing about it in the mid 90s), and it has likewise become a popular choice for many testers due to support for Selenium and other testing frameworks. It also has a popular web site framework in Django, and the ability to build web apps and sites quickly and dynamically. 


bash: Wait, what?! You didn't think I was going to mention a handful of scripting languages and not include my own personal favorite, did you? Yes, I know that bash is technically not a "scripting" language, but work with me here. I think there is a lot of benefit to learning and practicing with the bash shell (or csh, ksh, zsh or whatever flavor you personally want to play with). Not only do you get to work with programming structures that are similar to the ones other scripting languages use, you may find that you can answer in one line of command line code what it might take dozens of lines to write in another scripting language (again, this comes back to the idea of "forget the tool, start with the problem"). 
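As one illustration of that "one line versus dozens" point, here's the classic word-frequency pipeline (the sample file is invented); a rough equivalent in Ruby or Python would need a loop, a hash or dictionary, and sorting code:

```shell
# Build a throwaway sample file for the demonstration.
printf 'the cat sat on the mat the end\n' > sample.txt

# The ten most frequent words, in one pipeline:
# split words onto lines, sort them, count the runs, sort by count, take ten.
tr -s ' ' '\n' < sample.txt | sort | uniq -c | sort -rn | head -10
```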


JavaScript (and various associated frameworks): Codecademy and Code School for the basics, with NetTuts+ being a good source for going into greater depth or viewing screencasts on more advanced topics and additional frameworks.


Ruby (and Rails):  Code School has a Ruby and Rails path that, while not free, is pretty comprehensive. Codecademy also has a track that covers Ruby well (just Ruby), and I would definitely mention Zed Shaw's "Learn Ruby the Hard Way" (you can look on TESTHEAD in the Practicum section and see my own adventure with this book and approach. I found it helpful, your mileage may vary :) ). Also, as Chris McMahon wisely pointed out in the comments, Brian Marick has a great book called "Everyday Scripting with Ruby" with a strong focus on using Ruby for software testing.


Python (and Django): Codecademy has a language-specific Python track, and NetTuts+ has a compilation called "The Best Way to Learn Python". In it are several avenues that you can explore as you practice using the language. Also, though I cannot speak to it directly (I did the Ruby version), "Learn Python the Hard Way" from Zed Shaw is the blueprint for the Ruby book. I thought the Ruby book was very helpful, so I'm extrapolating that the Python book will likewise prove valuable to those willing to follow its approach and methodology.


bash: there's lots of places to look for this information, and lots of tutorials that you can dive into (as well as dozens of books), but I personally like the Linux Shell Scripting Tutorial and Advanced bash Scripting Guide. Alternately, a very nice book that gives a lot of great examples and possible jump off points is the bash cookbook (which I reviewed here).


Bottom Line:


Scripting languages, regardless of what you want to refer to them as, are programming. Let me re-emphasize that… 


You WILL need to LEARN to have a TOLERANCE for SOME amount of PROGRAMMING!


There's no way around it. This does not mean that you have to devote your life to programming, that you have to de-emphasize testing to program, or anything of the sort. You may, even after going through the variety of exercises for several months, decide that you just aren't cut out to be a programmer. If by "programmer", you mean a "production level writer of software programs that ship commercially", then you may very well be right. I'm certainly not. Fortunately, "programmer" means a lot more than that, and encompasses more than just shipping commercial software. 


Scripting languages (and there are many more that I didn't name here, simply because I haven't interacted with them enough) have some advantages in that they can be executed and debugged more quickly than compiled languages can be, but the fundamental rules are still the same. Regardless of the language or methodology, some things are definitely going to need to be considered. You will make mistakes. There will be bugs. You will need to find them and fix them. You will need to practice. A lot. For a long time. That is, if your goal is to get good at using these languages. Sorry, no shortcuts or simple solutions to offer here, just a lot of training for an extended period. On the bright side, you may find that you start to enjoy the journey ;).

Friday, September 27, 2013

Learn How to Use the Command Line: 99 Ways Workshop #88

The Software Testing Club recently put out an eBook called "99 Things You Can Do to Become a Better Tester". Some of them are really general and vague. Some of them are remarkably specific.


My goal for the next few weeks is to take the "99 Things" book and see if I can put my own personal spin on each of them, and make a personal workshop out of each of the suggestions. 


Suggestion #88: Learn how to use the command line. Shell and batch scripts too.


I have to admit, these are getting a little more difficult to write in this stretch. Suggestions 87 through 90 could keep people busy for years in and of themselves, and coming up with a workshop idea to put in a blog post is proving to be quite daunting. Still, it's worth a shot, and I'm going to give it my best.


Workshop #88: Spend some quality time inside the terminal program of your operating system. Learn its ins and outs, practice typing in commands and parameters that can help control and direct them. Gather commands that can be executed in sequence together (i.e. batch them). For extra credit, do it for both UNIX/Linux and Windows.


Really, this is not a simple "do this in an afternoon" endeavor. Windows may have a pretty graphical gloss to wrap itself in (Mac, too), but at its heart is a command line that can be executed. The command line provides a lot of power, flexibility, and some syntactical requirements that have to be learned. 


Command lines (and the various tools that interact with them) primarily handle four functions. Those four things are:

- variable interpretation and substitution
- quoting
- syntax interpretation
- redirection

That's about it, but "that's about it" covers a really large field. Still, by looking at it from that perspective, it makes a lot of things easier to understand, and thus, it becomes easier to see what we can do with those options.
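Before unpacking a bigger example, here's a tiny sketch of each of those four functions in the bash shell (the file name is arbitrary):

```shell
name="world"                         # set a shell variable
echo "Hello, $name"                  # substitution: double quotes expand $name
echo 'Hello, $name'                  # quoting: single quotes keep it literal
echo "Hello, $name" > greeting.txt   # redirection: send output to a file
cat greeting.txt                     # syntax: command name, then its arguments
```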

When we see something like:

"cat file1 file2 file 3 | grep '[0-9]\{1,2\}\/[0-9]\{1,2\}\/[0-9]\{2,4\}' | uniq | sort > file4"

we are, effectively, calling on all four of the functions (though it may not be so obvious). 

In the above example we are saying the following:

- take the three files I have named, and concatenate their output (show the contents of the three files one after another) together as one output stream. 

- "pipe" that output stream (take the output of one program and direct it as the input to another program) to "grep", and have "grep" look for a string. If grep finds the string, it will keep it. if it doesn't, that line will be removed.

[Yes, that quoted option after grep is a regular expression, or regex, like we talked about in the last suggestion. They are hugely helpful in examples like this. In this case, all that it means is "look for any date string that is formatted as "mm/dd/yy" or "mm/dd/yyyy"]

- take the output that was parsed by grep, and pipe it to a program called "uniq". 

-  uniq will receive each of the lines, examine them, keep only the lines that are "unique" and remove the lines that are not. 

- uniq will then pipe its output to the input of another program, called "sort" which, as its name indicates, will reorder the lines so that they are in alphabetical order.

- finally, sort will take its output and redirect (write) it into a file (file 4), where the input stream will be stored and saved.

So there you go, redirection, quoting, and syntax. Variable substitution is going on in this example, too, albeit under the covers. How? Well, the file names we provide are variables in their own right, as far as the commands are concerned. 

If we wanted to make it a bit more apparent, we could save the same command as a script file, and provide variables like this:

file1="$1"
file2="$2"
file3="$3"
cat ${file1} ${file2} ${file3} | grep '[0-9]\{1,2\}\/[0-9]\{1,2\}\/[0-9]\{2,4\}' | uniq | sort > file4

If we were to save this simple script in a file like "strip-out-dates", make it executable, and run it with the names of three files, the script above would execute and work on them, like this:

% strip-out-dates example1 example2 example3

When it finishes, you'll see a file called "file4", and that file will contain all the lines from the three files you fed to the script that had valid dates in them.

This, again, is just a little bit of what you can do, but suffice it to say that, if you want to take a variety of commands like the ones I have here, or stack up a bunch of commands you use over and over, you can build scripts that run those commands in the order you have placed them in the file.

Bottom Line:

The command line can feel really daunting, but in truth, it only does what it is directed to do and what it has the ability to do. The shell itself is rather limited. The commands that it can run, and the permutations as to how those commands can be formatted... now that can go on forever. The good news is that, even there, there are some basic rules that can help us. Those rules and examples, though, will have to wait for the next post.

Product Review: Acronis True Image 2014

One of the fun things about writing a regular blog is that, over time, you get some interesting opportunities coming your way. Many of them, for various reasons, I've not been able to follow up on, but every once in a while, something either looks interesting, meets a need, or a part of me that might otherwise be reluctant says "oh what the heck, why not?"

It was in this vein that I received a request from Acronis Software asking if I'd be interested in checking out and reviewing their newest product, Acronis True Image 2014. I figured "sure, why not?" If nothing else, it might be an interesting experience to try something new, and see what I could discover in the process.


I decided that this might be a fun experiment for a few reasons. The first being the fact that I've been playing fast and loose with my PC. I used to be much more meticulous about things like anti-virus software, firewalls, backups, imaging, etc., but it's been awhile (since the MacBook Pro became my main system). Still, there's a lot of stuff I do on the PC that I'd be rather upset to completely lose, so I considered this a timely and fortuitous call to action.


Installation was relatively quick, even on my now somewhat long-in-the-tooth Windows 7 PC (a 2009-era Toshiba Satellite, for those interested). It was with this well-seasoned machine that I figured a more nuanced and interesting test could be conducted.


There are two levels to this application. The first is the stripped down, almost bare-bones display of the most critical functions. For those who want to get in, do what they must, and then get out, this is a nice setup. Clean lines, uncluttered, easy to navigate and understand. Ah, but what fun is a backup and restoration app if it doesn't give fine control to the one setting it up? If you are one easily smitten by such things... Acronis has you covered there, too.

Some nice features that True Image offers are as follows:

 
Complete control of the level and density of the backup.

If you want to create a very specific and cherry picked set of files that you need to back up, along with a set of partitions on your disk, you can do that. The control options are geared towards the simplest and quickest path, and if you want to "set it and forget it", it's very easy to do so. If you would like to go in and perform fine control and very specific file level interactions, you can do that as well.


Individual file selection from back-ups.

In addition to backing up either full disks or individual files, the user has the option of performing a file-level recovery from a disk partition, if that is desired. Rather than pull down a full backup, the user can go in and select a single file and restore just that one item. Yes, this has been available on systems for years, but being able to do it this simply and this directly is new and, frankly, kind of nice.


Create bootable media that can interact with the True Image backups.

This option still requires a bit of futzing with the BIOS to set the boot order, but if you do, then you are just a USB stick or a CD/DVD away from getting your system up and restoring a True Image backup. Again, the system allows for both "drop in simple" and "fine motor control" options, depending on how involved or detailed you choose to be.


Back up to the cloud.

Acronis gives the user a number of "Dropbox" level options for online storage, and in this capacity, users of Acronis can utilize the True Image equivalent of Dropbox. While Dropbox-style create and drag options are cool and widespread, True Image has added the ability to push and pull full backup images from the cloud. This way, if you'd like to get a dedicated hard drive out of the equation entirely, you can certainly do that. In addition, users can also use the sync functionality with multiple devices and keep all of their items up to date. Click "Publish" and the whole world can see it, or just those who you designate access to.

Some other interesting tidbits:

Try and Decide - Curious to see if an app will cause problems? Try and Decide lets you give it a run, and if it turns out to be more trouble than it's worth, you can roll it back.

Secure Zone - creates a special secure partition for your backups on your disk.

Boot Sequence Manager - Lets you boot your computer from a disk image if desired.

Backup Conversion - Move your backups from Windows backup format to Acronis format and vice versa.

File Shredder and Other Security Tools - exactly what it sounds like


Bottom Line:


The tool isn't magic, but as far as having a balance between ease of use and flexibility/control, Acronis hits more notes than it misses. Systems are, of course, going to be limited based on their hardware and features. My test runs were not what I would call "zippy", but I am perfectly aware that that may have more to do with my own system limitations and hardware than anything to do with the Acronis application.

Also, I've only spent a few days with this app, and there's so much more that I'm curious to play with and tweak. I suppose the best endorsement I can give is that, after several days of poking and prodding, I feel like I want to know more and do more. In these days of short attention span theater, that says a lot.

tl;dr: nice app, simple to operate for the basics, but plenty of firepower to geek out with if users choose to. A nice balance overall.

Tuesday, September 24, 2013

Learn How to Use regex: 99 Ways Workshop #87

The Software Testing Club recently put out an eBook called "99 Things You Can Do to Become a Better Tester". Some of them are really general and vague. Some of them are remarkably specific.


My goal for the next few weeks is to take the "99 Things" book and see if I can put my own personal spin on each of them, and make a personal workshop out of each of the suggestions.


Suggestion #87: Learn how to use regex.


All right, we're starting to get into direct recommendations that were workshop ideas from before. That means we get to start getting more specific.

I mentioned in an earlier post that shell scripts using regex can help make for dynamic tools to parse information and clean up output for later use, but I didn't say a word about what regex actually is.

For those who know all this already, this may be of limited value. If this is new, this is by no means an exhaustive treatment of regex, but I'd be remiss not to at least offer some nuts & bolts and examples, so that you can see why this might be something you'd want to learn more about.


Therefore, without further ado...

Workshop #87: Determine what tool(s) you would like to use to work with "REGular EXpressions", and use them to practice. 

I've heard this said a number of different ways, and since we are reading it, it probably doesn't matter much, but this is a personal thing, and you can decide for yourself how or what you want to call it. I'm a firm believer in the idea that, if something is an abbreviation of a set of words, then the pronunciation of the words as a whole needs to inform what the abbreviation will sound like. Since we are talking about REGular EXpressions, you will always hear me say it as "REG-ex" (hard "g"), not REJ-ex (soft "g"). What you choose to call it is totally up to you ;).

Anyway… regex works by letting the user provide a pattern, and that pattern is meant to help identify a sequence of characters. If we choose to look for an exact word ("apple"), we can just use the exact word, and tools like grep, sed, awk, etc. will find the literal match of that word.
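For example, a literal search with grep might look like this (the file name and contents are made up):

```shell
# Create a small throwaway data file.
printf 'apple pie\nbanana bread\napple cider\n' > fruits.txt

# A literal match: keep only the lines containing the exact word "apple".
grep 'apple' fruits.txt
```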

That's OK, but more often than not, we want to look for items that will be dynamically assigned through variables or have a variety of ways to be found, not just absolute terms or phrases. What do we do then, if we don't know in advance what the value will be?

regex to the rescue :).
Here's some very quick regex examples, written from the perspective of bash/linux tools:

- a period (.) matches any single character. '....' would match any four characters

- 'A.' matches an “A” followed by any character

- '.A.' matches any character, then an “A”, then any character

- an asterisk '*' means 'repeat zero or more of the previous character'.

- 'A*' means zero or more “A” characters

- '.*' means zero or more of any character (letter, number, symbol, a blank line, etc.)

- '..*' means any single character, followed by zero or more of any character; in other words, at least one character (so an empty line isn't a match here).

- '^' means the beginning of a line.

- '$' means the end of a line.

- '^$' would mean any blank line.

- '\' is used as an escape character. This means if you want to look for any of the above examples literally (., *, ^, $), you would use it first, like '\. \* \^ \$'

- '[abc]' is a "range". In this case, it means "find any 'a', b', or c', anywhere on this line. Using the [] range option, we can also use the '^', but in this case, '[^abc]' means "show me lines that do NOT have the characters 'a', 'b', or 'c'. All inclusive ranges can also use a shorthand like [A-Z] or [0-9].

- '\{n,m\}' is used in bash and Linux (and elsewhere) as a repetition option. If we see something like '[0-9]\{2,3\}', this would mean "show me lines with any numeric sequence that has two or three digits". If we were to see something like '[0-9]\{3\}', that would mean "show me lines with exactly 3 numerals in sequence".
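To see a few of those rules in action, here's a quick throwaway experiment you can run in a terminal (the file name and contents are invented):

```shell
# Four sample lines: a word, a letter-digit pair, a two-digit number, a blank.
printf 'alpha\nA1\n42\n\n' > regex-demo.txt

grep '^A.$' regex-demo.txt        # an "A" at line start, then exactly one more character
grep '[0-9]\{2\}' regex-demo.txt  # any two digits in sequence
grep -c '^$' regex-demo.txt       # count the blank lines
```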

OK, with that, let's try something a little more interesting.

$ grep '[0-9]\{3\}-\{0,1\}[0-9]\{2\}-\{0,1\}[0-9]\{4\}' datafile

What do you think this might be? As printed, it might be hard to tell, but if we were to look at each element separately, we can probably figure it out.

[0-9] any single numeral from 0 to 9

\{3\} repeated exactly three times

- a literal dash character

\{0,1\} repeated zero or one times

[0-9] any single numeral from 0 to 9

\{2\} repeated exactly two times

- a literal dash character

\{0,1\} repeated zero or one times

[0-9] any single numeral from 0 to 9

\{4\} repeated exactly four times


If you guessed that this is a regex to help find a US Social Security Number, you are correct (this example comes courtesy of the "bash Cookbook" by O'Reilly).
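You can convince yourself the pattern works by feeding it some made-up data (no real SSNs here, of course); note how the last line, which doesn't have enough digits, gets filtered out:

```shell
# Two well-formed (fake) SSNs and one malformed string.
printf '123-45-6789\n987654321\n12-345-678\n' > datafile

grep '[0-9]\{3\}-\{0,1\}[0-9]\{2\}-\{0,1\}[0-9]\{4\}' datafile
```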


There are many more options to regex. This just scratches the surface, but even with just this level of understanding, you can do a lot. A full-blown tutorial on regex goes well beyond the scope of these posts, but if you would like a general, all-purpose tutorial on regex, check out http://www.regular-expressions.info/tutorial.html


Also, there are a variety of regex "engines" available. Linux uses the POSIX engine. Programming languages use a variety of standards, many of them similar but with their own individual quirks. If you are using programming languages like Python, Ruby, PHP, Perl, etc. you will need to look at how regex is implemented in your language of choice.
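Even between grep's own two modes you can see the flavor differences: basic regular expressions (BRE) need backslashes on the braces, while extended regular expressions (grep -E) do not:

```shell
# A throwaway file with one three-digit line and one letters-only line.
printf '123\nabc\n' > engines.txt

grep '[0-9]\{3\}' engines.txt     # BRE flavor: braces must be escaped
grep -E '[0-9]{3}' engines.txt    # ERE flavor: braces are special by default
```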


Bottom Line:


regex is a core idea in a variety of scripting languages, programming languages, and methods to make shell scripts much more dynamic. Regular expressions take time to understand, and like any other skill, repetition and practice go a long way towards a better understanding of how to use them.

Monday, September 23, 2013

Book Review: The Practice of Network Security Monitoring


This certainly fell into my lap at an opportune time. With the various revelations being made about the NSA and its tactics, as well as the upsurge in attention being paid to network and application security in general, this book was a welcome arrival in and of itself. 


There's a lot of attention paid to the "aftermath" of security breaches. We see a lot of books that talk about what to do after you've been hacked, or tools that can help determine if your application can be penetrated, along with tools and recommendations for performing that kind of testing. 


Less often asked (or covered) is "what can we do to see if people are actually trying to get into our network or applications in the first place?" While it's important to know how we got hacked, I'd like to see where we might get hacked, and sound an early warning to stop those hackers in their tracks.


To that end, Network Security Monitoring (NSM) makes a lot of sense, and is an important line of defense. If the networks can be better monitored/protected, our servers are less likely to be hacked. We cannot prevent all breaches, but if we understand them and can react to them, we can make it harder for hackers to get to anything interesting or valuable. 

It's with this in mind that Richard Bejtlich has written "The Practice of Network Security Monitoring", and much of the advice in this book focuses on monitoring and protecting the network, rather than protecting end servers. The centerpiece of this book (at least from a user application standpoint) is the open source Security Onion (SO) NSM suite from Doug Burks. The descriptions and the examples provided (as well as numerous sample scripts in the back of the book) help the user get a good feel for the operations they could perform (and control) to collect network data, as well as how to analyze the collected data. 

The tools can be run from a single server, but to get the maximum benefit, a more expansive network topology would be helpful. I can appreciate that my ops people didn't quite want to see me "experiment" on a broader network for this book review. After reading it, though, they may be willing to give me the benefit of the doubt going forward ;).

There are lots of individual tools (graphical and command line) that can be used to help collect and analyze network traffic details. Since there are a variety of tools that can be used, the author casts a broad net. Each section and tool gets its own setup, and an explanation as to how to use them. The examples are straightforward and easy enough to follow to get a feel as to how they can be used.

The last part of the book puts these tools into action, and demonstrates how and where they can be used. The enterprise security cycle is emphasized (planning, resistance, detection, and response), with an emphasis on the last two items. NSM uses its own process flow (collection, analysis, escalation, and resolution). By examining a variety of server side and client side compromises, and how those compromises can be detected and ultimately frustrated, we get a sense of the value and power of this model.

Bottom Line:

My approach to learning about NSM in general comes from being a software tester, and therefore I'm very interested in tools that I can learn and implement quickly. More important, though is the ability to apply a broad array of options. Since I don't really know what I may be called on to test, this varied model of NSM interests me greatly. From an understanding level, i.e. an ease of following along and seeing how it could work and where, I give the book high marks. I'm looking forward to when I can set up a broader and more varied network so I can try out some of the more expansive options. 


On the whole, "The Practice of Network Security Monitoring" gets the reader excited about getting deeper into the approach, and looking to where they can get more engaged. As tech books go, it's a pretty fun ride :).

Friday, September 20, 2013

How Does Your Testing Add Value?: 99 Ways Workshop #86

The Software Testing Club recently put out an eBook called "99 Things You Can Do to Become a Better Tester". Some of them are really general and vague. Some of them are remarkably specific.


My goal for the next few weeks is to take the "99 Things" book and see if I can put my own personal spin on each of them, and make a personal workshop out of each of the suggestions. 


Suggestion #86: See the bigger picture. How does your testing add value to your team, project, organization?

Wow, now that's a suggestion. It's also not a simple, cut-and-dried thing that we can reduce to a "do these three things and you will add value" checklist.


I can't give you a simple "go forth and prosper" kind of answer, but I can share some thoughts and realizations that have helped me decide what to do with this idea over the years.


Workshop #86: Run a regular inventory of your organization's values. What do they do? Who are their customers? Who pays the bills? How do you justify your cost to the team? Then work towards aligning your actions to the answers you get.


Whoa, dude! Isn't that a little out there? Maybe, maybe not. It's a set of questions everyone, in any role, should be asking. 

Contrary to popular belief, corporations and companies do not exist to hire people and pay them. We do not exist to be hired by corporations and companies to get paid. Yes, both happen, but it's a relationship based on a simple reality… companies exist to make money. 

If you are creating something that can be directly sold, and a profit is earned, then you are a producer, and you are what is referred to as a "revenue center". You directly help the organization make money. Unless you work for, or own, a company that directly sells software testing services, you are not a revenue earner for a company as a software tester. You are a cost. You may be a very necessary, very important, and very valuable cost, but make no mistake, YOU ARE A COST! Just accept that.

This is why it is critically important for software testers to really and clearly understand what they represent to a company or to a team. We are a hedge. We are insurance. We are information and analysis. Each of those things is vital, but they do not directly, unambiguously add to the company's bottom line. On the other hand, there are decades' worth of evidence showing that our being there can protect revenue, and that companies with bad quality products can certainly lose revenue. Testers being there and doing good work can enhance product reputation (when we do our job and do it well). It helps us garner influence because our product is better than other options. 

A helpful, and sobering, comment was made to me about twenty years ago, when I was first at Cisco Systems. One of their senior engineers put it very succinctly: 

"We are not where we are today because we are better than everyone else. We are where we are today because we suck less than everyone else!" 

The point he was making with that comment was to illustrate that, yes, we had bugs, some of them quite bad, but we were doing a better job testing and finding ugly issues than anyone else out there at the time.

If we want to make clear how and where we can add value, we have to consider a number of important areas. For me, this can be summed up as follows:

- spend time with your customer support team (or at least their support system application) and get to understand the most sensitive and delicate parts of your organization. 

- read actual support requests and tickets, so that you can see where customers feel genuine pain

- make a test charter around those areas, and dig to see if what they are seeing is just the tip of the iceberg.

- develop domain knowledge of the industry you work in, and of the customers of your product. When I worked with Tracker Corp, that meant "learn as much about Immigration Law statutes as you can".

- make a point to be consistent, and provide clear and concise bug reports. Take the time to understand them and recognize which issues really rise to the level of "Oh, we have GOT to fix THIS!!!"

- pay attention to the conversations that executives have. Read through the press releases, shareholder meeting updates, anything that points to what the company wants to do both now and in the future. Get to really and fundamentally understand what they perceive as the roadmap.

- work directly with your team to cross-pollinate and share knowledge. Learn from each other and help others learn new tools, techniques and approaches.

- be willing to work yourself out of a job. Show integrity, purpose, and dedication. If there are better ways to do things, champion them, even if they might be seen as "but wait, this might lessen my own influence." It may, but chances are, you will have other opportunities to do other things.

- be honest with others, and most importantly with yourself. Know your limitations, as well as your strengths. Don't be coy about them. If you are struggling, ask for help. If you see someone struggling, help them.

Bottom Line:

Every company culture is different; every manager you ever have, and every direct report who ever reports to you, will be different and unique. At the root, though, they all want the same things: to do well enough not to have to worry, to have others treat them like human beings and afford them the respect and dignity that go with it, and to do work that actually matters. Ultimately, recognizing that, and working accordingly, will do wonders for your career outlook and engagement. In short, if you want to be valuable to your company, BE VALUABLE TO YOUR COMPANY! Know who you are, where you stand, and what you can do. 

From there, act on it.

Adventures in Context: Talking Commuting With a Newly Minted Driver

Recently, I had a chance to share with my son some ideas and comments about how testing is less about whether something passes or fails, but is about analysis of information and helping to make a qualified decision based on experimentation and feedback.

One of the games I love playing with my kids is "Armchair Economist". I'm a bit of a geek when it comes to experiments, continuous learning, and interesting discussions, as well as ways of looking at problems that might have slightly ambiguous answers. This is one of those times that playing "armchair economist" led to an interesting discussion about testing.

For years, my commute was an easy thing to deal with and calculate. I worked in San Francisco, which meant, basically, two things: either I did a rail commute, or I drove. Both have their positives and negatives, but for the most part, driving was just not a reasonable solution; traffic and parking prices made it genuinely unpalatable. Since I had a meeting I had to be at each morning that started at the same time, commute decisions were easy. Go to the San Bruno Caltrain station, get on the train at the appropriate time, get to the meeting on time. No fuss. 

When I changed jobs and made the shift to working in Palo Alto, the fact that my office was very close to the Caltrain station made it, again, a no-brainer most days (because of the way our company works with the City of Palo Alto, I have a parking permit that gives me "free" parking in the Civic Center garage). Sometimes I have things going on that require me to drive in, but most of the time, the train still makes the most sense.

Part of this change-up involved me changing my schedule so I could drive my kids to school. I'd drop them off, and I'd go on to the next station (in Millbrae) rather than backtrack and go to the one in San Bruno. 

Now that my son has a driver's license, and a car of his own to drive, he's handling the "driving of the kids to school" routine. He's also become very "price conscious" about how much fuel costs, and more specifically, just how much traveling and mobility that fuel actually gets him. Part of this was prompted by the fact that, since he received his license, he would drive me to the train station and drop me off on various days, and as part of this (including being my ride home) he was wondering how much it was costing him to "ferry me to the train station and back". 

This prompted a discussion… "Hey, Dad, how much does your monthly commute cost you?" 

I showed him the price tables, including some of the additional costs of parking, and at first, we both agreed, it looked like going to Millbrae over San Bruno is the better deal… but is it, really?

With that, I said "hey, let's be 'testerly' about this. Let's build a model." 

We sat down and we started with the obvious comparisons. 

I live in San Bruno. 

I work in Palo Alto. 

Caltrain divides its fare rate into zones, instead of an origin/destination price model. 

Travel from San Bruno to Palo Alto covers three transit zones ($179/mo. if you use the Clipper card) 

Travel from Millbrae to Palo Alto covers two transit zones ($126/mo. if you use the Clipper card)

Three zones is more expensive than two zones, $53 per month more. 

OK, so it's cheaper to travel from Millbrae than San Bruno. We're done, right? Well, not so fast…

Let's consider the parking situation. In San Bruno, although it can be hit and miss, and you may need to walk a bit farther on some days than others, it's entirely possible to park on Huntington Avenue for free. It's common enough that the San Bruno Caltrain lot usually has only a handful of cars in it on any given day. 

Cost for paid Caltrain parking? $5/day, or $50/month with a parking permit, regardless of the station. 

OK, so what's the option for free parking in Millbrae? Actually, there's very little in the way of nearby, available free parking. The side streets have strict time limits or "no parking" policies. The closest free parking is about a half-mile away. This is reflected in the parking levels in the Millbrae lot. By 9:00 a.m. on any given weekday morning, the lot is completely full, out to the farthest parking spots. 

Translation: a $50 premium added to the price of the commute. So the difference now is just $3/month in Millbrae's favor.

Hmmm… here's a thought. What's the difference between driving to San Bruno Caltrain vs. driving to Millbrae Caltrain? 

San Bruno: 4.2 miles (round trip)
Millbrae: 8.4 miles (round trip)

When I was driving my kids to school, that didn't make as much of a difference, since their high school was close to the Millbrae station, and I'd be going out that way anyway. It didn't make any sense to double back. Now? It's completely my choice, so I'm choosing to drive double the distance. I drive a traditional car, a 2001 Ford Escape 4WD with a V6 engine. Nice vehicle for off-road play, camping and snowboarding, but it's strictly average when it comes to driving around town, to the tune of about 15-20 miles per gallon. To be conservative, let's use a 15 MPG value. If I take an average of 23 commute days in a given month, that's 97 miles total per month to the San Bruno station, 193 miles total per month to the Millbrae station. Gas right now is $3.99/gallon, and at 6.44 and 12.88 gallons respectively, the cost for gas (specific to my commute and nothing else) is $25.69 for San Bruno, $51.39 for Millbrae.

So where does that put us?

- San Bruno to Palo Alto Total Cost per Month: $204.69
- Millbrae to Palo Alto Total Cost per Month: $227.39

Net difference? $22.70 per month extra to use Millbrae instead of San Bruno.
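For the curious, the whole model fits in a few lines of Ruby (a sketch; every figure comes straight from the discussion above, and the method name is my own):

```ruby
# Monthly commute cost model from the discussion above.
MPG  = 15.0   # conservative mileage for the Escape
GAS  = 3.99   # dollars per gallon
DAYS = 23     # average commute days per month

def monthly_cost(train_pass, parking, round_trip_miles)
  gas = (round_trip_miles * DAYS / MPG) * GAS
  train_pass + parking + gas
end

san_bruno = monthly_cost(179, 0, 4.2)   # three zones, free street parking
millbrae  = monthly_cost(126, 50, 8.4)  # two zones, $50/mo parking permit

puts san_bruno.round(2)               # ~204.70
puts millbrae.round(2)                # ~227.39
puts (millbrae - san_bruno).round(2)  # ~22.70
```

Changing one constant (say, gas prices next month) re-answers the whole question, which is half the point of building a model in the first place.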

-----

This is where I paused, did my classic tester's smirk, and said "so, are we done?"

My son said "yeah, it actually costs less to commute from San Bruno, so you should commute from San Bruno."

Testers, I see you smirking (well, I don't see you smirking, but I know you are ;) ).

"Yes, on a pure dollar basis, it would make sense to say "Commute from San Bruno, you'll save money". Is it really that simple, though?" 

My son knows that, whenever I throw out the "Is it really that simple?" comment, or something to that effect, I'm about to lead him on a bit of a Socratic adventure. To his credit, he hasn't gotten to the point of rolling his eyes and walking away when I do this. Today, he didn't disappoint.

"Well, it would make sense if everything were the same, right? If every train stopped at each station, and if the schedule were the same, then the cost savings would make sense… but Caltrain doesn't work like that:"

- None of the "baby bullets" stop at San Bruno.
- In fact, there are a lot fewer trains that stop at San Bruno.
- Also, trains that stop at San Bruno are either all-stop locals or limited-stop runs.
- An average commute out of San Bruno to Palo Alto will take about 35-40 minutes (33 minutes being the fastest option on one specific train).
- The fastest commute out of Millbrae will take less than 25 minutes.
- Even if we don't catch the baby bullet trains, more trains stop at Millbrae than San Bruno in any given hour, about three times as many.


Here's where I smile a bit, since I see he's considering something. I leaned in and said "ok, so now that you've said that… which is the better value for commuting?" 

As expected, he thought about this for a while, and said "well, it depends on what matters more to you. If you care about total cost, then commuting from San Bruno (and parking on the street for free) will be the better choice. If you care about flexibility, then commuting from Millbrae would make more sense. The difference is $23, or about one dollar per commute day. If, in my schedule, I had to change up my routine for some reason, or needed to get there at a specific time, then sure, it would cost me $5 to park in Millbrae on that day, but I could still do that 4 times and come out ahead. $2.70, in fact. If I had to vary it five or more times, then commuting that month out of Millbrae and buying a monthly parking pass would be a better deal."
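His break-even reasoning checks out with a couple of lines (my own illustration, using the $22.70 difference from earlier):

```ruby
# Break-even check for the "occasional Millbrae day" scenario.
monthly_savings = 22.70  # San Bruno arrangement vs. Millbrae pass + permit
daily_parking   = 5.00   # per-day lot price at a Caltrain station

# How many $5 parking days fit inside the monthly savings?
flexible_days = (monthly_savings / daily_parking).floor
leftover      = (monthly_savings - flexible_days * daily_parking).round(2)

puts flexible_days  # 4 flexible days and you still come out ahead
puts leftover       # 2.7 dollars to spare, as he said
```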

-----

I stopped the discussion at this point. I told him that, if we wanted to, we could go into serious minutiae and get measurements under various conditions to see total time performance (door to door, how long it took to walk to the platform from where we parked, total time difference on an average vs optimized schedule, etc.) but that this exploration was pretty good in and of itself. It gave us a lot of good information.

What was interesting was how he saw that the dollar amount, which would have been easy to quantify in isolation, masked a lot of the nuance of the situation. I explained to him that the nuance is where a real thinking mind needs to make an analysis, a total calculus of the whole situation, and then make a judgment call based on all the information. As he learned, it's not all cut and dried. Additionally, there's a new construction project happening in San Bruno, and the station is going to be changing location in a few months. At that point, the "free parking" option may no longer be viable, and we'll have to reconsider this whole exercise again.


So, what do you and your kids talk about ;)?

Thursday, September 19, 2013

Build Personal Development Time Into Your Week: 99 Ways Workshop #85

The Software Testing Club recently put out an eBook called "99 Things You Can Do to Become a Better Tester". Some of them are really general and vague. Some of them are remarkably specific.


My goal for the next few weeks is to take the "99 Things" book and see if I can put my own personal spin on each of them, and make a personal workshop out of each of the suggestions.




Suggestion #85: Build personal development time into your week - 5-10% or approximately half a day a week sharpening your skills by reading, practicing or learning a new skill will pay dividends.


I am a huge advocate for personal development and figuring out ways to leverage it for your long term growth. There are many avenues that can be used to achieve and measure this. The TESTHEAD blog exists almost exclusively for this purpose for me. It's my permanent record of what I commit to doing, and it's a somewhat objective way to see how well I follow through on those commitments.


The two challenges that I see to personal development are:

- carving out the actual time to accomplish your goals

- making a meaningful accountability mechanism for achieving them.

Therefore, this workshop will be two-fold, with two specific suggestions for deliverables. One will be related to time, the other will be related to accountability.


Workshop #85: Set up a "Pomodoro" schedule, or some other way to specifically carve out time for professional development goals. Make very specific initiatives that you can tackle, and provide a way to summarize what you have learned. Utilize the power of the "Bold Boast" and get others involved in "keeping you honest", such as a blog post series or a dedicated space with review permissions for others.


Time:


No matter how we want to look at it, time is the single commodity that we can neither hoard nor profligately destroy. It happens when it happens, and it is constant. We can't bank it. We can't sell it. We can't give it away. We can't even really manage it. The only thing we can do with time is use it, and that comes with a hard truth: there is a finite amount of time and attention any of us can give. We all have the same 24 hours as everyone else. Certain things are non-negotiable. We have to breathe, we have to drink water, we have to eat, and we have to sleep. Everything else is a choice (and don't say sleep is a choice; trust me, you will not function well for long if you do not get it).


This is all an elaborate way for me to be the master of the obvious; to give your time and attention to a goal, you have to willingly divert that time and attention from something else. Don't talk to me about multi-tasking. It's a myth beyond the trivial. Yes, you can have a TV program on in the background while you are surfing the net, I get that, but high quality, deep learning level initiatives require focused attention. Music in the background? OK, maybe... depends on what you're doing ;).


My personal favorite way to carve out time is to use a Pomodoro system. For those not familiar, it's a method where you set a timer and, while that timer runs, you commit 100% of your time and attention to a very specific task. The more specific, the better. I like using a little app called "Pomodairo", which is an AIR app that runs on both my PC and my Mac. It also gives me some space to make notes and track sessions over time.


There's lots of literature about the Pomodoro Technique on the web, so I'll leave it to you to research exactly how you may want to implement it, but I'm going to talk about two methods that I like to use. One is what I call the "Full Flavored" approach, and the other is something I stole from Merlin Mann that I refer to as the "Procrastination Dash". When I do things I tend to enjoy, the "Full Flavored" version is what I use. When I'm working on things that are not so pleasant, I use the "Procrastination Dash".


Full Flavored: this is a classic Pomodoro schedule, covering two hours.


- Set the active "on task" timer for 25 minutes.
- Make a very audible bell that rings when the "on task" time is over.
- Set a break time for five minutes. Likewise, set a very audible tone to tell when the break is over.
- Create four cycles for your Pomodoro.


Net result: four twenty-five minute blocks of time that are "on task", with twenty minutes of "break time" (four five minute breaks).


This is what I usually use when I write blog posts, work on presentations, or want to make sure that I don't get "too absorbed" in something.


Procrastination Dash: this is a modified Pomodoro schedule, covering one hour.


- Set the active "on task" time to ten minutes.
- Make a very audible bell that rings when the "on task" time is over.
- Set a break time for two minutes. Likewise, set a very audible tone to tell when the break is over.
- Create five cycles for your Pomodoro.


Net result is that you will have five ten minute blocks with five two minute breaks, or 50 minutes on task with ten minutes of break time.

This is my "irksome task" method, when I know I have to do something that I need to do, but I'm really not looking forward to doing it.
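Since both variants are just cycles of work and rest, the two schedules reduce to data plus a little arithmetic; here's a quick Ruby sketch (the struct and names are mine, not part of any Pomodoro tool):

```ruby
# The two Pomodoro variants described above, expressed as data.
Schedule = Struct.new(:work_min, :break_min, :cycles) do
  # total focused minutes in the schedule
  def on_task
    work_min * cycles
  end

  # total break minutes in the schedule
  def rest
    break_min * cycles
  end
end

full_flavored = Schedule.new(25, 5, 4)  # classic two-hour block
dash          = Schedule.new(10, 2, 5)  # one-hour "irksome task" block

puts full_flavored.on_task  # 100 minutes on task
puts full_flavored.rest     # 20 minutes of break
puts dash.on_task           # 50 minutes on task
puts dash.rest              # 10 minutes of break
```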


Given time and practice, don't be surprised if you notice items you put on the "Procrastination Dash" pile start to make their way into "Full Flavored" sessions, because when you do this, onerous tasks either tend to go away completely, or they (over time) become easier to deal with, and therefore more engaging. Regardless of the method you choose, set the time, log the time, use the time, and finish what you start, as much as you can.


Accountability:


One of the things that prevents many of us from achieving a goal is the fact that there's really no pressing need to achieve it. That's one of the reasons "personal professional development" is called what it is. If my Director were to say "hey, we need to implement JMeter for performance testing, it needs to be online next week, and you are going to demo it for the entire company", that's plenty of external motivation. Net result, I'm very likely to hit that mark and do what's necessary to be successful. Sometimes, we get those mandates. Those are the easy ones.


Most of the time, though, we want to improve in certain areas, but we don't want it badly enough to really get over the hurdles. The main reason? There's no external motivator, and there's no real urgency. It may be important, it may be valuable, but it may be somewhat unpleasant at first, and it may take a lot of time to get where you want to be. If you are a super self-motivated individual, this next suggestion may not do much for you. For others, and especially myself, it works wonders.


The "Bold Boast" is exactly what it sounds like. It's me declaring I will do something, doing so publicly, and setting up prominent (and very public) reminders of what I am doing and why I am doing it. This series of posts about the "99 Things" eBook is the epitome of a Bold Boast. I announced to the virtual world, via my blog, that I was going to create a "workshop" for each of the 99 entries. I blogged that I would do it, I tweeted that I would do it, I posted on my Facebook page that I would do it, and I posted to Google+ that I would do it.


Why on Earth would I do such a thing?


Because doing this serves two purposes. First, it puts the world on notice that I'm going to do something. Second, it puts me on notice that I am "risking my reputation" if I don't follow through. The net result, almost universally, is that I follow through... at least somewhat. Many initiatives I have carried all the way to completion. With some initiatives I've gotten a certain distance, and then found I was stuck, or circumstances beyond my control prevented me from moving further at that point in time. Sometimes, we can make so many consecutive bold boasts that we lose track... and so does everyone else. Therefore, use this approach sparingly, for when you really want to tackle something that would otherwise be intimidating. The tradeoff is your fear of failure vs. the value of your online reputation. For some, that's not enough motivation. For me, personally, it's the best incentive I can use on myself!


Bottom Line:


Our time and attention are the most valuable things we possess, and ultimately, we are the ones responsible for deciding how they get used. Will we have to sacrifice in other areas to do this? Sure. To borrow from Merlin Mann again (paraphrasing)... "we need to dedicate time to focus on the good stuff, and we need to dedicate attention to make sure that the stuff we focus on turns out to be good". No matter what we opt to do to meet our personal and professional development goals, we have to realize it will take time, and often a significant amount of time. We either stretch that time out in duration, or we compress the time with intensity. Sometimes one will win out over the other, but balancing the two will be much better for us (both health-wise and sanity-wise) in the long run.

Wednesday, September 18, 2013

Adventures in "Epic Fail": Live From Wikimedia Foundation

Ahhh, there is nothing quite like the feeling of coming off a CalTrain car at the San Francisco terminus, walking onto the train platform, breathing that cool air, sighing, and saying "yeah, I remember this!"


Palo Alto is nice during the day, a great place to walk around in shirt sleeves, but there's something really sweet about the cool that eastern San Francisco gives you as you wind your way through SoMa.


OK, enough of the travelogue... I'm here to talk about something a bit more fun. Well, more fun if you are a testing geek. Today, we are going to have an adventure. We're going to discuss failure. Specifically, failure as it relates to Selenium. What do those failure messages mean? Is our test flaky? Did I forget to set or do something? Or... gasp... maybe we found something "interesting"! The bigger question is, how can we tell?

Tonight, Chris McMahon and Zjelko Filipin are going to talk about some of the interesting failures and issues that can be found in their (meaning Wikimedia's) public test environments, and what those obscure messages might actually be telling us.

I'll be live blogging, so if you want to see my take on this topic, please feel free to follow along. If you'd like a more authoritative real time experience, well, go here ;) :

https://www.mediawiki.org/wiki/Meetings/2013-09-18


I'll be back with something substantive around 7:00 p.m. Until then, I have pizza and Fresca to consume (yeah, they had one can of Fresca... how cool is that?!).

---
We started with a public service announcement. For anyone interested in QA related topics around Wikimedia, please go to lists.wikimedia.org (QA), and if you like some of the topics covered tonight, consider joining in on the conversations.


Chris got the ball rolling immediately, and we started by looking at the fact that browser tests are fundamentally different from unit tests. While unit tests deal with small components, where we can get right to the point of failure, browser tests can fail for any of a wide variety of reasons.


Chris started out by telling us a bit about the environment that Wikimedia uses to do testing.

While the diagram is on the board, it might be tough to see, so here's a quick synopsis: Git and Gerrit are called by Jenkins for each build. Tests are constructed using Ruby and Selenium (and additional components such as Cucumber and RSpec). Test environments are spun up on Sauce Labs, which in turn spins up a variety of browsers (Firefox, Chrome, IE, etc.), which then point to a variety of machines running live code for test purposes (whew, say that ten times fast ;) ).


The problem with analyzing browser test failures is trying to figure out what the root cause of failures actually is. Are the issues with the system? Are the issues related to timeouts? Are there actual and legitimate issues/bugs to be seen in all this?

System Failures

Chris brought up an example of what was considered a "devastating failure": a build with 30 errors. What is going on?! Jenkins is quite helpful if you dig in and look at the console output and the upstream/downstream processes. By tracing the tests, and looking at the screen captures taken when tests failed, in this case there was a very simple reason... the lab server was just not up and running. D'oh!!! On the bright side, the failure and the output make clear what the issue is. On the down side, Chris lamented that, logically, it would have been much better to have tests that could run earlier in the process to confirm whether a key server was up and running. Ah well, something to look forward to making, I guess :).

Another build, another set of failures... what could we be looking at this time? In this case, they were testing against their mobile applications. The error returned "unable to pick a platform to run". Whaaah?!!! Even better, what do you do when the build is red, but the test results report no failures? Here's where the console output is invaluable. Scrolling down to the bottom, the answer comes down to... "execute shell returned a non-zero value". In other words, everything else worked, but that last command, for whatever reason, did not complete correctly. Yeah, I feel for them, I've seen something similar quite a few times. All of these are examples of "system problems", but the good news is that all of these issues can be analyzed via Jenkins or your choice of CI server.
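That "non-zero exit" failure mode is easy to reproduce locally. Here's a minimal Ruby sketch (my own illustration, not Wikimedia's actual setup) showing how a CI shell step's verdict hinges entirely on the last command's exit status:

```ruby
# A CI "execute shell" step is judged by its final command's exit status,
# even if everything before it succeeded.

# A command that succeeds: exit status 0, so the step would be green.
system("ruby -e 'exit 0'")
puts $?.exitstatus  # 0

# A trailing command that fails: non-zero status, so the whole build
# goes red, even though no test reported a failure.
system("ruby -e 'exit 2'")
puts $?.exitstatus  # 2
```

This is why scrolling to the bottom of the console output is so often the fastest diagnosis: the verdict literally lives in that last status code.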


Network Failures


Another fact of life, and one that can really wreak havoc on automated test runs, is tests that require a lot of network interaction. The curse of a tester, and the most common (in my experience) challenge I face, is the network timeout. It's aggravating mainly because it makes almost all tests susceptible to random failures that, try as we might, we can never replicate. It's frustrating at times to run tests and see red builds, then go run the very same tests and see everything work. Again, while it's annoying, it's something we can easily diagnose and view.


Application Failures


Sauce Labs has an intriguing feature that allows tests to be recorded. You can go back and not only see the failed line in a log, but also watch the actual failed run. That's a valuable service and a nice thing to review to prove that your test can find interesting changes (the example Chris displayed was actually an intentional change that hadn't yet filtered down to the test, but the fact that the test caught the difference and had a play-by-play to review was quite cool).


There's an interesting debate about what to do when tests are 100% successful. We're talking about the ones that NEVER fail. They are rock solid; they are totally and completely, without fail, passing... Chris says that these tests are probably good candidates to be deleted. Huh? Why would he say that?


In Chris' view, the error that would cause a test like that to fail would typically not provide legitimate information. Because it takes such a vastly strange situation to make the test fail, and under normal usage it never, ever fails, such tests are likely to be of very little value and provide little in the way of new or interesting information. To take a slightly contrarian view... a never-failing test may mean we are getting false positives or near misses. In other words, a perpetually passing test isn't what I would consider "non-valuable"; it should instead be a red flag that, maybe, the test isn't failing because we have written it in a way that it cannot fail. Having seen those over the years, those tests are the ones that worry me the most. I'd suggest not deleting a never-failing test, but exploring whether we can re-code it to make sure it can fail, as well as pass.
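To make the "can this test even fail?" concern concrete, here's a hypothetical tautological check next to a fixed one (my own example, not from the Wikimedia suite):

```ruby
# A "test" that can never fail: it compares the expectation to itself.
def broken_title_check(actual_title)
  expected = "Main Page"
  raise "title mismatch" unless expected == expected  # bug: always true
end

# The fixed version actually uses the value under test.
def fixed_title_check(actual_title)
  expected = "Main Page"
  raise "title mismatch: #{actual_title}" unless actual_title == expected
end

broken_title_check("Garbage")    # passes silently... the red flag
begin
  fixed_title_check("Garbage")
rescue RuntimeError => e
  puts "caught: #{e.message}"    # the fixed test can fail, so it can inform
end
```

The broken version will stay green forever, which is exactly the kind of "perpetually passing" behavior worth auditing before deciding a test is merely low-value.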

Another key point... "every failed automated browser test is a perfect opportunity to develop a charter for exploratory testing". Many of the examples pointed to in this section are related to the "in beta" Visual Editor, a feature Wikimedia is quite giddy about seeing get ready to go out into the wild. Additionally, a browser test failure may not just be an indication of an exploratory test charter, it might also be an indication of an out of date development process that time has just caught up to. Chris showed an example of a form submission error that demonstrated how an API had changed, and how that change had been caught minutes into the testing.

So what's the key takeaway from this evening? There's a lot of infrastructure that can help us to determine what's really going on with our tests. We have a variety of classes of issues, many of which are out of our control (environmental, system, network, etc.) but there are a number of application errors that can be examined and traced down to actual changes/issues with the application itself. Getting experience to see which are which, and getting better at telling them apart, are key to helping us zero in on the interesting problems and, quite possibly, genuine bugs.

My thanks to Chris, Zjelko and Wikimedia for an interesting chat, and a good review of what we can use to help interpret the results (and yeah, Sauce Labs, I have to admit, that ability to record the tests and review each failure... that's slick :).

---
Thanks for joining me tonight. Time to pack up and head home. Look forward to seeing you all at another event soon.

Learn to Take Effective Notes: 99 Ways Workshop #84

The Software Testing Club recently put out an eBook called "99 Things You Can Do to Become a Better Tester". Some of them are really general and vague. Some of them are remarkably specific.


My goal for the next few weeks is to take the "99 Things" book and see if I can put my own personal spin on each of them, and make a personal workshop out of each of the suggestions. 


Suggestion #84: Learn to take effective notes and document your testing in different ways - models, mind maps, sketches and other approaches will all help you gain insights and new perspectives on the system you are testing.


This particular post is timely, mainly because of a recent event that took place with the Australia/New Zealand chapter of Weekend Testing. In that session, we focused on test planning and ways we would test a product with a limited amount of time and resources. We broke up into different groups and tackled the problem in a variety of ways. That session, its artifacts, and the variety of approaches used illustrate this principle beautifully. We all used the tools that helped us quickly and effectively gather our thoughts, capture our ideas, and make decisions based on that information.


Workshop #84: Experiment with a variety of methods and see how they can help you approach issues and planning. Practice a staged model of information gathering and work through the steps of Capture, Analysis, Practice, Synthesis, Synopsis. 


It's one thing to say that there are a variety of techniques we can use, but we don't know how powerful they can be until we put them into practice. The benefits of mind mapping seem great until you are in a meeting and decide you want to mind map, but haven't practiced the skills. Whiteboard sketches are helpful, and can be a bridge between regular notes and the more modern mind map, but how do you capture your results? Traditional note taking can be effective at times, but you can find yourself with too much information, having to discard large swaths to get to what you need.


Add to this the fact that people differ in how they process information, how they choose to practice what they do, and how they store information for different purposes. Procedural information on how to perform a task is different from higher-level conceptual details, or from logging data that can help pinpoint errors.


I personally have a three-tiered method when it comes to note taking and looking at ideas. My first line of defense is what I typically call my "Sticky note on steroids". This is using the built-in app "Stickies", and I use it because it is genuinely the fastest way to grab anything I am working on. I frequently grab items from IRC, Skype or Screen sessions when I pair with other testers and programmers. Most of the people I work with know what I mean when I say "hang on, I'm dropping this to Sticky". On some days, the sticky can get to be as much as 15 typewritten pages long. 

I call this my first line of defense because, typically, this is where I capture what might be one-offs or items that are going to be used in the short term. As I go through testing activities, I will look to the Sticky first to see how often I reference something. I use font color, bolding, and other techniques to "heat map" the information I gather and use more frequently. If an item sits in black text in the sticky long enough, I know it was useful for a little while, but either I haven't come back to it or I decided it wasn't as valuable. This typically gets pushed down the pile or discarded after a time. The information that appears near the top, bolded, and in any variety of colors (but usually in red) means that I have something that needs to be more permanently captured.


The next line is what I call my "private wiki". Working at Socialtext, there are a variety of workspaces and other tools I use, some more private than others. I have a dedicated wiki space where I create full how-to guides on certain topics. Sometimes the things I work with generate short, potentially interesting, but limited test ideas. This private wiki allows me to experiment, see what makes sense and what doesn't, and lets me play with parameters and consider approaches that might make for decent maxims to share. Again, the more often something is edited and refined, the more likely I'm working with something other people may find valuable.


The third line is the "public wiki", the group workspaces that we all share. We are all encouraged to actively go through and add to the "HowTo" documents. Additionally, if we find outdated information or details that do not line up with current reality, we are welcome to review, confer, supersede, and even archive the older data. We refer to this as "gardening the Wiki", and it's everyone's responsibility to make sure that we are all sharing relevant and timely information.


Bottom Line:


The goal of all of these methods is to work through some simple steps. The cycle of Capture -> Analysis -> Practice -> Synthesis -> Synopsis is ongoing, and circular. Information can get stale rapidly if it is not consistently examined and the approaches applied. For many, the capture part is the most important, but all steps need to be examined and worked through to have the best effect. We can capture huge amounts of data, but if we don't actively practice the other steps, capture will be almost meaningless. Find what works for you, and make a habit of getting better.



Monday, September 16, 2013

Reduce Biases: 99 Ways Workshop #83

The Software Testing Club recently put out an eBook called "99 Things You Can Do to Become a Better Tester". Some of them are really general and vague. Some of them are remarkably specific.


My goal for the next few weeks is to take the "99 Things" book and see if I can put my own personal spin on each of them, and make a personal workshop out of each of the suggestions. 


Suggestion #83: Reduce biases & unintentional blindness. - halperinko


We all come into interactions with others carrying our own world view. This isn't said as criticism; it's a fact of life. Each one of us has an exquisitely developed map of the world, drawn over a lifetime. Every experience we have had, or will have, etches lines, footnotes, and guideposts onto that map. But just as traveling colors our perceptions about where we will go and what we will do, so our mental map is colored by where we have been and what we have seen in the past. At the root, this is where bias comes into play. 


Bias is seen as a nasty word. We recoil automatically when we hear that someone has a "bias". We automatically associate it with much larger issues, like bigotry, racism, or sexism, and many see them as one and the same. They are not, but biases are still detrimental to forming an objective view of situations, and to our understanding of them.


Inattentional blindness is something that affects all of us, and we like to think that we can see it when it happens. Many of us have seen the video with the "Moonwalking Bear", but it goes beyond that. At some point, cognitive overload takes place, and the fact is, we will miss something, no matter how well tuned we are to "see what others don't see".


Workshop #83: Study up on bias and how it can affect you. Look for ways that you can "un-bias" whenever possible. Consider avenues and ways of examining a product that can limit inattentional blindness, or at least help make one aware of it. 



Bias is a huge topic. There are numerous biases that we can fall into without even realizing it. The most typical biases that will affect us are what are referred to as the "cognitive biases".


Below are some quick examples of common cognitive biases that come into play when testing software.




- Attention Bias - we focus on something to the extent that we miss seeing something else (i.e. inattentional blindness).

- Confirmation Bias - we see things in a way that tends to reinforce our own views.


- Consistency Bias - if something has happened before, we think it's likely to happen again.


- Distinction Bias - we tend to see things as desirable when viewed together vs. when they are viewed independently.


- Illusion of Control - we create a mental model of how things work, and we get enough confirmations over time that our model is "correct", even if it is not. This becomes an issue when we try something and we do not get the results back that we expect. Our entire model can fall apart at this point, and we'll need to start again.


For a much larger list of biases, check out Wikipedia's "List of cognitive biases" article.


Understanding biases is important, and can be the strongest aid in making sure we don't fall prey to them in the first place. It's important that we challenge our beliefs, that we try our best to stand on "scientific ground" wherever possible, that we look at situations dispassionately, and that we consistently ask questions and do all we can to avoid assumptions.


The "illusion of control" is one that can hit us at any time, so vigorously try to see if you can disprove an idea, rather than seek to prove you are right. If you are operating on a faulty mental model, this will likely help you find out faster than if you try to test things that support your model.


If we are concerned about attention, we need to practice the ability to shift focus, or de-focus, to see a larger picture. We need to remind ourselves to change focus within an app, and not get too narrow or consider steps only within a "perfect path" framing. We also benefit from seeing how many systems are interacting at any given time.


For Confirmation Bias, make a personal rule that for each avenue you choose to explore, you will consider an alternate and opposite view. If you are asked to see if something works, start out by trying to see how to make it fail. If you have a belief that something will happen if you do X, then add Y into the mix and see if the outcome is the same. Deliberately try to discover other avenues where something may be used, and steer clear of "mandates" whenever possible (some mandates you may have no control over).
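To make the "try to make it fail" rule concrete, here's a minimal sketch. The `validate_username` function and its rules are entirely hypothetical, invented just to illustrate the habit of pairing every confirming check with deliberate disconfirming ones:

```python
# A hypothetical validator we've been asked to confirm "works":
# usernames must be 3-12 characters, letters and digits only.
def validate_username(name):
    return 3 <= len(name) <= 12 and name.isalnum()

# Confirmation-bias testing stops here: one input we expect to pass.
assert validate_username("tester42")

# Countering the bias: actively try to make it fail, including
# edge cases and the "add Y into the mix" variations.
assert not validate_username("ab")         # too short
assert not validate_username("a" * 13)     # too long
assert not validate_username("bad name!")  # space and punctuation
assert not validate_username("")           # empty string
```

The point isn't the validator itself; it's that the failing checks outnumber the passing one, which is the opposite of what a confirmation-biased test list usually looks like.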


Bottom Line:


Bias is everywhere, and we all fall prey to it from time to time. We can't completely wipe bias out of what we do, but we can recognize it exists, and we can ask ourselves frequently if we are potentially being influenced by it. By paying attention, and considering that we might be missing something, we can then counteract it, and stop it from progressing. It takes time and practice, but awareness is the first step, so become aware, and then act accordingly :).