Showing posts with label cucumber. Show all posts

Monday, October 12, 2020

PNSQC 2020 Live Blog: Iron Chef Cucumber: Cooking Up Software Requirements That Get Great Results with Chris Cowell


This is our first track talk for the day. For those who are familiar with Cucumber, it is both a cool way to do requirements outlining and a top-heavy tool for writing tests with an additional plain-language wrapper. Note that both of those statements are accurate, and neither is meant as a positive or a negative. I've seen Cucumber used well, and I've also seen people use it in sloppy, careless ways. Chris Cowell seems to agree with me: in Chris' view, a bad Cucumber implementation is worse than no Cucumber implementation.

Cucumber is a free, open-source tool built around behavior-driven development: basically, how does the user interact with the program, and how can we determine that the proper behavior is seen?


Cucumber is based on scenarios, and each scenario contains steps. Steps are typically written in a format that emphasizes statements set up in a "Given, When, Then" series. Think of something like:

Given I want to log into the system

When I enter my username and password

Then I am on the welcome page

Of course, this is just a series of statements. They don't mean anything unless they have actions behind them. Cucumber has a number of harnesses and frameworks that it can interact with; Ruby and Java are perhaps the best known, but there are implementations for numerous other languages.
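To make the idea concrete, here is a plain-Ruby sketch of how Cucumber wires plain-language steps to code: each step pattern is registered against a block, and running a step means finding the pattern that matches its text. All the names here are hypothetical; a real suite would use the cucumber gem's Given/When/Then DSL rather than this hand-rolled registry.

```ruby
# Minimal step registry: pattern => block, mimicking Cucumber's matching.
STEPS = {}

def step(pattern, &block)
  STEPS[pattern] = block
end

# Step definitions for the login example above (logic is illustrative).
step(/^I want to log into the system$/)    { @page = "login" }
step(/^I enter my username and password$/) { @page = "welcome" if @page == "login" }
step(/^I am on the welcome page$/)         { raise "not on welcome page" unless @page == "welcome" }

# Running a step = find the matching pattern and call its block.
def run_step(text)
  pattern, block = STEPS.find { |p, _| p.match?(text) }
  raise "undefined step: #{text}" unless pattern
  block.call
end

["I want to log into the system",
 "I enter my username and password",
 "I am on the welcome page"].each { |line| run_step(line) }
puts "scenario passed"
```

The point isn't the toy registry itself; it's that the plain-English sentences only mean something because each one resolves to executable code behind the scenes.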

Rather than focus on the underlying code, let's consider Cucumber requirements specifically (and the underlying Gherkin language). There are a variety of ways to write scenarios, and those scenarios can be made complex. Ideally, we should avoid that. The simpler and more readable the requirement, the less likely that scenario will be misunderstood. Adding too much logic or too many steps makes it difficult to determine whether the requirement is actually being met.

Cucumber is also not meant to be an exhaustive system or set of testing options. Plain English sentences are the goal. Look above at the simple login example I created. I could format it to enter multiple usernames and passwords. Is that a good series of tests? Possibly. Is that a good way to test behavior-driven requirements? Probably not. It's also better to focus on what is being done, rather than the exact steps or movements necessary to cover every permutation.

Personas are useful and can help make the descriptions more meaningful to all stakeholders. By giving names and personalities we can see how different users may require or expect different behaviors. 

Another anti-pattern is copying and pasting between scenarios. If done correctly, specific statements should be reusable across as many scenarios as possible. The key is to keep statements as generic as possible, and when generic doesn't cut it (because there are terms specific to a scenario), to make the general statement as useful as possible without requiring multiple changes.
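One common way to keep a step generic is a capture group, so a single pattern serves a whole family of near-identical scenarios instead of copy/pasted variants. A plain-Ruby sketch of the idea (the step text and field names are hypothetical):

```ruby
# One generic pattern replaces a copy/paste family of near-identical steps:
# the quoted value and the field name are both captured from the step text.
FIELD_STEP = /^I enter "([^"]*)" in the (username|password) field$/

def parse_step(text)
  m = FIELD_STEP.match(text)
  m && { value: m[1], field: m[2] }
end

puts parse_step('I enter "terry" in the username field')[:field]   # → username
puts parse_step('I enter "s3cret" in the password field')[:value]  # → s3cret
```

With this shape, adding a new scenario means writing a new plain-language sentence, not a new step definition.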

Scenarios can also share general statements that occur frequently. If you have scenario steps that are identical across different scenarios, it makes sense to extract them to a single location using the "Background" keyword. The steps in a Background run first for each and every scenario listed beneath it.
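As a sketch (the feature and step names here are made up), a Background pulls the shared setup out of each scenario:

```gherkin
Feature: Account pages

  Background:
    Given I am logged in as a registered user

  Scenario: View profile
    When I visit my profile page
    Then I see my account details

  Scenario: View order history
    When I visit my order history page
    Then I see my past orders
```

Both scenarios get the login step for free, and if the login flow changes, only the Background needs editing.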


This all might seem a bit top-heavy, and for some environments, especially well-established ones, it may not make sense to wire up a full Cucumber test framework. Still, even if you don't actually run these Gherkin statements as literal automated tests, it is useful to think through the phrasing and actions associated with Cucumber statements. These phrases and this syntax may prove helpful when writing your requirements. Just using the base Cucumber syntax for writing out requirements can be valuable on its own; if you want to go the rest of the way, you can then write tests that exercise each requirement statement.


Wednesday, October 9, 2013

QA Craftsmanship, Unite - Live From PayPal, it's yet another Meetup!!!


Live from San Jose, it's... me :).

This time, I'm excited to say that there's a new group that's formed in the Bay Area called "QA Craftsmanship". It's meant to be a counterpart to the Software Craftsmanship movement, and it's looking to be a way to bridge the gaps between the programming and testing realms. That's the theory, in any event. Whatever it's called, I'm happy to see dedicated testers who are looking at ways to up their game, and the fact that they are doing it here in the Bay Area is a double bonus!

This one came together quickly, as I got the invitation to join the group on Saturday, so here's a quick run-down of what we are talking about tonight:


The Althea Studies Platform is a smartphone and web based SaaS platform for creating longitudinal healthcare studies, i.e. studies that collect data over a period of time. Study administrators can define studies on Althea's web site and deploy the studies as native applications on the iPhone and Android and also on the web for use by study participants.

Learn how automated testing gets down with AltheaHealth's mobile tracker (Android/iOS) using Cucumber & Calabash

Truth be told, I've had much more experience with desktop and traditional web apps than with mobile, and Calabash is totally new territory for me, so I'm curious to see where this talk goes.

Melvyn is starting out with a quick intro to Cucumber, explaining how the Gherkin interacts with the step definitions as defined in Selenium, Capybara, etc.

Melvyn likes to use some basic rules when it comes to writing tests. One example he points out is "If I can click it, it must be there"; put more directly, don't code specifically to items on the screen, code based on what you want to do. Defining things by their actual structure (buttons, links, etc.) means that if those elements change or are removed, feature files will need to be modified. I like this approach, and think it could save a lot of editing in the future.
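To sketch the contrast (the steps here are hypothetical, not Melvyn's actual examples), the first scenario is welded to screen structure, while the second describes intent and survives UI changes:

```gherkin
# Brittle: the feature file names specific elements
Scenario: Search (element-driven)
  When I click the "search" text field
  And I type "cucumber" into the "q" input
  And I click the "Go" button

# Durable: the feature file names the behavior
Scenario: Search (behavior-driven)
  When I search for "cucumber"
```

If the "Go" button becomes an icon, only the step definition behind "I search for" needs to change, not every feature file.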

At this point, Mishal Shah picked up the talk and carried the discussion forward. Using Jenkins, we can use a variety of plug-ins, and in this presentation, the Android Emulator can be set up as one of those plug-ins. You can also plug your actual phone into the Jenkins server if it's available, or, via port forwarding, you can remotely connect your actual Android device to the Jenkins server (ok, that's a trip, I've gotta' try that out :) ).


Calabash-android basically lets you run Cucumber features on an Android device or emulator. It is controlled from your command line, and as you run your tests, you can see in real time the steps run on the Android emulator. The Feature files and test steps work just like we would expect Cucumber to do on our desktops. 


Again, just as with their web counterparts, remember to make a distinction between step definitions that drive behavior and those that drive interactions with elements. All of the options we are familiar with from Cucumber, i.e. creating hook files, making macro commands, and the other features we either love or hate about Cucumber and its underlying structure, work very much the same way in Calabash.


One interesting thing with Calabash is that they are using Frank, which lets users look up element IDs and the like. Frank will let the user get every element ID currently loaded. Key takeaway and urging at this point: don't look for text, look for element IDs (this will also save me from having to change tests if the text changes later).


Commands like query and touch, along with a variety of others, allow the user to directly interact with elements to enter text, select items, etc. This talk is also being developed into a hands-on workshop in the next few weeks. I'm hoping I can be there, but if not, I plan to spend some more time in the calabash-android GitHub repository.

One other interesting tidbit (this may be common knowledge to many others, but hey, I'm new here) is that Calabash has a very limited scope as to what the framework will interact with. It's limited to the APK file and what it will generate. You can't jump over to another app; that's out of scope. It's a security measure, so that you can't hijack other applications. I find that interesting.

Oh, and while I'm on the topic of "interesting", I took some time tonight to give out some "Ministry of Testing" stickers. For those curious, though I doubt it's the first one in the USA, I think this may well be the first one out in the wild in the Bay Area other than mine ;):


Thanks for joining me tonight, I hope this was interesting (I certainly found it to be :) ).

Saturday, September 22, 2012

#Agilistry or #PNSQC: Come Hear Me Speak!

This is a little self-serving, but hey, I did the work on the papers and presentations to develop the topic, and I want people to hear it. That's not too much to ask, is it :)?

Next Thursday, September 27th, 2012, I will be giving a talk at Agilistry Studio in Pleasanton, CA on "Getting the Balance Right". This is an extension and follow-up/reboot of the talk I gave at CAST 2012 in San Jose.

This talk is being sponsored by the Agilistry Meet-Up Group, and there's still a spot available, so if you would like to attend, go and do your Meetup magic and come by :).

Some may be saying "that's great, but I won't be in the Bay Area on the 27th". Well, for those of you who will be attending PNSQC in Portland, Oregon October 8-10, 2012, you'll get a chance to hear me deliver this talk there, too, albeit in a modified format.

I am confirmed to be one of the presenters during the Poster Paper presentations, so I'll be delivering an "elevator pitch" of these ideas plus examples based on questions and feedback. Consider the Agilistry talk the full presentation, and the PNSQC talk a more tailored and dynamic presentation that I'll be delivering several times. There's also a chance that I may be able to deliver the full talk at PNSQC; we'll see how the final schedule shakes out.

Also, I will be participating in PNSQC's "Birds of a Feather" talk series on Monday afternoon about testing challenges and specifically how Miagi-do uses them to help develop and mentor testers, and how you can use the same techniques when creating test mentoring for your team (formally) or to improve and develop your own craft along with a few friends.

Hope to see you there, whichever "there" happens to work for you!

Tuesday, September 4, 2012

Book Review: ATDD By Example

ATDD is a relatively hot topic that has been getting more and more coverage both in the press and the blogosphere. I also have the benefit of knowing, and having collaborated with, the author of "ATDD By Example" over the past few years, so I could make this the shortest book review ever and just say "Markus Gärtner is my bud, he's awesome, his book is awesome, so go buy his book!" For those of you out there who suffer from "TL;DR", there ya' go, easy as that.

For the rest of you, you want to know what I really think, and I'm going to tell you what I really think. ATDD is a neat subject, it is a theoretical thing of beauty when it's explained at its simplest level, but what is it truly, and how does it work in a practical sense? Does it work in a practical sense? How can an everyday average tester involved in everyday testing work with this? And do I have to know Cucumber, RSpec and Ruby to have this book be worthwhile?

First and foremost, Markus explains the structure and the goals of ATDD very well. He brings his own experiences and makes examples based on things that exist in the real world, and while the examples are simple applications, generally speaking, they have enough meat to show how they actually work and demonstrate realistic issues that real developers and testers will actually face while trying to use ATDD.

Part I lets the tester follow along as Markus steps through a sample application. Many testers will chuckle when they see exactly what application he chooses; it's famous among the Weekend Testing crowd in particular; ParkCalc!!! He takes us through a very real and applicable workshop style approach, where testers, developers and the product owner determine the requirements, implement the requirements, and then create the tests, using Cucumber and Ruby for this first example. We see first steps, mistakes made, refactoring, and expansion of the application and requirements as we learn more and understand more of the domain, plus ways that we can recognize areas that we can reuse.

Part II takes us through a more elaborate example, testing the traffic light rules in Germany, this time using Java and FitNesse. By taking two different approaches and two different development environments, Markus makes the book relevant to multiple audiences, so that, instead of focusing on the tooling and the language, the reader focuses on the practices and methods used to make ATDD work.

Part III focuses on a number of topics that can help the everyday tester, developer, or project manager get more out of ATDD. By stepping away from the tooling approaches of the previous two sections, Markus helps answer questions and deal with issues that are universal: developing examples to help drive the development process; formatting those examples and leveraging them with pairwise testing, domain testing, and boundaries; collaborating with the development team and providing testing acumen and input; and making our automation a literal analog of the requirements and specifications. In addition, taking the time to separate the test details from the data that drives those tests (variables, keywords, etc.) can help make the tests we develop more robust, capable, and long-lived.

Three appendices are provided, each covering the basic details of a common ATDD testing framework: Cucumber, FitNesse, and Robot Framework. The reader will need to reference other documentation to get the most out of these tools, but each appendix will get the user up and running with the basics of all three approaches.

Beyond the examples, the main point that everyday testers will come away from this book knowing is that Acceptance Test Driven Development is Software Development, and they play a critical part in that process. If they do any type of test automation, they are developing software, and they should use the same practices, methods and methodologies that software developers use. Even if you are not specifically a coder, or you consider your skill set rudimentary, there is a lot to consider here that will help you get closer to understanding the development process and how you can contribute to it in your role as a tester.

ATDD by Example is a book that rewards repeated reading. It's likely that you will get one message the first time through, and after practicing with the examples for a while, you will give it a second pass and pick up many new things you didn't catch the first time. In short, ATDD by Example is a book that you will likely refer to on a regular basis until you get the concepts hard-wired. Even then, there will be plenty of interesting tidbits that you will catch as you read through it again. If you'd like to be more "quick on the uptake", make sure to read Part III a few times, as it encapsulates much of the philosophy and many of the methods that will be most helpful to testers and developers looking to implement this approach.

Again, I could have saved you a lot of time by having you just read the first paragraph, but hey, now you know why I said it.

Wednesday, June 20, 2012

My Current Dilemma: Resolved (Sorta)!

Yesterday, I mentioned that I had some issues with making my Cucumber test runs totally random when I wanted them to be. I also asked the test community out there to chime in with their thoughts and their approaches to handling this. I received a number of replies through Twitter and email, and it even got the attention of one of my team's developers, who, when I told him what I was aiming to do, did some digging of his own... and here's what he found:



Basically, if you add this to your env.rb file, it will shuffle your feature file.

I've tested this, and it does indeed work. No tweaks, no file name or directory changes needed. The only thing is that it's either on or off, so my next step is to see what it will take to expose this as a flag to rake. It would be sweet to just say something like:

rake cucumber:tag:machine:random


to have randomized runs, and leave off the random tag to have them run in the standard, alphabetical order.
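A plain-Ruby sketch of the flag idea (the file names here are made up, and in a real env.rb the shuffle would sit inside Cucumber's configuration hook rather than at the top level): gate the shuffle on an environment variable that the rake task sets.

```ruby
# Stay in alphabetical order unless RANDOM_ORDER is set, e.g. by a
# "rake cucumber:tag:machine:random" task that exports the variable.
FEATURE_FILES = ["features/a_login.feature",
                 "features/b_search.feature",
                 "features/c_checkout.feature"]  # hypothetical names

FEATURE_FILES.shuffle! if ENV["RANDOM_ORDER"]
puts FEATURE_FILES
```

The rake task then just sets RANDOM_ORDER before kicking off the cucumber run, and the default invocation keeps the deterministic order.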


I'm not 100% of the way there, but this is pretty good so far :).

Tuesday, June 19, 2012

My Current Dilemma: Randomizing Cucumber Tests

I'm working on a situation where I am trying to describe how to balance out activities such as Acceptance Test Driven Development, GUI automation and exploratory testing, and explaining where we can take ideas from each and work with them to help us get a balance between the three. This is the subject of a talk I'll be giving and am currently writing (which also explains my being more quiet than usual on here :) ).

One of the ideas I discuss, and that I actually do, is that I put a little bit of "What if" into my acceptance level tests. I have a suite of tests that I run to check and verify basic functionality for every build that we make and push to our respective machines (development --> demo --> staging --> production). All of my tests are written so that they can run on any of these environments. These are Cucumber tests, running on top of Ruby in a Rails environment.

One of the limitations of running Cucumber as an acceptance testing tool is that the tests all run in alphabetical order within the feature directory. This assumes you run your tests using rake; my setup is configured so that I can run a suite of tests with "rake cucumber:[@tagName]:[machineName]". For some added analysis and review, I currently have a bash wrapper script that allows me to tee the output to a log file, parse the log file for errors, set up a new suite of tests, and then rerun them. The goal is to get to where I can run these tests in a single pass and have no errors (legitimately, of course :) ). Usually, though, that doesn't happen. I typically have to make three passes before all of the tests pass and there are no more errors to parse in the log file.
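The core of that wrapper is just log parsing. Here is a self-contained sketch of the idea; the log lines and file names are fabricated stand-ins, not real suite output, and a real run would produce the log via something like "rake ... 2>&1 | tee run.log".

```shell
# Simulate a cucumber log, then pull the failed-scenario lines back out
# to build the rerun list for the next pass.
log=run.log
printf '%s\n' \
  'Failing Scenarios:' \
  'cucumber features/login.feature:12 # Scenario: bad password' \
  'cucumber features/search.feature:7 # Scenario: empty query' > "$log"

# Each failure line starts with "cucumber "; field 2 is file:line.
grep '^cucumber ' "$log" | awk '{print $2}' > rerun.txt
cat rerun.txt
```

The rerun.txt list then feeds the second pass, and the loop repeats until it comes back empty or the remaining failures prove genuine.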

In a pinch, that's OK, but the nerd in me doesn't like the fact that I have to tweak things like this. This caused me to start asking "What if"... What if there's a dependency I'm not aware of? What if there's something about the way that the tests are run and the way that we authenticate certain accounts that might leave "rat droppings" in the state condition? What if I tweaked with the order of the tests? What if I totally randomized the test order every run?

I've been able to address the first three, but the fourth has been a bit of a mystery. How can I set up a process where, without renaming directories or files every time, the sixty or so scenarios that I run are always in a different order? I've seen different ideas, but most of them require assigning a weight to each test (which isn't really random), or setting up some kind of permanent table with file name and line number to designate the test to be run (again, it has to be manipulated each time, and adding tests creates overhead), or using bash's $RANDOM option, but that gets me into bash gymnastics, which isn't really my goal here.

So I ask my fellow testing friends, especially those who use Cucumber and Rails, what do you do?

Wednesday, May 9, 2012

Cucumber Nerds: Can Color and Pipes Coexist?

I think I have gotten to a point where I have done all I can do to try to figure out a problem, and I'm coming up empty handed, so to my wonderful tester friends out there, especially those well versed in Cucumber, I need your help.

In a perfect world, I would come up with hooks to handle weird exceptions, and I would also have clean running tests every time. However, this is not a perfect world, and right now, getting the tests to reliably work is my priority. Also, I want to be able to rerun tests that fail and see if they are spurious failures or if there's a real problem.

How did I do this? I made a shell wrapper that tees the console output to a rerun file; I then capture the failures, recreate a test suite of just the failed scenarios, and run them individually. Repeat until we get a 100% clean run or we confirm that something is genuinely broken.

So what's my problem? The lovely red/green/yellow color scheme that shows me what's happening at a glance disappears, replaced with a monochrome representation of all the steps. Do the tests run? Sure. Does the rerun whittle down my errors until I get to the essential problems? Yep, it does. I just want to have my cake and eat it too, if possible.

If I run without the rerun script (i.e. sans tee):


If I run with the rerun script (i.e. with the tee):


I get that this is because the system is trying to be smart and not emit colors, since the output is going to a pipe rather than the traditional console. My question is: how can I make my system stop being so smart? The -c (force color) option doesn't cut it.
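For anyone wondering what "trying to be smart" means mechanically: programs check whether stdout is a terminal before emitting ANSI codes, and through a pipe (like tee) that check fails. A quick shell illustration:

```shell
# Run this bare and then piped through cat to watch the answer flip.
# [ -t 1 ] asks whether file descriptor 1 (stdout) is a terminal.
if [ -t 1 ]; then
  echo "stdout is a terminal: safe to emit ANSI color"
else
  echo "stdout is a pipe: plain text only"
fi
```

One common workaround (which I haven't confirmed against this particular rerun setup) is to wrap the command in a pseudo-terminal using a tool like unbuffer, from the expect package, or script, so the isatty check still succeeds on the far side of the pipe.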

Monday, January 30, 2012

Book Review: The Cucumber Book

One of the cool things about Pragmatic Publishing is that they make it possible to get your hands on beta books, meaning you get the chance to see a book as it's actively being developed. The Cucumber Book was one of those books, and as such, I've had the benefit of reviewing it over the past several months and watching it grow into the book it is today (now available in print form).


Most people who have a passing understanding of Test Driven Development or Behavior Driven Development have likely heard of Cucumber. It's a language that allows anyone who wants to define tests and requirements for applications to do so in plain English (or another supported language). In truth, Cucumber isn't really a programming language at all, but a symbolic phrase library that maps to various underlying commands and blocks of code (represented in Ruby in this book, and referencing a variety of tools including Capybara, RSpec, and others).


Matt Wynne and Aslak Hellesøy have put together a very readable and focused text that helps the user get familiar with the basics of the language. The book also focuses the reader on understanding the underpinnings needed to create expressions that work with their respective technologies. If you are a tester and you want to take advantage of this framework, there is plenty in here to keep you busy. The Cucumber Book starts out by explaining what Cucumber is and the niche it is meant to fill (specification-based tests and requirements). If you are a developer, there is likewise plenty in here to keep you interested, too.


The Cucumber Book is heavy on examples and on showing how those examples work. Yes, for those who want the syntax and language-specific details of Cucumber, that material is covered. What is also covered, and covered well, is the behavior-driven development approach needed to create tests that work effectively. Along with creating feature files and steps for those feature files, the underlying step definitions also have to be coded. Not only do they have to be coded, they have to include assertions that will confirm whether the step passed or failed, and why.


Since the book is primarily about Cucumber, there is a large section that covers Cucumber fundamentals, including basic Gherkin (the underlying syntax that Cucumber uses), expressive options such as Scenario Outlines, data tables, doc strings, and tags, and dealing with some of the pain points seen in your tests (such as "flickering scenarios", where tests pass some of the time but fail at other times). Beyond using Cucumber to define steps and write step definitions, the third part of the book deals with applying Cucumber to a number of different technologies: working with various databases, testing RESTful web services, working with Rails, running tests and using Capybara to simulate common browser actions, and many other options that may come into play in your everyday testing life.


Bottom Line:


If you have ever been interested in looking at Cucumber and your testing environment is built around Ruby, then this will be an ideal book to use. If you are interested in deploying Cucumber in another type of environment, such as testing with Java or .NET, many of the ideas in this book will also carry over, but have a look at “The Secret Ninja Cucumber Scrolls” by David de Florinier and Gojko Adzic. It provides information about how to apply Cucumber to those environments. Regardless of your particular focus and environment needs, for a practical and effective book for learning and using Cucumber in a meaningful way, The Cucumber Book is an excellent addition to any tester or developer’s library.

Monday, December 19, 2011

Maintenance Mode

As I was standing with our design director, talking about some of the new features planned in the coming weeks and months, I was excited about the prospects and intrigued by what they would mean for my testing, but at the same time I had a sinking feeling... I realized, "Oh no, I have a whole bunch of tests that are going to be effectively broken by these changes."

This is a typical situation. I can count on the fact that I will be doing some tweaks and changes on the sites that I work with, and that my scripts are not going to be evergreen. At the same time, it can be frustrating to have to gut whole sections and retool scripts. These are times when I must admit, I've been tempted to just throw up my hands and say "Gaah, what's the point?!"

Here's where I want to ask the other Lone Testers out there... how do you deal with "Maintenance Mode"? I understand when you are working as a tester and you have the advantage of an automator or an automation team, but what do you do when *you* are the automator, and the exploratory tester, and the regression tester, and the fill in the blank tester? I don't have the opportunity to hand off the maintenance work, and when I do the maintenance work, I'm not testing.

I will say that I find a lot of the rework options, and the ability to create "macros" in the selector.rb file, to be very helpful during maintenance. I like this option because I can keep the language of the steps at the level of business rules and say "look for the following", while having all of the individual steps grouped together. The biggest problem with that approach, though, is that I often have to "unfactor my refactoring" and plug in the original group of steps to see what's no longer working. Don't get me wrong: each time I do this, I learn a little bit more about which refactored steps are actually effective and durable, and which ones I need to reconsider and, well, refactor again.
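A sketch of the macro idea (the method and step names below are invented, not from my actual selector.rb): the business-rule step delegates to one grouped helper, so when the UI shifts, only the helper needs editing rather than every feature file.

```ruby
# Stand-ins for real page-driving calls (a real suite would hit Capybara).
def fill_field(name, value)
  (@form ||= {})[name] = value
end

def press(button)
  @pressed = button
end

# The "macro": one business-rule action that bundles the fiddly steps.
def complete_signup(user)
  fill_field("username", user)
  fill_field("password", "hunter2")
  press("Sign Up")
end

complete_signup("terry")
puts @pressed
```

When the signup page changes, complete_signup is the single place to edit; the flip side, as noted above, is that debugging sometimes means temporarily inlining those grouped steps again to see which one broke.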

I am serious, though: for those who often find they have to make large-scale changes to their scripts, how do you balance your workload and focus so that you can do both? Or do you just let everyone know, "I can do X, or I can do Y, but if you think I can do X and Y at the same time, you're nuts!"? Right now, I'm doing the latter. I'm open to suggestions, seriously :).

Tuesday, November 29, 2011

Cleaning Up Dirty Tests

One of the things I've been trying to master is the ability to make my tests stand on their own and not have dependencies on other tests. This seems like a standard thing to do, and we often think we are doing a good job on this front, but how well are we doing it, really?

As I was looking at some recent tests with one of my co-workers, I was really happy that they were passing as frequently as they did, but my elation turned to frustration the next build when a lot of tests were failing for me. The more maddening factor was that the tests that were failing would all pass if I reran them.

Wait, a little explanation is in order here. In my current work environment, I use Cucumber along with Rspec and Ruby. I also use a Rakefile to handle a number of tagged scenarios and to perform other basic maintenance things. When I run a test the first time, I get an output that tells me the failed scenarios. As a way of doing retesting, I tee the output to a rerun file and then I run a shell script that turns the failed scenarios into individual rake cases.

In almost all of my test runs, if I ran a suite of 50 scenarios, I'd get about 10 failures (pretty high). If I reran those 10 failure cases, on the second try I would get anywhere from 8 - 10 of them to pass. More often than not the full 10 would pass on a second run. As I started to investigate this, I realized that, during the rerun, each test was being run separately, which meant each test would open a browser, test the functionality for that scenario and then close the browser. Each test stood alone, so each test would pass.

Armed with this, I decided to borrow a trick I'd read about in a forum... if you want to see whether your tests are actually running as designed, or whether there is an unintended order dependence behind their success rate, the best way to check is to take the tests as they currently exist and swap the order in which they are run (rename feature files, reorder scenarios, change the names of folders, etc.). The results from doing this were very telling. I won't go into specifics, but I did discover that certain tests, if all run in the same browser session, would leave unusual artifacts hanging around because of the dedicated session. These artifacts were related to things like session IDs, cookies, and other items our site depends on to operate seamlessly. When I ran each of these tests independently (with their own browser sessions), the issues disappeared. That's great, but it's impractical; the time expense of spawning and killing a browser instance with every test case is just too high. Still, I learned a great deal about the way my scripts were constructed and the setup and teardown details that I had (or didn't have) in my scenarios.

This is a handy little trick, and I encourage you to use it for your tests, too. If there's a way to influence which tests get run and you can manipulate the order, you may learn a lot more about your tests than you thought you knew :).

Tuesday, November 8, 2011

An Automation Mea-Culpa


For the last several months I have been working on trying to get together a fully customer facing web automation framework. Yes, this is the classic "all up front, what the customer sees" automation project, the holy grail of effective web interaction testing. And it's a bit aggravating at times, if I may be frank!

I'm saying this to be totally honest. There are so many moving parts to be aware of, so many little things that have to be considered, and even though I think the Cucumber/RSpec/Ruby approach is actually pretty good for an overall framework, it still leaves a lot to be desired from a consistency and reliability standpoint (and yes, I'm totally willing to believe that my still-growing understanding of how Cucumber, RSpec, Capybara, and Ruby all fit together is at the heart of this). Still, that's not the point of today's post. Actually, it's to offer a little empathy for our development brethren who have to maintain things that quickly become anything but simple.


A little background: the scripts that I write are not terribly complex, and at the moment, I've only had to research a handful of RSpec and Ruby statements to drive and work with Capybara and WebDriver beyond what was directly provided for me (quite a lot of functionality is just straight Capybara calls with little in the way of modification necessary... kinda' nice, actually). Still, there's a fair amount of stuff that has to be configured to play well with others, as my tests have to work correctly and reliably on four different environments. We do frequent pushes from development machines to a demo machine, then to a staging machine, and finally, if all looks clean, we push to production. This is a common setup in many organizations, and my goal is to maintain just one set of test scripts and have them run effectively on all four environments. While the scripts themselves are the same, keeping four different environments in sync can be a challenge, and the last few days I've been doing a fair amount of refactoring. One of the refactoring changes I made moved login credentials to a single file.
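To illustrate the idea of "one set of scripts, four environments", here's a minimal sketch of an environment-keyed configuration file (the names, URLs and accounts are purely illustrative, not my actual project files): one YAML document holds per-environment settings, and an environment variable picks which block the run uses.

```ruby
require "yaml"

# Hypothetical environments.yml content, inlined here for the sketch.
CONFIG = YAML.load(<<~YAML)
  development:
    base_url: http://localhost:3000
    username: dev_tester
  demo:
    base_url: https://demo.example.com
    username: demo_tester
  staging:
    base_url: https://staging.example.com
    username: stage_tester
  production:
    base_url: https://www.example.com
    username: prod_tester
YAML

# Pick the target machine with an environment variable, so the same
# scripts run unchanged against all four environments:
#   TEST_ENV=staging cucumber
env = ENV.fetch("TEST_ENV", "development")
settings = CONFIG.fetch(env) { raise "Unknown environment: #{env}" }
puts "Running against #{settings['base_url']} as #{settings['username']}"
```

With credentials and URLs in one place like this, a cross-linked account is a one-file fix instead of a hunt through every step definition.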

I thought I knew where all of these environment variables were and that the configuration was straightforward on each machine, but I was wrong. Because of that, on one machine, the account I thought was a dedicated test account was actually linked to another... meaning my test account was also sending updates to my personal Twitter account (fine earlier in the testing process, but unacceptable at this late stage). My thanks to a fellow tester who alerted me to the fact, and I sheepishly had to apologize because he had to alert me twice! The second time through, I scrubbed every line of every file, found the error, and modified the script to make sure the same mistake couldn't happen again.


Sometimes as testers we can get a little self-righteous and bag on developers when they make bone-headed mistakes. Believe me, I've gloried in that same pastime, but today I'm seeing just how easy it is to think you've fixed something only to find that, in truth, you only fixed one symptom of a deeper problem. We testers like to believe that, were we in the developers' shoes, surely we wouldn't make the same foolish mistakes. Well, guess what? Yes, yes we do, and they are just as embarrassing. Maybe even more so, because now I have no one else to blame. These are my scripts, my underlying code, my configuration files, my clever little clean-ups that, in hindsight, don't look so clever after all. And this time it's my turn to play fix, test, and try again... same as it ever was.

I still feel testing requires a diligent effort and a solid focus, but really, I'm starting to develop a bit of empathy for my developer cousins. Shipping is actually harder than it looks.

Monday, October 10, 2011

Ruby For Newbies?

In my never-ending quest to find ways to better understand the things that I do, I like to find new approaches and different formats to help me appreciate what I'm learning. For many people out there, a book with examples is great. For others, a website tutorial works well. For still others, videos or screencasts scratch that particular itch the best. Me? I tend to like them all, because each has its strengths and limitations, and each scratches my brain a little bit differently.

So today's find, Ruby for Newbies, is brought to us by the folks over at Tuts+, an interesting source of a lot of tutorial content, screencasts and podcasts related to various technologies and areas of interest. They have a listing of all their offerings, but the ones that fit my purposes today are a 13-part series dedicated to Ruby, which can be read as tutorials or watched as screencasts. The beauty of a screencast is that you can rewind and watch as many times as you want. The series starts at the very beginning with installing Ruby on either Windows or a Mac, and then walks through several screencasts (each about a half hour long) to discuss various topics in Ruby and how to work with them. Note, there is a Premium service offered by Tuts+, but if you don't want to make that commitment, that's OK; Tuts+ offers a lot of content 100% free, including each of these screencasts (the source code can be downloaded by Premium members, so if that's a big deal to you, well, consider it :) ).

The Ruby for Newbies tutorials are written and presented by Andrew Burgess (@andrew8088 on Twitter, happy to give him a plug here :) ) and provide good information pitched right at the level needed to understand the concepts. There's only so much that can be covered in 30 minutes, of course, so there are certainly limitations to the format. The ideas are covered in a rudimentary way; you will need to do considerably more playing around if you want to get more in depth and understand the details better. The written tutorial that accompanies each screencast also provides concise details and the specifics of what the screencast covers.

Screen casts that I have personally found immediate value in were Ruby for Newbies: Testing Web Apps with Capybara and Cucumber and Ruby For Newbies: Testing with RSpec. I'm curious to see what more I can learn by looking over the earlier entries in the series. You may enjoy them as well, so if you are so inclined, give Ruby for Newbies a look.

Monday, October 3, 2011

Some Very Small Cucumber Tips

For those who have been watching my transition from primarily being a manual tester to this generation's concept of automated testing (I did a bunch of shell scripting in the 90s and worked with Tcl/Tk as an automation framework for some time, before the rage of record-and-playback tools took over the testing world), my world now revolves around Cucumber, which uses Ruby to complete the plumbing (at least in my environment; I realize Cucumber is language independent :) ).

Over the past few months, I've been tinkering with it and discovered it can do some cool things, but most of all, I think it's important that people realize what Cucumber isn't. Many people will tell you that Cucumber is programming put into plain English (or fill in your favorite language). It's not. In fact, without the underlying language plumbing (again, take your pick), the English syntax does absolutely nothing. You have to define every statement that you use. Additionally, every statement has to have coherent code underneath it that ties in with every other statement. So when you see something like this:

    Given I am on the home page
    And I click "Sign Up" within ".login"
    When I fill in "Username" with "<username>"
    And I fill in "Password" with "<password>"
    And I fill in "Confirm password" with "<email>"
    And I fill in "Email" with "<email>"
    And I select "<time_zone>" from "Time Zone"
    And I select "<i_am>" from "I am"
    And I select "<year_of_birth>" from "Year of Birth"
    And I select "<country>" from "Country"
    And I click "Create Account"
    Then I should see "1 error prohibited this user from being saved" within ".errorExplanation"

You have to realize that every one of those statements has an underlying set of Ruby statements that make it coherent. Oh, and for those wondering, what's that stuff in the "<>"? That's syntax you can use with Scenario Outlines, a cool little technique that makes tests easier to code when the same steps will be run multiple times with different parameters. The way that you take advantage of it is to make an Examples table:

Examples:
  | username  | password | email              | time_zone | i_am | year_of_birth | country |
  | testname1 | mypass12 | mytest12@gmail.com | Pacific   | male | 1979          | Canada  |


I realize this is probably "blinding flash of the obvious" to some people, but it's little things like this that make me smile and find different ways to use Cucumber that I hadn't immediately considered.
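To make that "plumbing underneath" concrete, here's a toy sketch in plain Ruby. This is emphatically not Cucumber's actual internals, just an illustration of the principle: a regex-and-block pair sits underneath each plain-English step, captures the quoted values, and does the real work (in a real step definition, the block would be driving Capybara).

```ruby
# Toy step registry: maps a regex to the block that implements it.
STEP_DEFS = {}

def define_step(pattern, &block)
  STEP_DEFS[pattern] = block
end

def run_step(text)
  pattern, block = STEP_DEFS.find { |pat, _| pat.match(text) }
  raise "Undefined step: #{text}" unless pattern
  # Pass the captured groups (the quoted values) into the block.
  block.call(*pattern.match(text).captures)
end

# A stand-in for the page; a real step would call Capybara's fill_in.
FORM = {}

define_step(/^I fill in "([^"]*)" with "([^"]*)"$/) do |field, value|
  FORM[field] = value
end

run_step(%q{I fill in "Username" with "testname1"})
run_step(%q{I fill in "Password" with "mypass12"})
puts FORM.inspect
```

Every `Given`/`When`/`Then` line in a feature file only "works" because something like this match-and-dispatch is defined for it; a step with no definition does absolutely nothing.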

Another neat thing that was shown to me was to create a file called selector.rb. By creating this file, you can gather together steps that are frequently run and make one "plain English" step. I'm not entirely sure why it's called selector.rb, other than to give the user a chance to take their common calls to HTML elements and simplify their naming. While the ability to do that is cool, what I really like is the ability to chain serial commands, like the following:


Given /^I log in to facebook with good credentials$/ do
  And %Q|I fill in "email" with "myusername@email.com" within "#login_form"|
  And %Q|I fill in "pass" with "aP@ssw0rd" within "#login_form"|
  And %Q|I click "Log In" within "#login_form"|
end


The beauty of the above definition is that, in my feature file, I only need to write a single line:

Given I log in to facebook with good credentials

Note, this isn't a free ride. If you pack a bunch of lines into a single-line statement, it can be maddening to figure out exactly which line is causing a problem, because the error message will relate to the entire grouping. For that reason, try not to overload your selector.rb file with lots of multi-line statements. It makes for elegant, short-looking tests, but the system still has to parse all of the substitutions and run them. If anything goes wrong in the block, the entire block fails, and so does the rest of the test.

So there ya' go, some master-of-the-obvious stuff, but it's made my testing a lot easier and in some ways a lot more fun :).

Thursday, July 28, 2011

Simple Tweaks and Game Changers

This morning, I had the chance to sit down and play with my "new toy". As many of you know, one of the things I do each week is that I produce and edit the "This Week in Software Testing" podcast for Software Test Professionals. What you may not know is that I've worked with a portable and, for the most part, shoe-string environment, consisting of a simple USB microphone and a PC, using Audacity as the editing and production environment. This has been partly for necessity (I want to keep the environment portable) but also with the goal of keeping costs down (after all, when I started, I didn't know how long this would last or how many total episodes we would do). To this end, I've used a simple USB Logitech desktop microphone for the past year. For the record, it worked very well for very little money. I wouldn't call it a "studio grade" microphone by any means, but for what we've been doing, it got the job done :).

After producing a year's worth of shows, I decided it was time to make a jump up in microphone standards... actually, a slight mishap with the Logitech microphone necessitated the change.  By slight mishap, I mean I tipped over the mic stand I was using to hold it at a more natural level for me, and it sheared the plastic capsule. It still works, but the base now just dangles by the wire that leads to the microphone element. Could I fix it with some glue and some electrical tape? Sure, but with this happening, and looking at the future landscape for podcast recording and the time I invest in the process, I figured "oh what the heck, why not upgrade the microphone?!"

Sidereel does a fair amount of video production and also does some audio work that requires "after the fact" voice-over work. Since they are in the practice of doing daily recording and editing, I decided to ask what they use. It turns out they use a Blue Microphones Snowball, which is a mid-priced USB microphone that plugs straight into a Mac or PC (or, with an adapter, into a mixing board). It has three separate pickup patterns that can be selected and, in their estimation, is well suited for doing voice-over work.

Because I do a lot of keyboard and hand-level work right at my PC (i.e. my PC is my recording environment and my mixer, and I don't have the luxury of an isolated or quiet control room), I also decided to invest in a shock mount, which isolates the microphone from bounces and from picking up transient noises or vibrations. Combined, this makes for a formidable tabletop microphone, stand and shock ring, with very near professional broadcast quality sound, and the whole rig cost less than $100.

It took me a little while to sit down with it and get its quirks worked out. It gives a much better sound pickup, which means my normal seating, projection, and delivery also had to be changed (lower quality mics have a certain charm in that they don't pick up all of the ambient noise or the little clicks and ticks of the movement of a speaker's mouth like this microphone does).

This reminded me that, often, when we add a new tool or a new piece of knowledge to our arsenal, we gain the ability to really change our game and do things differently, and by necessity, we need to do things differently. When I started using Cucumber to automate tests a couple of months ago, it by necessity required that I change my way of thinking about how to automate tests. The very nature of the tests I was responsible for (making sure that our development changes capably made the transition from development machines out to the production environment, with two intermediate steps in between) also required that I look at my testing differently, and realize that methods and approaches that worked for one group of developers and testers would, by necessity, require a different approach and level of thinking in this case. All it takes is one element, one change in our environment, and our adjustment to work with it can radically alter our approach and our everyday efforts.

For the podcast recordings, I hope this will give me the opportunity to tackle more voice-over options with better quality. In my day-to-day testing, it's my hope that I'll get to play with more of what Cucumber, RSpec and Ruby can offer so I can be more effective in my active testing. New toys make for new ways of thinking. While the toy may be the catalyst, ultimately it's our brains and our reactions that make them work effectively... so go and do likewise :).

Friday, July 15, 2011

Danger: Blogger Who Reads!!!

Worry not, my Podcast grab bag will be going up later today, but since this is a blog mostly dedicated to what I at least hope will be based on education and re-education (the good kind, where we willingly learn new things, not the bad kind associated with Cold War era novels), I have decided to do some things to draw more attention to the books that I read and review. I once had an ambitious goal of reading and reviewing a technical book a week. I quickly came to the conclusion that was impossible, at least if I wanted to do the review or book any justice.

There are numerous areas where I draw attention to the books I'm interested in, am actively reading, or want to read. Unfortunately, the list seems to grow by the day, and the more books I find, the more books I learn about and want to do more with. This has led me to the practice of finding and requesting books that are still in development so that I can work with them before they are released. One company that does this very well, I feel, is Pragmatic Publishing.  The book that I am working through right now, and which is in Beta distribution, is "The Cucumber Book".

Cucumber is one of the more interesting ideas in testing that I've become directly involved in. It's not a panacea, but it is a method of writing tests, and code for that matter, that focuses on the behavior of an application and on making sure that behavior is doing what it actually should be doing. The ideas behind Behavior Driven Development are very interesting, and as I am also working through The RSpec Book (likewise from Pragmatic Publishing), I'm very curious to see how The Cucumber Book augments the information found in The RSpec Book.

Note, this is not a review... not yet, anyway. First, the book is not out yet, and is still being reviewed and formatted; thus it's like reviewing a restaurant while they are still doing the plumbing and hooking up the grills and ranges. Also, I don't review books until they are officially available to all, but I do like this Pragmatic Publishing model of giving those who are willing to buy the titles access as they are being written. Props for that :).

So my goal is to augment this site with more emphasis on the books reviewed to date, and make a listing and timetable for the books I've yet to review but want to. If you have titles that you think might be fitting (testing related, programming related, engineering related, scientific related, or heck, not related at all but you'd get a kick out of seeing me feature and review), let me know, and I'll see what I can add to my ever growing stack of titles :).

Thursday, June 30, 2011

My Walk Through "Mordor": Lessons in Automation

Today I had a chance to see just how well I was doing in my quest to learn Cucumber, RSpec and Ruby, and better move along the project of developing "Behavior Driven Smoke Testing"... naah, no cool acronym there, but it's been an interesting experience nonetheless. The Project that I'm working on is referred to internally as "Mordor", and it seems somewhat fitting. It was actually named by a co-worker with the idea that the "All-seeing Eye" would be checking on everything. Said co-worker was referring to the Eye of Sauron, but he, not really being a Tolkien fan (and he said so himself, I might add ;) ), confused the land of Mordor with the character of Sauron. I explained that the idea of a land surrounded by volcanic mountains that prevented invasion from without but also kept its inhabitants locked in seemed very fitting as a name for a smoke testing project.

As I demoed my tests for the first time in front of the whole engineering group, I had a chance to "defend" my own development work, so to speak. Overall, it was a good experience, and I learned a great deal about how to better implement and approach automated testing from different perspectives.

First, while I was given some praise for the detail and care I'd taken, I was also encouraged to not spend so much time on certain areas. These were smoke tests to check for broad and fast confirmation of functionality, not deep probing examinations of key areas. There would be time for those tests later, but it was wise to have ten tests that covered twenty functional areas rather than having ten tests that covered just one. Currently, I'm closer to the latter than the former, but I'm learning and getting better, so that helps.

Second, while data driven testing is a great method of reusing tests, it can be maddening in that it doesn't give the same kind of feedback a single linear test gives. It's certainly desirable to make tests extensible, but save that for after you have confirmed beyond a reasonable doubt that you have a clean test first.

Third, "red, green, refactor" really is as strong as people say it is, and I am progressively worse as we go down that continuum. On the bright side, having gone through several dozen scenarios and scenario outlines now, I'm getting to the point where I don't have to look up each example, and I'm finding where repetitive actions can be grouped together into single-line statements and stored in a steps file (you don't have to store just RSpec or Ruby statements in a steps file; you can also chain existing Cucumber feature statements into single-line groupings and use them like a shared library). Yes, I realize that for many out there, this is total master-of-the-obvious stuff, but for me, this is the first time I feel like I'm really getting traction and making real moves with an automation framework.

Which brings me to fourth: a little every day. If I let a few days slip between making new tests, the rust accumulates fast, but if I give a little bit each day, even just 30 minutes, I can keep the momentum moving in my favor. Fortunately, the past few weeks I've been able to get way more than 30 minutes a day.

It's been an interesting experience full of dark and light moments, things that are scary and make no sense, and things that work quickly once you get your flow going. Dare I say it, BDST, and writing tests in Cucumber, RSpec and Ruby, is, gasp, actually pretty fun! I'm no great shakes just yet, and I doubt that any hardcore Ruby developers need worry about me poaching their jobs, but I am heartened by the fact that I seem to really be getting somewhere with this, finally. Who knows what the next few weeks will bring, but I hope it's ever more green and ever better refactoring. I'm looking forward to it, even if I must walk a dark path to get those skills :).

Tuesday, June 28, 2011

Book Review: The Secret Ninja Cucumber Scrolls

Most of my book reviews have been for titles that you can buy online or get from the public library. Today I want to talk about a book that is only available online and is free. This book is "The Secret Ninja Cucumber Scrolls: Strictly Confidential", written by David de Florinier and Gojko Adzic. Were it a book you had to pay for, it would be well worth it. The fact that it's free makes it a true gem.

First, some background. My exploration into Cucumber has been relatively recent, starting shortly after the Selenium conference and accelerating when my company gave me an initiative to automate a range of user-facing smoke tests. With this, the idea of Cucumber being the weapon of choice was somewhat decided for me, both by history and by recommendation from my teammates. There are a number of resources on the Internet regarding Cucumber, and there are also a couple of books out there related to Cucumber as well (most notably "The RSpec Book", which I'm also currently reading).

The challenge I've faced with many commercial books is that they usually, for sake of printed pages, have to focus on just one language or implementation, and leave the users to either match the implementation, or figure out their own environment on their own. Sometimes that's easy to do, and sometimes, due to inexperience or lack of familiarity with specific tools, a lot gets lost in translation. Additionally, a lot of books go into great and specific detail that would be helpful to an implementer of a framework, but tend to trip up more novice testers.

The Secret Ninja Cucumber Scrolls overcomes both of these issues quite handily. First, the book is written from three environment perspectives: Ruby, Java and .NET. As you might well guess, these environments have significant differences and architectural approaches, and as such, require different steps. The book itself is about 150 pages; split between three environments, that means each environment gets about 50 pages worth of treatment. Because of this, the language-specific tutorials and details tend toward the essential and most useful, which is a real blessing. For many testers, there isn't a need to have a mountain of prose and examples for every conceivable option. Get us up and running with the key insights and we'll gladly tinker around the edges to grow the set and the skills. The Secret Ninja Cucumber Scrolls adheres to this philosophy and does so very well.

Additionally, as you might guess from its tongue-in-cheek title, the text and examples are likewise tongue in cheek. Most examples relate to how Chuck Norris can kick anyone's butt (I don't need to explain the Chuck Norris meme to this crowd, right? Yeah, I didn't think so ;) ). It's this fun irreverence and humorous style that makes the book enjoyable to read, and the split between the different environments makes it easy to home in on what's important to me. Additionally, I really appreciate the fact that each area uses different tools and different frameworks and explains how to set each up, integrate the pieces, and work toward making each area function, with clear examples. While I am not currently using Java or .NET for my testing, the fact that I could implement Cucumber in those environments and have the necessary guidance is comforting and encouraging. Who knows, I may need to do so some time down the road; it's great to know one book can cover all three to a good extent.

Will this be the be-all and end-all for Cucumber? Likely not. Will it be an excellent starting point? Absolutely! Do I plan to get more involved with it beyond here? I sure do, and this book has done more in a shorter amount of time than anything I've yet come across. If you are considering exploring Cucumber as part of an overall automation framework, start here. You'll learn what you need to be productive very quickly, and with three environments to practice with, you're sure to find something that will work for your given area of focus.