Tuesday, November 8, 2011
An Automation Mea-Culpa
For the last several months I have been working on putting together a fully customer-facing web automation framework. Yes, this is the classic "all up front, what the customer sees" automation project, the holy grail of effective web interaction testing. And it can be aggravating at times, if I may be frank!
I'm saying this to be totally honest: there are so many moving parts to be aware of, so many little things that have to be considered. Even though I think the Cucumber/RSpec/Ruby approach is pretty good for an overall framework, it still leaves a lot to be desired from a consistency and reliability standpoint (and yes, I'm totally willing to believe that my still-growing understanding of how Cucumber, RSpec, Capybara, and Ruby all fit together is at the heart of this). Still, that's not the point of today's post. The point is to offer a little empathy for our development brethren, who have to maintain things that quickly become anything but simple.
A little background: the scripts I write are not terribly complex, and so far I've only had to research a handful of RSpec and Ruby statements to drive Capybara and WebDriver beyond what was directly provided for me (quite a lot of the functionality is just straight Capybara calls with little modification necessary... kinda' nice, actually). Still, there's a fair amount that has to be configured to play well with others, because my tests have to run correctly and reliably in four different environments. We do frequent pushes from development machines to a demo machine, then to a staging machine, and finally, if all looks clean, to production. This is a common format in many organizations, and my goal is to maintain a single set of test scripts that runs effectively in all four environments. While the scripts themselves are the same, keeping four different environments in sync can be a challenge, and over the last few days I've been doing a fair amount of refactoring. One of the refactoring changes moved login credentials into a single file.
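To make the idea concrete, here's a minimal sketch of what "credentials in a single file, keyed by environment" can look like in Ruby. The file layout, account names, and the TEST_ENV selector are all illustrative assumptions on my part, not the actual project's code:

```ruby
require 'yaml'

# Illustrative contents of a single shared credentials file; in a real
# suite this would live in something like config/credentials.yml, keyed
# by environment name.
CREDENTIALS_YAML = <<~YAML
  demo:
    username: demo_tester
    password: not-a-real-password
  staging:
    username: staging_tester
    password: not-a-real-password
YAML

ALL_CREDENTIALS = YAML.safe_load(CREDENTIALS_YAML)

# Look up the credentials for one environment, failing loudly if the
# environment is unknown rather than silently falling back to a default.
def credentials_for(env_name)
  creds = ALL_CREDENTIALS[env_name]
  raise ArgumentError, "no credentials for environment '#{env_name}'" unless creds
  creds
end

# The target environment could be chosen by a variable the CI job or
# shell sets, e.g.: credentials_for(ENV.fetch('TEST_ENV', 'demo'))
```

The benefit of failing loudly is exactly the point of this post: a typo'd or missing environment entry stops the run immediately instead of quietly using the wrong account.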
I thought I knew where all of these environment variables were and that the configuration was straightforward on each machine, but I was wrong. Because of that, when I thought I was using a test account for various tests, on one machine the accounts were linked to each other... meaning my test account was also sending updates to my personal Twitter account (fine earlier in the testing process, but unacceptable at this late stage). My thanks to a fellow tester who alerted me to the fact, and I sheepishly had to apologize because he had to alert me twice! The second time through, I scrubbed every line of every file, found the error, and modified the script to make sure the same mistake couldn't happen again.
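One way to make sure a mistake like that can't recur is a guard that refuses to run against anything that doesn't look like a dedicated test account. This is only a sketch of the idea, with a made-up naming convention; the real scripts' fix may well look different:

```ruby
# Hypothetical safety check: before any scenario that posts updates,
# verify the configured username follows the test-account naming
# convention, so a personal account wired in by a bad config fails fast.
SAFE_ACCOUNT_PREFIX = 'qa_test_'

def assert_test_account!(username)
  unless username.start_with?(SAFE_ACCOUNT_PREFIX)
    raise "Refusing to run: '#{username}' does not look like a test account"
  end
  username
end
```

Dropped into a Cucumber Before hook, a check like this turns a silent cross-posting bug into an immediate, obvious failure.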
Sometimes as testers we can get a little self-righteous and bag on developers when they make bone-headed mistakes. Believe me, I've gloried in that same pastime, but today I'm seeing just how easy it is to think you've fixed something only to see that, in truth, you only fixed one symptom of a deeper problem. We testers like to believe that, were we in the developers' shoes, surely we wouldn't make the same foolish mistakes. Well, guess what? Yes, yes we do, and they are just as embarrassing. Maybe even more so, because now I have no one else to blame. These are my scripts, my underlying code, my configuration files, my clever little clean-ups that, in hindsight, don't look so clever after all. And this time it's my turn to play fix, test and try again... same as it ever was.
I still feel testing requires a diligent effort and a solid focus, but really, I'm starting to develop a bit of empathy for my developer cousins. Shipping is actually harder than it looks.
I was only thinking the same thing myself last week: http://qahiccupps.blogspot.com/2011/10/tester-verify-thyself.html
Well put, James, I can feel for you. Today especially :).