Some background: last night at the Selenium Users Group Meet-Up, I met up with Chris McMahon, Zeljko Filipin and Ken Pier, my Quality Director at Socialtext. After the Meet-Up, we headed out to a local pub to talk shop and relive old war stories, of which there were many. Chris and Ken worked together at Socialtext a number of years back, and they told me about the decisions, changes, issues and other such things that led them, by necessity, to create the automation methods that they did.
Socialtext does something rather cool, if I do say so myself. Because our product is a wiki, because the Selenium test harness we use leverages the wiki framework we provide, and because we aggressively dogfood our own product, we automate the vast majority of the acceptance test cases created for any story. These acceptance test cases are converted into code and stored in our wiki page hierarchy. What this means is that our product actually tests itself. It has its own framework and its own structure (which closely follows Selenese and FitNesse standards), and it's easy to run just one test case or thousands of them.
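To make that idea concrete, here is a minimal sketch of how a wiki page full of Selenese-style rows might be turned into runnable test steps. The table format, page text, and function name are my own assumptions for illustration, not Socialtext's actual implementation:

```python
# Hypothetical sketch: parse '| command | target | value |' rows out of a
# wiki page so each row can be handed to a Selenium-style runner.
# The row format here is an assumption modeled on Selenese tables.

def parse_wiki_test(page_text):
    """Return (command, target, value) tuples from Selenese-style wiki rows."""
    steps = []
    for line in page_text.splitlines():
        line = line.strip()
        if not line.startswith("|"):
            continue  # skip surrounding prose; only table rows are test steps
        cells = [c.strip() for c in line.strip("|").split("|")]
        if len(cells) == 3:
            steps.append(tuple(cells))
    return steps

# Example wiki page: a few test rows embedded in ordinary prose.
page = """
Acceptance test for the dashboard story.

| open         | /st/dashboard    |           |
| type         | q                | wiki page |
| clickAndWait | st-search-submit |           |
"""

for command, target, value in parse_wiki_test(page):
    print(command, target, value)
```

Storing tests this way means the same wiki that holds the product documentation holds the executable acceptance criteria, so running one page or a whole hierarchy of pages is just a matter of which pages you feed the parser.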
The discussion then turned to a recent line of commentary suggesting that, if a test doesn't find any new bugs, it's ultimately waste to keep running it. Chris and Ken countered that, while new discovery may well not be at the forefront of an automated test battery, what certainly should be is the idea that anything new didn't break something older. That's why those tests are kept, maintained and run regularly. As Chris was explaining a particularly challenging issue, he pointed out that "that battery of Automated Browser Tests (seen by many as not really offering value)... totally saved my ass on more than one occasion!" After a good laugh, Ken leaned over to me and said "hey, that would make for a good blog post"... and I agree :).
Reducing waste is important, and reusing as much as we efficiently can is also a huge win, but sometimes there really is value in the tried-and-true, boring tests that never find an issue. They're really not designed to. They're designed to safeguard, to help you know whether something new you did has compromised earlier work. Throw that work away, and you might not have a way out of a troublesome spot. Sure, refactor, re-purpose, make efficient, but don't fall for a false economy. Confirm that what you keep is providing value and is actually being used. If so, you may find yourself being spared a lot of heartache.
There seems to be a growing divide over whether automation should find new bugs or should prove we didn't break anything that worked before. Maybe I'm old-fashioned, but I fall squarely into the latter camp. It's important to manage the growth of your automation assets and automate the right stuff, but I think a key value automation brings is in making sure the system still basically works after developers have added and changed functionality.
Regression Testing, what a concept! That is the primary intent behind these "automation" tools: to build up a test bed of regression tests to run multiple times to make sure what was working before is still working.
This helps guard against the ripple effect of changes. Let the machine do this mundane work, and let your human testers do the more interesting, first-round testing work.
It still kills me that people think an "automation" tool will do all their testing for them. We don't have a HAL 9000 yet.
But again, good post Michael. Spot on.
"It still kills me that people think an "automation" tool will do all their testing for them. We don't have a HAL 9000 yet."
Unintentional, yet completely appropriate choice of words as a HAL 9000 would kill you.
Until you take humans out of the Coding step, you will need them in the Test step of Software Development.