Some background: last night at the Selenium Users Group Meet-Up, I met up with Chris McMahon, Zeljko Filipin, and Ken Pier, my Quality Director at Socialtext. After the Meet-Up, we headed out to a local pub to talk shop and relive old war stories, of which there were many. Chris and Ken worked together at Socialtext a number of years back, and they told me about the decisions, changes, and issues that led them, by necessity, to create the automation methods that they did.
Socialtext does something rather cool, if I do say so myself. Our product is a wiki, the Selenium test harness we use leverages the wiki framework we provide, and we aggressively dogfood our own product. On top of that, we automate the vast majority of the acceptance test cases created for any story. These acceptance test cases are converted into code and stored in our wiki page hierarchy, which means our product actually tests itself. It has its own framework and its own structure (which closely follows Selenese and FitNesse standards), and it's easy to run just one test case or thousands of them.
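To give a feel for what one of those wiki-stored acceptance tests looks like, here's a minimal sketch in the Selenese command/target/value table style the post mentions. The commands (open, type, clickAndWait, verifyTextPresent) are standard Selenese; the page paths and element locators are purely illustrative, not Socialtext's actual fixtures.

```
| *Comment* | Log in and verify the dashboard loads |
| open | /nlw/login.html | |
| type | username | test_user |
| type | password | test_pass |
| clickAndWait | login_btn | |
| verifyTextPresent | Dashboard | |
```

Because the tests live in ordinary wiki pages, editing a test is the same as editing any other page, and a runner can walk the page hierarchy to execute one table or thousands of them.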
The discussion then turned to a recent series of comments suggesting that, if a test doesn't find any new bugs, then it's ultimately waste to keep running it. Chris and Ken countered that, while new discovery may not be at the forefront of an automated test battery, what certainly should be is the assurance that anything new didn't break something older. That's why those tests are kept, maintained, and run regularly. As Chris was explaining a particularly challenging issue, he pointed out that "that battery of Automated Browser Tests (seen by many as not really offering value)... totally saved my ass on more than one occasion!" After a good laugh, Ken leaned over to me and said "hey, that would make for a good blog post"... and I agree :).
Reducing waste is important, and reusing as much as we efficiently can is also a huge win, but sometimes there really is value in the tried, true, and boring tests that never find an issue. They're not designed to. They're designed to safeguard your work and help you know whether something new you did has compromised something older. Throw that work away, and you might not have a way out of a troublesome spot. Sure, refactor, re-purpose, make efficient, but don't fall for a false economy. Confirm that what you keep is providing value and is actually being used. If so, you may find yourself being spared a lot of heartache.