Monday, April 30, 2012

WOTA (Write Once, Test Anywhere)

In some ways, this has become the Holy Grail of my automated testing existence.

In the product I am currently testing, there are four distinct places where our code can run: a development system, a demo machine, a staging environment, and production. I used to maintain four different script bases, one per system, and those four bases would drift and require constant tweaking. In an attempt to get some sanity back and reduce wasted effort, I decided that a better, more Don't Repeat Yourself (DRY) approach was to create a single set of scripts, and then organize my environments and my test data so that those scripts could run anywhere. This is the core idea behind my approach of Write Once, Test Anywhere (WOTA).
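
To make that a little more concrete, here is a minimal sketch of the idea (the Python harness, the environment names, and the URLs are all hypothetical illustrations, not my actual scripts): the tests never hard-code a server, and the only thing that changes between runs is which environment name gets picked up.

    import os

    # Hypothetical mapping of environment names to base URLs; the real values
    # would live in a config file, with one entry per environment.
    ENVIRONMENTS = {
        "dev":     "http://dev.example.com",
        "demo":    "http://demo.example.com",
        "staging": "http://staging.example.com",
        "prod":    "http://www.example.com",
    }

    def base_url():
        """Pick the target environment from a variable; default to dev."""
        return ENVIRONMENTS[os.environ.get("TEST_ENV", "dev")]

    def test_login_page_is_reachable():
        # The test body is identical no matter where we point it; only the
        # value returned by base_url() changes between runs.
        url = base_url() + "/login"
        # ... drive the browser or HTTP client against url here ...

Run the same suite with TEST_ENV=staging and it walks the staging environment; nothing in the scenarios themselves has to change.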

Now, when I say "my approach", I certainly don't mean that I'm the first one to think of doing this, not by a long shot. I'm also having to make interesting tweaks to various config files to pull it off. Unlike the development team's tests, which really only have to focus on one environment to verify that they work (note, that's not a dig, it's a reality), mine have to work effectively on four different environments. This also lets me see whether the Acceptance Tests, and the methods used to write those tests (and the features surrounding them), actually carry through as we apply them to increasingly complex systems. As we get closer to production, we move away from running tests against a dedicated workstation and a single dedicated machine, and instead start calling on multiple machines structured as a cluster, with caching, external and distributed databases, load balancing, and so on. The tests themselves don't change, but we often see different behavior from the same tests depending on the system we are testing.
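
To give a flavor of what those config tweaks look like (again, a hypothetical layout, not my actual files), the differences between a single workstation and a load-balanced cluster get absorbed into per-environment settings that the supporting code reads, never the tests:

    # settings.py -- hypothetical per-environment overrides. The scenarios never
    # read these directly; only the support code underneath them does.
    SETTINGS = {
        "dev": {
            "wait_timeout": 5,     # single local machine, pages come back fast
            "retry_count": 1,
        },
        "staging": {
            "wait_timeout": 20,    # clustered, cached, load-balanced
            "retry_count": 3,      # allow for replication lag and cache warm-up
        },
        "prod": {
            "wait_timeout": 20,
            "retry_count": 3,
        },
    }

    def setting(env, key):
        """Look up one setting for the environment under test."""
        return SETTINGS[env][key]

When the same test behaves differently on staging than it did on dev, that difference shows up as a timeout or a retry in the support code, not as a rewrite of the scenario.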

Unlike the unit tests the developers use, my scripts minimize the JavaScript calls that can be made underneath the presentation layer. I use them if I must, but I try to avoid them so that I can keep a more behavioral approach to the tests, checking whether what I see is what a customer would see as we walk up the chain of environments. Keeping the scenario pool relatively simple, reusing accounts where possible, and using test data and persona criteria consistently on each system alerts me when things don't appear correctly or when a particular system needs investigating. The hope is that the tweaking needed on production is minimal to non-existent; if I've done my job right, the tests should just run on production with little in the way of hiccups.
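
As an example of what I mean by consistent test data and personas (the names and accounts here are hypothetical, purely illustrative), the same persona exists on every system, and only the account behind it differs:

    # personas.py -- hypothetical example. Scenarios always ask for a persona
    # by name; this lookup resolves it to the right account on the right system.
    PERSONAS = {
        "premium_member": {
            "dev":     {"user": "premium_dev",     "password": "change-me"},
            "staging": {"user": "premium_staging", "password": "change-me"},
            "prod":    {"user": "premium_prod",    "password": "change-me"},
        },
    }

    def credentials(persona, env):
        """Return the account that plays a given persona on a given environment."""
        return PERSONAS[persona][env]

Because the scenarios only ever refer to the persona, the same acceptance test can log in as a premium member on dev, staging, or production without a single line of the test changing.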

If this piece reads like a blinding flash of the obvious, well, it still took me a bit of time to figure out a good way to do it. If you are still testing with a different script base for each environment, seriously, consider looking at ways you can implement WOTA in your tests.
