Thursday, April 11, 2013

It Works in (X, Y, Z)... Live From SF, it's Selenium Meetup

I had a feeling today would be eventful. Note to self, do not drive down to Palo Alto when there is a meetup in San Francisco. It took me twice as long to make the trek as it would have if I had just caught the 5:06 p.m. baby bullet to Millbrae. Still, I made it with a little time to spare (the food was a little delayed, which worked immensely to my advantage; much thanks to La Méditerranée for providing essential victuals and to Lookout Mobile Security for providing the venue).

One of the great things about the introductions: we spent close to ten minutes hearing from multiple companies with a simple and wonderful message... "we're hiring!" Trust me, after the 2000's, that line never gets old ;).

Tonight's topic is:


Works in (X, Y, Z): Parallel Combination Testing With Selenium, JUnit, and Sauce

David Drake, a lead SDET at Dynacron Group, will be giving a talk about running tests in parallel using JUnit and Selenium, and utilizing the massive parallelization that Sauce Labs can provide.

Info about David:

David Drake is a lead SDET with nine years of experience in testing and automation, currently working at Dynacron Group, a Seattle consulting company. He maintains their parallel-webtest library for driving tests through Sauce Labs, and spends most of his time designing and using frameworks for performance and functional regression testing.

Selenium has a series of combinatorial issues. There are lots of variables to deal with...

Operating Systems
Browsers
Devices (types and dimensions)
Languages (Localization)

Each one of these would make for a large suite of tests. To cover every possible permutation would be close to impossible, at least serially. Still, with a bit of effort, they were able to put together a bunch of CSV files to store all of the parameters and the various combinations to test. Messy, hard to read, ugly... yeah, they came to the same conclusion.

They later shifted to Typesafe Config, which allowed them to create more complex structures that they could map to objects. A big problem came to the fore as they realized that multi-dimensional arrays with linked dependencies made for really complicated and huge test suites.
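For context, Typesafe Config reads HOCON files. A hypothetical test matrix might look something like this (the file layout and key names here are my own illustration, not Dynacron's actual config):

```hocon
# Hypothetical application.conf sketch -- keys are illustrative only
matrix {
  operating-systems = ["Windows 7", "OS X 10.8", "Linux"]
  browsers = ["firefox", "chrome", "ie"]
  locales = ["en_US", "de_DE", "ja_JP"]
}
```

The appeal over raw CSV is that structures like this can be nested and mapped onto typed objects instead of parsed by hand.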

To get the system to work, they would read in the parameters, create a unique config combination based on a dense multi-dimensional array, convert the resulting configs into JSON, and then run the individual tests based on the config provided. The tests themselves are relatively lean, but the configuration options are huge, and as such, the number of tests runs into the hundreds of thousands.
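The expansion step above is essentially a cartesian product over the parameter dimensions. Here's a minimal, stdlib-only sketch of that idea (the class and method names are mine; the actual parallel-webtest code surely differs):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ComboExpander {
    // Expand a map of test dimensions (e.g. browser, OS, locale) into
    // every possible combination -- one map per concrete configuration.
    public static List<Map<String, String>> expand(Map<String, List<String>> dims) {
        List<Map<String, String>> combos = new ArrayList<>();
        combos.add(new LinkedHashMap<>());
        for (Map.Entry<String, List<String>> dim : dims.entrySet()) {
            List<Map<String, String>> next = new ArrayList<>();
            for (Map<String, String> partial : combos) {
                for (String value : dim.getValue()) {
                    // Copy the partial combination and add this dimension's value
                    Map<String, String> combo = new LinkedHashMap<>(partial);
                    combo.put(dim.getKey(), value);
                    next.add(combo);
                }
            }
            combos = next;
        }
        return combos;
    }
}
```

You can see why the counts explode: three dimensions of a handful of values each already yields dozens of configs, and each added dimension multiplies the total.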

Parallelizing tests is really not hard, especially if you have something like a Grid farm or a Sauce array to work with. However, getting the tests to actually run right... that proved to be the bigger challenge. Yes, you can run multiple tests in parallel on multiple machines, but even in these isolated examples, they still saw odd behavior that wouldn't replicate when the tests were run serially.
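To make the fan-out concrete, here's a minimal stdlib-only sketch of running one task per configuration across a fixed-size pool (the pool stands in for a Grid or Sauce session limit; in the real setup each task would open a remote WebDriver session rather than just recording its config):

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ParallelRunner {
    // Run one "test" per configuration, several at a time.
    public static List<String> runAll(List<Map<String, String>> configs, int threads)
            throws InterruptedException {
        // Thread-safe collection, since results arrive from many threads at once
        ConcurrentLinkedQueue<String> results = new ConcurrentLinkedQueue<>();
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (Map<String, String> config : configs) {
            pool.submit(() -> results.add("ran: " + config));
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        return List.copyOf(results);
    }
}
```

Even this toy version hints at the trouble: the results arrive in a nondeterministic order, and any shared state the tasks touch can behave differently than it would serially.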

Another big challenge with lots of tests running in parallel... how do you keep track of all of them?! Their approach is to use descriptive method names as well as descriptive logging, so that just by looking at the log output you know what you are running with a minimum of mental parsing time. I had to smile a bit at this because it's very similar to what we are currently doing at Socialtext :).

One aspect to be aware of when talking about JUnit as the driver is that these tests are mostly organized at the class and unit level (makes sense considering, as its name implies, JUnit is a unit testing framework). Thus, many of these parallelization techniques happen within the same class. The real benefit comes when tests need to run across many classes. With multiple classes, plus multiple configuration options, you get an exponential increase in tests, but with the Sauce infrastructure, you can spin up as many servers as you need (which, I'll say right now, doing the math, kinda' freaks me out!).

Areas they are looking to improve, and challenges they are still working through, include how to remember the tested combinations, how to display them, and how to parameterize the test runs in a way that utilizes JUnit's own framework rather than custom code.

Tools discussed were parallel-webtest, Typesafe Config, and the Sauce Labs method for seeding parallelization runs.

Something happened tonight that, I have to say, put a smile on my face. Often, when I hear about a variety of frameworks and the minutiae that goes into them, I suffer from MEGO during these presentations. The past two months have actually given me enough information and detail that I was able to follow along with everything discussed. I'll not pretend I understood every line and statement, but I was able to follow along with at least 90% of it, and that feels really cool! It's also been a great boon that, the more often I come out, the more I retain and feel that I can actually contribute to the conversations. If someone had told me six months ago that I'd be actively writing tests and looking for tips on how to optimize JUnit, I would have said they were nuts. I'll consider that a change for the better ;).


