Tuesday, April 22, 2014

Selenium SF Live: An Evening With Dave Haeffner

It’s been about three years since I first met Dave. At the time, he was working with The Motley Fool, and he was one of the people I connected with to record some fun (albeit rather noisy) audio for what I had hoped would be a podcast from the Selenium Conference in 2011. Alas, the audio wasn’t as usable as I had hoped for a releasable podcast, but I remember the conversation well, specifically Dave’s goal to see if he could, at some point, find a way to make Selenium less cryptic and more sturdy than what had been presented before.

Three years later, Dave stands as the author of “The Selenium Guidebook”, and tonight two different Meetup groups (the San Francisco Selenium Users Group and the San Francisco Automated Testers) are sharing the opportunity to bring Dave in to speak. I’ve been a subscriber to Dave’s Elemental Selenium newsletter for the past couple of years, and I’ve enjoyed seeing how he can break down the issues and discuss them in a way that is not too overbearingly technical, giving the reader a new idea or approach they might not have considered before. I’m looking forward to seeing where Dave's head is at now on these topics.

Here are some details about Dave for those of you who are not familiar with him:

Dave Haeffner is the author of Elemental Selenium (a free, once-weekly Selenium tip newsletter read by hundreds of testing professionals) as well as a new book, The Selenium Guidebook. He is also the creator and maintainer of ChemistryKit (an open-source Selenium framework). He has helped numerous companies successfully implement automated acceptance testing, including The Motley Fool, ManTech International, Sittercity, and Animoto. He is a founder and co-organizer of the Selenium Hangout and has spoken at numerous conferences and meetups about acceptance testing.


This will be a live blog of Dave’s talk, so as always, I ask your indulgence with what gets posted between the time I start this and the time I finish, and then allow me a little time and space afterwards to clean up and organize the thoughts. If you like your information raw and unfiltered, well, you’ll be in luck. If not, I suggest waiting until tomorrow ;).

---

The ultimate goal, according to Dave, is to write tests that are business valuable, and then do what you can to package those business-valuable tests in an automated framework. This then frees the tester to look for more business-valuable tests with their own eyes and senses. Rinse, lather, repeat.

The first and most important thing to focus on is to define a proper testing strategy, and after that's been defined, consider the programming language that it will be written in. It may or may not make sense to use the same language as the app, but who will own the tests? Who will own the framework? If it's the programmers, sure, use the same language. If the testers will own it, then it may make sense to pick a language the test team is comfortable with, even if it isn't the same as the programming team's choice.

Writing tests is important, but even more important is writing tests well. Atomic, autonomous tests are much better than long, meandering tests that cross states and boundaries (those have their uses, but generally, they are harder to maintain). Make your tests descriptive, and write them in small batches. If you're not using source control, start NOW!!!

Selenium fundamentals help with a number of things. One of the best is that it mimics user actions, and does so with just a few common actions. Using locators, it can find the items it needs and confirm their presence, or determine what to do next based on their existence or non-existence. Class and ID are the most helpful locators over the long term. CSS and XPath may be needed from time to time, but if they're more "rule" than exception, perhaps a chat with the programming team is in order ;). Dave also makes the case that, at least as of today, the CSS vs. XPath debate has effectively evened out. Which approach you use depends more on how the page is set up and laid out than on one approach being inherently better than the other.
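To make that concrete, here's a minimal locator sketch using the Python Selenium bindings (my illustration, not something Dave showed); the URL and element names are hypothetical:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
driver.get("http://example.com/login")  # hypothetical page

# Prefer ID and class locators; they tend to survive page redesigns best.
username_field = driver.find_element(By.ID, "username")
flash_message = driver.find_element(By.CLASS_NAME, "flash")

# Fall back to CSS selectors or XPath when no stable ID or class exists.
submit_by_css = driver.find_element(By.CSS_SELECTOR, "form button[type='submit']")
submit_by_xpath = driver.find_element(By.XPATH, "//form//button[@type='submit']")

driver.quit()
```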

Get in the habit of using tools like FirePath or FireFinder to help you visualize where your locators are, as well as to look at the ways you can interact with the locators on the page (click, clear, send_keys, etc.). Additionally, we want to create our tests in a manner that performs the steps we care about, and just those steps, where possible. If we want to test a login flow, rather than making one big monolithic test that walks through a bunch of login attempts, make atomic, unique tests for each potential test case. Make sure each test can fail at one of its steps as well as pass. Using a Page Object approach can help minimize the maintenance needed when pages change: instead of having to change multiple tests, focus on pulling out the most critical pieces needed and minimize where those items are repeated.
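As a hedged illustration of what atomic, one-case-per-test login checks might look like (my own example using Python's unittest; the URL, locators, and credentials are placeholders, not the app from the talk):

```python
import unittest

from selenium import webdriver
from selenium.webdriver.common.by import By


class LoginTests(unittest.TestCase):
    def setUp(self):
        self.driver = webdriver.Firefox()
        self.driver.get("http://example.com/login")  # hypothetical app

    def tearDown(self):
        self.driver.quit()

    def _log_in(self, username, password):
        self.driver.find_element(By.ID, "username").send_keys(username)
        self.driver.find_element(By.ID, "password").send_keys(password)
        self.driver.find_element(By.CSS_SELECTOR, "button[type='submit']").click()

    def test_valid_credentials_log_the_user_in(self):
        self._log_in("valid_user", "valid_password")
        self.assertTrue(
            self.driver.find_element(By.CSS_SELECTOR, ".flash.success").is_displayed()
        )

    def test_invalid_credentials_show_an_error(self):
        self._log_in("valid_user", "not_the_password")
        self.assertTrue(
            self.driver.find_element(By.CSS_SELECTOR, ".flash.error").is_displayed()
        )


if __name__ == "__main__":
    unittest.main()
```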

Page Object models allow the user to tie Selenium commands to the page objects, but even there, there are a number of places where Selenium can cause issues (going from Selenium RC to Selenium WebDriver brought some fundamental changes in how interactions are handled). By defining a "base page object" hierarchy, we allow for a layer of abstraction, so that changes to the Selenium driver minimize the need to change multiple page object files.
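A minimal sketch of that layering in Python (my own illustration of the pattern, with hypothetical locators): tests talk to page objects, page objects talk to a base page, and only the base page talks to Selenium, so a WebDriver API change is absorbed in one file.

```python
from selenium.webdriver.common.by import By


class BasePage(object):
    """The only class that talks to Selenium directly."""

    def __init__(self, driver):
        self.driver = driver

    def _visit(self, url):
        self.driver.get(url)

    def _find(self, locator):
        return self.driver.find_element(*locator)

    def _click(self, locator):
        self._find(locator).click()

    def _type(self, locator, text):
        self._find(locator).send_keys(text)

    def _is_displayed(self, locator):
        return self._find(locator).is_displayed()


class LoginPage(BasePage):
    # Hypothetical locators, kept in one place so a page change means one edit.
    USERNAME = (By.ID, "username")
    PASSWORD = (By.ID, "password")
    SUBMIT = (By.CSS_SELECTOR, "button[type='submit']")
    SUCCESS_FLASH = (By.CSS_SELECTOR, ".flash.success")

    def load(self):
        self._visit("http://example.com/login")

    def log_in_as(self, username, password):
        self._type(self.USERNAME, username)
        self._type(self.PASSWORD, password)
        self._click(self.SUBMIT)

    def logged_in(self):
        return self._is_displayed(self.SUCCESS_FLASH)
```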

Explicit waits help with time-bound problems like page loading or network latency. Defining a "wait for" option is both more helpful and more efficient: instead of hard-coding a 10 second delay, the wait for sets a maximum time limit but moves on as soon as the item actually needed appears.
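With the Python bindings, an explicit wait might look like this (a sketch; the page and locator are hypothetical):

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Firefox()
driver.get("http://example.com/slow-page")  # hypothetical page

# Hard-coding time.sleep(10) always burns the full ten seconds.
# An explicit wait polls until the condition is met, up to a ten second ceiling,
# and raises TimeoutException only if the element never shows up.
finished = WebDriverWait(driver, 10).until(
    EC.visibility_of_element_located((By.ID, "finished-message"))  # hypothetical locator
)
print(finished.text)

driver.quit()
```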

If you want to build your own framework, remember the following to help make your framework less brittle and more robust:
  • Central setup and teardown
  • Central folder structure
  • Well-defined config files
  • Tagging (test packs, subsets of tests: wip, critical, component name, slow tests, story groupings); see the sketch after this list
  • A reporting mechanism (or borrow one that works for you); have it be human readable and summable, as well as "robot ready" so that it can be crunched and aggregated/analyzed
  • Wrap it all up so that it can be plugged into a CI server.
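As one possible way (of many) to get central config and tagging, here's a hedged sketch using pytest markers and environment-driven settings; Dave didn't prescribe a specific tool, so treat the file and marker names here as my assumptions:

```python
# config.py -- one central place for environment settings (hypothetical names)
import os

BASE_URL = os.environ.get("BASE_URL", "http://example.com")
BROWSER = os.environ.get("BROWSER", "firefox")


# test_checkout.py -- pytest markers act as tags, so subsets can be run with
#   pytest -m wip            (only work-in-progress tests)
#   pytest -m "not slow"     (everything except the slow ones)
import pytest


@pytest.mark.wip
def test_new_coupon_field_is_present():
    ...


@pytest.mark.slow
@pytest.mark.critical
def test_full_checkout_flow():
    ...
```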

Scaling our efforts should be a long term goal, and there are a variety of ways we can do that. Cloud execution has become a very popular method; it's great for parallelizing tests and getting through large test runs in a short period of time, if that is a primary goal. One definitely valuable recommendation: enforce random execution of tests. By doing so, we can weed out hidden dependencies and find errors early and often :).
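One hedged way to enforce random execution, if you happen to be running your Selenium tests through pytest (my assumption, not something specified in the talk), is to shuffle the collected tests in a conftest.py hook and log the seed so a failing order can be reproduced:

```python
# conftest.py
import os
import random


def pytest_collection_modifyitems(session, config, items):
    # Shuffle the collected tests so hidden ordering dependencies surface early.
    # Honor TEST_SEED so a failing order can be reproduced later.
    seed = int(os.environ.get("TEST_SEED", random.randrange(2 ** 32)))
    print("Shuffling tests with seed %d" % seed)
    random.seed(seed)
    random.shuffle(items)
```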

Another idea is "code promotion". Commit code, then check to see if integration passes. If so, deploy to an automation server. If that works, deploy to where people can actually interact with the code. At each stage, if it breaks down, fix it there and test again before allowing the code to move forward (Jenkins does this quite well, I might add ;) ). Additionally, have a "systems check" in place so that we can minimize false positives (as well as near misses).

Great talk, glad to see you again, Dave. Well worth the trip. Look up Dave on Twitter at @TourDeDave and get into the loop for his newsletter, his book, and any of the other areas that Dave calls home. 
