Tuesday, February 22, 2011
San Francisco Selenium Meetup for 02/21/2011
Tonight’s session was dedicated specifically to Selenium problems and the issues people have run into while using it. The speakers for these events are usually volunteers from the community, developers and testers actively using the tools.
One of the cool things about this group is that the first thing they do is announce who’s hiring, since so many people at these meetups have the skills or are actively building them. Three companies announced they were looking, several with multiple open positions (what a wonderful thing to see; there seems to be a bit of a tech boom going on South of Market :) ).
Eric Allen from Sauce Labs covered some topics regarding Selenium RC’s proxy server, starting with the architecture to help users understand how everything talks to everything else. The talk went through details specific to capturing network traffic, which helps with debugging and even some performance monitoring. One of the cool little tools he talked about was trustAllSSLCertificates, which lets the Selenium proxy present certificates the browser will accept as "valid" so testers can exercise internal SSL setups. Note, this is specific to testing, and is not recommended at all for production environments (LOL!). If you have any questions for Eric about this, you can reach him at @ericpallen on Twitter.
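As a rough sketch of how that flag fits in: the Selenium RC server is started as a Java jar, with trustAllSSLCertificates passed as a server option. The jar filename and port below are assumptions, not anything from the talk.

```python
# Sketch: assembling (not running) the command line that starts a Selenium RC
# proxy server which trusts all SSL certificates. The -trustAllSSLCertificates
# flag is the server option from the talk; jar name and port are assumptions.

def rc_server_command(jar="selenium-server.jar", port=4444, trust_all_ssl=True):
    """Build the java command for launching the RC proxy server."""
    cmd = ["java", "-jar", jar, "-port", str(port)]
    if trust_all_ssl:
        # Lets the proxy present its own certificates so the browser accepts
        # HTTPS from internal test sites -- for testing only, never production.
        cmd.append("-trustAllSSLCertificates")
    return cmd

print(" ".join(rc_server_command()))
```

Tests would then point their RC client at that server's port as usual; only the server launch changes.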
Dan Fabulich from Redfin discussed an approach to testing files on disk with Selenium. The biggest issue with Selenium is "flakiness" (guess what? this was the #1 question from my development team). How do we get around the flakiness issue? Don't test the live site, test files on disk directly! Say what?! Yep, instead of going to a site directly, create a system that works on local files. Another benefit is consistent timing: with files loading from disk, you can reliably determine how long they will take to load. This approach also eliminates dependencies. If most of the testing goes through external services, every one of those tests fails when a service does, and having local files do the work removes all of those external issues. Running files on disk also lets you eliminate what are referred to as "dirty tests", failures that happen because a test cannot access a changed item somewhere external. There were lots of other points Dan explained that I'm just not fast enough to type up, but suffice it to say, this is an interesting idea as a supplement to testing, alongside unit tests and local integration tests. Clever stuff :)! Oh, and Redfin... they're hiring :)!!!
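To make the files-on-disk idea concrete, here's a minimal sketch of the mechanics: save a page fixture locally and hand the browser a file:// URL instead of a live address. The fixture layout and helper name are my own assumptions, not Redfin's actual code.

```python
# Sketch of "test files on disk": write an HTML fixture locally and build
# the file:// URL a Selenium test would open instead of hitting a live site.
# The helper name and fixture contents are illustrative assumptions.
import pathlib
import tempfile

def fixture_url(path):
    """Turn a local fixture file into a file:// URL Selenium can open."""
    return pathlib.Path(path).resolve().as_uri()

# A tiny HTML fixture on disk, standing in for a saved copy of a real page.
fixture_dir = pathlib.Path(tempfile.mkdtemp())
page = fixture_dir / "search_results.html"
page.write_text("<html><body><h1>Results</h1></body></html>")

url = fixture_url(page)
print(url)  # a file:// URL; a Selenium test would open(url) instead of a site
```

Because nothing leaves the machine, load times are consistent and no external outage can fail the test run.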
Lalitha Padubidri and Leena Ananthayya from Riverbed discussed some issues surrounding WebUI automation. They covered some challenges specific to Riverbed and some of the building blocks they use for automation. One of the interesting aspects of Riverbed's products is that they are not testing a web site, they are testing a network appliance. Ideally, tests should be reusable, scalable, and easy to learn, so they rely heavily on data-driven methods. Data abstraction lets many of the components be made into widgets that can be called as needed, and by using factory design patterns, the code can be shared among many products and scripts. With these techniques, they can expand 50 basic tests into 810 test runs across different browsers and products. They do have some gotchas they are working with and around, such as a lack of screenshot capture and Selenium Grid reliability on VMs, but they are making strides. Oh, and if you haven't already guessed... Riverbed... is hiring :)!
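The fan-out they described can be sketched as a simple test matrix: a factory builds one concrete case per combination of basic test, browser, and product. The names and counts below are illustrative assumptions, not Riverbed's actual setup.

```python
# Sketch of data-driven fan-out: a factory multiplies a small set of basic
# tests across browsers and products. All names/counts here are assumptions.
import itertools

basic_tests = [f"test_{i:02d}" for i in range(50)]   # 50 basic tests
browsers = ["firefox", "iexplore", "chrome"]
products = ["product_a", "product_b"]

def make_case(test, browser, product):
    """Factory: build one concrete, runnable test case per combination."""
    return {"test": test, "browser": browser, "product": product}

matrix = [make_case(t, b, p)
          for t, b, p in itertools.product(basic_tests, browsers, products)]
print(len(matrix))  # 50 tests x 3 browsers x 2 products = 300 cases
```

The same 50 scripts stay the only code to maintain; the matrix is just data, which is what makes the approach scale.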
Alois Reitbauer from dynaTrace discussed the idea of extending Selenium for performance testing (sorry, dynaTrace is not currently hiring, but hey, 75% of presenters hiring is a pretty awesome percentage :) ). The example shown was just a simple test that goes out to Google and searches for dynaTrace. By adding three additional environment variables, dynaTrace's agent can record performance data (granted, this is of limited benefit if you do not have dynaTrace's performance tools, but it's kinda' cool to see how Selenium can grab performance data and help automate performance criteria in tests). What's interesting to see from all of these presentations is that each group uses Selenium a little bit differently, and how Selenium can help disparate tools work better together, or even leverage components from proprietary tools and make them work better (in this case, the idea that Selenium can be used to automate other tools in addition to web tests is rather compelling).
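The general shape of that hookup looks something like the sketch below: set the agent's environment variables, then launch the Selenium test process with that environment. The variable names here are hypothetical placeholders; the real dynaTrace agent variables were not captured in my notes.

```python
# Sketch: gating a performance agent via environment variables before the
# Selenium test runs. AGENT_* names are hypothetical placeholders, NOT the
# actual dynaTrace variables.
import os

env = dict(os.environ)
env.update({
    "AGENT_ENABLED": "1",                        # hypothetical: turn agent on
    "AGENT_SERVER": "localhost",                 # hypothetical: collector host
    "AGENT_SESSION": "selenium-google-search",   # hypothetical: session tag
})

# The test process would then be launched with this environment, e.g. via
# subprocess.run([...], env=env), so the agent can record timings per test.
print(sorted(k for k in env if k.startswith("AGENT_")))
```

The point is less the specific variables than the pattern: the test harness stays unchanged, and the performance tooling rides along via the environment.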
Granted, each of these discussions was done in lightning-talk fashion, so there wasn't much time to go into great depth on each topic, but it's exciting to see what other groups are doing with Selenium and, yes, even openly discussing the challenges they have faced implementing it. Some of the methods are quite clever, and some are methods I never considered (gotta' admit, I'm actually really interested in trying a local file-on-disk approach to tests; maybe that will be the key to helping us lick some of the "flakiness" our developers have commented about. Is it a perfect solution? Probably not, but it's an interesting one ;) ).
Again, my thanks to everyone from the broader Bay Area Selenium community for making these events both memorable and accessible. While I can't make each and every one of them, I try to get to as many as I can. The community as a whole helps make these events possible, so my thanks to everyone who helps put them on. I always learn something new, and even if I don't always understand everything discussed, if I can walk away with one fresh idea to try, that's a success in my book.