
Wednesday, April 11, 2018

The Use and Abuse of Selenium - a 1 1/2 armed #LiveBlog from #STPCON Spring 2018

I realized that the last time I heard Simon speak was at Selenium Conf in San Francisco in 2011. I've followed him on Twitter since then, so I feel I'm pretty well versed in what he's been up to, but the title intrigued me so much, I knew I had to be here.

Selenium has come a long way since I first set my hands on it back in 2007.  During that time, I've become somewhat familiar with a few implementations and with bringing it up in a variety of environments. I've reviewed several books on the tools, and I've often wondered why I do what I do and if what I do with it makes any sense whatsoever.

Simon is explaining how a lot of environments are set up:

test <-> selenium server <-> grid <-> driver executable <-> browser 

The model itself is reasonable, but scaling it can be fraught with disappointment. More often than not, though, how we do it is the reason for that disappointment.  A few interesting tangents spawned here, but basically, I heard "Zalenium is a neat fork that works well with Docker" and I now know what I will be researching tonight after the Expo Reception when I get back to my evening accommodations.

Don't put your entire testing strategy in Selenium! Hmmm... I don't think we're quite that guilty, but I'll dare say we are close. Test the happy path. Test your application's actual implementation of its core workflows.

Avoid "Nero" testing: what's Nero testing? It's running EVERYTHING, ALL THE TIME. ALL THE TESTS ON ALL THE BROWSERS IN ALL THE CONFIGURATIONS! Simon says "stop it!" Yeah, I had to say that. Sorry, not sorry ;).

Beware of grotty data setup: First of all, I haven't heard that word since George Harrison in "A Hard Day's Night" so I love this comment already, but basically it comes down to being verbose about your variables, having data that is relevant to your test, and keeping things generally clean. Need an admin user? Great, put it in your data store. DO NOT automate the UI to create an Admin user!

Part of me is laughing because it's funny but part of me is laughing because I recognize so many things Simon is talking about and how easy it is to fall into these traps. I'm a little ashamed, to be honest, but I'm also comforted in realizing I'm not alone ;).

Tuesday, April 22, 2014

Selenium SF Live: An Evening With Dave Haeffner

It’s been about three years since I first met Dave. He was, at the time I met him, working with the Motley Fool, and was one of the people I connected with and recorded some fun (albeit rather noisy) audio for what I had hoped would be a podcast from the Selenium Conference in 2011. Alas, the audio wasn’t as usable as I had hoped for a releasable podcast, but I remembered well the conversation, specifically Dave’s goal to see if he could, at some point, find a way to make Selenium less cryptic and more sturdy than what had been presented before.

Three years later, Dave stands as the author of “The Selenium Guidebook” and tonight a couple of different Meet-up groups (San Francisco Selenium Users Group and the San Francisco Automated Testers)  are sharing the opportunity to bring Dave in to speak. I’ve been a subscriber to Dave’s Elemental Selenium newsletter for the past couple of years, and I’ve enjoyed seeing how he can break down the issues and discuss them in a way that is not too overbearingly technical, and give the reader a new idea and approach they might not have considered before. I’m looking forward to seeing where Dave's head is at now on these topics.

Here's some details about Dave for those of you who are not familiar with him:

Dave Haeffner is the author of Elemental Selenium (a free, once weekly Selenium tip newsletter that is read by hundreds of testing professionals) as well as a new book, The Selenium Guidebook. He is also the creator and maintainer of ChemistryKit (an open-source Selenium framework). He has helped numerous companies successfully implement automated acceptance testing; including The Motley Fool, ManTech International, Sittercity, and Animoto. He is a founder and co-organizer of the Selenium Hangout and has spoken at numerous conferences and meetups about acceptance testing.


This will be a live blog of Dave’s talk, so as always, I ask your indulgence with what gets posted between the time I start this and the time I finish, and then allow me a little time to clean up and organize the thoughts after a little time and space. If you like your information raw and unfiltered, well, you’ll be in luck. If not, I suggest waiting until tomorrow ;).

---

The ultimate goal, according to Dave, is to make tests that are business valuable, and then package those business-valuable tests in an automated framework. This then frees the tester to look for more business-valuable tests with their own eyes and senses. Rinse, lather, repeat.

The first and most important thing to focus on is to define a proper testing strategy, and after that's been defined, consider the programming language that it will be written in. It may or may not make sense to use the same language as the app, but who will own the tests? Who will own the framework? If it's the programmers, sure, use the same language. If the testers will own it, then it may make sense to pick a language the test team is comfortable with, even if it isn't the same as the programming team's choice.

Writing tests is important, but even more important is writing tests well. Atomic, autonomous tests are much better than long, meandering tests that cross states and boundaries (they have their uses, but generally, they are harder to maintain). Make your tests descriptive, and make your tests in small batches. If you're not using source control, start NOW!!!

Selenium fundamentals help with a number of things. One of the best is that it mimics user actions, and does so with just a few common actions. Using locators, it can find the items that it needs and confirm their presence, or determine what to do next based on their existence/non-existence. Class and ID are the most helpful locators over the long term. CSS and XPath may be needed from time to time, but if they're more "rule" than exception, perhaps a chat with the programming team is in order ;). Dave also makes the case that, at least as of today, the CSS vs. XPath debate has effectively evened out. Which approach you use depends more on how the page is set up and laid out than on one approach being inherently better than the other.
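
To make the locator discussion concrete, here's a minimal sketch with the Java bindings against Dave's practice site, the-internet (the locator values are my guesses, not from the talk); all three find the same field, but the ID is the one to reach for first:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.firefox.FirefoxDriver;

public class LocatorExamples {
    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        driver.get("http://the-internet.herokuapp.com/login");

        // Preferred: a stable ID provided by the development team
        WebElement byId = driver.findElement(By.id("username"));

        // CSS selector: fine when a good ID or class isn't available
        WebElement byCss = driver.findElement(By.cssSelector("form#login input[name='username']"));

        // XPath: works, but tends to be the most brittle of the three
        WebElement byXPath = driver.findElement(By.xpath("//form[@id='login']//input[@name='username']"));

        driver.quit();
    }
}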

Get in the habit of using tools like FirePath or FireFinder to help you visualize where your locators are, as well as to look at the ways you can interact with the locators on the page (click, clear, send_keys, etc.). Additionally, we'd want to create our tests in a manner that will perform the steps we care about, and just those steps, where possible. If we want to test a login script, rather than make a big monolithic test that looks at a bunch of login attempts, make atomic and unique tests for each potential test case. Make the test fail in one of its steps, as well as make sure it passes. Using a Page Object approach can help minimize the maintenance needed when pages are changed. Instead of having to change multiple tests, focus on taking the most critical pieces needed, and minimize where those items are repeated.

Page Object models allow the user to tie Selenium commands to the page objects, but even there, there are a number of places where Selenium can cause issues (going from Selenium RC to Selenium WebDriver made some fundamental changes in how interactions are handled). By defining a "base page object" hierarchy, we allow for a layer of abstraction so that changes to the Selenium driver minimize the need to change multiple page object files.
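
As a rough sketch of what that hierarchy can look like (class names, locators, and the practice URL are mine, not Dave's), the tests talk to page objects, the page objects talk to a base page, and only the base page touches WebDriver directly:

// BasePage.java -- the only place that calls the Selenium API directly
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

public class BasePage {
    protected WebDriver driver;

    public BasePage(WebDriver driver) {
        this.driver = driver;
    }

    protected void visit(String url)             { driver.get(url); }
    protected void click(By locator)             { driver.findElement(locator).click(); }
    protected void type(By locator, String text) { driver.findElement(locator).sendKeys(text); }
    protected boolean isDisplayed(By locator)    { return driver.findElement(locator).isDisplayed(); }
}

// LoginPage.java -- describes one page in terms of the base page's verbs
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

public class LoginPage extends BasePage {
    private static final By USERNAME = By.id("username");
    private static final By PASSWORD = By.id("password");
    private static final By SUBMIT   = By.cssSelector("button[type='submit']");
    private static final By FLASH    = By.id("flash");

    public LoginPage(WebDriver driver) {
        super(driver);
        visit("http://the-internet.herokuapp.com/login");
    }

    public void logInAs(String username, String password) {
        type(USERNAME, username);
        type(PASSWORD, password);
        click(SUBMIT);
    }

    public boolean successMessageShown() {
        return isDisplayed(FLASH);
    }
}

If the driver's behavior changes between releases, the fix lives in BasePage rather than in every page object and test.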

Explicit waits help with time-bound problems like page loading or network latency. Defining a "wait for" option is more helpful, as well as more efficient: instead of hard coding a 10 second delay, the wait sets a maximum time limit but moves on as soon as the item needed actually appears.
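
In the Java bindings that looks something like this (the locator and the ten second cap are placeholders):

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class WaitExample {
    public static WebElement waitForResults(WebDriver driver) {
        // A hard-coded Thread.sleep(10000) always burns the full ten seconds.
        // An explicit wait caps the wait at ten seconds but returns the moment
        // the element becomes visible.
        WebDriverWait wait = new WebDriverWait(driver, 10);
        return wait.until(ExpectedConditions.visibilityOfElementLocated(By.id("search-results")));
    }
}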

If you want to build your own framework, remember the following to help make your framework less brittle and more robust:
  • Central setup and teardown
  • Central folder structure
  • well defined config files
  • Tagging (test packs, subsets of tests: wip, critical, component name, slow tests, story groupings) - see the quick JUnit sketch after this list
  • create a reporting mechanism (or borrow one that works for you, have it be human readable and summable, as well as "robot ready" so that it can be crunched and aggregated/analyzed)
  • wrap it all up so that it can be plugged into a CI server.
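
On the tagging point, if you're in JUnit land, one way I can picture doing it is with JUnit's Categories mechanism; this is just an illustrative sketch (the tag names and classes are made up, not from Dave's slides), and each top-level type would live in its own file:

import org.junit.Test;
import org.junit.experimental.categories.Categories;
import org.junit.experimental.categories.Categories.IncludeCategory;
import org.junit.experimental.categories.Category;
import org.junit.runner.RunWith;
import org.junit.runners.Suite.SuiteClasses;

// Marker interfaces act as the "tags"
public interface Critical {}
public interface Slow {}

public class LoginTests {
    @Test
    @Category(Critical.class)
    public void validLoginSucceeds() { /* ... */ }

    @Test
    @Category({Critical.class, Slow.class})
    public void lockedOutUserSeesError() { /* ... */ }
}

// A "test pack": run only the tests tagged Critical
@RunWith(Categories.class)
@IncludeCategory(Critical.class)
@SuiteClasses({LoginTests.class})
public class CriticalPack {}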

Scaling our efforts should be a long term goal, and there are a variety of ways that we can do that. Cloud execution has become a very popular method. It's great for parallelization of tests and running large test runs in a short period of time if that is a primary goal. One definitely valuable recommendation: enforce random execution of tests. By doing so, we can weed out hidden dependencies and find errors early and often :).

Another idea is "code promotion". Commit code, check to see if integration passes. If so, deploy to an automation server. If that works, deploy to where people can actually interact with the code. At each stage, if it breaks down, fix there and test again before allowing to move forward (Jenkins does this quite well, I might add ;) ). Additionally, have a "systems check" in place, so that we can minimize false positives (as well as near misses).

Great talk, glad to see you again, Dave. Well worth the trip. Look up Dave on Twitter at @TourDeDave and get into the loop for his newsletter, his book, and any of the other areas that Dave calls home. 

Tuesday, October 1, 2013

Parallelization and JavaScript: SF Selenium Meetup, Oct. 2013

Hey all, TESTHEAD is live at the San Francisco Selenium Meetup, and looking forward to another night of pizza, ginger beer and talk about testing in Parallel and testing JavaScript. What are YOU doing tonight ;)?


The following are from the Meetup site, so consider this pre-amble. You'll definitely get plenty of my ramble in a bit.


Optimization through Parallelization (David Borin)

Dramatically improve the scale and speed of testing through the use of parallelization techniques. This includes writing test modules that can be run independently, doing as much preloading of test data as possible, and using parallel test runners.

Cross Browser Compatibility Testing JavaScript (Alan Parkinson)

People often run their WebDriver based functional test suites across multiple browser combinations with the aim to test cross-browser compatibility. Functional tests provide poor coverage of JavaScript and long test execution times. I will demonstrate how you can reuse your existing JavaScript unit tests with Karma and utilizing your Selenium WebDriver test infrastructure for providing fast cross-browser JavaScript compatibility tests. If you're not already doing JavaScript TDD then the additional benefits highlighted in this talk will encourage you!

My thanks to Lithium Technologies for providing the space and the food. Speaking of food, I'm going to go get some now. Be back in a little bit.


Alan Parkinson is first out the gate with a talk about how to parallelize tests for JavaScript. The app that Alan's company creates is, by his own admission, mostly JavaScript. A key truth that Alan wants to get out to everyone... You don't need to run your Selenium test suite with every browser (*). What?! Heresy, you say! Well, what's the reason we are testing against all the browsers? Is it rendering compatibility? Is it compatibility with the DOM? Is it to get lots of JavaScript code coverage? According to Alan, no; about two thirds of their code wasn't even being exercised. So how do we do this? According to Alan, the answer is to create unit tests for your JavaScript... and run *THOSE* tests cross browser. Hmmm.... OK, I can see the logic to that.

Alan makes an interesting point when he describes the options between functional tests and JavaScript unit tests. An average functional test suite takes about five minutes to run. The JS unit tests... take about 11 seconds. Alan did mention that he had a qualifier, and here it is... not everything is going to be picked up by unit tests. There are far too many JavaScript files, and there are actual behavior aspects that unit tests will not be able to pick up. So some tests will still have to be run cross-browser, but the point is, not as many as we might think.

So what test runner do they use? Alan talked a bit about Karma, the "Spectacular Test Runner for JavaScript". There are some challenges, to be sure, so to solve some of them, they made a Karma WebDriver launcher, which allowed them to leverage their Selenium WebDriver infrastructure. Alan took a time out to run through Karma and show how they have set up their framework and which options to call. Granted, it's a small set of tests for the demo, but no doubt about it, it is remarkably fast. Running the tests in parallel on multiple browsers becomes a quick option compared to native Selenium WebDriver functional tests.

Karma provides code coverage, but to do that, you need to instrument your JavaScript (in their example, they do this with Istanbul). They can also generate code coverage reporting formats and the ability to come in and see what is actually covered and where.

Overall, the main takeaway that Alan wanted to deliver was that functional tests are an expensive way to get browser compatibility coverage. By comparison, JavaScript unit tests can provide much more coverage, much quicker, and the beauty of it is... the programmers do the lion's share of the work with this model, since they would be doing it anyway. It's a clever approach to cross-browser testing (and to be clear, they do still run a number of functional tests cross-browser, but the total number of tests is significantly smaller, since they prefer letting the unit test framework handle the code coverage and atomic level functionality). Definitely an idea I want to explore more :).
  
OK, next talk is up, and now it's time to talk a bit more about parallelization... imagine being able to run ten thousand test modules in under five minutes. The simple answer is that parallelization is necessary to get those numbers. Spin up a battery of virtual machines, spread the tests equally over the machine battery, and get your CI server to do all the work (living in a world that uses Jenkins and AWS, I can confirm that this method is real and does work).

This approach does require that the effort be made to make tests as atomic as possible, and that dependencies are, if not completely eliminated, at least minimized as much as possible. Also, you don't need hundreds of machines. With 4 servers, you can create a Jenkins master, a slave with 10 executors, Selenium Grid hub and selenium grid nodes with 15 browsers. I'll leave it to you to do the multiplication on that. The point is, with that setup, you can do a fair amount of parallel testing.



Several great takeaways for the talk were encapsulated in the "mantras" slide, so rather than repeat them, hey, it's right here :):






Tools for parallelization?



Wow, that went quickly! Glad to have the opportunity to come out and play once again. Lots of interesting ideas, and I definitely want to take a closer look at some of the ideas from these presentations. If I had to pick any one thing to consider, it would be to see how many tests are "overloaded" or have a lot of steps. Could those tests be pulled apart? Would it make sense to run them separately (and subsequently, more in parallel)?

And with that, I bid all a good night. Happy testing :)!!!

Wednesday, September 18, 2013

Adventures in "Epic Fail": Live From Wikimedia Foundation

Ahhh, there is nothing quite like the feeling of coming off a CalTrain car at the San Francisco terminus, walking onto the train platform, breathing that cool air, sighing, and saying "yeah, I remember this!"


Palo Alto is nice during the day, a great place to walk around in shirt sleeves, but there's something really sweet about the cool that eastern San Francisco gives you as you wind your way through SoMa.


OK, enough of the travelogue... I'm here to talk about something a bit more fun. Well, more fun if you are a testing geek. Today, we are going to have an adventure. We're going to discuss failure. Specifically, failure as relates to Selenium. What do those failure messages mean? Is our test flaky? Did I forget to set or do something? Or... gasp... maybe we found something "interesting"! The bigger question is, how can we tell?

Tonight, Chris McMahon and Zeljko Filipin are going to talk about some of the interesting failures and issues that can be found in their (meaning Wikimedia's) public test environments, and what those obscure messages might actually be telling us.

I'll be live blogging, so if you want to see my take on this topic, please feel free to follow along. If you'd like a more authoritative real time experience, well, go here ;) :

https://www.mediawiki.org/wiki/Meetings/2013-09-18


I'll be back with something substantive around 7:00 p.m. Until then, I have pizza and Fresca to consume (yeah, they had one can of Fresca... how cool is that?!).

---
We started with a public service announcement. For anyone interested in QA related topics around Wikimedia, please go to lists.wikimedia.org (QA), and if you like some of the topics covered tonight, consider joining in on the conversations.


Chris got the ball rolling immediately and we started looking at the fact that browser tests are fundamentally different compared to unit tests. While unit tests deal with small components, where we can get right to the issues where components fail, with browser tests, we could have any variety of reasons why tests are failing.


Chris started out by telling us a bit about the environment that Wikimedia uses to do testing.

While the diagram is on the board, it might be tough to see, so here's a quick synopsis: GIT and Gerrit are called by Jenkins for each build. Tests are constructed using Ruby and Selenium (and additional components such as Cucumber and RSpec). Test environments are spun up on Sauce Labs, which in turn spin up a variety of browsers (Firefox, Chrome, IE, etc.) which then point to a variety of machines running live code for test purposes (whew, say that ten times fast ;) ).


The problem with analyzing browser test failures is trying to figure out what the root cause of failures actually is. Are the issues with the system? Are the issues related to timeouts? Are there actual and legitimate issues/bugs to be seen in all this?

System Failures

Chris brought up an example of what was considered a "devastating failure", a build with 30 errors. What is going on?! Jenkins is quite helpful if you dig in and look at the Console Output, and the upstream/downstream processes. By tracing the tests, and looking at the screen captures taken when tests failed, in this case there was a very simple reason... the lab server was just not up and running. D'oh!!! On the bright side, the failure and the output make clear what the issue is. On the down side, Chris lamented that, logically, it would have been way better to have tests that could run earlier in the process to confirm whether a key server was not up and running. Ah well, something to look forward to making, I guess :).

Another build, another set of failures... what could we be looking at this time? In this case, they were testing against their mobile applications. The error returned "unable to pick a platform to run". Whaaah?!!!  Even better, what do you do when the build is red, but the test results report no failures? Here's where the Console Output is invaluable. Scrolling down to the bottom, the answer comes down to... execute shell returned a non-zero value. In other words, everything worked flawlessly, but that last command, for whatever reason, did not complete correctly. Yeah, I feel for them, I've seen something similar quite a few times. All of these are examples of "system problems", but the good news is, all of these issues can be analyzed via Jenkins or your choice of CI server.


Network Failures


Another fact of life, and one that can really wreak havoc on automated test runs are tests that require a lot of network interaction to make happen. The curse of a tester, and the most common (in my experience) challenge I face, is the network timeout. It's aggravating mainly because it  makes almost all tests susceptible to random failures that, try as we might, never replicate. It's frustrating at times to run tests and see red builds, go run the very same tests, and see everything work. Again, while it's annoying, it's something that we can, again, easily diagnose and view.


Application Failures


Sauce has an intriguing feature that allows tests to be recorded. You can go back and not only see the failed line in a log, but you can also see the actual failed run in real time. That's a valuable service and a nice thing to review to prove that your test can find interesting changes (the example that Chris displayed was actually an intentional change that hadn't filtered down to the test to reflect the new code,  but the fact that the test caught the error and had a play by play to review it was quite cool).


There's an interesting debate about what to do when tests are 100% successful. We're talking about the ones that NEVER fail. They are rock solid, they are totally and completely, without fail, passing... Chris says that these tests are probably good candidates to be deleted. Huh? Why would he say that?


In Chris' view, typically, the error that would cause a test like that to fail would not be something that would provide legitimate information. Because it takes such a vastly strange situation to cause the test to fail, and under normal usage the test never, ever fails, those tests are likely to be of very little value and provide little in the way of new or interesting information.  To take a slightly contrarian view... a never failing test may mean that we are getting false positives or near misses. IOW, a perpetually passing test isn't what I would consider "non-valuable", but instead should be a red flag that, maybe, the test isn't failing because we have set it up so that it can't fail. Having seen those over the years, those tests are the ones that worry me the most. I'd suggest not deleting a never failing test, but exploring whether we can re-code it to make sure it can fail, as well as pass.

Another key point... "every failed automated browser test is a perfect opportunity to develop a charter for exploratory testing". Many of the examples pointed to in this section are related to the "in beta" Visual Editor, a feature Wikimedia is quite giddy about seeing get ready to go out into the wild. Additionally, a browser test failure may not just be an indication of an exploratory test charter, it might also be an indication of an out of date development process that time has just caught up to. Chris showed an example of a form submission error that demonstrated how an API had changed, and how that change had been caught minutes into the testing.

So what's the key takeaway from this evening? There's a lot of infrastructure that can help us to determine what's really going on with our tests. We have a variety of classes of issues, many of which are out of our control (environmental, system, network, etc.) but there are a number of application errors that can be examined and traced down to actual changes/issues with the application itself. Getting experience to see which are which, and getting better at telling them apart, are key to helping us zero in on the interesting problems and, quite possibly, genuine bugs.

My thanks to Chris, Zeljko and Wikimedia for an interesting chat, and a good review of what we can use to help interpret the results (and yeah, Sauce Labs, I have to admit, that ability to record the tests and review each failure... that's slick :).

---
Thanks for joining me tonight. Time to pack up and head home. Look forward to seeing you all at another event soon.

Thursday, July 18, 2013

Live Blog: UI testing in Java - How, When and Why

Another day, another San Francisco Selenium Meetup :).


Tonight's venue is Walmart Labs, and as a special treat, I finally get to attend a MeetUp close to home (Walmart Labs is right here in my home town of San Bruno, CA).

Mathilde Lemee is the featured speaker this evening, and she is the creator of FluentLenium, which is an open source Java wrapper for Selenium.


This will be presented in live blog format, which means, as usual, it will be updated regularly throughout the evening. Click refresh to get the most recent updates. If you see "End of Updates" at the bottom of the post, you'll know the live blog has concluded. I'll be picking up with the real topic in just a bit.


UI testing in Java - How, When and Why



General Abstract from Meetup.com
Would you like to automate your acceptance tests against multiple browsers and multiple servers? How about make your UI tests run faster? And remove the boilerplate on them? Browser automation tools to the rescue! In this session, I'll share how you can gain back development time by using FluentLenium, an open source Java wrapper around the Selenium API. We'll take a brief look at what is new in the UI testing javascript ecosystem (such as GhostDriver & CasperJS), and then I’ll share with you some rules for writing better UI tests.

Mathilde Lemee Bio
After being a freelancer for many years, Mathilde Lemee joined Software AG (Terracotta) in 2012 as an R&D engineer in Java, working on Ehcache and BigMemory. She co-founded the Duchess France chapter, an organization to connect and give visibility to women in Java technology, in 2010, where she has organized a lot of events (Hadoop, Mahout, Mockito, Cache ...). She blogs on http://www.java-freelance.fr about performance, best practices and testing. She is a regular open source committer and the creator of FluentLenium, a wrapper around Selenium that provides a Fluent API and is used in other open source projects, like Play! 2. She is a regular speaker at various Java conferences in Europe (Paris JUG, Devoxx France, Mix-it, Softshake) and she's also a mobile educational game publisher (http://www.aetys.fr) on iPad/iPhone and Android.



After a bit of a pitch from Walmart Labs about their current offerings and job openings (yep, they're looking for people on both the coding and testing sides, if anyone's looking), we got under way.

Mathilde started the main presentation with some quick facts we all know about Selenium and WebDriver features, as well as a quick run through of the various API's for language support. A quick poll of the room showed the majority of users were using the Java language bindings and API with Selenium (not really a surprise, considering this is a talk about UI testing in Java ;) ). Additionally, a number of concerns and issues around scripting to JavaScript and jQuery showed a need, in Mathilde's mind, to make an abstraction layer, and by extension, a wrapper for Selenium to help with some of the inconsistencies she highlighted. That wrapper is called FluentLenium, which is specifically the point of the talk (oh, and Ponycorns, lots of Ponycorns... my Sidereel compatriots would probably appreciate that ;) ).



A number of examples are being shown to demonstrate the syntax and ways that FluentLenium can add options such as searching, filtering, regular expressions, etc. The idea is that native Selenium/WebDriver is always available, but FluentLenium adds a number of additional commands and structured statements that are meant to express intent more naturally.

One of the nice features described is the fill option, which is designed to work directly with forms. While the statement isn't much shorter, the actual syntax is clean and easy to read. It's a nice step towards self documenting code, at least as presented. Yes, it's still Java, but it's cleaner and nicer than a lot of the Java code I've seen (or written ;)... pictures will be posted in a bit).
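
For a flavor of what the fill syntax reads like, here's a minimal sketch modeled on the FluentLenium getting-started examples of the era (the target page, selectors, and assertion library are my assumptions, and exact package or method names may have shifted between versions):

import org.fluentlenium.adapter.FluentTest;
import org.junit.Test;
import static org.fest.assertions.Assertions.assertThat;

public class SearchTest extends FluentTest {
    @Test
    public void searching_updates_the_page_title() {
        goTo("http://www.bing.com");
        fill("#sb_form_q").with("FluentLenium");   // the form-friendly "fill" syntax
        submit("#sb_form_go");
        assertThat(title()).contains("FluentLenium");
    }
}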


Mathilde demonstrated how she has made a wrapper to support page object patterns as a way to tame over-exuberant Java expressions. Again, I'll give her credit, the code is easy to follow as written. It feels deceptively easy, but more to the point, it kind of reminds me of Capybara as it relates to Selenium. Not an exact parallel, but hey, I calls 'em as I sees 'em.



An interesting method that Mathilde demonstrated is called await. This is a nice and compact way to ask an app to wait for an element to appear. Again, I give props for the ease of reading, and the intuitive nature of the statements.

There's a nice addition called @SharedDriver, which allows for the WebDriver object to be launched just once and shared among tests. That alone can be a huge time saver. Shared Driver can also be allocated per class or per method. It's up to you.


For those who are used to using Cucumber and Gherkin, yes, FluentLenium supports it. FluentLenium allows for creating step definitions using the syntax and methods. Here the similarity with Capybara is even more apparent.

Overall, this looks like an interesting tool. I'd be interested in seeing some more examples and playing around with it some. Having said that, I do worry that an over-dependence on abstraction tools, no matter how pretty or easy to read, has the potential to add more problems than it solves. First, FluentLenium is based on Java, so you have to use the Java bindings and test structure to use it. Second, Java has enough syntactical weirdness at the ground level. For me, I know there's plenty of aspects of the native Java language bindings I need to get more proficient with. After I exhaust those options, or I find myself wishing I could do other things beyond what the base Java bindings provide, then I could see this being both valuable and interesting.



End of Session. 

Thursday, April 11, 2013

It Works in (X, Y, Z)... Live From SF, it's Selenium Meetup

I had a feeling today would be eventful. Note to self, do not drive down to Palo Alto when there is a meetup in San Francisco. It took me twice as long to make the trek as it would if I had just caught the 5:06 p.m. baby bullet to Millbrae. Still, I made it, with a little time to spare (food was a little delayed which worked immensely to my advantage, and much thanks to La Méditerranée for providing essential victuals and to Lookout Mobile Security for providing the venue).

One of the great things to hear in the introductions is that we spent close to ten minutes hearing from multiple companies with a simple and wonderful message... "we're hiring!". Trust me, after the 2000's, that line never gets old ;).

Tonight's topic is:


Works in (X, Y, Z): Parallel Combination Testing With Selenium, JUnit, and Sauce

David Drake, who is a lead SDET at Dynacron Group will be giving a talk about running tests in parallel using JUnit and Selenium, and utilizing the massive parallelization that Sauce Labs can provide.

Info about David:

David Drake is a lead SDET with nine years of experience in testing and automation, currently working at Dynacron Group, a Seattle consulting company. He maintains their parallel-webtest library for driving tests through Sauce Labs, and spends most of his time designing and using frameworks for performance and functional regression testing.

Selenium has a series of similar issues. Lots of variables to deal with...

Operating Systems
Browsers
Devices (types and dimensions)
Languages (Localization)

Each one of these would make for a large suite of tests; to cover every possible permutation... close to impossible, at least serially. Still, with a bit of effort, they were able to put together a bunch of CSV files to store all of the parameters and the various combinations to test. Messy, hard to read, ugly... yeah, they came to the same conclusion.

They shifted to Typesafe Config as a later step to allow them to create more complex structures that they could map to objects. A big problem came to the fore as they realized that multi-dimensional arrays with linked dependencies made for really complicated and huge test suites.

To get the system to work they would read in the parameters, create a unique config combination based on a dense multi-dimensional array, convert the resulting configs into JSON, and then run the individual tests based on the config provided. The tests themselves are relatively lean, but the configuration options are huge, and as such, the number of test runs climbs into the hundreds of thousands.
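
This isn't Dynacron's actual code, but the shape of the approach, stripped way down, looks something like the following with JUnit's Parameterized runner and a remote Sauce/Grid endpoint (the URL, credentials, and combination rows are placeholders):

import java.net.URL;
import java.util.Arrays;
import java.util.Collection;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

@RunWith(Parameterized.class)
public class CrossBrowserTest {
    private final String browser;
    private final String version;
    private final String platform;
    private WebDriver driver;

    public CrossBrowserTest(String browser, String version, String platform) {
        this.browser = browser;
        this.version = version;
        this.platform = platform;
    }

    // Each row is one combination; in practice these would be generated from config files
    @Parameters
    public static Collection<Object[]> combinations() {
        return Arrays.asList(new Object[][] {
            { "firefox", "19", "Windows 7" },
            { "internet explorer", "9", "Windows 7" },
            { "chrome", "", "OS X 10.8" },
        });
    }

    @Before
    public void setUp() throws Exception {
        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setCapability("browserName", browser);
        caps.setCapability("version", version);
        caps.setCapability("platform", platform);
        driver = new RemoteWebDriver(
            new URL("http://USERNAME:ACCESS_KEY@ondemand.saucelabs.com:80/wd/hub"), caps);
    }

    @Test
    public void homePageHasExpectedTitle() {
        driver.get("http://www.example.com");
        // assertion elided; the point here is the combination plumbing
    }

    @After
    public void tearDown() {
        driver.quit();
    }
}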

Parallelizing tests is really not hard, especially if you have something like a Grid farm or a Sauce array to work with. However, getting the tests to actually run right... that proved to be the bigger challenge. Yes, you can run multiple tests in parallel on multiple machines, but even in these isolated examples, they still saw odd behavior that wouldn't replicate when the tests were run serially.

Another big challenge with lots of tests running in parallel... how do you keep track of all of them?! The logic that they use is to make sure that descriptive method naming is used, as well as descriptive logging, so that even by looking at the log output of the code, you know what you are running with a minimum of mental parsing time. Had to smile a bit at this because it's very similar to what we are currently doing at Socialtext :).

One aspect to be aware of when talking about JUnit as the driver is that these tests are mostly organized at the class and unit level (makes sense considering, as its name implies, JUnit is a unit testing framework). Thus, many of these parallelization techniques happen within the same class. The real benefit comes when tests are required to be performed over many classes. With multiple classes, plus multiple configuration options, you get into an exponential increase of tests, but with the Sauce infrastructure, you can spin up as many servers as you need (which I'll say right now, doing the math, that kinda' freaks me out!).

Areas that they are looking to improve on, and these are still challenges they are trying to deal with, are ways to remember the tested combinations, how to display the tested combinations, and how to parameterize the test runs in a way that utilizes JUnit's framework rather than custom code.

Tools discussed were Parallel Webtest, Typesafe Config, and Sauce method for seeding parallelization runs.

Something that I have to say put a smile on my face tonight. Often, when I hear about a variety of frameworks and the minutiae that goes into them, I suffer from MEGO when hearing some of these presentations. The past two months have actually given me enough information and detail that I was able to follow along with everything discussed. I'll not pretend I understood every line and statement, but I was able to follow along with at least 90% of it, and that feels really cool! It's also been a great boon that, the more often I come out, the more I retain and feel that I can actually contribute to the conversations. If someone were to have told me six months ago that I'd be actively writing and looking for tips about how to optimize JUnit to write tests, I would have said you were nuts. I'll consider that a change for the better ;).



Friday, March 22, 2013

PRACTICUM: Selenium 2 Testing Tools Beginner's Guide: Advanced User Interactions


This is the next installment (in progress) of my PRACTICUM series. This particular grouping is going through David Burns' "Selenium 2 Testing Tools Beginner's Guide".

Note: PRACTICUM is a continuing series in what I refer to as a somewhat "Live Blog" format. Sections may vary in time to complete. Some may go fast. Some may take much more time to get through. Updates should be daily, they may be more frequent. Feel free to click refresh to get the latest version at any given time. If you see "End of Section" at the bottom of the post, you will know that this entry is finished :).


Update: This project was put down back in April, with the intention to get back to it after I figured out why the examples weren't working. I didn't intend for the process to take 3 MONTHS to get back to, but I believe in finishing what I start, whenever I can. Therefore, all of the previous examples from earlier posts have been updated and re-examined. Thanks for the patience, and here's hoping you all continue the journey with me once again :).



Chapter 9: Advanced User Interactions


Much of what we have worked with thus far would be plenty if the web of yore were the one we were working with. Ahh yes, those days when things were simple. Three tiered architecture, some simple front end details, links, buttons, and a few CGI scripts. Just find the elements, click the buttons or links, and move to the next item. So simple... and so 1994!

The web doesn't really look like that any longer. Well, many places actually still do, but the places we most like to visit certainly don't. Today's Twitter and Facebook level web apps are a lot more dynamic. It's not just typing text and clicking buttons. Now we can move stuff, resize things, pull and stretch and add components whenever we feel like it, and who knows what will happen next.

For these types of environments, this next chapter focuses on how the user can leverage more advanced options in the API to have this level of interaction. These interactions require a more in depth coverage of not just keyboard combinations, but actual placement and movement of the mouse, and how to track it and implement the appropriate actions.

Creating an Action Chain

One of the things that we are able to do with Selenium is create sequences of events that we can bind together. These sequences are referred to as "action chains" and they allow us to make a variety of movements and treat them as one action. In the book, David presents example code to show us how to drag and drop an element.
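
As a rough approximation of what that kind of action chain looks like with the Java bindings (the page URL and element IDs below are placeholders, not David's actual example), here's a minimal sketch:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.interactions.Actions;

public class DragAndDropExample {
    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        driver.get("http://example.com/drag-and-drop");   // placeholder page

        WebElement source = driver.findElement(By.id("draggable"));   // placeholder IDs
        WebElement target = driver.findElement(By.id("droppable"));

        // Build the chain (press, move, release), then perform it as a single action
        Actions builder = new Actions(driver);
        builder.clickAndHold(source)
               .moveToElement(target)
               .release(target)
               .build()
               .perform();

        driver.quit();
    }
}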


If we run the test, we should see two things happen. The first is we should see a screen like the first one below:



and then see a second screen that looks like this:



I say "should" because, well, that's not what I'm seeing. Instead, my browser looks like this:

And my output for the test looks like this:



Now, if I have learned anything from these past several weeks, it's not to jump to conclusions that something isn't working. It's very possible that the culprit is the version of the Java JDK I'm using. My plan is to dig in and see if there's something I'm unaware of. For the Java pros out there, if I'm doing something dumb, please, let me know so that I can fix it :).

Moving an Element by an Offset Value

So, the builder option that allows us to move elements also lets us move items by specific amounts (or so the book says). With this in mind, I tried to create the example as described, using JUnit 4.11 format. Instead of dragging and dropping a box from one element to another, this time it calls for moving by an offset amount on the X and Y axes.
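
For reference, the offset-based flavor of the same builder, as I understand the chapter, looks roughly like this (the element ID and the offset values are placeholders):

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.interactions.Actions;

public class MoveByOffsetExample {
    public static void moveBox(WebDriver driver) {
        WebElement box = driver.findElement(By.id("draggable"));   // placeholder ID

        // Drag the element 100 pixels to the right and 50 pixels down
        new Actions(driver)
            .dragAndDropBy(box, 100, 50)
            .build()
            .perform();
    }
}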



Seems straightforward enough, but I am still seeing the same issues with this example as with the last one.



Again, Java people out there, is it just me, or is there something not happening here that should be happening? An enquiring mind wants to know.

More to come. Stay tuned!

Michael Larsen, TESTHEAD

Tuesday, March 12, 2013

ExploreIt Book Launch Party: An SF Selenium Meetup

This is a day that I've been waiting to see happen for quite awhile. For those who may remember, I started reading and covering Explore It! during the summer of 2012. Through several beta revisions, I've seen the book grow and develop, and today it is now officially available in print form as well as e-book format. This isn't going to be a book review... I already wrote that ;). Nope, in this case, I'm here to discuss what Elisabeth will be discussing, which will be "The Top 5 Reasons Your Team Needs Exploratory Testing".

For those not familiar with Elisabeth and her background, Elisabeth is a tester, programmer and agile advocate. She believes strongly that Exploratory Testing is more necessary now than it was five or ten years ago.  One of the points that she puts forth early on is that "Users do not (and often will not) complacently follow the path you set for them". In short, "you just can't trust your users to use your system the way that you hope they will use it". They might, but frankly, users are often unintentionally creative. We think we control the file system. We think we control the database. We even think we can control the user... we don't. In fact, the areas we really do have control over is very small. Thus, it's critical that we actually explore these areas that we can't trust.

Another point Elisabeth makes is that "no one can reason through all of the possible parameters of a change".  Also, she makes the point that "sometimes things just don't connect the way you think they do". Additionally, the biggest goal and reason why exploratory testing is more relevant today than ever is that, if teams actually use these exploration tools, we will "increase the odds that if there is a problem, you will see it".

Short and sweet, direct and to the point, and now it's time to draw for books (ten of them, in fact). Must be present to win, and I WON ONE! YAY :)!!! Since I have all of the previous beta copies, I think I will give this to the test team over at Socialtext, so that everyone on my team can see the stuff I have been trying to evangelize all these years...

And with that, it's time for cake. Catch you all in a bit :).

Monday, March 11, 2013

Selenium Book Giveaway RESULTS!!!

Hi all, the time has come, and in the nature of trying to be as fair as humanly possible, I chose to enter everyone in order of their submission: first entry gets a number, duplicate entries removed, and with that, numbers from 1 to 21 were assigned and pumped through a random number picker. After all was said and done, here are the winning numbers:

* 4. Punkmik

* 6. Duston Diekmann

* 14. El Panaton


Congratulations, and I will now forward this to Packt Publishing so that they can honor the delivery. Note: as per the rules of engagement, those in North America and Europe will get a choice of e-book or paper book. Those in other regions will receive the e-book only. 

Any questions, let me know now. Also, if I don't have your email address, please get that to me quickly :).  

Friday, March 1, 2013

"...and Those Automated Browser Tests Totally Saved My Ass!"

Hmmm, too provocative a title? Well, it reflects the reality of a conversation we had last night, and it was just too good not to share.

Some background: last night at the Selenium Users Group Meet-Up, I met up with Chris McMahon, Zeljko Filipin and Ken Pier, my Quality Director at Socialtext. After the Meet-Up, we headed out to a local pub to talk shop and relive old war stories, of which there were many. Chris and Ken worked together at Socialtext a number of years back, and they were telling me about a number of decisions, changes, issues and other such things that led them to, by necessity, create the automation methods that they did.

Socialtext does something rather cool, if I do say so myself. Since our product is a wiki, the Selenium test harness that we use leverages the wiki framework that we provide, and because we aggressively dogfood our own product, we also automate the vast majority of acceptance test cases created for any story. These acceptance test cases are converted into code and stored in our wiki page hierarchy. What this means is that our product actually tests itself. It has its own framework, its own structure (which closely follows Selenese and Fitnesse standards), and it's easy to run just one test case or thousands of them.

The discussion changed over to a current series of comments that, if a test doesn't find any new bugs, then it's ultimately a waste to run it. Chris and Ken were commenting on the fact that, while new discovery may well not be at the forefront of an automation test battery, what certainly should be is confirming that anything new didn't break something older. That's why those tests are kept, maintained and run regularly. As Chris was explaining a particularly challenging issue, he pointed out that "that battery of Automated Browser Tests (seen by many as not really offering value)... totally saved my ass on more than one occasion!" After a good laugh, Ken leaned over to me and said "hey, that would make for a good blog post"... and I agree :).

Reducing waste is important, reusing as much as we can efficiently is also a huge win, but sometimes, there really is a value to the tried and true and boring tests that never find an issue. They're really not designed to. They're designed to safeguard and help you know if something new you did has rendered other work compromised. Throw that work away, and you might not have an opportunity to work your way out of a troublesome spot. Sure, refactor, re-purpose, make efficient, but don't fall for a false economy. Confirm that what you use is providing value and is actually being used. If so, you may find yourself being spared a lot of heartache.


Thursday, February 28, 2013

Selenium SF Meetup: Tales From the Selenium Testing Trenches

For those who haven't seen it, and are in the SF Bay Area today, the San Francisco Selenium Meetup group will be getting together tonight. I'm setting the stage with this post, and I'll update it later this evening when the event starts so that those who can't be there can get a taste of what's happening.


So a static blog to set up a live blog for later... yeah, that's it :).


With that out of the way... Thursday, February 28, 2013 at 6:30 p.m. (we walked in at 7:15 p.m. and tamales were never so appealing, lot of traffic on 101 tonight, my thanks to my Quality Director for the ride up :) ):

Tales From the Selenium Testing Trenches

Lookout Mobile Security
One Front St, Ste 2700,
San Francisco, CA
6:30 p.m. - 9:00 p.m.


The first speaker is Giri Nandigam of TRUSTe, talking about their Tracker Detection Tool and how Selenium has become part of their mix to test, develop and move forward. TRUSTe is in the business of protecting customer data, and to do that, they have to make sure that large amounts of data can be safeguarded, and that involves dealing with some huge systems. Their goal was to comb networks to look for data and verify that it is not being exposed. The challenges they were facing included JavaScript engines throwing too many errors, and getting systems that could be maintained in a realistic manner. Their idea was to use Selenium as a crawler: use Selenium to go from page to page, collect all of the links (thousands of them) and then export those links as HAR files for later access. Also, they've made a browser hack so that they can get access to the cookie store for requests (an interesting way to go about doing some protective data mining, to be sure).
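
I don't have TRUSTe's code, of course, but the crawl step itself is the kind of thing that looks roughly like this in the Java bindings (the starting URL is a placeholder, and the HAR export and cookie work would be separate pieces):

import java.util.List;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.firefox.FirefoxDriver;

public class LinkCollector {
    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        driver.get("http://www.example.com");   // placeholder starting page

        // Gather every anchor on the page; a real crawler would queue these and visit each in turn
        List<WebElement> anchors = driver.findElements(By.tagName("a"));
        for (WebElement anchor : anchors) {
            String href = anchor.getAttribute("href");
            if (href != null) {
                System.out.println(href);
            }
        }
        driver.quit();
    }
}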

These kinds of talks are always interesting because we get to see what other organizations do with Selenium. While we are used to thinking of Selenium and its framework as a way to "test", it's really a system for computer-aided processing and step automation, and here it's being used to see what data can be gathered.

Taking a little break, and nature calls, so I'll be back in a bit. Oh, and Ken got the last Tamale, if any are here and looking for more food ;).

---

The second speaker is Dr. Wenhua Wang of Marin Software, who will be discussing ways to use Selenium to test IE. Due to a lot of multi-browser needs, and yes, a large customer that mandates IE7 as part of the support deal, IE is not going to be something I will be seeing less of any time soon. Marin has a lot of legacy code, and the idea of creating new automation tests was and still is a daunting situation to deal with.

Also, many of their tests used XPath locators (sometimes very helpful, but often very expensive if not implemented in the right scope... hey, I actually understand what that last sentence means now ;) ). They decided that, rather than start from scratch, they would see how much of their existing test code they could refactor, especially with the gains that WebDriver now gives in allowing an easy retrofitting of Selenium drivers, and instead of changing the testing code itself, use the time and energy to update and refresh the locators and the methods to access them.

The big win and goal of these changes was to try to explore techniques that would minimize the code maintenance, plus allow for a maximum on the return of effort with the lowest overall cost. Developing automation tests with well supported techniques will help make sure that the churn of test change is minimal.

One interesting question from the audience was to ask "if you could do this all with no budgetary impact and all the resources necessary, what would you do differently?" The best answer came from Ken, who was sitting right next to me... "tell your developers to make sure that every element has an id or a class!" Frankly, I couldn't agree more!

I said I was looking forward to seeing Chris McMahon and Zeljko Filipin being in town for this, and sure enough, they came. I felt bad that I didn't have any time this week to get out and hang with them while they were in town, but this in a small way makes up for that. My thanks also to Ken for being my ride to the city tonight, and letting me take a much needed nap on the way up (that's a topic for another post, but not just yet ;) ). To all of my Selenium compatriots, good to see you again, and we get a treat in two weeks, another session at Pivotal Labs, this time to celebrate Elisabeth Hendrickson's in-print launch of Explore It. That will be on Tuesday, March 12, 2013, and I hope to see you there :).

Wednesday, February 27, 2013

TESTHEAD Giveaway: WIN a FREE Selenium Book!!!


And now for something "Completely Different"... and yet, not.

For those who have been following this blog for the past month, I've been working through "Selenium 2 Testing Tools" and doing a long form review of the exercises and materials. I also, last month, did a review of another book, "Selenium Testing Tools Cookbook", which is also available from the publisher of Selenium 2 Testing Tools, i.e. Packt Publishing in Birmingham, UK.

So why am I mentioning this? Because Packt Publishing is partnering with TESTHEAD to give away three FREE copies of the Selenium Testing Tools Cookbook!

OK, do I have your attention :)?

Here's some of the details about "Selenium Testing Tools Cookbook" from the Packt Publishing site:


  • Learn to leverage the power of Selenium WebDriver with simple examples that illustrate real world problems and their workarounds.
  • Each sample demonstrates key concepts allowing you to advance your knowledge of Selenium WebDriver in a practical and incremental way.
  • Explains testing of mobile web applications with Selenium Drivers for platforms such as iOS and Android
  • Uses a variety of examples and languages such as Java, Ruby and C#.

Read more about this book and download a free Sample Chapter

How to Enter

All you need to do is head on over to the book page (Selenium Testing Tools Cookbook), look through the product description of the book, and drop a line via the comments below this post to let us know what interests you the most about the book. It’s that simple.

Winners from the U.S. and Europe can either choose a physical copy of the book or the eBook. Users from other locales are limited to the eBook only.

Deadline

The contest will close on March 10, 2013. Winners will be contacted by email, so be sure to use your real email address when you comment (you can, of course write it as my (dot) name (at) emailProvider (dot) com). The winners will be chosen by a random number generator, and I'll only count people once, so no leaving a bunch of comments to boost your chances.

Please feel free to share this post with any and all who want to participate.

Friday, February 22, 2013

Tis The Conference Season!!!

It's almost March, when a young tester's heart and mind, if they are so oriented and plugged in, start to think about the upcoming conference season and where they might go, who they might hear, and what they might say.

This year, I have decided to aim for three conferences. Two as a speaker, one as a supporting player, and I'll see if I can finagle one more if it makes sense and my company is cool with it.

First of all, for those in the Southern California area or those who will be attending STP-CON Spring 2013 (to be held April 22-25, 2013): I'll be there speaking on my experiences as a "Lone Tester in an Agile World". Actually, I may have to say that as an "Ex Lone Tester", as that's not my reality any longer, but I can say I certainly learned enough about what could be done, and most certainly what not to do, that it should be a fun and engaging session. My talk is on Thursday, Apr. 25, 2013, starting at 11:45 a.m.

The second conference is one I'd like to attend, and I've already made overtures as to how I can get a "crew pass" and work the show as a roving correspondent. Yes, I'm talking about the Selenium Conference 2013, to be held in Boston, MA June 10-12, 2013. My goal is to be a roving correspondent and create audio content that can then be distributed both at the conference and afterwards. The final decision still hasn't been made on this one, but I am hopeful. Either way, I'd like to see how I can be of service to this conference, and perhaps can package the materials to be available before the conference. I want to see this one succeed, and I want to be part of it one way or another.

I just received confirmation that I will be a full track speaker for CAST 2013, which is being held in Madison, WI, August 26-28, 2013. My topic for this conference will be how we went through and developed the (as of now) still baking content for the SummerQAmp initiative. What we learned, what went well, what went not so well, and the feedback that we have received along the way, both good and bad :).


That's what has been forecast thus far. As in all things, plans may change, and I will likely look to opportunities that are weekend oriented and close to home, too. More on those opportunities as I hear about them. More to the point, I look forward to meeting as many of you as I can who can make it to any of these events :). 

PRACTICUM: Selenium 2 Testing Tools Beginner's Guide: Selenium Grid


This is the next installment (completed) of my PRACTICUM series. This particular grouping is going through David Burns' "Selenium 2 Testing Tools Beginner's Guide".

Note: PRACTICUM is a continuing series in what I refer to as a somewhat "Live Blog" format. Sections may vary in time to complete. Some may go fast. Some may take much more time to get through. Updates will be daily, they may be more frequent. Feel free to click refresh to get the latest version at any given time. If you see "End of Section" at the bottom of the post, you will know that this entry is finished :).


[Note for 2/19/2013: Compiler errors... no, that will not stand, man! I will get to the bottom of this... and I think I finally have. For those who are using Java 7 as your JDK, look towards the bottom, I discovered something that resolved our past compile problems, or at least, I think I have :). --MKL].

Chapter 8: Selenium Grid

One of the nice things I have noticed thus far with regards to the Selenium tests that I have seen is that they are relatively easy to change so that the tests can be run on multiple devices. The down side is that, so far, we have had to run them all one at a time, or switch things over to different servers, different bridges, and check them out one at a time. Also, tests are launched serially and run in order in the approaches we have taken so far. That's all cool, but is there some way that we could simplify things and just have one place we could focus our attention? The answer is "well, almost!"

Selenium Grid lets us set up a number of Selenium instances, with one central "hub" to send commands to. Selenium Grid routes the commands destined for a particular device or driver based on the configuration options and identification elements we create. While it's not an "all in one" solution, it's a nice way to go about gathering a lot of tests under one roof and varying the ability to test lots of different devices. First, though, we need to set up our environment to allow for a grid server.

Starting up Selenium Grid


Thankfully, this is an easy first step. We start up Selenium Server like we usually do, but we add a flag to it, like this:

java -jar selenium-server-standalone-2.29.0.jar -role hub

This will start up Selenium Server in grid mode (note, of course, use the version of selenium-server-standalone-[x.xx.x].jar that you have; if you have a newer version, it should work the same).



And here's what your browser will say if you point it to the server.



Cool, we've successfully started Selenium Grid. All looks good here. This is going to be the center point of our tests; other Selenium instances will be reached from here. When we start Selenium Grid, we can also see which options we have control over and which ones we can change. By passing in a port number (i.e. -port ####), we can use any value that we want (and have access to, of course).
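To make that concrete (this is my own illustration, not a command from the book): if port 4444 is already spoken for on the machine, something like this should bring the hub up on a different port:

java -jar selenium-server-standalone-2.29.0.jar -role hub -port 5555

I haven't needed to do that yet, but it's nice to know the option is there.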

By typing in

http://nameofmachine:4444/grid/console

we can see which environments are available (or at least, which ones are accessible as far as the grid hub is concerned).

Unlike Selenium Grid 1.0, version 2.0 allows a machine to host multiple servers and to provide access to multiple browser drivers.

Setting up Selenium Grid with a Local Server Node

This time, we'll look at starting up Selenium Grid, as well as starting up a second Selenium server.

Enter the following command in the appropriate location (note: the previous Selenium Grid hub command needs to be running for this to work; that's not directly mentioned in the book):

Start with

java -jar selenium-server-standalone-2.29.0.jar -role hub

and then run

java -jar selenium-server-standalone-2.29.0.jar -role node -hub http://localhost:4444/grid/register

You should see the following in your command prompt or console:



And if all goes well, this is our Selenium Grid display:




Nice! Looks good :)!
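As a quick aside (this is my own experiment, not a step from the book): since a single machine can host multiple servers, we could register a second node on the same box just by giving it its own port:

java -jar selenium-server-standalone-2.29.0.jar -role node -port 5556 -hub http://localhost:4444/grid/register

If it behaves the way I expect, both local nodes then show up in the grid console.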

Setting up Selenium Grid With Remote Server Nodes

While having the grid running locally is cool, to really leverage the benefits of Selenium Grid, what we really want to be able to do is run Selenium Server on multiple machines, and have those machines register their connections with the main Selenium Grid hub. The more machines we have that can do this (and the greater the variety of machines we use for this purpose), the more we can leverage the benefits of cross-platform and parallel testing. For this purpose, here's my setup with a second Darwin device and my Windows machine:

Each of the machines needs to be registered, and each needs to point to the primary grid hub, like this:

1. Make sure the hub system is running on my main Darwin system:

 java -jar selenium-server-standalone-2.29.0.jar -role hub

2. Set up a secondary local Darwin Selenium server instance and have it pointing to the hub:

java -jar selenium-server-standalone-2.29.0.jar -role node -hub http://localhost:4444/grid/register



3. Set up the second Darwin machine to run a Selenium server and register it with the first Darwin machine's hub:

java -jar selenium-server-standalone-2.29.0.jar -role node -hub http://74.51.157.191:4444/grid/register



4. Set up the Windows machine to run a Selenium server and register it with the first Darwin machine's hub:

java -jar selenium-server-standalone-2.29.0.jar -role node -hub http://74.51.157.191:4444/grid/register



Cool, so if we did that all correctly, we should have a grid with three registry entries... do we?


Indeed we do. Right on! So the key here is that, with the hub instance running, we can set up as many remote environments as we have machines. For our purposes, though, I think this will be sufficient to make the point.


Setting up Selenium Grid to Dedicate Nodes to OS/Browser-Specific Tests

If we only needed to run on one browser and one operating system, this would be a much easier situation to deal with. In that case, Selenium Grid's main job would be to provide parallel tests, pointing to different instances of Selenium Server that all use the same browser driver. While that's probably OK in some instances, most of us have to deal with situations that require a variety of browsers. More to the point, we don't just have to consider the browsers themselves (Chrome, Firefox, Opera); we also have to consider the challenges of running them on a variety of operating systems (Windows, Mac, Linux, etc.). Even with Internet Explorer being solely on Windows, there are now multiple versions and multiple OS considerations (XP, Vista, 7, 8). Safari runs on Mac, and we haven't even started talking about mobile yet! Selenium Grid, in a manner of speaking, gives us "One Ring (Hub?) to Rule Them All!" Well, OK, perhaps not all, but there are nine options that we can leverage at this point.

We've already seen that we can set up a base hub for the Grid, use the -hub flag to point remote instances to it, and use the -role flag to associate those remote instances of Selenium Server with our Grid hub. To get the browser level into the mix, we use a flag called -browser. The following command lets us set up a Selenium Server node that registers with Internet Explorer-specific options:

java \
-jar selenium-server-standalone.jar \
-role node \
-hub http://localhost:4444/grid/register \
-browser browserName="internet explorer",maxInstances=1,platform=WINDOWS


If this is doing what it's supposed to do, then we will have a Grid entry that only references Windows. That's what the book shows. My reality is a different story:


Not sure if that's going to make a big difference, but suffice it to say, we're running with the parameters the book told us to run with. In our later tests, we'll see if they run correctly.
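Just to see the -browser flag from another angle (my own variation, not a command from the book), here's what a Firefox-only node on my Mac might look like:

java \
-jar selenium-server-standalone-2.29.0.jar \
-role node \
-hub http://localhost:4444/grid/register \
-browser browserName=firefox,maxInstances=5,platform=MAC

The maxInstances value just caps how many Firefox sessions that node will accept at one time.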

Using Selenium Grid 2 with YAML

YAML is a way that we can take commonly used data and encapsulate it in a file, so that the details we might want to update, or share across several instances, can be sourced from one place. Here's the suggested example for our YAML file (called grid_configuration.yml):
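In broad strokes (the exact names and entries in the book's file may differ from this sketch of mine), a Grid 1-style configuration looks something like this:

hub:
  port: 4444

environments:
  - name:    "Firefox on Darwin"
    browser: "*firefox"
  - name:    "IE on Windows"
    browser: "*iexplore"

Each environment pairs a human-readable name with the Selenium 1-style browser launcher string it maps to.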


There are two different configuration options that are available. If you have Selenium 1 Grid servers (i.e. Selenium RC) and you want to run them along with the more up-to-date Selenium 2/WebDriver servers, you can. Define the details you want to configure in the YAML file, and when you start up the Grid hub, give it the -grid1yml flag and the name of the YAML file, and you're good to go.
 



A quick look at the config options shows us our grid1yml file configuration options (note that we haven't created a grid 2 configuration file yet):


The key takeaway here is that a lot of the parameters that we would want to keep track of can be housed in a couple of YAML files, and that makes configuration changes quite a bit easier to track.

Actually Running Tests in the Grid

Up to this point, I have to say that my update speed has been very slow with this chapter. That's been a combination of work conspiring against me, plus some unusual issues with getting files to compile that made me decide I had to get to the bottom of it. Through that process, I discovered that Java 7 requires a little more verbosity than Java 6 did. I could be wrong about this, but on the bright side, I think I may have found an answer to the issues with not being able to compile in previous sections.

First, let's talk about running tests that leverage the grid. This is a simple example, simply because I decided I didn't want to fight multiple machines, multiple drivers, and multiple error messages just yet (hey, I'm having such fun with it, why shouldn't I let you all do the same ;) ?). But to show that we are actually "gridding" it, here's a simple test I put together to make sure that it goes to a particular location. In the example below, I have three servers: one running on Windows, one running on Linux, and one running on Mac. I want to make sure the test in question goes to the Mac instance.


Right, so what's interesting about the above code? Glad you asked. As I was setting this example up, I kept getting the compile error "SetPlatform requires Platform, but found String".

This is so similar to all of the design pattern errors I was getting that I decided there had to be a reason I was seeing it. Additionally, I've gotten to the point where I couldn't believe that so many code examples were just not workable. I have met David, and I know he's way more conscientious about this stuff than to allow so many "mistakes". Well, what if it's not a mistake? What if Java 7 requires stricter type usage? As soon as I thought about that, and remembered from my old C days that I could "force" a type cast to be applied, I started experimenting. In the process I came up with the following approach and change (note the difference between the two files):
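For what it's worth, here's a minimal sketch of the shape the corrected code took (my own reconstruction, not the book's listing; the class name and the site URL are placeholders):

import java.net.URL;

import org.openqa.selenium.Platform;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

public class GridPlatformTest {
    public static void main(String[] args) throws Exception {
        // Ask the hub for Firefox, and insist that the session run on the Mac node.
        DesiredCapabilities capabilities = DesiredCapabilities.firefox();

        // This is the line that tripped me up: setPlatform() wants the
        // Platform enum, not a String, so "MAC" becomes Platform.MAC.
        capabilities.setPlatform(Platform.MAC);

        // Point at the grid hub; it routes the session to a matching node.
        WebDriver driver = new RemoteWebDriver(
                new URL("http://localhost:4444/wd/hub"), capabilities);

        try {
            driver.get("http://www.example.com"); // placeholder site, not the book's
            System.out.println("Page title: " + driver.getTitle());
        } finally {
            driver.quit();
        }
    }
}

The one line that matters is the setPlatform() call: hand it the Platform enum value rather than a String, and the compiler is happy.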


Once we forced the type requirement, the project compiled! Not only that, but it actually ran the test on the specific server we wanted to have run it. Look below: the grid console shows the test instance being run. It's the muted Firefox icon, and yes, that server is indeed the one running on my Mac:


Oh, and note... I finally got my Windows server to declare itself an Internet Explorer-only node, so yes, it's doable. I don't know what I did differently, other than having the Windows machine run the latest version of Selenium Server... hmmm, that might be it.

Summary:

Yeah, this section took a long time to get through, but on the plus side, I made a neat discovery, and that discovery may go a long way toward solving a few other problems. The net result is that I'm going to go back and do some re-testing and re-coding of the previous examples to bring them in line with Java 7 realities. Seriously, I am ecstatic that there is a potential solution to all of the random compile errors, and if it's this simple, then there will be some easy changes to suggest to Packt so they can perhaps make an update in the errata to reference Java 7 type conversion (or lack thereof :) ).


End of Section.