Saturday, April 30, 2011

What Can a Bayshore's Litter Tell Us?

Today was an interesting diversion. Each year, our church does a community project that involves several hundred people. Last year, we did a major cleaning and invasive species removal from Golden Gate Park in San Francisco. Today, we went down to Ryder Park and Seal Point in San Mateo to clean the breakwater along the San Francisco Bay.

My kids and I got to be part of the "heavy lifting crew" because, at the area we were assigned, there was a lot of debris that had washed up on the shore and needed to be removed. We decided to "play detective" for the day, as it was entertaining to see what we would find. Less than 100 yards into our journey, we came across a waterlogged queen-sized mattress. Man, was this thing heavy, and did it ever smell foul. As we folded it up and rolled it up the levee, we laughed and hypothesized... "how does a queen-sized mattress find itself in the San Francisco Bay?"

We played with a number of options that were both plausible and fantastical. The simplest solution is that someone dumped it somewhere further up the bay shore and somehow it got washed out in the tide and came to rest here. The more fantastical story (which, admittedly, was more entertaining to discuss) was that a boat with a bed in it may have capsized in the bay, and since the mattress could somewhat float, it drifted away from the boat and the tide washed it ashore here. We had numerous questions and queries about a number of things we found (large boards encrusted with barnacles, an office chair, bottles of beer and booze, etc.). Each time, I'd ask my kids to think of the scenarios where these might have come from. In many of the cases, they came up with likely scenarios. Small items like beer and booze bottles are easy; they were carried in, consumed, and discarded. The bigger items, however, prompted more questions (how could these get here, when there's no easy way for them to be carried in and dumped, and there's no automobile access... how does a queen-sized mattress get out to the Bay shore?!).

I took some time to explain to my kids that my work is a lot like this. Many times we see errors that make sense as to why they are there (a missing end tag, a missing brace, etc.). Each of these things is easy to explain, there are usually very simple reasons for why they occur, and we don't have to go to great depth or study to figure them out. But there are times when we get the mattress that washes up on the shore, and we cannot figure out why, or where it came from. Many times, I tell them, I then have to go into "pure speculation mode" and make up ideas as to why a problem might be there. Often, I can talk to my development team and they can help guide my search and my speculation, because they know whether or not my SWAG (short for "Silly Wild Assed Guess") has any merit. Sometimes, though, I don't even have the developer's ideas as to why an issue appears, so occasionally, my SWAGs are given a chance to be explored. Most of the time, SWAGs don't turn up good leads, but every once in a while, one will turn out to be something real.

So when you are out searching for answers, sometimes it pays to speculate and think of things that are wild and improbable. Sure, there's often a simple and reasonable answer, but every once in a while, you just have to dig deeper and trust that your visit to "silly territory" will ultimately make sense, or it will be another path you can safely discard. Either way, you get the chance to investigate new avenues you might otherwise not consider.

Friday, April 29, 2011

TWiST #43 - Heather and Andy Tinkham, Part II

So here's our follow-up to last week's interview with Heather and Andy Tinkham. While Heather was the focus for much of the interview last week, Andy gets his say this time :).

It was cool to hear a lot of Andy's comments because it put his talk at Selenium Conference in better context. Additionally, it was a lot of fun getting to listen to this interview again and compare it to Andy, Marlena, Dawn and me walking around San Francisco and taking in dinner at The Stinking Rose (*the* destination if you are a fan of garlic and happen to be in San Francisco :) ).

Much of Part 2 deals with the ideas of test automation and the manual approach and how essential both are, and that trying to automate everything is a mistake, as much of the work required for good testing cannot be handed over to a computer. But enough of me prattling on about it, please, check out Episode 43 for yourself.

Standard disclaimer:

Each TWiST podcast is free for 30 days, but you have to be a basic member to access it. After 30 days, you have to have a Pro Membership to access it, so either head on over quickly (depending on when you see this) or consider upgrading to a Pro membership so that you can get to the podcasts and the entire library whenever you want to :). In addition, Pro membership allows you to access and download the entire archive of Software Test and Quality Assurance Magazine, and its issues under its former name, Software Test and Performance.

TWiST-Plus is all extra material, and as such is not hosted behind STP’s site model. There is no limitation to accessing TWiST-Plus material, just click the link to download and listen.

Again, my thanks to STP for hosting the podcasts and storing the archive. We hope you enjoy listening to them as much as we enjoy making them :).

Thursday, April 28, 2011

Book Review: Writing Down the Bones

Wow, three book reviews in one week?! Well, not really. The Selenium one was written a couple of months ago, but I definitely have two for this week. It’s been a while since I’ve done a retro review, and I’m in the process of clearing the deck of some titles that I’ve been reading here and there over the past few months, so this seems as good a time as any for this one.

Merlin Mann started me on a series of books related specifically to writing. Three of his suggestions have made their way into my hands. I’ve already reviewed one of them (Stephen King’s “On Writing”) and the third title I’m hoping to have a review for early next month. This time, though, I’m reviewing a book that’s 25 years old and in some ways feels its age, and yet in others, feels timeless.

Natalie Goldberg’s book “Writing Down the Bones” was originally published in 1986. Some of the topics and the world view are steeped in that time, but the advice and the theme of the book are timeless. The purpose of the book is to get you, ostensibly the wannabe writer, to roll up your sleeves and get to it. There are multiple short chapters that encourage you to do exactly that.

“Writing Down the Bones” is not really a how-to-write book so much as it is a cheerleader’s guide for aspiring writers (or for any fill-in-the-blank creative work, as this book can apply to many endeavors, not just to writing). Each chapter is a standalone exercise or inspiring section, meant to give you a tool, a tip or a method to use to get more inside of your technique. Natalie’s focus is informed a great deal by her study and practice of Zen Buddhism, which makes for an interesting level of insight (some of which can be maddening, but it certainly gets one to think about things in a different manner).

Many of the ideas described are exactly what the titles describe; they are ways to get you to think in a different way, to get a skeletal structure down so that you can hang the meat on the bones as the story/essay/poem develops. Methods include writing down ten nouns and then writing down ten verbs, without referencing the other list, then matching nouns and verbs in different ways, some of which might not make any sense at first. Another exercise takes a look at the tone of voice used when we write sentences: are we ambiguous or are we direct? (Natalie prefers being direct, for those interested.)

What interests me is that this book has more of a cheerleading and encouragement feel than it does a mechanical aspect. This is good, as there are a number of books that cover the mechanics quite well. This one fills a niche for the reader who is struggling with what to write about and wants to have someone encourage them to write just a bit today. Find a place, find your motivation, take a tour of your town, imagine you are seeing things for the first time, but write.

There are many interesting ideas to take from this book, and many of them feel like performing meditation on the act of writing (which, coming from a professed Buddhist, should come as little surprise). The steps are small; the ideas are easily consumed in a few minutes in most cases. Even if you do not sit down to physically write out the idea, the mental process of envisioning the practice offers new insights and suggestions.

Bottom Line:

For those looking for a book on structure, technique, style and syntax, this isn’t the book for you (though it does delve into those areas in places, just not in great depth). If you would like a book that will encourage you and give you ideas to branch and stretch in ways you weren’t sure you could, then this book is well suited to that task. Really, the bottom line is… just write. Do it often, do it actively, do it from your heart, and do it because you love it, but at some point, the books have to give way to the practical advice of “sit down and write something”. It’s good advice, and the short meditations give you an object lesson and an approach to chew on. If that style seems interesting, then “Writing Down the Bones” may be just the book to help you get into the groove to “just write” and give you enough motivation and ideas to make that a reality.

Writing Down the Bones: Freeing the Writer Within (Shambhala Library)

Wednesday, April 27, 2011

Looking For Test Talent? Matt Heusser is Available!

About 18 months ago, I went into overdrive as I decided I wanted to “fall in love with testing” again. One person has been more responsible for helping re-kindle that spirit, both with encouragement and with active participation in the process, than any other, and that’s Matt Heusser. Matt is more than a software tester. He’s an excellent writer. He’s a great spokesperson for our industry. He’s a spot-on and excellent presenter. He also knows how to interview people very well. He teaches, mentors, and helps people in so many ways, myself being a prime recipient, but there are many others.

Due to a restructuring of his company, Matt will be “at liberty” as of May 3rd, 2011. If you ever wanted to get your hands on a “hot property” in the testing world, well, now’s your chance to get one of the best players out there as a free agent. Seriously, check out his blog if you’d like to see the caliber of brain we’re talking about here.

Note: I am getting no kick-back for this; I’m doing this for exactly the reasons I gave in the first paragraph. Matt’s given a lot to the development of software testing as a craft and a profession. It’s time we all did likewise. One thing I can say for sure, Matt won’t be on the market for long :).

Pushing the Boundaries: A Weekend Testing Follow-Up

Last Saturday I had a chance to do a “double dip” with Weekend Testing. I facilitated the WTAmericas session at 11:00 AM, but prior to that, at 8:00 AM, I joined up with the European contingent to be a participant in their event.

This time out was interesting because we were testing the boundary conditions of Skype, the tool we were using to conduct our session. This had a lot of interesting possibilities… what happens if we bring down the service? Will we kill our session? Will the powers that be at Skype take a negative view of us for doing this? Could we be dealing with an ethical time bomb here?

In truth, every time we test we run into these situations. In some ways, I joke that we deal with a schizophrenic Hippocratic Oath; we must do no harm once the application is out in the wild, but we can be as evil and diabolical as we want to be while it’s in our labs. But how aggressively do we test when the actual app we are testing is already in the wild? And how do we reconcile what one country thinks is fair play and another thinks is illegal, or at best, bad manners?

Weekend Testing is unique among self-directed education opportunities in that it’s a way to be 100% open about your learning progress and experience. Everyone who participates gets their actions published with every experience report. In short, there’s a full and unexpurgated record of everything we do, say and act upon. In the role of being great testers, we can get proof of our actions and point to our building experience in the testing we perform. If we test too aggressively, however, we also have a transcript of our actions in those processes, attached to our names. It’s a double-edged sword.

With this in mind, we went about looking at conditions that would cause our application to feel stress or otherwise not respond. I decided to try an old favorite tool for when lots of text is required for a purpose. QAHatesYou described what he calls “the Hamlet Test”, which is all of the text of Hamlet in a copy/paste buffer, applied to various inputs. I use something similar that I call the Lorem Ipsum test because, well, that’s the tool that I use. The key to Lorem Ipsum is that you can designate as much data as you need, and the site will generate that much text. The text is nonsensical Latin, so there’s really no rhyme or reason to the characters other than words formatted in paragraphs. What it does do is allow for an exact byte count or word count to be generated, and then that block can be used to test any text inputs you would like to use. Lorem Ipsum is great for testing buffer overflows, not so great for testing XSS or SQL injection. In this case, because Skype is an active service and application on a live network, I didn’t want to risk that.
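If you'd rather not round-trip to a generator site, the same exact-length payload idea can be sketched in a few lines of code. This is just my own illustration (the function name and filler text are mine, not any particular site's output), but it shows how to get a payload of a precise character count for probing input limits:

```python
# Filler text for boundary testing; the actual content doesn't matter,
# only that we can cut it to an exact length.
LOREM = ("lorem ipsum dolor sit amet consectetur adipiscing elit "
         "sed do eiusmod tempor incididunt ut labore et dolore magna aliqua ")

def make_payload(size):
    """Repeat the filler text and trim it to exactly `size` characters."""
    repeats = size // len(LOREM) + 1
    return (LOREM * repeats)[:size]

# Probe a suspected limit from just below, at, and just above it,
# e.g. a message field thought to cap out around 30,000 characters.
for size in (29999, 30000, 30001):
    payload = make_payload(size)
    assert len(payload) == size  # paste `payload` into the input under test
```

The point is simply that exact sizes let you distinguish "truncated at the limit" from "rejected outright", which is the kind of question we were chasing with Skype.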

Interestingly enough, there’s plenty that one can do with just plain alphanumeric text. I discovered that 29999 characters is the maximum that Skype will display in a message, and that it will cut off the remaining characters from the user's view. What I didn’t discover was whether those characters were lopped off from the delivered message, or if the cut-off was limited to what it actually displayed on screen. There’s a subtle but not insignificant difference there (i.e. what is the database actually holding beyond what it is displaying?). I discovered that Skype was not transmitting messages via HTTP, but via its own protocol. There are plenty of additional probing tests I could have done as well, but they would have required being more intrusive on a public site than I am personally comfortable with. We also found that varying the languages, so that we were entering Unicode text such as Kanji or Hangul, made for some very interesting results as well. In some cases, just a line of text was printed before the message was truncated.

There are times when “going medieval” on an application is necessary and warranted. Usually that is when the application is in the local testing stage. The more outward facing the application is, the more critical it is that we find those strange and anomalous issues that could be catastrophic, but the closer we get to live deployment, the less our opportunity to “get medieval” becomes. Crashing a system in a sandbox with clear steps is a great help to developers. Crashing a live app and taking a service that people depend on out of commission may be a help to developers, but it is definitely adding to the irritation of its users. Fortunately, we did not have that happen during our testing, but it’s a real threat to be aware of, and we need to know how to deal with it in these endeavors.

All in all a great session, and a good reminder of our responsibilities as testers, knowing when “going Medieval” and “do no harm” need to be balanced and respected.

Tuesday, April 26, 2011

Book Review: Selenium Simplified

OK, you are probably scratching your head, because, wait a minute, where’s the book review?!

Well, this time, I’m going to encourage you to go elsewhere! The book review was written for The Testing Planet, Issue 4, and that’s where you will find it (on page 8, to be exact)!

So if you would like to see my review for Alan Richardson’s book Selenium Simplified, you need to get over to The Testing Planet and read it there… and while you are at it, take some time to read the other great articles in The Testing Planet!

Next book review will be here, I promise ☺.

Selenium Simplified

Monday, April 25, 2011

Book Review: The Manga Guide to Relativity

First off, let me set the expectation here. I’m a software tester by trade. I’m a fan of science (as opposed to being a scientist). I’m also a huge fan of Japanese animation, which is commonly referred to in America as “anime” in its video format, and “manga” in its illustrated paper format. In short, yes, I’m a grown man who enjoys comic books, and I have absolutely no shame in saying that whatsoever ;).

Anime and manga are used to reach many audiences in Japan; they’re not just geared towards kids. Stories range from the fanciful to the dark and gritty. In between, every conceivable topic and interest is covered and illustrated in a way that grabs attention, entertains, and helps inform the readers on an emotional level.

This combination of storytelling, emotion, quirky characters and an illustration style that’s both cute and engaging helps lend it to the idea that “hard topics” can be discussed using manga, and that the topic will be much more engaging for the reader. “The Manga Guide to…” series is an example of this, and covers a broad variety of interesting, difficult and sometimes downright geeky topics. In some ways, “The Manga Guide to…” series can be seen as being on par with “Standard Deviants”.

The most recent title, “The Manga Guide to Relativity” (written by Hideo Nitta, Masafumi Yamamoto and Keita Takatsu) uses the classic story techniques common to most fans of manga; student body president Ruka Minagi takes on a challenge from Rase Iyaga, the sadistic and capricious school headmaster (who also has a penchant towards androgyny, but hey, for anyone with more than a passing familiarity with Manga titles, this is par for the course) to write a report about relativity, thus sparing the rest of the class from having to do it over summer break. If he succeeds, the rest of the class will be spared the assignment. If he fails, he has to agree to be Iyaga’s “personal assistant” for the next school year. All is not lost, though, as Physics teacher Alisa Uraga agrees to teach Minagi about relativity so that he can complete the challenge. With that, an adventure begins.

During this process, the reader almost forgets that they are actually looking at a topic that is fairly challenging to explain, the theories of Special and General Relativity. Instead the focus is on a fun and engaging story (and not a few quirky characters… did I mention the Vice Headmaster is a dog? OK, I’ll mention it).

So can a “comic book” really teach us one of the trickier scientific topics? Let’s find out…

What is Relativity?

The first chapter helps us get into the mindset of our protagonist Minagi and his sensei Uraga as they discuss the differences between special and general relativity. The history of relativity from Galileo and Newton on through Einstein and the idea that the speed of light is a constant and the fact that all reality is in constant motion is explored. The illustrations are both cute and informative, and help fill in the blanks for many of the concepts that might be difficult to visualize any other way. At the end of the chapter a full breakdown of the concepts and some background information is presented again to help drill home the ideas (this also allows those who want to have a nice outline and paragraph explanation of the principles a chance to get that along with Minagi’s exploits).

What Do You Mean Time Slows Down?

A Japanese fable leads off this section, the story of Urashima Taro. The legend tells of a man who rescues a turtle and is brought to the undersea palace of the Dragon God. When he returns home to land, instead of a few days having passed, several hundred years have gone by. This idea is called the Urashima Effect in Japan, and the Rip Van Winkle effect in western countries. In both cases, the concept covered is time dilation: the situation where, as an object approaches the speed of light, time slows down for the object. The manga guide uses an imaginary device called a "light clock" to help define how this idea works. This is further emphasized with a visual idea of the Twin Paradox, where a twin goes on a space voyage for a year at near light speed and returns to Earth, and sees that their twin has aged by several years in their absence.
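For readers who want the formula behind the light-clock thought experiment (stated here in standard textbook notation, which may not be exactly how the manga presents it), the time dilation relation of special relativity is:

```latex
\Delta t = \frac{\Delta t_0}{\sqrt{1 - v^2/c^2}}
```

where \(\Delta t_0\) is the time elapsed on the moving clock (the traveling twin), \(\Delta t\) is the time measured by the stationary observer, \(v\) is the relative speed, and \(c\) is the speed of light. As \(v\) approaches \(c\), the denominator shrinks toward zero, and the dilation becomes dramatic.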

The Faster an Object Moves, the Shorter and Heavier It Becomes?

Wow, Sensei Uraga looks pretty hot in a bikini... have I piqued your attention yet ;)? Hey, it's a manga, what can I tell you? OK, back to the topic... Sensei Uraga continues the discussion with the idea that, when an object gets toward the speed of light, it contracts (and demonstrates in a breakdown the equation necessary to prove this idea, i.e. the Lorentz Contraction). Space and time are said to contract based on this theory of special relativity (remember, thus far that's what we've been looking at; general relativity comes later), and because of this, we need to look at space and time not as separate entities, but as interlocking entities. Additionally, objects get progressively heavier as they approach the speed of light (except for light, which by its very nature is assumed to have a mass of zero for it to work the way it does). Incidentally, this is why it is believed that no object of any measurable mass will ever get to break through the light barrier (science fiction writers and Star Trek fans notwithstanding). The relationship between mass and energy is also discussed here (the famous E = mc^2 equation and what it really refers to).
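The two relations the book walks through here can be written compactly (again in standard notation, not necessarily the book's exact presentation):

```latex
L = L_0 \sqrt{1 - v^2/c^2}, \qquad E = mc^2
```

where \(L_0\) is the object's length at rest and \(L\) is its length as measured by an observer it passes at speed \(v\): the faster it moves, the shorter it measures. The second equation is the famous mass-energy equivalence, with \(E\) the energy locked up in a mass \(m\).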

What Is General Relativity?

Special relativity takes the idea that gravity and motion for an object travel in a straight line. General relativity is more mathematically complicated, because the gravity of nearby objects (such as stars) has a direct effect on the object in motion, and that gravity has to be accounted for. In addition, light "bends" as it makes its way around an object with a large gravitational pull. Time also slows down as it passes such a large gravitational pull. The idea is called the Equivalence Principle and states that "the inertial force accompanying accelerated motion is indistinguishable from gravity, and therefore, they are the same." This is compared to the feeling of pressure you feel in an accelerating and decelerating train, or in an elevator as it goes up and down, or on an amusement park ride like the spinning swings. A demonstration is shown where a bowling ball is placed on a tightly pulled rubber sheet. When the bowling ball is placed on the sheet, the sheet indents to make room for the ball. Put another one on the same sheet at the opposite end, and it will make its own indentation. Given enough time, the balls will slowly move towards each other. This shows that gravity is really the bending and warping of space (yeah, I had to read that one a few times :) ). General relativity also takes into account that matter, space and time all have interactive relationships, and while it's a "theory", there are devices we use every day that depend on this theory, and their working proves it works (GPS, anyone :)?). We can really take this to the mind-bending level of looking at the universe (by the theory of general relativity, observations indicate our universe is expanding).

Bottom Line:

That's a lot of detail packed into a manga. The cool thing is that it's entertaining, fun to read, and in many ways, the ideas and theories come naturally, and it's only when you put down the book that you realize "wait a minute... did we just cover what I think we covered?!" That's the great success of this book, in that you learn new ideas and concepts without really having to think about it too much. You're having too much fun to realize how much you are learning. On that level, The Manga Guide to Relativity succeeds very well. So how does Minagi do on his report? Can Sensei Uraga deliver the goods? And what is it about that dog, anyway?! For answers to those riddles (and many others within the Relativity metaverse), you'll just have to pick up a copy of The Manga Guide to Relativity and find out for yourself.
The Manga Guide to Relativity

Saturday, April 23, 2011

WTA010 - Persistence of Time. A Weekend Testing Follow-Up

Today from 11:00 a.m. - 1:00 p.m. PDT, I had the chance to facilitate another Weekend Testing Americas session. Today, I must admit, I was a little concerned, since there was a European Weekend Testers event held just three hours earlier (which I participated in as well), and prior to that, there was a Weekend Testing event held in India. I don't know if that's a record, but that's three Weekend Testing events in one day!

Fortunately, we had a good turnout for all three events, and a good turnout today was important, because I decided to try something a bit different this time. We looked at Rescue Time, which is a time management application. The idea is that individuals install a data collector on their local machine and then report the results to the Rescue Time web server. The server then aggregates activities and displays reports of what the users spent their time doing.

We split into two groups today. One group was the "Engineers", who focused on testing the application specifically to see how well it handled time management from an individual's perspective. The second group was the "Managers", and they were tasked with testing the application from the perspective of the management team and of tracking billable hours.

With this scenario, we expected to see different results from two different perspectives. The focus, parameters and heuristics used differed for both groups. The nature of the issues found likewise differed. What was unanimous, though, was the distaste for organizations that use these types of apps to monitor and dictate compliance. The general feeling was that the ability to game the system was widespread, and even if the system could be locked down so as not to be tampered with, there were still a lot of ways to game it to appear more "productive" than the reality, as well as the fact that very productive activities could be labeled distractions by fiat.

Interestingly, the idea of using the tool on an individual basis, for individual clarification and focus, was not viewed so negatively. In the spirit of full disclosure, I actually have been using this app, and while I likewise find it something that can be "gamed", it does do what it claims to do, and that's track the active things you work on and how long you work on them. Additional note: this application was developed for the Mac first, and then ported to the PC. Most of the issues we came across were related to the PC version.

For the rest of the story, the experience report can be viewed here and the full chat transcript can be viewed here.

Friday, April 22, 2011

Going from Memory to Automatic

Yesterday was a bittersweet day for me. It was, for all intents and purposes, the end of snowboarding season for me and my family. Sure, there are mountains still open for a few more weeks, but with a number of time commitments that I have and other things I need to do, this effectively ends the season for me. Thus I bid adieu to what was an epic season of great snowfall and record-breaking conditions.

To celebrate our last day, I brought my older daughter up with me. As is often the case, when we have our ride days together, I tend to play the role of coach as well as Dad. She's come a long way in the past few years, but I always have to remind Karina that the day she beats me down the hill is the day I'll stop playing coach (LOL!). Also, I like to see her get better and more focused as time goes on, so I keep offering her suggestions. Yesterday she looked at me and asked "Why is it so hard for me to remember all of this? How come you do it so effortlessly?!"

With that, I had to step back and think. Part of it is that I have literally hundreds of snow days to draw upon over the years, and I've just done this a lot. But it's true: when you are first starting out with anything, it seems like you are being overwhelmed with information. When it comes to snowboarding, it's a matter of "put your weight on your front foot, don't let your rear hand trail behind you, square your chest and shoulders, look over the front of your board, bend your knees, drive your rear knee towards your front knee to help flex the board and maximize turn efficiency..." Come on, that's a lot to remember if you have to remember it. Is it any wonder why we feel overwhelmed at times? Still, little by little, these things recede into the back of our mind, and we just do them by instinct.

I often think about the experiences I've had in the AST Black Box Software Testing classes. Even in a Foundations class, I still find there is a lot that I learn each time through, because the different participants and their unique experiences force me not to rely on instinct, but to look at the situations fresh from their perspectives. The advantage of doing this is that we keep our minds fresh, we remember what it is like to be in that "first-timer's" shoes, and often, it catapults us back to first-timer status as well. There have been several times where I have said "Hey, I hadn't thought of that approach," and I've been through this class five times now!

Bottom line, every one of us goes through the uncomfortable experience of having to "remember everything" and try to hold onto it all, and feeling like we are not up to the challenge. At those moments, all I can do is think back to my first days riding, and how much I had to juggle in my brain to keep upright. With time and practice, I got better, and with time and practice, the juggling in my head for testing topics will fade back, too, and instinct will take over. Unlike riding, though, I hope I never get so comfortable in my testing knowledge that everything runs on instinct. Context changes, and what I knew for certain as a sure-fire approach yesterday might not serve me so well today. Automatic is good, but we have to be careful that automatic doesn't become our default mode.

TWiST #42 -- with Heather and Andy Tinkham (Part 1)

So this is a first. As an April 1st gag, uTest launched a spoof site. In honor of the site, Matt reached out to Heather and Andy Tinkham to be interviewed together. Why? Well, other than being a married couple, they are both testers!

This is also our first two-part interview, in that there was too much to cover in just one session (which makes sense when you interview two people :) ). This first part of the interview covers a fair amount of ground, starting with Heather's move from academia to business (she was working towards a PhD when she decided she'd had enough), and Andy also describes some of his challenges in the academic space and his experiences at Florida Tech studying with Cem Kaner. Andy also talks about what he feels are the benefits and challenges of automated testing and his primary focus on test automation over the years, as well as Microsoft's and other companies' visibility on that issue. Rounding out the discussion is Matt and Heather talking about companies that deal with the "long ball" aspects of testing and business development, and what it takes to get beyond the quick fix or core competencies to remain relevant for years down the road. Anyway, don't take my word for it, have a listen to Episode 42 for yourself.

Standard disclaimer:

Each TWiST podcast is free for 30 days, but you have to be a basic member to access it. After 30 days, you have to have a Pro Membership to access it, so either head on over quickly (depending on when you see this) or consider upgrading to a Pro membership so that you can get to the podcasts and the entire library whenever you want to :). In addition, Pro membership allows you to access and download the entire archive of Software Test and Quality Assurance Magazine, and its issues under its former name, Software Test and Performance.

TWiST-Plus is all extra material, and as such is not hosted behind STP’s site model. There is no limitation to accessing TWiST-Plus material, just click the link to download and listen.

Again, my thanks to STP for hosting the podcasts and storing the archive. We hope you enjoy listening to them as much as we enjoy making them :).

Tuesday, April 19, 2011

Keeping Fish and the Wonders of an Ever-Changing Environment

It’s been a bit of an exciting time at the TESTHEAD homestead the past few weeks. One of my hobbies outside of software testing is raising freshwater aquarium fish and creating an environment where the fish can spawn. My favorite species, bar none, is Archocentrus nigrofasciatum (that’s to show everyone I have a true fish-geek pedigree, but I’m not going to refer to them that way for the rest of this post). Most people just call them “Convict Cichlids” or “convicts” due to their distinctive blue-white coloring with black bars that stripe their bodies like, well, prison garments of the early 20th century (hence, “convicts”).

Why are these guys my favorite fish? In general, they are Cichlids in their attitude and behavior, but they are a dwarf species, meaning they top out at about 6 inches. This is important because a 70 gallon tank can only hold so many fish, and the smaller the fish, the more can be housed comfortably. Even with their limited size, they have all the classic hallmarks of Cichlids: the belligerence, the jealous guarding of territory, and a dimorphism that makes it easy to tell the males from the females (females have a smattering of orange scales that at times become more or less prominent; if you see orange anywhere, you know it’s a female). Additionally, getting them to spawn is fairly easy (they don’t have a reputation as “the rabbits of the fish world” for nothing!).

It’s in this mad dash to spawn that something interesting has happened. I’ve gotten used to the fact that the fish will spawn, the babies will be around for a week or two, then they will magically disappear (OK, they don’t magically disappear, the other fish eat them as their parents stop paying attention to them). This most recent time, however, something happened… the mating pair in question picked a hollow spot between a bunch of rocks that they then proceeded to hollow out even further by removing all of the gravel in the hollow and around the rocks. This made for a pretty much impenetrable bunker where the fish could spawn and the babies could hide… for a time.

After about 6 weeks, the fry were free-swimming and freely visible. They could swim, eat and range about independently. What they couldn’t do was fully fend for themselves just yet (they are still only about 3/8” in length). Therefore, the parents have been working overtime to protect their area, resulting in some less than thrilled tank residents. Ultimately, I decided that the crowding and the predation needed to be limited, so I decided to “thin the herd” and relocate many of the mature adults so as to make room for the babies.

Is there a comparison to software testing here? Yep, there sure is :).

We often deal with environments that spawn new entities, and we test the environment in a tried and true manner. When things are set up correctly, the environment behaves as it always has, and little details that are below the radar and too small to see oftentimes get righted without our ever knowing they were there (or even if we do, they are gone with the next release). Bugs and baby fish are similar, in that they need an environment that is primed right. For fish it’s the right water temperature, the right pH, and a pair of attentive parents finding a sheltered space to defend. For bugs, it’s usually similar: a good environment where they can hide and grow without interference from the outside world (and from the prying eyes of testers).

As the fish grow, they are likely to take on pressures from outside. As the bugs grow, they are less likely to stay hidden as they gobble up more system resources or risk being seen. The fish will place pressure on the other inhabitants. The bugs will place pressure on the developers and testers. Similar to both, though, is that neither will grow in a hostile environment, they need to have an environment that will be conducive to their development.

While baby fish may be desirable in the grand scheme, they can’t really be allowed to reproduce ad infinitum. For one, the tank will not be able to sustain the large numbers of fish, and the ammonia produced by their waste will rise to toxic levels (even with daily water changes). Thus, measures have to be taken to stop the spawning of babies, usually accomplished by segregating the males from the females (my son has what we jokingly refer to as “the Ladies tank” in his room; in times of extreme pressure, we relocate all of the females into this tank). In our software environments, it’s important to also keep a close eye on the conditions that can help bugs reproduce (our browsers, our utilities, our machine stats, programs we install, etc.). It would be nice to have an equivalent of a male or female convict in our machines that we could separate out to prevent bugs from appearing, but alas, this is where the metaphor breaks down. We don’t have that option.

What we do have is the ability to know our systems as well as possible, and to check regularly for behavior that seems out of the ordinary. Much as it takes attention and vigilance to avoid a surprise spawn (and often the same vigilance if we actually want one to happen), we can do much to keep bugs at bay on our systems, but we likewise need to be sure we have checked the rocks and hollows that would be the most conducive to their breeding and flourishing.

Now if you’ll excuse me, I have some fish food to grind up :).

Monday, April 18, 2011

TWiST #41 - with Michael Larsen

Sorry for the delay in getting this up, had a busy Friday and weekend. Besides, I always feel a little funny talking about myself when I do an interview.

One thing I will share, though, is that when you listen to yourself off script, you notice some interesting tics. Whereas I said in the past I don’t typically say “um” or “ah”, that’s not to say that I don’t have my own verbal idiosyncrasies. The one I noticed most tellingly this time through is that, when I feel anxious or unprepared, I tend to “double speak”. No, I don’t mean I’m evasive with answers, I mean that I tend to say things twice. It’s the old “I’m going to tell you something, tell you, and then tell you what I told you.” That may be effective in a classroom or in a talk, but it’s annoying in an interview. Needless to say, I was brutal with the editing because of this, so if you hear what feel like stutters in the interview, I figured you’d be more willing to deal with that than with me repeating myself ad nauseam (LOL!).

This week’s interview was originally recorded on April 1st, and because of that, we decided it would be fun to mess with the format a bit, including the theme song. Some of you may know this, but for those who don’t, we change the theme song every 20 shows. Since this was show 41, we were going to change the song anyway, but Matt thought it would be funny for this show to inject Rebecca Black’s viral YouTube track “Friday” into the mix. We figured if anyone could roll with it and not feel like they were being made fun of, it would be me (and they would be right ;) ). Next week we will have a different theme, and that will carry us through the next 19 episodes.

This week I talked a bit about my transition from a traditional development environment to an Agile one, and some of the interesting challenges that entails. I also talked about some of the interesting quality environments and expectations with certain products (such as video games, especially before the auto-update abilities of more recent systems), and about Weekend Testing in the Americas. If you’d like to hear the interview, listen to Episode #41.

Standard disclaimer:

Each TWiST podcast is free for 30 days, but you have to be a basic member to access it. After 30 days, you have to have a Pro Membership to access it, so either head on over quickly (depending on when you see this) or consider upgrading to a Pro membership so that you can get to the podcasts and the entire library whenever you want to :). In addition, Pro membership allows you to access and download the entire archive of Software Test and Quality Assurance Magazine, and its issues under its former name, Software Test and Performance.

TWiST-Plus is all extra material, and as such is not hosted behind STP’s site model. There is no limitation to accessing TWiST-Plus material, just click the link to download and listen.

Again, my thanks to STP for hosting the podcasts and storing the archive. We hope you enjoy listening to them as much as we enjoy making them :).

Wednesday, April 13, 2011

Book Review: The Book of Audacity

There are times when a book reviewer reviews a book because it’s something they have a passing interest in, or something they know they should be doing and want to learn more about. Then there are those books that relate to something so familiar and so everyday that it’s easy to build an expectation for the title. For me, audio editing on behalf of the TWiST podcast is a nearly daily thing (at least five days a week), and Audacity is my tool of choice for doing audio editing, partly by chance but more recently by familiarity. Is there a book out there that would keep my interest on the subject?

I’m happy to say that “The Book of Audacity”, written by Carla Schroder and published by No Starch Press, is just such a book. You don’t have to be a computer aficionado, but it certainly helps. You don’t have to be a musician, but it certainly helps there, too. You don’t have to be someone who enjoys a witty and sometimes mildly sarcastic writing style, but again, it certainly helps. If you are the special geek that fits all three of these categories just mentioned, then this book is a gem!

The Book of Audacity is platform agnostic, but shows a number of approaches to using it on a number of platforms (with a special section specifically focusing on Linux). Audacity is free and cross platform, and the author spends plenty of time trying to make the users feel comfortable tackling the various topics.

Many books are written to describe the process of recording and producing music, but they often devolve into an impenetrable treatise on sound theory and issues of interest to professional audio and signal processing engineers, leaving many of us everyday folks interested in recording behind. Carla is careful to not do that here. Often her tone is irreverent, humorous, and at times downright snarky, proving this isn’t your typical audio recording book, and that’s all for the better.

The Book of Audacity uses multiple projects to help the user get the most out of the program and appreciate many of the finer details the application provides. The projects will hold varying amounts of interest for different readers, but the chapter-long focus on each allows readers to go in-depth if they want to focus on that particular aspect, or skip to the next chapter if they don’t.


This section gives a good overview of the program and how to get a quick understanding of many key features. It’s not an in-depth tutorial, but the first time user will feel comfortable after perusing this chapter.


Think that you will have to break the bank to create a “recording rig”? In many cases, it’s as simple as using the hardware already in your computer. If you want to get more involved, there are additional items that can be used, and I appreciate the level of personal detail Carla provides in showing the equipment that she personally likes using, including audio interfaces, microphones and digital recording devices.


This chapter may be skipped if this project isn’t relevant to you, but there’s a lot of good information here regarding how to get external audio into your computer and then work with it to create master CDs. Topics like dynamic range, recording levels, fixing pops and clicks, dealing with clipping, dealing with separate tracks, creating fade-ins and fade-outs, and mixing down and creating the final audio CD are covered.


If you have an interest in recording live music, especially if you use a modern flash based digital recording device, then this section is for you. It covers the technical details of recording a live performance, whether that be using the digital recorder to record the live room or getting a feed from the board directly. It also covers such things as how to interact with the sound crew and generally be a net positive in the experience (these are the little things that set this book apart from many other audio books, in that it covers some of the finer personal interactions, not just using an audio tool). Methods for dealing with recordings made at different times and different places are covered so that the end result sounds as consistent as possible.


This section covers taking audio from multiple audio sources and creating a compilation CD (or for those of us of an older generation, a digital “mix tape”). It covers using multiple audio formats, ripping from CDs and DVDs, converting MP3s to uncompressed WAV files, getting the tracks to a consistent sound level, determining track order, and creating a seamless production so that the final product can be mixed down and burned to a Red Book format CD.


DVD audio is a high end sound approach that takes advantage of the space a DVD allows to let the person doing the recording use even higher quality audio samples and audio formats (uncompressed WAV, AIFF, FLAC, etc.) This section walks the user through the process of producing an audio DVD and highlights tools that will help the process go smoothly.


So far, most of the projects have dealt with single-track (mono or stereo) sequences. This is the first section that gets into multi-track recording, and it covers my bread-and-butter topic, podcasting. You’ll learn how to mix multiple parts together, including using intro music, creating a ducked track for theme music so that it drops when the announcer speaks, cleaning up verbal tics (how to deal with “ums” and “ahs” and the like), as well as exporting to a format that is high enough quality for listening purposes but minimizes the size requirements for easier downloading.


So you have gone to great lengths to produce audio for various purposes. Now what do you want to do with it? In this chapter, the topic of getting your work out into the public space is covered. Ever wondered what it would be like to be a signed musician? Do you even want to go that route? Do you need to? Can you forge your own path? This section answers those questions, as well as how to go about developing an online presence, how to get your work out to those who want to listen to it, whether or not you should use Digital Rights Management on your files (personal choice, I generally say “don’t”, but your mileage may vary), and dealing with the issues surrounding copyright and fair use.


While the section for podcasting goes into the basics of multi-track recordings, this is a deep dive into how Audacity allows for multiple channels to be recorded (and the equipment necessary if you want to do it “live”), dealing with and ordering multiple tracks, getting the levels right, using the internal mixer and equalization tools, labeling tracks to keep track of what’s happening and where, how to move various tracks around and group them together, as well as how to construct custom mix downs so that instruments are panned and ordered in the mix the way you want them to be.


If you are tired of paying for ring-tones for your cellular phone, or just want to know how to make them for the fun of it, this section is for you. It discusses creating a short recording, using dynamic range compression so that the recording can be optimized for the small speaker inside a phone, and then converted to the file format that your phone will support and allow you to upload it and use it as a ring tone. Again, it’s an example of a project that shows how Audacity can be used for a range of audio options.


One of the great advances Audacity and other audio tools provide is the ability to use effects that used to require lots of outboard gear. Signal processing allows for very basic details such as dynamic range compression and signal leveling, but also lets the user go to town with interesting effects such as reverb, digital delay, tremolo, phase shifting, distortion, etc. Using these effects effectively is more challenging than just setting an effect and applying it. Multiple effects on a given track are additive, and knowing how to deal with that is important. It’s also possible to use the effects to create electronic drum sounds and other approaches (there’s an entire effects language called Nyquist that is covered a bit here as well).


Ideally, we will do the best we can to get clean and focused audio recording, but sometimes the source material is not perfect or we have little control over some of the issues that arise in the recording process (live interviews, noisy rooms, sudden volume changes, etc). This section deals with the ways to handle the inevitable “clean-up” jobs we will face from time to time. Simple tasks like cutting out sections or trimming silence are covered, as well as more advanced topics like splitting audio tracks, performing noise removal (and dealing with the effects of that), how to effectively use leveling and normalization, waveform repair using the draw tool, modifying tempo and performing pitch correction, and using compression effectively are all covered here.


For those who are using Windows or a Mac, this section will probably not be relevant. For those using Linux distributions, though, there is a gold mine of information in here for maximizing your system’s abilities.


Windows also has a number of quirks and areas that can be adjusted so that the system performs smoothly and integrates the various options for sound input and output together. With the number of available devices that leverage USB 2.0, FireWire and other media formats, even laptop users with limited on-board configurability can branch out to high quality interfaces and learn how to tweak them for best use.


There are lots of preference options that will allow the user to tailor the experience to the things that they do best or wish to take advantage of, and this section covers all of them. This section also covers default file formats, setting up batch jobs (chains) that can be used for automating many of the routine tasks, exporting to a default file format, enabling or disabling effects, and other helpful options so that you can focus on your projects and not on the system that supports them.


The last section is a reference guide to audio recording equipment, topics and terms, recording myths and explanations that will help demystify the process of audio recording and help present the truth about such ideas as “equipment burn-in” (myth, it’s not needed), the superiority of tubes over solid state (99.9% of listeners won’t be able to tell the difference), and the need for specialty cables (in most cases, nope. You want properly made cables, yes, but gold plating is not necessary).


There is truly something in here for everyone who deals with audio recording of any stripe, and Audacity can be used for both simple and all-encompassing projects. The style of the book takes into account that not everyone has the same goals, and the book is structured in a way where a complete read-through is not necessary. The basic techniques are covered multiple times in the book, so that mastery can be developed but also so that the project that interests the user can be the primary focus.

Most of all, the book gets out of the way to let you work on the projects you want to work on in the way you want to work. It also strikes a balance between technical discussion and everyday reality, with a tone that is both engaging and entertaining (yes, technical books can be entertaining, too, it is possible). For those looking to go beyond the basics, and want to use Audacity as their tool of choice, The Book of Audacity would be a good title to help get the most out of that decision.

The Book of Audacity: Record, Edit, Mix, and Master with the Free Audio Editor

Saturday, April 9, 2011

EWT39 - Optical Recognition. A Weekend Testing Follow-Up

I was pleasantly surprised this past week when I saw that there was going to be a 39th European Weekend Testing event. Much of the recent action for EWT has come from the group meeting on weeknights, which for me is right in the middle of my workday; hence, I'm not able to make those. So I was happy to see that we were going to have a session that would allow me to attend, even if somewhat early (a start time of 8:00 AM for those of us on the West Coast of the US, but 4:00 PM GMT/UTC).

Today I'd like to welcome Eusebiu Blindu as European Weekend Testing's newest facilitator. I think he did an excellent job and I look forward to future interactions with him at the helm of EWT's sessions.

The mission today was as follows:

Compare the following image to text converters:

1) file size capability
2) speed of conversion
3) quality of conversion (defined by the tester)
4) find other aspects that can be a relevant metric and point out the observations

From here, we split off and presented the findings of our tests. I wanted to try some things that I normally have to deal with when it comes to documents that I'd like to convert to other things. These included:

  • tables of data
  • checklists
  • phone lists

I have literally dozens of these in PDF format, many of them flat images that have been scanned or emailed to me. Oh, how nice it would be to see this data liberated and usable in other capacities. These apps are designed to help do that. From this premise, I used one of my favorite approaches to testing, which I call (and so do lots of other people, it's not my idea ;) ) "persona based testing". Persona based testing is where we take on the role of a person looking to accomplish a task, and put ourselves in their shoes. By doing this, we try our best to forget the technology (especially if we know it) and try to experience the situation from the perspective of another person. My goal was to see if, with the three document types I described (all three flat PDF files), I could get them into a viewable and usable form.

All three applications could deal well with straight text, say, from a letter or a short story on a page. The abilities of the applications started to diverge and break down as we added more complexity to the tasks. The Free-OCR application took my phone list and collapsed its four columns into one long column, one listed right after the other (not useful for its intended purpose). Online OCR was able to scan it with no line breaks while preserving the line order, so that was a little more useful. Google Docs conversion maintained the format of the original file, including line breaks. The toughest test was to see how each handled a table of data with cell borders (visible lines) between the data. The fact is, none of them handled it well. All of the "scanners" broke up the text into random batches, and none of them corresponded with another.

We discussed the various methods that we could put into play to test these applications, and here's where the true value of Weekend Testing comes in... no matter how much you think you know about an application, no matter how much experience you have in testing, someone else is going to provide an idea you hadn't considered. In a short time period, something has to give. None of us tested all aspects of all three tools (whether because we didn't have time, or figured someone else was going to test a particular item, is open to interpretation). We also had an interesting discussion comparing the claims of each product vs. the realities, both in what it was able to do and what it wasn't (interesting find, though: one of the apps stops you at 15 uploads an hour, but if you clear your cookie cache and history and then go back to the site, you can continue to upload files).

So my thanks to the testers who participated in today's session. Each of these sessions helps us develop our skills and approaches to testing, and more to the point, helps us think just a little differently. I'll argue that the latter is the more important of the two, but any practice adds value, so I'm again happy for the chance to keep improving with a little help from my friends.

Friday, April 8, 2011

TWiST #40 -- with Fiona Charles

So this was a challenging week to edit a podcast, all things considered. The Selenium conference was in town, and I was an attendee (see my past three entries for my live notes from the various sessions that I attended).

Normally, I use the spare moments I get during the day to edit these details, but the conference left me few of those, so that left early morning or late at night (and those who follow the blog know that I'm next to useless late at night, so early morning it was). I was actually able to get the podcast done early this week, and therefore it's been up for a day already (that doesn't usually happen, but I hope the early treat was enjoyed by you all out there :) ).

This week's interview is with Fiona Charles. Fiona is a consultant whose specialty is test management for large-scale companies and organizations, on both private and public/government projects. Fiona is the principal consultant with Quality Intelligence. Her blog and Twitter details are in the links.

Fiona takes on a number of topics related to large projects, dealing with the challenges and issues those projects face, including project and corporate politics and the roles of testers in that big mix. But why listen to me prattle on about it? Go check out Episode #40 for yourself :).

Standard disclaimer:

Each TWiST podcast is free for 30 days, but you have to be a basic member to access it. After 30 days, you have to have a Pro Membership to access it, so either head on over quickly (depending on when you see this) or consider upgrading to a Pro membership so that you can get to the podcasts and the entire library whenever you want to :). In addition, Pro membership allows you to access and download the entire archive of Software Test and Quality Assurance Magazine, and its issues under its former name, Software Test and Performance.

TWiST-Plus is all extra material, and as such is not hosted behind STP’s site model. There is no limitation to accessing TWiST-Plus material, just click the link to download and listen.

Again, my thanks to STP for hosting the podcasts and storing the archive. We hope you enjoy listening to them as much as we enjoy making them :).

Wednesday, April 6, 2011

Selenium Conference, Day 3

Day 3 got underway with an early arrival to set up, check the sound board and make sure that we could get more recording done (I have approximately 30 pieces of audio to go through, so it might take a while to parse and massage all of it). David Burns was kind enough to sit down and give me an interview discussing his involvement as a committer and his views on both the present tech and the future of Selenium, and his involvement with it. Also, for those interested, David Burns is the author of the Selenium 1.0 Testing Tools Beginner's Guide that is the subject of the Practicum section of my blog :).

Wednesday morning's keynote was given by Bret Pettichord, he of the legendary cowboy hat and Watir evangelism, among other things :). Bret led a tour-de-force performance championing science and eschewing the drugs that are record and playback. The dark ages of test tools actually had gag orders; you couldn't criticize the tool vendor if your system didn't work. There was much discussion of Watir and Selenium, and a key quote that I found very valuable (it seems a lot of others did, too): "Watir was made to help make testers better developers. Selenium was made to help make developers better testers". Bret followed Jason Huggins' example from yesterday and did his whole talk without any visual aids; it seems a lot of people appreciated this approach as well. The key takeaway is that the story we are telling about Selenium does not match the reality of what we are experiencing. If we look at the Selenium site, it's all about the Selenium IDE and recording tests. If we look at the people here at the conference, almost nobody is using the IDE. Our story does not match the science. We need to change the story, and help testers realize they must make the step from tester to programmer. They don't have to be brilliant programmers, but they need to know some underlying concepts and not be afraid of writing test programs.

Andreas Tolfsen followed with a talk about how Opera is using automated testing in their organization, and shared that they generate anywhere between 1,000,000 and 6,000,000 test results *EVERY DAY*! Another interesting statistic? Their developer to tester ratio is 2 to 1! While I had a chance to catch a few of the comments from this talk, I was mostly engaged interviewing Ashley Wilson for the podcast. For the record, everyone at this conference owes Ashley a huge vote of thanks! She has very much been the "wizard behind the curtain" for this conference. Most of the moving parts and actions that have made this work have been handled primarily by her and a troop of volunteers, so if you see Ashley, definitely say "thank you" for what has been a terrific few days.

Eran Messeri followed with a topic that has vexed and interested me for some time... how can we test better in virtual environments? Google has a huge initiative around running well over 1 million test cases on a variety of platforms. Their answer is a heavy focus on virtual machines that they can spin up and down, as well as massive parallelism of test execution, using what Google refers to as "test shards". Not only are tests run on multiple configurations, but test suites are literally split up to run over dozens or even hundreds of virtual machines. Eran insists this is not a holy grail by any means; there are a number of drawbacks to this approach, not the least of which is the continuous need to maintain many very large disk images and update the applications on a regular basis. Still, even with that, the benefits outweigh the disadvantages considerably.
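The sharding idea is simple to sketch in a few lines of Python. This is a generic illustration of splitting one suite into disjoint subsets that can run on separate machines; the interleaved split and the fake test names are my own, not Google's actual scheduler:

```python
# A minimal sketch of test sharding: divide one large suite into disjoint
# sub-suites so each can run on its own machine (or VM) in parallel.

def shard(tests, num_shards):
    """Return num_shards disjoint sub-suites that together cover all tests."""
    return [tests[i::num_shards] for i in range(num_shards)]

suite = [f"test_{n:03d}" for n in range(10)]
shards = shard(suite, 3)

# Every test lands in exactly one shard, so the shards can run concurrently
# without duplicating or dropping any work.
assert sorted(t for s in shards for t in s) == sorted(suite)
print([len(s) for s in shards])  # shard sizes: [4, 3, 3]
```

Real systems add a layer on top of this (balancing by historical test runtime rather than count, retrying shards on flaky VMs), but the core partitioning step looks much like the above.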

Francois Reynaud and Kevin Menard discussed the changes and the opportunities for testers with Selenium Grid 2.0. There are some exciting changes in play. The most interesting one right off the bat is that it's just one jar file now, and therefore easier to install and set up. Additionally, Grid 2.0 can act as either a node or a switch, and it has a lot of customization options. Oh, and the demo looked really, really cool :). What's more, Watir users will be able to leverage Grid 2.0 to extend their tests across multiple machines via watir-webdriver. The load balancer in Grid 2.0 allows the user to customize which requests are sent to which nodes. Also, Grid 2.0 is not bound to Selenium. Excited? Alas, we'll have to wait until Selenium 2.0 beta 4 is released, but beta 4 is practically all about Grid 2.0. Can't wait :)!!!

Andy Tinkham followed up with a talk using Selenium and Cucumber to demonstrate a case study of how his health services company is structuring their test cases. They are doing as much as possible to separate the data from the tests, using random test data to increase the odds of finding bugs, and using Cucumber to help construct the tests in plain text while still making the underlying framework effective for those using it. The Cucumber tests and features are constructed with QA, Dev, BAs, and product managers to create the initial stories. They use just enough source code to translate plain text into test calls. They showed a nice example of a case surrounding patients with allergies and notifying users that this is the case. The scenario as described (given, when, then) is well constructed and sensible to anyone who reads it (as is the promise of Cucumber, generally speaking :) ). Seriously, this is an impressive real world example, and while some of this is over my head, it's looking to be less over my head. Additionally, I like seeing the references and comments about meta-programming using Ruby (since the company I work with is heavily Ruby, I like hearing more and more success stories of Ruby working well with Selenium and Cucumber. It gives me hope :).
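The "just enough source code to translate plain text into test calls" idea can be sketched in a few lines of Ruby. This is a stripped-down stand-in for what Cucumber itself does: a regex-to-block registry that maps readable steps onto code. The allergy scenario and step wording below are illustrative, loosely modeled on the case study described above, not Andy's actual suite.

```ruby
# A miniature step-definition engine: plain-text lines are matched against
# registered patterns, and the captures are passed to the backing code.
$world = {}
$step_defs = []

def step(pattern, &block)
  $step_defs << [pattern, block]
end

def run_step(text)
  pattern, block = $step_defs.find { |pat, _| pat.match?(text) }
  raise "undefined step: #{text}" unless pattern
  block.call(*pattern.match(text).captures)
end

# The "just enough code" backing the plain text:
step(/^a patient allergic to (\w+)$/) { |drug| $world[:allergies] = [drug] }
step(/^the doctor prescribes (\w+)$/) do |drug|
  $world[:warning] = $world[:allergies].include?(drug)
end
step(/^an allergy warning is shown$/) { raise 'no warning!' unless $world[:warning] }

# The scenario itself stays readable to QA, Dev, BAs, and product managers:
[
  'a patient allergic to penicillin',
  'the doctor prescribes penicillin',
  'an allergy warning is shown'
].each { |line| run_step(line) }

puts 'scenario passed'
```

In a real Cucumber suite the plain-text lines live in a `.feature` file and the blocks live in step definition files, but the translation mechanism is essentially this.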

Following lunch, Joel Klabo talked about "Flex Piloting". First thing to note: Joel is a first-time speaker, and I want to offer my congratulations to him for stepping up to speak. As he said when he forwarded his talk for consideration, "you have to start somewhere". Flex Pilot is an add-on that can be run in both Selenium IDE and Selenium RC, and it allows an additional series of commands to be added to both. The demo looked interesting and has sparked my interest :).

Next up was the ADD portion of the program... that's right, Lightning Talks. Seriously, I'm not going to do this in real time, I just don't type that fast (LOL!), but suffice it to say that the presenters were both entertaining and covered a broad variety of topics (Selenium tips in poem form? Whoa!). Also, as I was watching the second talk I noticed that Devon Smith was delivering it... I had no idea she was here! Devon, stop by and say "Hi", I'd love to finally meet you in person! An amazing and broad litany of talks was presented, with only a couple of buzzes. Thank you all for giving me my "shiny fix/short attention span theater" for the day!

During the break, I finally cornered Jason Huggins and got an interview (seriously, I've been trying to get this interview for three days, so I'm a little psyched :) ). Following the break, Adam Christian gave a talk about Automation Battle Scars. The most memorable opening slide ever... "Automation Prison Rule 1: Punch a developer in the face!" No, not really, but don't be a pansy. Get in, make your case, make your stand, and get to it. Earn the respect of the development team. Stop trying to take shortcuts. Learn how to code. You don't have to be an expert, but you'll be well positioned to do better testing and better automation work if you learn some of the basics. Please note, much of the tone of the talk was tongue in cheek. All in all a great and entertaining talk.

Simon Stewart is the final speaker, and he brings us back to why we are all here... what was the easiest way to control a browser? It was a good goal once upon a time, but the ambition has changed. There's way more to the web than there was seven years ago. So what's next? We need to destroy the project to make the world a better place. We need to make the effort to reinvent and redevelop Selenium for the future. We are going to be the ones to help do that, and browser vendors are stepping up to help make their part of that happen. We need good, high quality bug reports, but much more importantly, we need good quality patches. The future looks amazing: the Watir project is going to be based on WebDriver, and so is Selenium. The browser community is getting in on the game and helping us get to the point where standards are set. We're getting speed and performance increases, and we are getting a heck of a combined community. There's a lot that we will be able to do in the future, but note the point: it's what WE will be able to do in the future. The beauty of open source projects is the WE. The community needs us, all of us. Let's all commit to doing something to make Selenium and Watir and WebDriver and [fillInTheBlank] even more incredible than they are today.

My thanks to everyone for a tremendous program and an amazing three days. It's been an amazing experience. Here's hoping I can bring this all home and put it into practice in a meaningful way, and be one of those WE's that the community needs. Now I'm hoping that we will have some fun this evening in the closing hours, and I'll talk about that in another post.

Afternote: Some have wondered why I haven't posted a review yet for Selenium Simplified, Alan Richardson's book. The answer is, I have, but it's in a different place. You'll find my review for Selenium Simplified in The Testing Planet, the journal published by the Software Testing Club. Paper copies just went out this week, so those who purchase the paper version are getting first crack at it, but I am willing to bet the electronic version will be available by the end of the week, and then you can see my review there.

Tuesday, April 5, 2011

Selenium Conference, Day 2

So here we are once again, after a fun night of celebrating with the kickoff party at the Cigar Bar and Grill (great food, snacks, an open bar, and 300 of your closest geek friends; how could you possibly go wrong ;)?). The #tableoftrouble decided to go together, and I walked us all through Chinatown to get to Jackson Square; on the way we stopped by my favorite sweet shop in Chinatown, with every delicacy you could imagine (black candied ginger and spicy cuttlefish FTW).

From there, we spent some time schmoozing with the attendees, and I was able to get a few interviews, including a really cool one with Patrick Wilson-Welsh, where he turned the tables on me and started interviewing *me* :). I had fun talking with Bret Pettichord, and there's a picture floating around out there of the "Testers with Hats" :). Elizabeth Hendrickson came out to hang with us for a bit, and a bunch of us went for late Mexican food down near Yerba Buena gardens (good food, great conversation, had to be rolled out of the restaurant, really sleepy but very happy). My thanks to Dawn Cannan, Marlena Compton, Bret Pettichord, Andy Tinkham, Elizabeth Hendrickson, John Medlong and Patrick Wilson-Welsh for a great evening.

Tuesday morning fired up with the introductory keynote from Jason Huggins, and an explanation of where Selenium came from, how it got its name, what it was meant to do, and how the state of technology keeps requiring Selenium to up its game to keep up and move ahead. One of my favorite quotes from the talk was the idea that "elephants aren't meant to dance, but if you can make an elephant dance, that's really cool!" Also noteworthy about Jason's keynote was the fact that he did the whole thing without a single slide. Every word was to be heard by everyone without support, and he did a masterful job of that. Also, while it's well known, it was quite fun to reiterate that "Selenium is the cure for Mercury poisoning" (LOL!).

Dave Hunt and Andy Smith followed up with an engaging demo of a game with a flying helicopter, and automating the game. The intent is to show that Selenium can automate Canvas applications (canvas being a new tag in the HTML5 specification). The approach for testing canvas applications with the Selenium suite is to create testing requirements, write auto hooks, consider visual feedback, and work together to create tests. Sounds like good advice all the way across the board.

The next talk focused on Selenium and Cucumber. One of the main points of this talk was that, if you are a lone tester, you have two choices: you can do manual testing, or you can do automated testing. You cannot do both, or at least you can't do both well; it's too much for one person to handle. For solid automation to take place, it needs to be a dedicated function and role. From here, the talk veered into Cucumber and how, after the automation framework is in place, it can go a long way towards documenting and describing tests in language that real people understand. Some controversy was raised with the comment that testers should do testing in the language the application was written in. On the surface I agree, but I also see that the tests may require unique aspects that the development language won't specifically cover.

Following on just before lunch was a much anticipated talk that promised to be entertaining, and Adam Goucher did not disappoint. In short, the talk was titled "YOU'RE DOING IT WRONG!" and was dedicated to all of the steps that contribute to brittle, unmaintainable, and unmanageable tests. For those who want to have some fun, I encourage you to do a search on Twitter for @adamgoucher and #yourdoingitwrong; you'll find a lot of good practices and ideas.

As if Adam Goucher/Man of Steel wasn't busy enough, he followed on after lunch with another talk, this time focusing on the creation, maintenance, and overall desirability of Selenium IDE plug-ins. In this talk, Adam let slip an interesting irony alert... while Adam is the maintainer of the Selenium IDE project and does the lion's share of development on it, he's actively engaged in encouraging people not to use it. He'd much prefer that tests be exported out to a real programming language and leverage Selenium RC or WebDriver going forward. However, since Selenium IDE is still used, the goal is to make the experience as painless as possible for those who are using it. To this end, and to help encourage more robust and less error prone IDE development, Adam actively encourages the development of plug-ins and the benefits that come with the plug-in approach. Also, for those wondering, the development of the Firefox 4 plug-in isn't lagging, it's broken. They are not ignoring you; they just have some things that have to be fixed before it can be released. So now you know :).

Dante Briones presented a talk directed at going "Beyond Page Objects" and making a case for developing page components. I have to admit I felt that this talk spent a little too much time focusing on saving keystrokes inside of IntelliJ as opposed to focusing on the main idea of the talk, which is creating smaller and more dynamic page components. Why would we care about that? Because today's pages are not at all the monolithic static pages of yesteryear; there are a lot of small elements that we interact with, so being able to focus on the components is a good and interesting idea. For the record, I am not anti-IDE, although I much prefer using vim when I write what code I do write. I find that relying too much on the IDE adds too many layers of abstraction, and causes me to be a user of an app rather than a writer of code. Anyway, just my opinion on that little segue; overall I find the idea of being able to target and utilize smaller elements of the page to be of value.
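For those who missed the talk, here's a rough sketch of the page-component idea: instead of one monolithic page object per page, small classes wrap individual widgets and can be reused wherever that widget appears. The `SearchBox` component and the recording stub driver below are my own illustrative assumptions, not Dante's code (his examples were in Java with IntelliJ).

```ruby
# A page component wraps one widget, scoped to a root selector, rather than
# modeling the whole page.
class SearchBox
  def initialize(driver, root_selector)
    @driver = driver
    @root = root_selector
  end

  def search(term)
    @driver.type("#{@root} input", term)
    @driver.click("#{@root} button")
  end
end

# A stand-in driver that just records actions, so this sketch runs without
# a browser; in a real test this would be a Selenium WebDriver instance.
class RecordingDriver
  attr_reader :actions
  def initialize; @actions = []; end
  def type(sel, text); @actions << [:type, sel, text]; end
  def click(sel); @actions << [:click, sel]; end
end

driver = RecordingDriver.new
# The same component works on any page that embeds this widget:
SearchBox.new(driver, '#header-search').search('selenium')
puts driver.actions.inspect
```

The payoff is that when the search widget appears on ten different pages, there is one component to maintain instead of ten copies of the same locators.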

Of note, I also want to say that there are a number of talks being given in the Track B session (Adam's talk about "You're Doing It Wrong" being a shining example, so if you notice I'm not talking about many of those, well, I can't be in two places at the same time :) ).

As an almost cosmically comical juxtaposition, Dealing With Test Results with Mikeal Rogers went for ultra simplicity. His presentation was made in TextMate... no, seriously, the entire presentation was done in TextMate... he even made large line breaks as though they were slide separations. Of course, that's not the point of his talk, but I found it amusing to see the comparison of the two talks side by side. Mikeal gets to the point quickly: use CouchDB and query it. We can use HTML and JavaScript to query the data that's entered into the database. The key takeaway is that, to deal with test results, we need to store the results, and then we need to query for the details. Which database is irrelevant, just make sure it gets in there! Also, for the record, Mikeal gets my vote for the most freewheeling talk/presentation/demo... the dude's got style :)!
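The store-then-query pattern is simple enough to sketch. A real setup would POST JSON documents to CouchDB over HTTP; here an in-memory array stands in for the database so the shape of the idea is runnable anywhere. The field names are illustrative.

```ruby
# Store every test result as a JSON document, then answer questions by
# querying the documents, not by parsing log files.
require 'json'

results_db = []

def store_result(db, doc)
  # Round-trip through JSON, as a document store effectively would.
  db << JSON.parse(JSON.generate(doc))
end

store_result(results_db, { 'suite' => 'login', 'test' => 'valid_user',   'status' => 'pass' })
store_result(results_db, { 'suite' => 'login', 'test' => 'bad_password', 'status' => 'fail' })
store_result(results_db, { 'suite' => 'cart',  'test' => 'add_item',     'status' => 'pass' })

# "Query for the details": which tests failed?
failures = results_db.select { |d| d['status'] == 'fail' }
puts failures.map { |d| d['test'] }.inspect
```

As Mikeal said, which database you use is largely irrelevant; the point is that results become structured, queryable data the moment they exist.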

The group of Watir and Selenium contributors and maintainers took the stage to discuss the details about various burning topics of controversy, such as:

  • Is Grid necessary? ("YES")
  • Why are there both selenium-webdriver and watir-webdriver? (Both are considered to be at different layers, and both handle different APIs. Why two gems instead of one? Small gems with clear purposes are easier to maintain and easier to scope.)
  • Will there be support for load testing? (No, not the target or focus of Selenium; in fact Jason said "No, No, a thousand times no!")
  • What's the value of having a headless implementation of something like HtmlUnit? (It's really fast, for one!)
  • Should tests depend on data generated/established by other tests? ("NO!")
  • Are they considering moving to GitHub? ("NO!", lots of binary artifacts make that really hard.)
  • How about running their own git server? ("Discuss over beer later!")
  • What are some useful things you can do with an on-the-fly profile? (Client side profiling, creating HAR files, using custom extensions, etc. The benefit is that you don't have to copy multiple profile files to run tests like in Selenium 1.)
  • Will there be user extensions in Selenium 2.0? ("NO!" More to the point, they have a very low usage percentage, and there are other ways to do it that are better and more easily maintainable.)
  • No really, will there be user extensions in Selenium 2.0? ("How many people use user extensions in Se 1? Um, 6... out of 250. Will it be in Se 2? YeeeeeeeNO")
  • Can we inject headers in Selenium 2? If not, why? ("Selenium 1 can add headers, but then cannot remove them. Selenium is a browser automation framework, not a 'monkeying around with the web' tool. Out of scope.")
  • Will there be tools to automate mobile apps? ("There are some that focus on the WebDriver APIs to allow users to drive native applications; they have a proof of concept for Android already." Otherwise, use Selenium 2.)
  • Watir vs. Selenium? ("Different markets, it's a taste thing for many, plenty of room for both.")
  • What's the level of browser support in Selenium 2? ("Selenium does not support all browsers equally. Opera support in WebDriver is excellent because Opera rolled their own browser support. Chrome is getting better; the focus is on standardization of browser support.")
  • Can Selenium deal with client side certificates? ("A custom profile helps deal with this.")
Overall a great discussion.
Tuesday's closing keynote with Patrick Lightbody was titled "Selenium: Seven Years in the Making". Patrick talked about when he first got involved in Selenium, starting from an email quote: "too bad this didn't exist when we were setting up the test environment at Spoke". Selenium started as a small project, but with the advent of Selenium IDE, it really picked up considerably. It's exciting to see how each of the parts came into being, and how WebDriver and Selenium 2.0 have been incubating for nearly three years. It really helps to put into perspective the level of work and commitment that it has taken to make this an ongoing concern. With the possibility of overtaking QTP in the marketplace, the open source ecosystem that contributes to and drives Selenium is very healthy and thriving (think Capybara, Webrat, Watir, and HtmlUnit, as well as corporate interests like SauceLabs, Twist, PushToTest, dynaTrace, etc.). While there have been many great successes, there's a lot that they still want to do:
  • Better searchability for documentation. 
  • Better answers and solutions for issues. 
  • Making it easier for contributors to build the system and give back to the project. 
  • Also, the fate of Selenium IDE is not as clear cut as some would say. 
Patrick is encouraging everyone to participate in the process of helping define what Selenium IDE is and can be. So how do we get involved then?
  1. Answer questions on selenium users list. 
  2. Join the Selenium Mailing list. 
  3. Join the IRC chat. 
  4. Build the app. 
  5. Submit patches. 
  6. Improve documentation. 
  7. Educate your peers.  
  8. Help maintain our build system. 
There's a lot that all of us can do to make a quick and big difference. We may not all be able to do everything, but we can all do something! The future? Selenium IDE is the driver for bringing on new users, and it needs to be maintained and enhanced. Selenium WebDriver for direct automation, Grid replacing Selenium RC, and who knows what else? It looks like an exciting seven years could be in our future.
It's a shame that I have to bolt immediately afterwards, as I'd love to talk more with my fellow Selenium fans, but thanks to all of the support staff who help make this conference a success. Today was terrific, really looking forward to tomorrow and the bittersweet fact that this will be over tomorrow night! I've learned a great deal and hope to learn even more Wednesday. Also, if anyone would like to be interviewed, I'll be sticking around Wednesday after the conference and would love to talk to anyone that would like to be part of a future podcast. Just look for the bald guy in the Sinatra hat :).

Afternote: So one of the funniest outcomes of this conference is that "we've become a hashtag!". Simon Stewart posted this morning that he hoped "the #tableoftrouble" would keep providing color commentary, and we of course said we would. What's funny is that we have been referred to as #tableoftrouble ever since. I've always aspired to do something worthy of being hash-tagged, so thank you all, we're enjoying the experience, too.