Monday, January 31, 2011

Selenium Conference is Coming to San Francisco

So this is pretty exciting! The Selenium Conference is going to be held right here in San Francisco.

The organizers of the event have asked that we spread the word, so that's exactly what I'm doing!

Here are the important details to be aware of:

WHAT: Selenium Conference 2011
WHERE: Marines’ Memorial Club, 609 Sutter St, San Francisco, CA
WHEN: April 4-6, 2011
HOW MUCH: Early bird tickets ($195) go on sale tomorrow, February 1, 2011, at 6:00 a.m. PST.
Tickets can be purchased on the official EventBrite page; for more details about the conference, check out the conference website.

Hope to see you all there!!!

Saturday, January 29, 2011

Weekend Testing Double Header

So today is a double shot of Weekend Testing!

I hosted a Weekend Testing event this morning/afternoon with Weekend Testers Americas (and we had a pretty excellent turnout, too: 23 attendees!), and I'm also attending the Weekend Testing Australia New Zealand group (WTANZ). Their sessions begin at 10:30 PM PST, which would normally be a stretch (I've done it before, but it makes for a late night). Tonight, though, my daughter has seven giggling 4th grade girls over for a sleepover to celebrate her birthday... needless to say, I will be *WIDE AWAKE* at 10:30 PM, so I might as well learn something (LOL!).

WTA06 – It’s In The Cards

For the past few challenges, we have been able to approach products and tools developed by contributors to Weekend Testing (Shmuel Gershon's Rapid Reporter and one of Tim Coulter's projects among them), and last time Albert Gareev presented a challenge around "Lightbot", a Flash game that helps teach programming concepts. This time, WTA regular Eusebiu Blindu (@testalways on Twitter) contributed a testing challenge that he coded himself.

The game is based on a traditional five card draw hand used in Poker, but instead of random cards, the user can select exactly the cards that they want to make their hand. Once they have decided on their hand, they can click on a "Generate Image" button and an image displays.

The primary task is, of course, to test the application. The mission we presented was as follows:

Detect if any relation exists between the selected cards and the image generated in pop up (after pressing “Generate Image”)

The testers then broke up into groups and shared their ideas on how to determine if there was a relation. The short answer was YES, but many testers went on to see if they could figure out WHAT the relation was. That could be considered outside the scope (and some testers called out others on exactly that), but it made for an interesting testing session and discussion, so I'm not complaining :).

During the discussion period, we asked the testers to consider the following questions:

Did testing this application reveal any skills you need to improve and, if so, which is the main one?

What skill do you think helped you the most in accomplishing this task?

Here's the card game if you want to give it a try:

WTANZ12: Boundary Testing

So it turns out that the session started tonight at 9:30 PM, which means an hour earlier. Hey, fine with me :).

Richard Robinson had planned a challenge based on boundary testing. We worked through what has now been referred to as the classic ParkCalc application. For those wondering what this is, it's an actual parking calculator that calculates fares for parking your car at a specific airport. We were to test for boundaries, and I decided to see which boundaries I could find. I was somewhat amused and dismayed to realize that it would accept some very outlandish dates (like 1,000,000 BC and beyond, seriously!). I mentioned during the session that I felt compelled to see if I could generate an error, and that I seemed to get sucked further and further into a vortex of "come on, I have to be able to create a date that will generate an error someplace!!!"

The danger here is that I'm assuming an error is appropriate. How do I know that's what is needed? I expect it. It's what I'm trained to look for, and when I can't generate one, I keep looking, and get frustrated when I can't easily trigger it even with outlandish dates. What does one do at this point? Does one go with an assumption and say "look, these dates are ridiculous, this is a bug because, I mean, come on!!!"? Assumptions can be correct, but they can also be dangerous if we are wrong, so it's important to know when an assumption can be used as a legitimate yardstick, and when it cannot.
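For what it's worth, the kind of probing I was doing can be sketched in a few lines of code. The calculator below is entirely made up (ParkCalc's real rules and limits are not reproduced here); it just shows the shape of boundary testing: pick a supposed limit, then probe just inside it, on it, and just outside it.

```java
// Hypothetical parking-stay validator, purely for illustrating boundary probes.
import java.time.LocalDate;

public class ParkingBoundarySketch {
    // Made-up rule: accept stays within the years 2000-2100,
    // and reject exits that precede entries.
    static boolean isValidStay(LocalDate entry, LocalDate exit) {
        LocalDate min = LocalDate.of(2000, 1, 1);
        LocalDate max = LocalDate.of(2100, 12, 31);
        if (entry.isBefore(min) || exit.isAfter(max)) return false;
        return !exit.isBefore(entry);
    }

    public static void main(String[] args) {
        // On-boundary and just-outside probes:
        System.out.println(isValidStay(LocalDate.of(2000, 1, 1), LocalDate.of(2000, 1, 2)));     // true: on the lower edge
        System.out.println(isValidStay(LocalDate.of(1999, 12, 31), LocalDate.of(2000, 1, 2)));   // false: one day outside
        System.out.println(isValidStay(LocalDate.of(2100, 12, 30), LocalDate.of(2100, 12, 31))); // true: on the upper edge
        // The "1,000,000 BC" style probe: wildly out of range should be rejected.
        System.out.println(isValidStay(LocalDate.of(-1000000, 1, 1), LocalDate.of(2011, 1, 1))); // false
    }
}
```

ParkCalc, of course, had no such guard, which is exactly what made the session interesting.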

All of the participants were able to explore a number of boundary conditions, including many that we hadn't specifically considered. All in all, it made for a fun and fast-paced session.

So when is the next session? We're not sure at this point, but we promise, as soon as we know, we will let you all know :).

Friday, January 28, 2011

TWiST #30 – Tester’s Dinner at Jing Jing’s

So this was an interesting experiment!

Matt came to the Bay Area for SocialText’s all-hands meeting, and while he was here, he thought it would be fun to have a bunch of people get together for a “tester’s dinner”. Since he was staying in Palo Alto, I suggested Jing Jing’s, my favorite Chinese place in Palo Alto (Bay Area people, if you haven’t been to Jing Jing’s, seriously, go. The food is great, reasonably priced, and their Spicy Eggplant is to die for :) ).

I came over to Jing Jing’s and met up with Matt (this was the first time the two of us have met in person, btw) and a host of other Bay Area testers. The full list includes:

- Jane Fraser, a Director of QA at Electronic Arts
- Matt Heusser, software process naturalist and member of the technical staff at SocialText
- Yaron Kottler, CEO of Qualitest
- Michael Larsen, Senior Tester at SideReel (yeah, that would be me ;) ).
- Peter Magnusson, a Director of Engineering at Google
- Jake "Skinny Boy" McGuire, Infrastructure Lead at YouTube (Matt’s nickname, blame him :) )
- Jonathan Mischo, Manager, Quality and Support Engineering at Involver
- Ken Pier, Product Quality Manager at SocialText

The noise, bustle, and movement are real, and that made for a particularly challenging editing process. While I was able to get rid of a lot of the background noise, some of it just could not be removed without mangling the conversations, so you’ll hear a fair amount of bumping and banging of plates and such. The conversation ranges from some spectacular bugs found at YouTube and EA over the years to career advice for testers who want to survive and thrive over the next five to ten years. Oh, and to that effect, I should also mention that Involver and Qualitest ARE HIRING!!! So if you are interested in hearing the responses, please head on over and give TWiST #30 a listen!

Standard disclaimer:

Each TWiST podcast is free for 30 days, but you have to be a basic member to access it. After 30 days, you have to have a Pro Membership to access it, so either head on over quickly (depending on when you see this) or consider upgrading to a Pro membership so that you can get to the podcasts and the entire library whenever you want to :). In addition, Pro membership allows you to access and download the entire archive of Software Test and Quality Assurance Magazine, and its issues under its former name, Software Test and Performance.

TWiST-Plus is all extra material, and as such is not hosted behind STP’s site model. There is no limitation to accessing TWiST-Plus material, just click the link to download and listen.

Again, my thanks to STP for hosting the podcasts and storing the archive. We hope you enjoy listening to them as much as we enjoy making them :).

Wednesday, January 26, 2011

PRACTICUM: Selenium 1.0 Testing Tools: Chapter 7: Creating Selenium Remote Control Tests

This is the seventh entry in the TESTHEAD PRACTICUM review of PACKT Publishing's book Selenium 1.0 Testing Tools: Beginner's Guide by David Burns. The emphasis in this type of review is on the actual exercises and the practical application of what we learn. Rather than reprint the full listing of the exercises in the book (and there are a bunch of them, as that is the format of this particular title), each of these chapters is going to be a summary of the content, some walk-through of the application, and my take on what I see and do.

Selenium 1.0 Testing Tools is full of examples, "try it yourself" instructions, and explanations of what happened. The book’s cover states “Learn by doing: less theory, more results”. The focus will be less on the concepts and more on the actual implementation and methods suggested. Because this chapter requires so many code examples and steps, to preserve the intention of not copying David's work too directly, I have included in Red Text those sections of the book that are verbatim or almost verbatim.

My apologies for the delay in getting this chapter up; time got the better of me transitioning between jobs, and I had to put this on the back burner. But there was a more insidious reason... I plain and simple found myself stuck in this chapter! Why, you may ask? Because what's printed in the book does not match the available environments for Selenium RC! To be fair, I'm using a slightly different environment than David is, and I had some challenges with it in the last chapter. Those challenges became greatly compounded this time around, and I had a hard time navigating around them. This is where not having a background in Java slowed me down considerably; I could not figure out why the code wouldn't compile or run successfully. However, this is a Practicum, and the whole point is to show what I learned from these experiments. What I learned is that an environment that differs even slightly from the examples can cause real frustration when working through them.

Chapter 7: Creating Selenium Remote Control Tests

At this stage, Selenium Remote Control should be set up on your system and you should be able to run Selenium Remote Control to drive tests. This chapter focuses on converting Selenium IDE tests to code (specifically Java, since that is what was used in the book as the example).

Making these conversions will add all of the flexibility of the Java programming language with the structure and functionality of Selenium. This chapter will cover:

- Converting Selenium IDE tests to run in a programming language and getting them running
- Writing Selenium Remote Control tests from scratch
- Applying best practices such as Page Object design pattern to create lasting tests
- Running tests against a continuous integration server
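Since the Page Object bullet above is worth seeing in miniature, here is a self-contained sketch of the idea. The Selenium interface here is a two-method stand-in (so the sketch runs without a server or the real client jar), and the page and locator names are mine, not the book's.

```java
// Page Object sketch: tests talk to intention-revealing page classes,
// and raw locators live in exactly one place.
import java.util.ArrayList;
import java.util.List;

public class PageObjectSketch {
    // Tiny stand-in for the real com.thoughtworks.selenium.Selenium interface.
    interface Selenium {
        void open(String url);
        void click(String locator);
    }

    static class HomePage {
        private final Selenium selenium;
        HomePage(Selenium selenium) { this.selenium = selenium; }
        HomePage open() {"/"); return this; }
        ChapterTwoPage goToChapterTwo() {
  "link=Chapter2");  // locator details live here, not in tests
            return new ChapterTwoPage(selenium);
        }
    }

    static class ChapterTwoPage {
        private final Selenium selenium;
        ChapterTwoPage(Selenium selenium) { this.selenium = selenium; }
        void pressButton() {"but1"); }
    }

    // A recording fake lets the sketch run (and be checked) with no browser.
    static class RecordingSelenium implements Selenium {
        final List<String> actions = new ArrayList<>();
        public void open(String url) { actions.add("open " + url); }
        public void click(String locator) { actions.add("click " + locator); }
    }

    public static void main(String[] args) {
        RecordingSelenium selenium = new RecordingSelenium();
        new HomePage(selenium).open().goToChapterTwo().pressButton();
        System.out.println(selenium.actions);  // [open /, click link=Chapter2, click but1]
    }
}
```

The payoff: when a locator changes, you fix one page class instead of every test that touched it.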


This chapter expects the user to have the following tools installed to perform its exercises:

Java IDE: IntelliJ IDEA
Unit Testing Framework: JUnit

Converting Selenium IDE Tests to a Programming Language

We have so far focused on creating tests with the Selenium IDE. Now we will see what it takes to convert a Selenium IDE test case to a Java test case and use JUnit to drive the tests.

1. Open IntelliJ IDEA and create a new project.
2. Create a folder at the root of the project called test.
3. Click on File | Project structure.
4. Click on Modules on the left-hand side of the dialog that has loaded.
5. Click on the test folder that you created in the folder tree on the right-hand side of the dialog.
6. Click on the Test Sources button and the test folder should turn green.
7. Open the Selenium IDE.
8. Open a Selenium IDE test you saved as HTML, or quickly create a new one.
9. Click on File, move the mouse pointer down to Export Test Case As, and click on Java. (Actually there is no Java option, but there are two JUnit options; choosing JUnit 3 matches the structure of the code example shown in the book.)
10. If following the book and looking at the example code in the book, change the text that says change-this-to-the-site-you-are-testing to the URL of the site you are testing.
11. Click on File | Project structure.
12. Click on Global libraries.
13. Click on the + to add a New Global Library.
14. Click on Attach Classes and add selenium.jar and common.jar. These should be in the same place as your Selenium-Server.jar (from my vantage point, these files are not there; they are not part of the Selenium RC 1.0.3 distribution, or of the previous version).
15. Do the same for jUnit now. You can create a new Global library for it or add it to the Selenium Global Library.
16. Click on the Modules link on the left-hand side again.
17. Click on the Dependencies tab.
18. Click on Add and click on Global Libraries. Add the Selenium and jUnit libraries.
19. Click on Apply. When this is done the text selenium should turn purple.
20. We are now ready to run Selenium Server. We do this by running java -jar selenium-server.jar.
21. Right-click on the Java file created by Selenium IDE and click on "Run testcase1".

Below is an example of the text that was captured from the server when I ran the test:

16:59:57.933 INFO - Command request: getNewBrowserSession[*chrome,, ] on session null
16:59:57.939 INFO - creating new remote session
16:59:57.942 INFO - Allocated session 995fbbeba27c4f4eaa807f4f2eb9ad50 for, launching...
16:59:58.001 INFO - Preparing Firefox profile...
17:00:01.392 INFO - Launching Firefox...
17:00:06.962 INFO - Got result: OK,995fbbeba27c4f4eaa807f4f2eb9ad50 on session 995fbbeba27c4f4eaa807f4f2eb9ad50
17:00:06.975 INFO - Command request: open[/chapter1, ] on session 995fbbeba27c4f4eaa807f4f2eb9ad50
17:00:08.739 INFO - Got result: OK on session 995fbbeba27c4f4eaa807f4f2eb9ad50
17:00:08.745 INFO - Command request: select[selecttype, label=Selenium RC] on session 995fbbeba27c4f4eaa807f4f2eb9ad50
17:00:08.784 INFO - Got result: OK on session 995fbbeba27c4f4eaa807f4f2eb9ad50
17:00:08.788 INFO - Command request: isTextPresent[Assert that this text is on the page, ] on session 995fbbeba27c4f4eaa807f4f2eb9ad50
17:00:08.817 INFO - Got result: OK,true on session 995fbbeba27c4f4eaa807f4f2eb9ad50
17:00:08.822 INFO - Command request: isTextPresent[Home Page, ] on session 995fbbeba27c4f4eaa807f4f2eb9ad50
17:00:08.849 INFO - Got result: OK,true on session 995fbbeba27c4f4eaa807f4f2eb9ad50
17:00:08.853 INFO - Command request: click[link=Home Page, ] on session 995fbbeba27c4f4eaa807f4f2eb9ad50
17:00:08.900 INFO - Got result: OK on session 995fbbeba27c4f4eaa807f4f2eb9ad50
17:00:08.906 INFO - Command request: waitForPageToLoad[30000, ] on session 995fbbeba27c4f4eaa807f4f2eb9ad50
17:00:09.325 INFO - Got result: OK on session 995fbbeba27c4f4eaa807f4f2eb9ad50
17:00:09.330 INFO - Command request: testComplete[, ] on session 995fbbeba27c4f4eaa807f4f2eb9ad50
17:00:09.330 INFO - Killing Firefox...
17:00:09.445 INFO - Got result: OK on session 995fbbeba27c4f4eaa807f4f2eb9ad50

For the record, as I mentioned above, what the book describes veered away from what I was able to do, as two of the libraries mentioned do not exist. So what did I ultimately do? I went into the server folder and linked the selenium-server jar, the jars in the Java client component of Selenium RC, and the library and class structure of JUnit. Even with that, I am still seeing unusual behavior: the Selenium RC printout is shown for only a couple of seconds, then disappears.

Creating a Selenium instance with JUnit 3

This next section walks the user through creating a test with JUnit 3 style syntax. The focus is to get the user away from running the IDE to capture tests: convert them to Java, then massage and run them.

1. Create a new Java class in IDEA.

2. Add the Selenium import to your Java class:

import com.thoughtworks.selenium.*;

3. For JUnit 3, we need to extend our Java class with the TestCase class.

public class SeleniumBeginnersJUnit3 extends TestCase {
Selenium selenium;

4. We now need to set up a new Selenium instance. We will do this in the setUp method that is run before the tests. In it we will initialize the selenium object that we previously declared. This object takes four parameters in its constructor. They are:

- Machine name hosting the Selenium Remote Control server.
- Port that the Selenium Remote Control server is running on.
- Browser string. For example *chrome.
- Site under test.

The following code shows the initialization:

selenium = new DefaultSelenium("localhost", 4444, "*chrome", "");

5. Let's start the browser up programmatically. We do this by calling the start() method.


Your setUp() method should look like the following snippet:

public void setUp(){
  selenium = new DefaultSelenium("localhost",4444,"*chrome","");
  selenium.start();
}

6. Now we need to create a test method. We do this by creating a new method that has test as a prefix. For example: testShouldDoSomething(){…}.

7. We can add selenium commands in our test so that it can drive the browser. For example:

public void testShouldOpenChapter2LinkAndVerifyAButton(){"/");"link=Chapter2");
}

8. Right-click on the test method and click on Run Test. You should see your test driving the browser.

The example below comes from the exercise. Note: the program below is the result of following the steps exactly as described; for a beginner, this would be the expected code structure and syntax. The code as described, however, generates errors when we attempt to run it (specifically, we need to add another import statement so that the assert command is recognized). It's a minor point, and the compiler catches it, but as this is a beginner's guide, a full and complete printout of the resulting program would be helpful for spotting discrepancies or structural differences.

import com.thoughtworks.selenium.*;

public class SeleniumBeginnersJUnit3 extends TestCase {
  Selenium selenium;

  public void setUp(){
    selenium = new DefaultSelenium("localhost",4444,"*firefox","");
    selenium.start();
  }

  public void testShouldOpenChapter2LinkAndVerifyAButton(){"/");"link=Chapter2");
  }
}
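For what it's worth, the class above won't compile until the Selenium client and JUnit jars are on the classpath. To see the same setUp/test/tearDown flow run end to end without any of that, here is a self-contained sketch where a tiny fake stands in for DefaultSelenium and the lifecycle is driven by hand (the same ordering a JUnit 3 runner guarantees); the fake and its logging are my own, not from the book.

```java
// JUnit 3 lifecycle sketch with a fake browser driver, runnable anywhere.
import java.util.ArrayList;
import java.util.List;

public class LifecycleSketch {
    static List<String> log = new ArrayList<>();

    // Stand-in for com.thoughtworks.selenium.DefaultSelenium.
    static class FakeSelenium {
        void start() { log.add("start"); }
        void open(String url) { log.add("open " + url); }
        void click(String locator) { log.add("click " + locator); }
        void stop() { log.add("stop"); }
    }

    FakeSelenium selenium;

    // Same three-phase shape as the JUnit 3 class above.
    public void setUp() { selenium = new FakeSelenium(); selenium.start(); }
    public void testShouldOpenChapter2Link() {"/");"link=Chapter2"); }
    public void tearDown() { selenium.stop(); }

    public static void main(String[] args) {
        LifecycleSketch t = new LifecycleSketch();
        t.setUp();
        t.testShouldOpenChapter2Link();
        t.tearDown();
        System.out.println(log);  // [start, open /, click link=Chapter2, stop]
    }
}
```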

Creating a Selenium instance with SeleneseTestCase setUp()

In the previous section we explicitly set the properties to start a browser. Selenium provides a class called SeleneseTestCase that uses a different format. The steps are as follows:

1. Create a new Java class in IDEA.

2. Add the Selenium import to your Java class.

import com.thoughtworks.selenium.*;

3. For JUnit 3, we need to extend our Java class, but this time Selenium's own version of TestCase, called SeleneseTestCase, is used.

public class SeleniumBeginnersJUnit3 extends SeleneseTestCase {

4. We now need to set up a new Selenium instance. We will do this in the setUp method that is run before the tests. In this we will initialize the selenium object that we created previously. This method takes two parameters. These are:

- Browser string. For example *chrome.
- Site under test.

The following code shows the initialization (here setUp is SeleneseTestCase's own method, taking the site under test and the browser string):

public void setUp() throws Exception {
  setUp("", "*chrome");
}

5. Now we need to create a test method. We do this by creating a new method that has test as a prefix. For example: testShouldDoSomething(){…}.

6. We can add selenium commands in our test so that it can drive the browser. For example:
public void testShouldOpenChapter2LinkAndVerifyAButton(){"/");"link=Chapter2");
}

7. Right-click on the test method and click on Run Test. You should see your test driving the browser.

I saw errors when I tried to run this example. Again, I'm not trying to be too critical here; I have a different environment, some things are just not lining up, and I can accept that. But it really drives home the point that this book's examples use Ubuntu Linux as the development platform, and on a different platform the errors you hit are different, and may take a while to resolve (or, in my case, are yet to be resolved).

Creating a Selenium instance with JUnit 4

The following steps show how to create a new test with JUnit 4 syntax:

1. Create a new Java class in IDEA.

2. Import Selenium and JUnit. You can use the following code:
import com.thoughtworks.selenium.*;
import org.junit.*;

3. We now need to start a browser. You will need to declare a Selenium variable; do this outside of any method.

4. Create a new method that will be run before any of the tests, annotated with @Before. Inside it we will start our Selenium instance. Your code will look similar to the following:

@Before
public void setUp(){
  selenium = new DefaultSelenium("localhost",4444,"*chrome","");
  selenium.start();
}

The .start() call will make Selenium start the browser up.

5. Now that we can start the browser, we also need to kill it when our test has finished. We do this by creating an @After method with selenium.stop() in it. The method will look similar to the following:

@After
public void tearDown(){
  selenium.stop();
}

Your test file should look like this now:

import com.thoughtworks.selenium.*;
import org.junit.*;

public class Selenium2 {
  Selenium selenium;

  @Before
  public void setUp(){
    selenium = new DefaultSelenium("localhost",4444,"*chrome","");
    selenium.start();
  }

  @Test
  public void shouldOpenChapter2LinkAndVerifyAButton(){"/");"link=Chapter2");
  }

  @After
  public void tearDown(){
    selenium.stop();
  }
}
6. Run the test by right-clicking and clicking on Run Test in the context menu.

What's good about this example is that a complete test was displayed for comparison's sake.

Creating a Selenium instance with TestNG

TestNG is another popular testing framework for Java. It can be more extensible than JUnit.

1. Create a new class file for testing.

2. Create a new setUp method. Use the annotation @BeforeMethod. This will need to have the code to start a Selenium instance.

3. Create a new tearDown method. Use the annotation @AfterMethod. This will need to have the code to close a Selenium Instance.

4. Create a new test. This uses the same annotation as JUnit 4.

5. When you have completed that, your test file should look like this:

import com.thoughtworks.selenium.DefaultSelenium;
import com.thoughtworks.selenium.Selenium;
import org.testng.annotations.*;

public class Chapter10 {
  Selenium sel;

  @BeforeMethod
  public void setUp(){
    sel = new DefaultSelenium("localhost",4444,browser,"");
    sel.start();
  }

  @Test
  public void testShouldOpenTheRootOfSite(){"/");
  }

  @AfterMethod
  public void tearDown(){
    sel.stop();
  }
}

(Note that browser here is a variable rather than a hard-coded string like "*chrome"; it has to be supplied somewhere, for example as a field or a TestNG parameter.)

Creating a Test From Scratch

With the previous examples, the setup has been done so that tests can be added. Using the last example, we now create a test that loads the root of the site, moves to the Chapter2 page, and then verifies that a button is on the page.

1. Create a new method. I have called mine shouldOpenChapter2LinkAndVerifyAButton. Add the @Test attribute to that method.

2. Use the open() function and open the root (/) of the site.

3. Click on the link for Chapter2.

4. You will need to call the waitForPageToLoad() method to wait for it to load. This method takes one parameter and that is how long Selenium should wait for it to load before throwing an error.

5. Finally, assert that the button but1 is on the page. This will use JUnit's Assert class: call Selenium's isElementPresent and wrap it with assertTrue. When your test is complete it should appear as follows:

@Test
public void shouldOpenChapter2LinkAndVerifyAButton(){"/");"link=Chapter2");
  sel.waitForPageToLoad("30000");
  assertTrue(sel.isElementPresent("but1"));
}

Selenium Remote Control Best Practices

David presents some examples in this section to describe best practices that make tests more maintainable and easier to update. The following example creates a DSL so that people can see what is going on.

The example used is where a number of tests work on a site that requires logging in and navigating to a certain page. The approach is to find out if you are on the correct page and, if not, go there.

1. Create a new Java class in IDEA.

2. Import the relevant Selenium Packages.

3. Create the setUp() and tearDown() methods. I prefer the JUnit 4 style of tests and so will use code samples with the annotations.

4. We need to check that the test is on the correct page. For this we will use selenium.getTitle() to see the page title, and then, if it is incorrect, move to the chapter2 link. We do this because navigating to a page is slower than checking the page's title or making any other call against the page already loaded.

5. We then need to validate that the page is correct and act accordingly. The following code snippet is an example of how we can do this:

if (!"Page 2".equals(selenium.getTitle())){"/chapter2");
}

6. Create the rest of the test to check that items are on the page.
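The guard in step 5 is easy to demonstrate without a browser. In this sketch, a counting fake stands in for Selenium (the fake, the page titles, and the counting are my own, purely to illustrate the practice): when we are already on the right page, the guard skips the slow open() call.

```java
// Demonstrates "check the title before navigating" with a counting fake.
public class TitleGuardSketch {
    static class FakeSelenium {
        String title;
        int opens = 0;
        FakeSelenium(String title) { this.title = title; }
        String getTitle() { return title; }
        void open(String url) { opens++; title = "Page 2"; }  // pretend /chapter2 loads Page 2
    }

    // The best-practice guard: only navigate when we have to.
    static void ensureOnPage2(FakeSelenium selenium) {
        if (!"Page 2".equals(selenium.getTitle())) {
  "/chapter2");
        }
    }

    public static void main(String[] args) {
        FakeSelenium selenium = new FakeSelenium("Home Page");
        ensureOnPage2(selenium);  // wrong page: navigates once
        ensureOnPage2(selenium);  // already there: no second open()
        System.out.println(selenium.opens);  // 1
    }
}
```

Cheap checks against the already-loaded page keep a large suite fast; full navigations are the expensive part.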

Moving Selenium Steps Into Private Methods to Make Tests Maintainable

The following example walks the user through the process of refactoring code so that the tests will be easier to use for multiple executions.

Let us create a number of tests as follows:

public void shouldCheckButtonOnChapter2Page(){"/");"link=Chapter2");
}

public void shouldCheckAnotherButtonOnChapter2Page(){"/");"link=Chapter2");
}

Using the given examples let's break these down.

1. Both examples always open the root of the site. Let's move that into its own
private method. To do this in IDEA you highlight the lines you want to refactor
and then right-click. Use the context menu and then Extract Method.

2. Then you will see a dialog asking you to give the method a name. Give it something
meaningful for the test. I have called it loadHomePage.

3. Now do the same for the other parts of the test so that it looks a lot more succinct.

4. Your test class should look something like this:

public void shouldCheckButtonOnChapter2Page(){
  loadHomePage();
  clickAndLoadChapter2();
}

public void shouldCheckAnotherButtonOnChapter2Page(){
  loadHomePage();
  clickAndLoadChapter2();
}

private void loadHomePage() {"/");
}

private void clickAndLoadChapter2() {"link=Chapter2");
}

There's a lot of meat to this chapter, and if you are not an avid coder, or if Java isn't something you are very familiar with, there can be some rough going in spots. Also, as I stated at the beginning, having a different environment, or not having access to the same options described, can make for results that veer from the expected. I found this chapter frustrating to work through because some tests worked and some didn't. Overall, I felt the explanations were well done, but you may need to take my approach: explore to see what options are actually available in your implementation and make the necessary changes. Practice with the code and the implementation that is most familiar to you and see what works.

Tuesday, January 25, 2011

PRACTICUM: OpenSceneGraph 3.0 Beginners Guide: Coming Soon

One of the books that I reviewed a couple of months ago covered the audio editing tool Audacity, and was published by Packt Publishing in the United Kingdom. I am also in the process of reviewing David Burns' "Selenium 1.0 Testing Tools" (up to and working through Chapter 7 as of this writing). The Practicum review format, with its focus on the actual exercises, has led me to see both the benefits (working through the challenges) and the disadvantages (sometimes the book's examples and the real-world tools don't match up, or the environments we use aren't the same as in the examples, so creativity is required).

I decided that for the next book I would review in this format, I wanted to take on a tool and a challenge that would be a little different, something that I wasn't really used to doing, but had an interest in. For many years I've been a fan of video games, and I always wondered how they designed the backgrounds or "worlds" that those games worked in.

OpenSceneGraph 3.0 is a tool that allows for this very thing, and it's a good thing this is a Beginner's Guide, because I really am approaching it from the vantage point of a beginner. The format of this review will be similar to what I do with the Selenium 1.0 Testing Tools Beginner's Guide: my plan is to work through the problems and the examples and see how they line up, what I actually learn, and how I feel about all of it.

I need to finish the Selenium 1.0 Testing Tools Practicum first, and I hope to have it done within the next week or two. Once that is finished, I will turn my attention to this title. Stay tuned :).

Friday, January 21, 2011

TWiST #29 with Scott Duncan

This week’s episode was especially timely for me, since just this week I moved to a new company that actively practices Agile software development. Our guest this week is Agile coach and Scrum Alliance President Scott Duncan. I found myself listening closely to this episode and enjoying the views Scott discusses (and at times defends) regarding Agile practices and how they are applied to companies, both successfully and unsuccessfully. He covers a number of interesting areas, including the thought that we make a mistake when we place too much emphasis on copying manufacturing models for software testing processes. Anyway, for those interested in listening, here’s Episode #29.


A Talk to Teenagers: Become What You Want To Be

As I've mentioned here before, in addition to being a software tester, I'm also a Scoutmaster, and by extension a young men's advisor in my church. By virtue of this, I get to interact with a number of kids between the ages of 12 and 18 every week, and I've had the pleasure of watching them grow and become adults for more than seventeen years now. Recently, as part of an activity, I was asked to give a talk to the youth about career development and education. I'm not entirely sure this is the talk they wanted to hear, but it's what I delivered anyway. As I was going through some papers, I rediscovered this talk and decided that it may well inspire some testers out there, or anyone, for that matter, who is interested in seeking a change or wants to find more joy in what they do. Anyway, I hope you enjoy it :).

A Talk to Teenagers: Become What You Want To Be
We live in an amazing world today, one in which we are looking at a “sea change” in the way that things work. For the bulk of history until very recently, the options to get an education were limited both in what could be obtained and where it could be obtained. Today, with the power of the Internet and the resources available on it, what was once known only to a few now has the potential to be known by everyone. However, just because it’s out there, it won’t do you any good unless you seek it out and learn from it.

We should be very familiar with the phrases “Seek, and ye shall find. Knock, and the door shall be opened”. When it comes to taking control of your personal education, those words are more appropriate today than ever before. The old “contract” that we have been used to as a society, the idea of “go to school, get a job, work for 40 years, and we will take care of you” is over, at least for most people. The new “contract” is that we have a very fluid world, where work and ideas and creativity are available in more places than we could ever imagine, and more people can be utilized in these ways than ever before. The ones who will do the best in this new environment are those that take advantage of the fact that you must always learn, always grow, always adapt and learn more.

Right now, you have both an obligation and an expectation to be in school and do well while you are there. However, do not think or set yourself up to think that doing well in school will guarantee that you will do well in life. To be blunt, doing well in school proves that you are good at school, and that is all that it proves. Many skills in life are learned in other places, such as sports, Scouts, community, church, and even on the playground. The world is your classroom, and many of the lessons that are going to be most important to you are not going to be learned in a classroom. Always be open to learning.

When you finish school, are you done? Absolutely not. While a well rounded, classical liberal education will give you many tools that will help you throughout your entire life, many people who go to school to learn about their career will find that the skills they learned in school are out of date in five years. If all you do is focus on what you learned in school, you will be left behind. Also, don’t be surprised if what you ultimately do does not have any bearing on what you went to school to learn.

When it comes to the work that you do, I like to ask one simple question… if you had $10 million at your disposal, or to make things simple, if you never had to work a day in your life to live and be comfortable, what would you do with your time? This is after you have gone on whatever dream vacations you may have in mind. Visualizing this will give you a huge insight on what you may actually want to pursue. Will this work all of the time? Perhaps not. I’ll be the first to say that there are many things that I love to do and would be really happy to spend my time doing that would be a less than optimal way to make a living, or would require me to make too many other trade-offs in my life to do them. Still, I use this example for one reason… it helps us discover our passions, the things we would do with all our heart, and would do for free. If you can find that, you have a huge step up on others who just decide to trade their time for money. If that’s not realistic, the next best thing is to find an area of work that you think you might enjoy or be interested in, and learn to develop a passion for it.

It’s called Work for a reason (to borrow a play on words from Larry Winget), and that’s because there will be times where it will be hard, tiring, frustrating, and otherwise not what you would call a fun experience. Still, I believe the way to make a work environment more fun is to throw yourself wholeheartedly into it. From my own life as a software tester, when I just “did my job”, I had the lowest level of focus and overall interest. Yeah, I was competent, but barely. When I found times that I was doing more, or giving my all, my experiences were more fun, more engaging, and much more memorable. I once set up a lab for engineers where for a week I practically lived at the campus and a couple of the nights actually slept under my desk… that was one of my fondest and most memorable times at that job. Not because the job itself was especially fun; it was a lot of hard work pulling cable, climbing through cable ladders, bolting racks together, loading them with often really heavy equipment, etc. What made it memorable and fun was that I gave it everything I had. You all may find that you will feel and do the same if you give it your all as well.

I firmly believe that the ones who ultimately succeed are the ones who get past the “TGIF/OGIM”, live-for-the-weekend lives. Since you are young and probably haven’t experienced this yet, let me counsel you now… don’t get into it. If you find that you are living your life waiting for the weekend to get here, and your job is so onerous that you can’t stand it, you are better off quitting and finding something where you will not feel that way. When you are young enough to absorb such a change is the best time to do this. For many adults with families, house payments and other obligations, the options to just quit and find something more enjoyable may not be as realistic, or easy, but they’d also be well advised to start the process; either learn to find ways to make your work life more enjoyable, or develop the skills to make a move. Ultimately, the world is yours, and the choices you make are likewise yours. Good or bad situations are often much of our own making.

We don’t always get to choose who we work with, but we always get to choose how we will interact with them. My recommendation is to do so with respect and understanding of the other people you are working with. Always be professional, and always be generous with your time and talent. Friendship at work is great, but it is totally optional. You have really no control over who will be your pals at work, but you have control over how you respond to situations and attitudes. Respond well, and it’s likely your co-workers will do the same. If ever in doubt, though, to borrow (and paraphrase) from Steven R. Covey’s book The Seven Habits of Highly Effective People… “it is far better to be respected than to be liked”. Do not make decisions that compromise your integrity for a short term gain or to gain points with people you hope will be your friends. Stand by your principles, and understand the difference between a situation that asks you to look at something differently and one that asks you to compromise your honesty and integrity. Always be willing to do the former; never be willing to do the latter.

The world is a different place than it was ten years ago, and ten years from now, it will be different again. Be prepared to learn, grow and interact with everyone around you, and do so with passion and vigor. I’ll dare say that, if you do, nothing will keep you from succeeding in your goals. Setbacks, crisis and issues beyond your control may derail you, but your attitudes and desire to succeed will determine just how much effort will be needed to get back on track. The key is to get back on, and keep going, with purpose, determination, and yes, faith that you have a purpose to fulfill and that you can really do almost anything you set your mind to. I say almost because, right now, the odds of any of you flying under your own power are pretty slim, but then again, mechanical flight was once seen as an impossible dream, too. Who knows, maybe one of you will figure out how to do it :).

Thursday, January 20, 2011

BOOK CLUB: How We Test Software at Microsoft (13/16)

This is the fifth part of Section 3 in “How We Test Software at Microsoft”. This chapter focuses on how Microsoft communicates with customers and processes the feedback it receives. Note, as in previous chapter reviews, Red Text means that the section in question is verbatim (or almost verbatim) to what is printed in the actual book.

Chapter 13: Customer Feedback Systems

Alan makes the point in this chapter that the customer is a large part of the quality puzzle. In fact, without the customer, there isn’t much of a point to focusing on quality in the first place (of course, without a customer, there’s not a market for Microsoft’s products either… hey, it’s the truth!). The main reason for companies like Microsoft to be in business is to fill a need for the people who need the tools and options they provide. Software helps people do tasks that they either could not do without it, or that would take a great deal of time were it not there. So Microsoft recognizes the value of the customer in the relationship, and they work in a number of ways to include the customer in the quality conversation.

Testing and Quality

The truth is, customers don’t care a whole lot whether something has been tested a lot or a little; what they care about is whether or not a product works for them. Well, actually, they do care in one sense: they voice frustration when a product doesn't work well or causes them “pain” to use. Outside of that, however, they don’t really care all that much what was done to test a product.

Alan includes a great hypothetical list. Those of us who test products would certainly appreciate this list, but for anyone else, it would probably result in a shrug followed by a muttered “whatever”. For the testers reading this (and yes, I realize that’s 99% of you), here’s a list we’d like to see:

- Ran more than 9,000 test cases with a 98 percent pass rate.

- Code coverage numbers over 85 percent!

- Stress tested nightly.

- Nearly 5,000 bugs found.

- More than 3,000 bugs fixed!

- Tested with both a black box and white box approach.

- And much, much more…

For those of us who test, that’s an impressive set of statements. Guess what? The customer doesn’t care about any of them. The customer really only cares if the product fits their needs, is affordable under the circumstances, and that it works the way that they expect it to. As far as value to the user, software quality rates pretty high up there (people want good quality products regardless of what it is). The rub is, most testing activities don’t really improve software quality (well, not in and of themselves). So why do we do it and why do we consider it important?

Testing Provides Information

Take a look again at the bullet point list above. What does it tell us? It gives us some indication as to what was actually done to test the product, and it gives us information on the success and failure rate for the tests performed. While our customers may not find any of that terribly interesting, rest assured the development team does! This information provides important details about the progress of the testing efforts and what areas are being covered (and a bit about the areas that are not).

Quality Perception

Even if everything goes smoothly and no issues are found after extensive testing, the test team has still provided a valuable service in the fact that they have decreased the risk of an issue being found in the field (not eliminated, understand, because testers cannot test everything, and there are situations that may never have been considered by the test team). There is, however, a danger to the way that a lot of testing is done. When done in isolation or from a perspective of meeting functional requirements, we can create a false sense of security. While we may well have tested the product, we may also have tested the product in a manner that is totally foreign to the way that the customers would actually use the product. We hope that our test data and the quality experience of our customers would overlap. In truth, while they often do, the correlation of the two is not exact, and often there are only small areas where they intersect and find common ground. The more we can increase that commonality between a customer’s quality experience and the test data and test efforts performed, the better.

Microsoft gathers information from a number of sources; emails, direct contact through customer support, PSS data, usability studies, surveys, and other means like forums and blog posts all help to inform as to the customer experience. The bigger question then is, what do we (meaning Microsoft in the book, but also we as testers and test managers) do with all of this information? How do we prioritize it, make sense of it, and interact with it to make a coherent story that we can readily follow and understand?

Customers to the Rescue

In a perfect world, we would be able to watch all of our customers, see how they interact with the product, and get the necessary feedback to make the product better. This approach works in small groups, but what do you do when your user base numbers in the tens of millions (or even hundreds of millions)? Microsoft has a mechanism that they use called the Customer Experience Improvement Program (CEIP). You may have seen it if you've installed a Microsoft product. Participation is entirely voluntary, and if you do participate, you send statistics to Microsoft that they can analyze to get a better feeling for how you are using the system. The information provided is anonymous and untraceable, and no personal or confidential information is ever collected. Below are some examples of what data is collected:

- Application usage
  - How often are different commands used?
  - Which keyboard shortcuts are used most often?
  - How long are applications run?
  - How often is a specific feature used?

- Quality metrics
  - How long does an application run without crashing (mean time to failure)?
  - Which error dialog boxes are commonly displayed?
  - How many customers experience errors trying to open a document?
  - How long does it take to complete an operation?
  - What percentage of users is unable to sign in to an Internet service?

- Configuration
  - How many people run in high-contrast color mode?
  - What is the most common processor speed?
  - How much space is available on the average person’s hard disk?
  - What operating system versions do most people run the application on?

The test teams at Microsoft are able to review this information and make their game plans accordingly.
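To make the idea concrete, here is a minimal sketch (in Python, and purely hypothetical; CEIP's actual implementation is not described in the book) of the kind of anonymous usage counter that could feed data like "how often are different commands used?":

```python
from collections import Counter

class UsageTelemetry:
    """A hypothetical, anonymous, in-memory usage counter, loosely
    modeled on the kind of data CEIP aggregates (command counts)."""

    def __init__(self):
        self.command_counts = Counter()

    def record_command(self, command_name):
        # No user data is stored; only the command name and a tally.
        self.command_counts[command_name] += 1

    def top_commands(self, n=3):
        """Most frequently used commands, useful for prioritizing test effort."""
        return self.command_counts.most_common(n)

telemetry = UsageTelemetry()
for cmd in ["save", "copy", "paste", "save", "save", "paste"]:
    telemetry.record_command(cmd)

print(telemetry.top_commands(2))  # [('save', 3), ('paste', 2)]
```

The point of aggregating like this is exactly what the chapter describes: the team learns which features matter most without ever learning who used them.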

More Info: For more information about CEIP, see the Microsoft Customer Experience Improvement Program page at

Customer-driven Testing

Our group built a bridge between the CEIP data that we were receiving and incoming data from Microsoft Windows Error Reporting (WER) from our beta customers. We monitored the data frequently and used the customer data to help us understand which errors customers were seeing and how they correlated with what our testing was finding. We put high priority on finding and fixing the top 10 issues found every week. Some of these were new issues our testing missed, and some were things we had found but were unable to reproduce consistently. Analyzing the data enabled us to increase our test coverage and to expose bugs in customer scenarios that we might not have found otherwise.

—Chris Lester, Senior SDET

Games, Too!

This customer experience approach isn’t just for Windows and Office. It extends to the Xbox and PC gaming realm as well. VINCE (Verification of Initial Consumer Experience) is a tool that is used widely on the Xbox and Xbox 360. Beta users of particular games can share their experiences and provide feedback as to how challenging a particular game level is, using just the game controller and a quick survey. Microsoft specifically used this feedback to help develop Halo 2, arguably one of the biggest titles in Xbox franchise history. The team was able to get consumer feedback on each of the encounters in the game (over 200 total) at least three times. Overall, more than 2,300 hours of gameplay feedback from more than 400 participants was gathered.

VINCE is also able to capture video of areas and show the designers potential issues that can hinder gameplay and advancement. Halo 2 used this information for an area that was deemed especially difficult, and by analyzing the video and the customer feedback they were able to help tailor the area and encounters to still be challenging but with a realistic chance of working through the level.

Customer usage data is valuable for any software application, in that it allows the developers to see things from the user's perspective and helps to fill in the blanks of scenarios that they might not have considered. Adding instrumentation to help provide this feedback has been a boon for Microsoft and has helped shape their design, development and testing strategies.

Windows Error Reporting

Just about every Windows user at one point or another has seen the “this program has encountered an error and needs to close” dialog box. If you are one who routinely hits the “Send Error Report” button, do you ever wonder what happens with that report? Microsoft uses a reporting system called Windows Error Reporting (WER), and these dialogs help them to gather the details of when systems have problems. In later OS versions such as Windows Vista and Windows 7, there is no need for the dialog box. If an issue appears, the feedback can be sent automatically.

WER works a lot like a just-in-time (JIT) debugger. If an application doesn’t catch an error, then the Windows system catches it. Along with error reporting, the system captures data at the point of failure, process name and version, loaded modules, and call stack information.

The flow goes like this:

1. The error event occurs.

2. Windows activates the WER service.

3. WER collects basic crash information. If additional information is needed, the user might be prompted for consent.

4. The data is uploaded to Microsoft (if the user is not connected to the Internet, the upload is deferred).

5. If an application is registered for automatic restart (using the RegisterApplicationRestart function available on Windows Vista), WER restarts the application.

6. If a solution or additional information is available, the user is notified.

Although WER automatically provides error reporting for crash, hang, and kernel faults, applications can also use the WER API to obtain information for custom problems not covered by the existing error reporting. In addition to the application restart feature mentioned previously, Windows Vista extends WER to support many types of noncritical events such as performance issues.
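As a rough analogy of the flow above (not the actual WER API, which is a Windows-native interface), the "step in when the application didn't catch it" behavior can be sketched in a few lines of Python, capturing the same point-of-failure data the chapter lists:

```python
import sys
import traceback

def collect_crash_report(exc):
    """Gather WER-style point-of-failure data: process name,
    error type, and call stack. A teaching analogy, not WER itself."""
    return {
        "process": sys.argv[0],
        "error_type": type(exc).__name__,
        "call_stack": traceback.format_exception(type(exc), exc, exc.__traceback__),
    }

def run_with_reporting(fn):
    """Like WER: if the application doesn't catch an error, we do."""
    try:
        return fn()
    except Exception as exc:
        report = collect_crash_report(exc)
        # In the real system, the report would now be uploaded
        # (or deferred if the user is offline); here we just return it.
        return report

report = run_with_reporting(lambda: 1 / 0)
print(report["error_type"])  # ZeroDivisionError
```

The important design idea carried over from WER is that the crash data is captured at the point of failure, while the stack and module state are still available.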

In addition to individuals providing this information, corporations can provide it as well from a centralized service. In many ways, this helps to ensure that company specific and trade secret details are not shared, and to prevent a potential leak of sensitive information. Microsoft ensures confidentiality on all of these transactions, but it’s cool that they offer an option that doesn’t require the companies to expose more than they have to for error reporting purposes.

Filling and Emptying the Buckets

Processing all of these details from potentially millions of instances of a crash or an issue would be daunting for individuals to handle on their own. Fortunately, this is something that computers and automation help with handily.

All of the specific details of a crash are analyzed and then sorted into buckets (specific errors associated with a specific driver, function, feature, etc.). These buckets allow the development team to prioritize which areas get worked on first. If there are enough instances of a crash or issue, bugs are automatically generated so that teams can work on the issues causing the most customer frustration.

In many cases, trends develop and the situations being seen can be rendered down to a function, a .dll file, or a series of items that can be fixed with a patch. In classic Pareto Principle fashion, fixing 20% of the problems amounts to fixing 80% of customers’ issues. Addressing just 1% of the bugs often fixes 50% of the reported issues.

Out of the total number of crashes experienced and reported, most can be whittled down to a small number of actual errors. By looking at the issues that cause the most crashes, testers and developers are able to focus on the problems that are causing the greatest pain, and in many cases, resolve most of the issues seen.
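The bucketing-plus-Pareto idea can be sketched in a few lines. The crash signatures below are made up for illustration; the mechanism (reduce each report to a signature, count per bucket, walk buckets from biggest to smallest) is what the chapter describes:

```python
from collections import Counter

# Hypothetical crash reports, each already reduced to a "signature"
# (faulting module and function), the way WER bucketing does.
crash_reports = (
    ["video.dll!draw_frame"] * 60
    + ["net.dll!send_packet"] * 25
    + ["ui.dll!paint"] * 10
    + ["audio.dll!mix"] * 5
)

buckets = Counter(crash_reports)
total = sum(buckets.values())

# Walk the buckets from most to least frequent, tracking the cumulative
# share of customer crashes that fixing each additional bucket addresses.
cumulative = 0
for signature, hits in buckets.most_common():
    cumulative += hits
    print(f"{signature}: {hits} hits, cumulative {100 * cumulative // total}% of crashes")
```

With these (invented) numbers, fixing the single biggest bucket already addresses 60% of all reported crashes, which is the Pareto effect in action.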

WER information is especially helpful and effective during beta release and testing. Many product teams set goals regarding WER data collected during product development. Common goals include the following:

- Coverage method: When using the coverage method, groups target to investigate N percent (usually 50 percent) of the total hits for the application.

- Threshold method: Groups can use the threshold method if their crash curves (one is shown in Figure 13-6) are very steep or very flat. With flat or steep crash curves, using the previously described coverage method can be inappropriate because it can require too many or too few buckets to be investigated. A reasonable percentage of total hits for the threshold method is between 1 percent and 0.5 percent.

- Fix number method: The fix number method involves targeting to fix N number of defects instead of basing goals on percentages.
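As a sketch of the coverage method described above (my own illustration of the goal-setting idea, not Microsoft's actual tooling), picking the buckets to investigate is just a greedy walk down the sorted crash counts until the target percentage of total hits is reached:

```python
def buckets_for_coverage(buckets, target_percent):
    """Coverage method sketch: return the smallest set of top buckets
    whose combined hits reach target_percent of all reported hits."""
    total = sum(hits for _, hits in buckets)
    needed = total * target_percent / 100.0
    chosen, covered = [], 0
    for signature, hits in sorted(buckets, key=lambda b: -b[1]):
        if covered >= needed:
            break
        chosen.append(signature)
        covered += hits
    return chosen

# Hypothetical crash buckets: (signature, hit count).
buckets = [("crash_A", 500), ("crash_B", 300), ("crash_C", 150), ("crash_D", 50)]
print(buckets_for_coverage(buckets, 50))  # ['crash_A'] -- 500 of 1000 hits
```

This also shows why the threshold method exists: with a very steep curve like this one, a 50 percent coverage goal is met by a single bucket, which may set the bar too low.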

Test and WER

So what is test’s role in gathering and analyzing this data? Monitoring the collected and aggregated crash data and measuring progress are important. Getting to the bottom of what is causing the crash is also important. Understanding how to get to a crash, or using code analysis tools to see why the bug was missed in testing can help strengthen and tighten up testing efforts. Fixing bugs is good, but preventing them from happening is even better.

One of the benefits of exploring the crash data is that “crash patterns” can emerge, and when armed with crash patterns, these steps can be used to see if other programs or applications run into the same difficulties.

More Info: For more information about WER, see the Windows Error Reporting topic on Microsoft MSDN at

Smile and Microsoft Smiles with You

When a developer has worked on a product at Microsoft, often the CEIP and WER data can provide information about which features are actually being used in a given product. The only problem is that this feedback shows where something is going wrong. Wouldn’t it be great to have a system that also shares what the user loves about a product, or perhaps even what isn’t necessarily a crash or severe issue, but would be a nice improvement over what’s already there?

Microsoft has something that does this, and it’s called their Send a Smile program. It’s a simple tool that beta and other early-adopting users can use to submit feedback about Microsoft products.

After installing the client application, little smiley and frowny icons appear in the notification area. When users have a negative experience with the application that they want to share with Microsoft, they click the frowny icon. The tool captures a screen shot of the entire desktop, and users enter a text message to describe what they didn’t like. The screen shot and comment are sent to a database where members of the product team can analyze them.

This program is appreciated by many of the beta testers and early adopters, in that they can choose to send a smiley or frown with a given feature and quickly report an experience. The program is a relatively recent one, so not all products or platforms have it as of yet.

Although Send a Smile is a relatively new program, the initial benefits have been significant. Some of the top benefits include the following:

- The contribution of unique bugs: The Windows and Office teams rely heavily on Send a Smile to capture unique bugs that affect real consumers that were not found through any other test or feedback method. For example, during the Windows Vista Beta 1 milestone, 176 unique bugs in 13 different areas of the product were filed as a direct result of Send a Smile feedback from customers in the early adopter program. During Office 12 Beta 1, 169 unique bugs were filed as a result of feedback from customers in the Office early adopter program.

- Increased customer awareness: Send a Smile helps increase team awareness of the pain and joy that users experience with our products.

- Bug priorities driven by real customer issues: Customer feedback collected through Send a Smile helps teams prioritize the incoming bug load.

- Insight into how users are using our products: Teams often wonder which features users are finding and using, and whether users notice the “little things” that they put into the product. Send a Smile makes it easy for team members to see comments.

- Enhancing other customer feedback: Screen shots and comments from Send a Smile have been used to illustrate specific customer issues collected through other methods such as CEIP instrumentation, newsgroups, usability lab studies, and surveys.

Connecting with Customers

Alan relays how, early in his career, he used a beta version of Microsoft Test. While he felt it ran well enough for a beta product, he ultimately ran into some blockers that needed some additional help. This led him to a CompuServe account (hey, I remember those :) ) and an interaction on a forum where he was able to get an answer to his question. Newsgroups and forums are still a primary method of communication with customers. Rather than tell all about each, I’ll list them here:

- microsoft.public.* (USENET hierarchy): hundreds of newsgroups, with active participation from customers and developers

- Numerous forums sponsored directly at and under Microsoft.

- Numerous developers and testers blog under these areas.

- a somewhat social network approach to communicating with Microsoft.

Each of these areas allows the customers the ability to interact with and communicate with the developers and testers at Microsoft as well as other customers. This way the pain points that are occurring for many users can be discussed, addressed and, hopefully, resolved.

Much as we would like to believe that we as testers are often the customer’s advocates and therefore the closest example of the customer experience, we still do not have the opportunity to engage as deeply or as often in customer-related activities as would be optimal. Still, testers need to be able to gather and understand issues specific to the customers using their product, quickly determine their pain points, and seek ways to communicate them to others in the development hierarchy. By listening to customer feedback and understanding where the “pain points” are, we can generate better test cases, determine important risk areas, and give our attention to those areas that matter the most to our customers.

Monday, January 17, 2011

Day 36-40: BOOT CAMP: End of the Road!

Well, today is the day. Day #40. It came way faster than I thought it would. 40 days seemed like a long time when I first set up the challenge, but it went like a rocket. I accomplished a lot, but I feel like I accomplished only a part of what I could have. Alas, "life is what happens when you are busy making other plans".

Today was my first day at SideReel, and as I've made the point before, I don't tattle on my companies or tell many specifics (I don't feel it's appropriate), but there's some amusing aspects of today that I can share. First and foremost, I came in all prepared to get an environment together with Ruby, Selenium, Cucumber, etc, and wow them with what I learned. As I got there and got situated, one of the founders handed me... an iPhone... and said "we need to test our newest app and make sure that we find any potential issues."

Heh  Heh Heh!!!!!!

Understand, I'm not laughing at the request. I'm laughing at the fact that the one thing they had me focus on today was 100% NOT part of my 40 day BOOT CAMP plan, so I had to just kinda' wing it (thank heavens for Exploratory Testing techniques), and thus I became a Mobile Application tester today :).

The environment here is different than anything I've worked with before. There are no cube walls, we all sit out in the open area and work together. Lots of collaboration, lots of pairing, including with me to see how I was doing and what I was uncovering, and a chance to familiarize myself with Atomic Object Basecamp and Pivotal Tracker... why yes, I have just dived head first into an Agile company! I think I'm going to need to reread Lisa and Janet's book again, as I think I can actually see a lot of what they were talking about now from a practical perspective, where before it was a "well, that would be cool someday". Someday is very much now (LOL!).

What's really fun is that I get to play with new environments. My "home system" is a Mac, which hasn't been a home base machine for me in almost a decade, so there's a lot I need to refresh and a lot I need to learn (but it felt very comforting to open up that terminal window and have access to a native UNIX system again :) ). Add to that the fact that I'll also be getting a Linux system to use for a lot of the testing and automation work I'll be doing, and yeah, I'm pretty excited about the new toys.

As I said on Friday, the work is often fun and energizing, but it's the people that make or break a gig. Though it's only been one day, I can say that, thus far, the people are great, albeit quite a bit younger than me... I feared going in that I'd be the old sage, and that, it seems, has proven to be true :). Ah well, someone has to be it, right? So to everyone that's followed this madcap and rapid-paced adventure, I hope that some of the things I've stumbled across have proven to be as helpful to you as they have been to me. While the Boot Camp is now over, my Special Forces training is just beginning, and who knows how long that will take (don't worry, I'll spare you all my making another silly category to follow, and just report the findings I make as I make them, whatever timetable that works out to be).

Sunday, January 16, 2011

BOOK CLUB: How We Test Software at Microsoft (12/16)

This is the fourth part of Section 3 in “How We Test Software at Microsoft”. First off, my apologies for the delay between chapter posts. I had to finish up some work as I was changing jobs, and much as I love this blog, it had to take a back seat to other things that I had to complete. This chapter focuses on other testing tools, and on the build process and the tools that facilitate it. Note, as in previous chapter reviews, Red Text means that the section in question is verbatim (or almost verbatim) to what is printed in the actual book.

Chapter 12: Other Tools

Alan starts off this chapter with the analogy of a carpenter and the need for a variety of tools for the carpenter to do their job. It's important for that carpenter to not just have them but know intimately how to use all of them to best do the job necessary. Detective shows like CSI likewise use various tools to help solve crimes and uncover the truth (well, on TV at least ;) ).

Tools for testing and developing software are all over the place. They cover areas such as physically running tests, probing the system, tracking testing progress, and providing some automated and "computer aided testing" assistance in lots of areas (an exhaustive list might uncover dozens if not closer to a hundred tools for these purposes).

In this chapter, Alan discusses a few additional tools that are part of a tester's everyday life at Microsoft.

Code Churn

Churn describes the amount of change applied to a file or a code module over a particular period of time. To determine the amount of code churn, it is helpful to consider the following:

Count of Changes: The number of times a file has been changed
Lines Added: The number of lines added to a file after a specified point
Lines Deleted: Total number of lines deleted over a selected period
Lines Modified: Total number of lines modified over a selected period

Microsoft Visual Studio Team System calculates a Total Churn metric by summing the total of Lines Added, Lines Deleted, and Lines Modified.
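The four measures above, and the Total Churn sum that Visual Studio Team System computes, are simple to calculate once you have per-check-in change records. A minimal sketch (the records and file names below are invented for illustration):

```python
# Hypothetical per-file change records; each entry is one check-in.
changes = [
    {"file": "parser.c", "added": 40, "deleted": 10, "modified": 25},
    {"file": "parser.c", "added": 5,  "deleted": 2,  "modified": 8},
    {"file": "ui.c",     "added": 12, "deleted": 0,  "modified": 3},
]

def churn_metrics(changes):
    """Aggregate the four churn measures per file, plus the Total Churn
    that VSTS computes (Lines Added + Lines Deleted + Lines Modified)."""
    metrics = {}
    for ch in changes:
        m = metrics.setdefault(
            ch["file"],
            {"count": 0, "added": 0, "deleted": 0, "modified": 0},
        )
        m["count"] += 1  # Count of Changes: times the file was touched
        m["added"] += ch["added"]
        m["deleted"] += ch["deleted"]
        m["modified"] += ch["modified"]
    for m in metrics.values():
        m["total_churn"] = m["added"] + m["deleted"] + m["modified"]
    return metrics

metrics = churn_metrics(changes)
print(metrics["parser.c"]["total_churn"])  # 90 -- a hot spot worth a closer look
```

Sorting files by total churn gives the tester exactly the "where should I look harder?" list the next paragraph talks about.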

Code churn can give the tester an idea as to where more bugs are likely to be found. Typically, code is changed for only two reasons: writing new code to add features, or changing existing code to fix bugs. Often, in complex software systems, fixing one bug leads to introducing another one, which requires more code changes, which... well, you get the point.

Is code churn in and of itself an indication that there are problems in the code? Maybe; then again, maybe not. Still, it's worth taking a look at these frequently changing areas, as the likelihood for instability is there, and they deserve a closer look.

Keeping It Under Control

Microsoft's testers utilize Source Control Management (SCM) just as regularly, and consider it as important a tool, as the development staff do. One of the main uses of source control for the test teams is tracking changes made to test tools and test automation. Some test tools span the entire company, so keeping those tools in sync is important for a number of teams. Just as in a development team, changes to testing tools are just as prone to introducing bugs as regular software development code is. The main difference is that the test teams are the stakeholders, rather than external customers. Still, test code is software development code every bit as much as traditional software development code is.

One common benefit is the creation of a "snapshot" of a particular point in time of the application's development. By understanding all of the tests in place, say, when an application was released to manufacturing, all of the tests in use up to that time can then be used as a baseline for further tests related to maintaining an application. In essence, this is a way of creating a regression test for a suite of tests already in use (software can have regression issues, and so can test cases and test code).

One of the most powerful features of an SCM system is its file comparison tools. SCM systems support viewing two files side by side with their differences highlighted. This can help to demonstrate where code errors may exist by highlighting the changes made to the code. SCM can be applied to other documents as well as to source code. Requirements, specifications, even the pages of HWTSAM were written and reviewed within an SCM system.
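A minimal flavor of that side-by-side comparison can be shown with Python's standard difflib module. The two "revisions" below are made-up snippets (including the suspicious return-value change discussed next), not real product code.

```python
# Illustrating the kind of diff an SCM comparison tool produces,
# using Python's standard-library difflib.
import difflib

old = [
    "def get_status():",
    "    return code",
]
new = [
    "def get_status():",
    "    return code * 2",   # the kind of change a diff surfaces instantly
]

def changed_lines(a, b):
    """Return unified-diff lines that were added or removed."""
    diff = difflib.unified_diff(a, b, lineterm="")
    return [line for line in diff
            if line.startswith(("+", "-"))
            and not line.startswith(("+++", "---"))]

if __name__ == "__main__":
    for line in changed_lines(old, new):
        print(line)
```

The diff tells you *what* changed in seconds; as the next paragraph notes, it takes more digging to learn *why*.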

SCM is helpful when it comes to seeing what changed, but often it will not tell the rest of the story, i.e., why was it changed in the first place? Alan shows an example where a return code was changed so that it returned the value * 2. So why was this change implemented? If there is a standard for comments in code, then the comments may explain it, but if not, the SCM can tell who the developers were that made the changes and when, usually with specific notes explaining what was changed. Sometimes this is helpful with recent changes, but what about changes made by developers who may be long gone from the project or even the company? In this case, the SCM may house additional information, such as developer comments and bug IDs that can be compared to the comments associated with the fix applied.

The biggest challenge Alan mentioned was that, at least in the earlier days, there were many systems used, and they rarely talked to one another. In addition, the use of the SCM was informal at best when it came to the testing teams. The files were often manually copied to shares so other teams could access them, and most of the time this approach worked, but at times, it didn't because files didn't copy or were lost.

As the various test teams began to bring their various server resources together under one roof, they decided to make the system a little more structured so it could be backed up, maintained and managed better. With these changes came the storing of test code alongside and in conjunction with the source code for the product under test (treating test assets on par with development assets). This way, product code and test code can be maintained together and used by multiple groups if necessary, in addition to propagating test binaries to machines as needed.

Build It

The daily build is an integral part of the development and test life at Microsoft (as it is at many organizations). Source control, bug management, and test runs/test passes are all worked into the build process, which at Microsoft flows as follows:

Product Code
- Perform pre-build (sync code and tools)
- Perform compile and code analysis (also the basis for private and/or buddy builds)
- Conduct check-in tests
- Perform clean setup and test
- Create a checkpoint release (copy source and components) --> Self-Test Build
- Conduct build verification tests
- Conduct post-build (cleanup)
- Create release build (copy source and binaries) --> Self-Host Build
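The steps above can be sketched as a simple staged pipeline that stops at the first failure. The stage names mirror the list; the stage bodies here are placeholders standing in for real build tooling, not anything Microsoft-specific.

```python
# Toy model of a build pipeline: run stages in order, stop on first failure.

def run_pipeline(stages):
    """Run (name, fn) stages in order; return (ok, names_completed)."""
    completed = []
    for name, step in stages:
        if not step():              # a falsy result means the stage failed
            return False, completed
        completed.append(name)
    return True, completed

if __name__ == "__main__":
    pipeline = [
        ("pre-build sync",             lambda: True),
        ("compile + code analysis",    lambda: True),
        ("check-in tests",             lambda: True),
        ("clean setup and test",       lambda: True),
        ("build verification tests",   lambda: True),
        ("post-build cleanup",         lambda: True),
        ("create release build",       lambda: True),
    ]
    ok, done = run_pipeline(pipeline)
    print("build ok" if ok else "failed after: " + ", ".join(done))
```

The value of modeling it this way is the stop-on-failure behavior: a broken compile never wastes time running BVTs.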

The Daily Build

The daily build is exactly as it sounds; the entire product is built at least daily to make sure that all components can compile, all executables can be created, and all install scripts can be run. In addition, the process of Continuous Integration (meaning continuous builds performed with frequent code check-in) is also actively supported in Agile environments.

Note: The Windows Live build lab creates more than 6,000 builds every week.

Test teams usually run a suite of smoke tests. Most often, these are known as build acceptance tests (BATs) or build verification tests (BVTs).  A good set of BVTs ensures that the daily build is usable for testing.

- Automate Everything: BVTs run on every single build, and they need to run the same way every time. If you have only one automated suite of tests for your entire product, it should be your BVTs.

- Test a Little: BVTs are not all-encompassing functional tests. They are simple tests intended to verify basic functionality. The goal of the BVT is to ensure that the build is usable for testing.

- Test Fast: The entire BVT suite should execute in minutes, not hours. A short feedback loop tells you immediately whether your build has problems.

- Fail Perfectly: If a BVT fails, it should mean that the build is not suitable for further testing, and that the cause of the failure must be fixed immediately. In some cases, there can be a workaround for a BVT failure, but all BVT failures should indicate serious problems with the latest build.

- Test Broadly—Not Deeply: BVTs should cover the product broadly. They definitely should not cover every nook and cranny, but should touch on every significant bit of functionality. They do not (and should not) cover a broad set of inputs or configurations, and should focus as much as possible on covering the primary usage scenarios for key functionality.

- Debuggable and Maintainable: In a perfect world, BVTs would never fail. But if and when they do fail, it is imperative that the underlying error can be isolated as soon as possible. The turnaround time from finding the failure to implementing a fix for the cause of the failure must be as quick as possible. The test code for BVTs needs to be some of the most debuggable and maintainable code in the entire product to be most effective. Good BVTs are self-diagnosing and often list the exact cause of error in their output. Great BVTs couple this with an automatic source control lookup that identifies the code change with the highest probability of causing the error.

- Trustworthy: You must be able to trust your BVTs. If the BVTs pass, the build must be suitable for testing, and if the BVTs fail, it should indicate a serious problem. Any compromises on the meaning of pass or fail for BVTs also compromises the trust the team has in these tests.

- Critical: Your best, most reliable, and most trustworthy testers and developers create the most reliable and most trustworthy BVTs. Good BVTs are not easy to write, and they require time and careful thought to adequately satisfy the other criteria.
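Several of these criteria can be sketched in one small harness: checks that are fast, run the same way every time, stay within a time budget, and diagnose their own failures. The checks shown are stand-ins; real BVTs would exercise the product's key scenarios.

```python
# A minimal BVT-style smoke harness: small checks, a time budget,
# and self-diagnosing failure messages ("fail perfectly").
import time

def run_bvts(checks, budget_seconds=60.0):
    """Run named checks; return (passed, failure_messages)."""
    failures = []
    start = time.monotonic()
    for name, check in checks:
        try:
            if not check():
                failures.append(f"{name}: returned falsy result")
        except Exception as exc:           # report the exact cause of error
            failures.append(f"{name}: {exc!r}")
    elapsed = time.monotonic() - start
    if elapsed > budget_seconds:           # "test fast": minutes, not hours
        failures.append(f"suite too slow: {elapsed:.1f}s > {budget_seconds}s")
    return (not failures, failures)

if __name__ == "__main__":
    bvts = [
        ("app launches",      lambda: True),
        ("main window opens", lambda: True),
        ("document saves",    lambda: True),
    ]
    ok, problems = run_bvts(bvts)
    print("build is usable for testing" if ok else problems)
```

Any failure means the build is not suitable for further testing, which is exactly the pass/fail contract the "Trustworthy" criterion demands.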

Breaking the Build

One of the simplest benefits of having daily builds is that, if there are errors, they will be found within 24 hours of the check-in and attempted build. Syntax errors or missing files (forgetting to check something in) are the most common culprits, but other situations can "break the build", too. Sometimes the build breaks because of a dependency on another part of the system that has changed.

While eliminating build breaks entirely is probably a pipe dream, it is possible to take steps to minimize them and their impact. Two of the most popular techniques Microsoft uses are Rolling Builds and Check-In systems.

A rolling build is an automatic continuous build of the product based on the most current source code. Several builds might occur in any given day, and with this process build errors are found more quickly.

A rolling build system needs the following:

- A clean build environment
- Automatic synchronization to the most current source
- Full build of system
- Automatic notification of errors (or success)

Using scripts to combine the steps and to parse for errors (cmd and PowerShell on Windows, plus tools like sed, awk, or Perl in UNIX environments) can help to make the process as automated and hands-off as possible. In some cases, BVTs are also performed as part of a rolling build, with results automatically reported to the team after each build and test run.
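The rolling-build loop itself is simple enough to sketch. The sync, build, and notify hooks below are injected stubs invented for this example, so the control flow can be shown without any real build system behind it.

```python
# Sketch of a rolling-build loop: sync latest source, do a clean build,
# then notify the team of success or failure. Hooks are injected stubs.

def rolling_build(sync, build, notify, max_builds=1):
    """Repeatedly sync and build, reporting each result; return outcomes."""
    results = []
    for _ in range(max_builds):
        changeset = sync()             # automatic sync to most current source
        ok = build(changeset)          # full build in a clean environment
        notify(changeset, ok)          # automatic notification either way
        results.append(ok)
    return results

if __name__ == "__main__":
    log = []
    rolling_build(
        sync=lambda: "change-1042",
        build=lambda cs: True,
        notify=lambda cs, ok: log.append((cs, "ok" if ok else "BROKEN")),
    )
    print(log)
```

In production this loop would run continuously rather than a fixed number of times; capping it here just keeps the sketch testable.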

A Check-In System also helps to verify changes made to the main source code. A Staged Check-In can be helpful when dealing with very large projects.

Instead of checking code directly into the main SCM, developers submit their changes first to an interim system. The interim computer verifies that the code builds correctly on at least one platform, and then submits the code on behalf of the programmer to the main source control system. In addition, many of these staged check-in systems will make builds for multiple configurations.

The interim system (often referred to as a "gatekeeper") can also run various automated tests against the changes and see if there are any regression failures. By using this "gatekeeper" process, a significant chunk of bugs are found before they get committed to the main trunk.
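The gatekeeper's decision logic reduces to a simple rule: build the change, run the regression tests, and only commit on behalf of the author if everything passes. The function and hook names below are invented for illustration.

```python
# Sketch of a "gatekeeper" staged check-in: a change reaches the main
# branch only if it builds and passes every regression test first.

def gated_checkin(change, build_fn, test_fns, commit_fn):
    """Commit only if the change builds and all regression tests pass."""
    if not build_fn(change):
        return "rejected: build break"
    for test in test_fns:
        if not test(change):
            return "rejected: regression failure"
    commit_fn(change)                  # submit on behalf of the author
    return "committed"

if __name__ == "__main__":
    main_branch = []
    verdict = gated_checkin(
        "fix-4711",
        build_fn=lambda c: True,
        test_fns=[lambda c: True, lambda c: True],
        commit_fn=main_branch.append,
    )
    print(verdict, main_branch)
```

The key property is that a rejected change never touches the main branch at all, which is how the gatekeeper keeps bugs out of the trunk.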

Static Analysis

A common question when it comes to test software being written is, of course, "Who tests the tests?" A lot of effort gets put into writing test cases and test code for automation purposes, but make no mistake, that code is code, just as production code is code. Testers have the same susceptibility to creating errors as developers do. In a lot of ways, running and debugging tests helps shake out problems, but there's still plenty that we can miss, especially if we are not specifically looking for those errors. Let's face it, testers are great when they look at other people's code and implementations, but we are likely to turn more of a blind eye to our own work (or at least not cast as critical an eye as we would on the developers' work).

One approach that can be used is to run Static Analysis tools. These tools examine source code or binaries and can pinpoint many errors without actually running the code.

Native code at Microsoft is code that has been written in C or C++. There are a number of commercially available tools that allow the tester to check code for issues. Microsoft uses a tool called PREfast, which is also available in Visual Studio Team System. PREfast scans the source code and looks for patterns of incorrect syntax or usage. When PREfast finds an error, it displays a warning and the line number of the code where the error occurs.

Managed Code is any program code that requires a Common Language Runtime (CLR) Environment. .NET Framework code and languages such as C# fit this description. FxCop is an application that performs Managed Code Analysis. It can report issues related to design, localization, performance, and security. FxCop also detects violations of programming and design rules. FxCop is available as a stand-alone tool or integrated into Visual Studio.
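To give a small taste of what "examining source code without running it" means, here is a one-rule toy checker built on Python's standard ast module: it flags bare `except:` clauses, a classic pattern-based warning. This is only a sketch of the idea, not a stand-in for PREfast or FxCop.

```python
# A tiny static-analysis sketch: flag bare "except:" handlers by
# inspecting the parsed syntax tree, without executing the code.
import ast

def find_bare_excepts(source):
    """Return line numbers of bare 'except:' handlers in source text."""
    tree = ast.parse(source)
    return [node.lineno
            for node in ast.walk(tree)
            if isinstance(node, ast.ExceptHandler) and node.type is None]

if __name__ == "__main__":
    snippet = (
        "try:\n"
        "    risky()\n"
        "except:\n"        # swallows every error, even KeyboardInterrupt
        "    pass\n"
    )
    print(find_bare_excepts(snippet))   # reports the offending line number
```

Like PREfast and FxCop, the checker reports a line number for each hit; real tools simply carry hundreds of such rules instead of one.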

Note: while these tools are helpful in finding many errors, they do not take the place of regular and focused testing. The code could be free of code analysis–detected issues and yet still have lots of bugs. Still, finding many of these issues early does help free up time for the testers to focus on potentially more significant issues.

Another key detail to be aware of here is that the test code is subject to the same limitations, and therefore, the same level of focus to static analysis needs to be performed on the test code as well. Test code is production code, too, just for a different audience.

Even More Tools

There are many tools available to testers; many of them are internally built, many are commercial products, but all are meant to help make a process work faster and better. Screen recorders, file parsers, browser add-ons, and other specific tools are developed to help solve particular problems. Shared libraries are often implemented so that various teams can use each other's automated tests. The Microsoft internal tool repository contains nearly 5,000 different tools written by Microsoft employees. Still, even with the large number of test tools available, there is no replacement for human eyes and hands in many test scenarios.

Tools are often essential when it comes to performing efficient testing, but just as important is knowing which tool to use for which purpose. Like the carpenter with the latest and greatest tools and technology, sometimes the best first move is to just get in the house and see what is going on. After getting a good lay of the land and understanding the potential situations, the carpenter can pull out the necessary tools to do the job as efficiently as possible. For many of us as testers, the same rules apply. Tools are only as good as their implementation, and their effectiveness is limited to the skill of the user.