Tuesday, July 31, 2012

Deconstructing Weekend Testing: A Test Coach Camp Talk

As I was preparing talks for the Test Coach Camp session that preceded CAST this year, I had a number of ideas I wanted to discuss. Most of them centered on Weekend Testing, since that is where most of my "coaching" career, such as it is, has played out, and most regularly. As I was preparing materials to explain the Weekend Testing model, I had a heretical thought... what if I ran a session completely deconstructing Weekend Testing? If for some reason the main site were hacked and destroyed, and we had to start it all over again, what would we do?

With that, I decided to make that a core element of what we discussed. It was interesting to hear feedback from participants: those who were interested but had never done a session, and those who knew nothing about it at all. I hope this isn't seen as a swipe at the founders or other facilitators; it's not meant to be. It is shared in the spirit and understanding that we have something awesome, but even awesome can strive to do more, be more and get better.

What are some things we discussed?

The first and most obvious aspect we discussed was the main site itself. When we go to the Weekend Testing site, we see prominently displayed the last three sessions of Weekend Testing that were held. These are experience reports, and they are displayed at the top. On one hand, this is helpful, because it shows what we have done and what we do. This is great for regulars who have missed a session and want to get caught up or see what happened. However, most curious lurkers don't really want to know what has already happened; they want to know what will happen. When is the next session going to occur? Many of the commenters in my talk said that, instead of giving such prominence to past activities, we should be using that slide show to advertise the next testing sessions.

Tied into the previous aspect is how we find out about sessions. As I explained, the tradition has been to advertise on Twitter and via email about a week prior to sessions, and to post the sessions on the Weekend Testing site in the forum for announcing sessions. One participant explained to me the process of finding a session. They first:

Navigated to the site.
Clicked on the Forum link.
Clicked on Next Weekend Testing Session.
Scrolled through to find the one in their geographical area or time zone that worked for them.

It's been proposed that these sessions and their times and details should be front and center, the first thing people see when they get to the site. I think that's a very reasonable request :).

Other comments related to how we construct the sessions. This is a common question, and one that each chapter has addressed in different ways. For most of the sessions, we need to expect that they will be one-offs. That means we need to make sure that what we present, what we discuss, and how we discuss it can be handled in our two-hour meeting time. Because of this, there are certain topics that we just don't have the time to get into, such as more advanced automation techniques, or more detailed explorations of a product. We tried this with an approach we called Project Sherwood a while back, and while it worked well the first few times, the problem was getting people to commit to a regular meeting time. This has led to questions about "asynchronous sessions": a weekend testing model that utilizes the forums and allows for a more in-depth discussion that, rather than lasting just two hours, might continue for a week or more.

Another thought that was brought up was an interesting question... how could we encourage more facilitation? Right now, we have sessions with one or maybe two facilitators. What if we were to make it so that there were many potential facilitators in a given session? How would we do that? What roles could we give them? What if one of the goals was a rapid capture of the session, so that the experience report was published within minutes of completing a session, rather than the hours or days that are sometimes the norm? What if there were ways we could leverage the skills of the collective of testers to streamline the process of making experience reports, so that the announcements become the experience reports, and the next announcements go up as quickly as possible? It may mess with the egalitarian model of everyone coming into the sessions on the same footing, but maybe that's OK.

There were a lot of other areas we talked about as well, and I am looking to engage the other facilitators and maintainers of the Weekend Testing site and approach as to how we can scale our creation and help it grow and be useful to our participants. To that end, I'll ask all of you the same question... if for some reason the Weekend Testing site were destroyed and we had to rebuild it, what would you like to see us do? More to the point, what would YOU be willing to do to see that vision become a reality :)?

Monday, July 23, 2012

Cross Browser Fun: Adventures in TestCompleteLand

As I said previously, I'm looking to see how TestComplete would work were I to use it to handle some of my regular or desired testing. That means handling my environment as it is: Sidereel is a Rails app. One of the bigger challenges I currently face is that it's a bit of a headache to do a lot of cross browser testing with my current setup of Gherkin and Cucumber with Ruby (not impossible, it just requires some tweaking and making sure everything plays nice).

The benefit (ideally) of having a tool like TestComplete is that it is meant to be a self contained solution. While Selenium/Webdriver/Capybara/Cucumber/fillInTheBlank requires a fair bit of integration to get everything to play together, I was curious to see what I could do out of the box to focus on cross browser testing (and not just some random demo site, but my site that I have to actually test!).

Here's a basic example:

- Using TestComplete, record a test where I open a browser window.

- Go to www.sidereel.com

- Click Log In to get to the fields for the username and the password.

- Enter the username and password.

- Click Log In.

- Make a checkpoint so that we can tell that the page has the right username.

- Click Log Out.

The test runs, although the record process creates a bit of detritus that I then ended up having to strip out (wait states, hover points, mouse movements, etc.). Still, at the end of the process, we have a working script.

So what does it take to make a cross browser test? This is pretty slick, to tell the truth. You create a second test file, and that acts as a wrapper and a controller. You put in the calls for the browsers you want to run, and then you call the test you want to have run/repeated. In this case, the login test can be set up to run on IE, Firefox and Chrome, and if we want to set up conditionals or loops, we can do that to run the tests as many times as we want to.
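TestComplete builds this wrapper through its own project UI, but the shape of the pattern travels well to any stack. Here's a rough sketch in Ruby (my usual scripting language), with the actual browser driving stubbed out as plain step names; none of this is TestComplete's API, just the control flow:

```ruby
# Sketch of the wrapper/controller pattern: one shared login test,
# repeated once per browser. The step strings are stand-ins for real
# driver calls; this is illustrative, not TestComplete's API.
BROWSERS = %w[ie firefox chrome]

def login_test(browser)
  steps = [
    "open #{browser}",
    "go to www.sidereel.com",
    "click Log In",
    "enter username and password",
    "checkpoint: page shows the right username",
    "click Log Out"
  ]
  steps.each { |s| puts "[#{browser}] #{s}" }
end

# The "controller" test: run the same login test against every browser.
BROWSERS.each { |b| login_test(b) }
```

The nice part of keeping the browser list in one place is that adding a fourth browser (or looping the whole suite several times) is a one-line change to the controller, not a change to the test itself.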

Showing a hiccup when we try to test all three browsers; the setup of each browser is shown at the bottom. 

Yeah, this is a simple and quick example, and it doesn't necessarily tread a lot of new ground. I would be interested to see how well this setup tests and identifies CSS variations, since that's something I deal with regularly (especially between browser versions). Would TestComplete catch any differences, or would I have to eyeball it to make the identification and make the call on whether it's a bug or not? Something to think about :).

Sunday, July 22, 2012

Getting My Bearings: Adventures in TestComplete Land

I had anticipated that there would be a few things I'd need to address to get to this in an effective way. I didn't think it was going to take me three weeks to get back here. Still, with a lot of the details of Test Coach Camp and CAST now completed, as well as taking a week out to go to Summer Camp (note, this is actual Scout Camp, not the SummerQAmp that you will likely be hearing a lot more about in the coming days), I think it's time to get back into the swing of things and visit with an old friend I really have not seen in a long time.

As I stated previously, coming back to TestComplete after what has been a few revisions (version 7 was the last one I had actual interaction with, and most of my knowledge came from versions 5 and 6), I figured it would be like coming back to a familiar neighborhood a couple of decades after I'd been a regular. In some places, that's not so difficult (the Sunset District of San Francisco looks remarkably similar today to how it did in 1990). Some neighborhoods, of course, will look very different (my home town of Danville, CA has changed greatly since 1990, so much so that there are whole neighborhoods there now that didn't even exist when I moved away). Where does TestComplete fall on this spectrum? Surprisingly, a lot of the look and feel is very much the same.

This is primarily a Win32 application for Win32 applications. In the last few years it has moved into the web space, but much of the infrastructure for this application remains primarily of a Win32 flavor. While I would be really happy to learn that TestComplete now supported Perl, Python or Ruby, alas, that is not to be, even in version 9. It does support VBScript, JScript, DelphiScript, C++Script and C#Script. These are languages that, admittedly, I'm not all that comfortable with, but they will work in a pinch and I can at least navigate them and make sense of what I see.
A quick example of a test script.

The system has a lot of flexibility and ability to integrate with and coordinate with Visual Studio, so if that is the primary environment that a tester wants to do their test development in, TestComplete is well suited to take advantage of this space.

What is also helpful is that TestComplete has enhanced and improved the keyword testing apparatus, making it a first-class choice for recording and constructing tests. The benefit of keyword tests is that the tester can, with a little practice, use the keyword interface to develop robust and effective tests that use options such as looping, branching and Data Driven Testing (DDT), and that can work as standalone tests or as part of a suite. If the user is so inclined, once they have cobbled together an effective set of tests, they can export the tests to one of the supported scripting languages.
A quick example of a Keyword test.
At first blush, the obvious approach will be to use the recording method. For people just getting started, this is a natural place to start, and several tests can be created this way. However, just as I would tell anyone using the Selenium IDE or any other tool, friends don't let friends rely on record and playback. Once you have built a number of keyword tests and become familiar with the method for creating the building blocks of a keyword script, you may find that it is just as effective to build your scripts from scratch using the keyword tools, rather than recording and then going in to modify the overly exact steps that get recorded.

A cool addition in version 9 is the Test Visualizer, which helps show you, based on where you are in the script, what is happening. Rather than relying on the script and its description alone, the user can see images that define exactly where they are in the script. This is helpful when you are trying to look at recorded steps and debug/determine what you want to do as you plug in new script lines or keyword blocks.
The Test Visualizer with some sample SideReel screens

The biggest challenge with tools like TestComplete is that they start off gentle, they give some examples of what a tester can do, and then things get steep really quickly. The keyword interface helps, but even there, if you are a neophyte to the world of programming, you will struggle beyond the basics with TestComplete. The good news is that, even with that steepness, there is a lot of user documentation and screen casts to help the user make the move beyond the basics.

So as I stated at the beginning, this is a "get your bearings" level post. Over the coming days, I'm going to commit to a little bit each day rather than try to slam out big jumps. Also, I have some actual questions I want to answer, and I'm going to use my own site (sidereel.com) to try to answer them, or I might make some examples to go deeper if the options available don't carry over to SideReel. Stick with me over the next several days and see where this tour takes us. In many ways, your guess is as good as mine :).

Thursday, July 19, 2012

My Primary Talk at CAST 2012

This has been a topic that I have been interested in digging deeper into over the past several months, and I have found that, as I learn more and practice more, the disciplines of software craftsmanship and exploration are really not exclusive. They complement each other and help each discipline find more issues and are more effective combined than they are in isolation.

The Emerging Topics track was set up in the main ballroom, so that as many people as wanted to could see the talks. It felt good to know that I was presenting on the same stage (or in front of it, in my case) as the keynote speakers. Also, it felt good to get out of my traditional comfort zone. Most of my talks to date have centered on Weekend Testing or coaching/mentoring other testers. Taking on a more "sticky" subject like this was a stretch, and honestly, it was quite fun.

A disclaimer: this talk was modified before I delivered it. I was set to give one kind of talk, and by the end of a session we were discussing at Test Coach Camp the day before, I decided to give a slightly different version. Ken Pier and Cem Kaner and others addressed their own confusion and frustrations about the rhetoric that has led to the marginalization and isolation of skills such as exploration and automation. Through this, I realized there was a lot to this discussion that I was not addressing. Further, the points they made fit very well with my chosen topic. With their permission, I tailored my talk to include these aspects.

The net effect was also that, instead of delivering a polished talk with PowerPoint slides, I used flip charts to capture the bare basics of the takeaways and used the format to have a more open discussion and query of the ideas. If it seems like a great deal of the talk was rather impromptu, that's because it was. Sometimes serendipity gives us a gift, and the best advice I can give is to run with it. I'm also grateful to all of the participants for their questions and comments at the end. It's given me much to think about and will help me fine-tune my presentation.

...and with that, I'll pipe down and let you hear the talk for yourself :).

Wednesday, July 18, 2012

CAST 2012 - Day 2 and 3: Semi Live Blog

I had to shift gears yesterday and be on the spot for quite a bit of the lightning talks (as well as giving one about humility, receptiveness and martial arts, and how they relate to testing and knowledge acquisition), plus talking about "What I Learned at Test Coach Camp" and helping to facilitate the lightning talks... which is to say that I didn't get a chance to blog yesterday because I didn't get much of a chance to even sit down. But here's the recap from yesterday.

Morning: Those of us who wanted to gathered together and had a panel discussion on what we learned at Test Coach Camp. There were quite a few overlapping layers, and I was excited to see that the participants voted on the value of EDGE as a key takeaway. I'm starting to think that EDGE could be a really good workshop idea for a testing conference (at some point, I'm going to have to figure out how much I can cover before I have to ask BSA's permission; EDGE is a trademark of BSA, so while I can talk about the concepts, I have to be careful how I word things).

After we did the recap on Test Coach Camp, we started another session related to "How We Can Change the State of the Craft of Software Testing". For this purpose, I gave a talk on martial arts, humility and receptiveness to learning. For those who didn't get a chance to hear it on the live stream, here's the basic gist:

When I was a teenager, I was a big fan of Bruce Lee. Because of that, I wanted to learn martial arts to be an official bad-ass. I went to East West Karate school in Walnut Creek, CA in 1981 to learn Bok-Fu so I could be awesome. However, I was deeply put off by the fundamental rigor we had to go through, and I was even more put off by the fact that the cool katas and techniques I developed for tournaments were scored so low, while the staid and boring basic ones by other kids were winning medals. Add to that getting my head handed to me in several boxing matches, and I decided Bok-Fu was not for me.

Fast forward thirteen years. I'm a young married adult now, working at a company, and to have something to do at lunch, I decided to take Aikido because, at the time, Steven Seagal was fairly popular and he looked really cool doing it (hey, I ain't proud, that's how I came to know about it ;) ), and I started doing it at lunch when it was offered. This time, I was cool with the rigor and the fundamentals; I'd matured to that point. The big killer was the overall time commitment expected to get really good: four days a week, two hours each day. I couldn't commit to that, so over time my frustration with the time commitment got the better of me, and I drifted away after a few kyu-level jumps.

This weekend, I had a chance to talk Kendo with Benjamin Kelly, who has been doing Kendo for 20 years, and yet he said he still routinely gets the smack down laid on him by guys who are 70 or 80 years old (disclaimer: Ben lives in Japan, and Kendo is somewhat of a national sport ;) ). This time, I appreciated the joy and the pleasure of the basic movements, the serenity of just practicing and working out with others, and the belt or recognition being ancillary, or unimportant entirely.

Our commitment to testing is very similar; when we focus too much on being one of the cool kids, or when we focus too much on the time commitment, we are missing the humility and the patience to genuinely learn and grow. Maybe this just comes with time, maybe it comes with experience, but when we get to a point where we are happy to do the work just for the sake of the joy of doing the work, that's when we are really ready and able to learn.

After jumping around and helping to facilitate a number of other talks (great ones by Markus Gaertner, Anne Marie Charret, Thomas Vaniotis and Matt Heusser, among others), we ended the day by assisting in the facilitation and founding of a new Special Interest Group, this one based on Test Management / Test Leadership. While I don't have the ability or time to manage two groups, I definitely see some areas of cross pollination where we can leverage strengths from each other, so I hope that the Education SIG can be actively involved in their efforts and they with ours.

So now it's Wednesday, and I'm spending the day in a room with a bunch of other people who want to beef up our automation chops. Adam Goucher is leading us in how to create and maintain a Continuous Integration/Continuous Delivery system. So far we've deployed a VM with a sample application, unit tests, functional tests, and version control tweaks.

The afternoon started off with talking about how to deploy on systems and the variety of hooks needed to allow CI (in this case, Jenkins) to be configured and modified to help with the deployment of the system. We've also looked at and covered migrations and how to get code copied over to multiple systems if desired.

We are going through the VM and finding issues and fixing them a little at a time so that we can get a feel for the environment. So far, it's been very interesting and very enlightening, plus I'll have a VM with all of this configured to go back and play with.
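To make the idea concrete, the heart of what a CI job does can be sketched as a simple stage runner. Here's a hedged Ruby sketch; the stage bodies are placeholders for the real commands (checkout, unit tests, functional tests, deploy), not anything Jenkins-specific:

```ruby
# Minimal sketch of a CI build driver of the kind a Jenkins job might
# invoke: run each stage in order, stop on the first failure. The
# lambdas are placeholders; in a real pipeline each would shell out
# to something like system("git pull") or system("rake test").
STAGES = {
  "Checkout"         => -> { true },  # placeholder: fetch latest code
  "Unit tests"       => -> { true },  # placeholder: e.g. rake test
  "Functional tests" => -> { true },  # placeholder: e.g. cucumber
  "Deploy"           => -> { true }   # placeholder: push to the VM
}

def run_build(stages)
  stages.each do |name, action|
    puts "=== #{name} ==="
    unless action.call
      puts "FAILED: #{name}"   # a red stage stops the pipeline
      return false
    end
  end
  puts "BUILD OK"
  true
end

run_build(STAGES)
```

The design point Jenkins leans on is the same one this sketch shows: each stage either succeeds or aborts the whole build, so a broken migration or failing test never gets as far as the deploy step.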

I'll come back in and add more comments on Adam's workshop but in the meantime I want to say the following:

Giving five talks in one week:

- Using EDGE to Help Coach Software Testers
- Coaching Younger Testers and Interns
- Deconstructing Weekend Testing
- Balancing ATDD, GUI Automation and Exploratory Testing
- Martial Arts, Humility and the Willingness to Learn

Facilitation of Lightning Talk Track

Facilitating the Education Special Interest Group Meeting

Participating with the Test Leadership Special Interest Group

Hanging out with lots of great people and getting feedback on BBST and the future of teaching BBST

Talking about and recruiting additional hands to help with content curation for SummerQAmp

Meeting people I had previously only known from Twitter and in a virtual manner, or having only communicated with via Skype for the TWiST podcast

Recording over 20 hours of audio for potential podcasts (including my own contributions)

Seeing myself get filmed and broadcast so I could go back and review my own talk performance

Spending a full day grokking the entire automation stack for setting up Continuous Integration AND getting my very own VM with problems to discover and try to fix, and the ability to restore and tweak as much as I want with impunity... GOLDEN!!!

All in all, this has been an amazing week. Thank you to everyone who has made this a highlight of my testing year. I really appreciate what you all do and how much you contribute to what I learn and know, and I hope I in some small way help do the same for you.

Monday, July 16, 2012

CAST 2012 is Underway: Day 1: Live Blog

Today is the first day of CAST, the Conference of the Association for Software Testing. After having spent the weekend in San Jose, and with the majority of this week here as well, I feel like I'm on the road, even though I'm really not that far from home. Since the logistics requirements start early and run late, I've been given a hotel room for the week. I'm now on day four away from home. Still, considering the learning and opportunities I have already had this weekend, I'm really grateful for the opportunity to learn, discover, and grow with my friends and peers.

With the start of the conference, I had a chance to make a pitch for BBST and the EdSIG meeting that's happening tonight. I'm going to do my best to include as many people as want to participate; it may not be possible to get a lot of people online in the meeting, but I'm certainly going to try.

There are a lot of different talks going on and a lot of track sessions, but I have to admit to having an affection for the Emerging Topics track, so I will very likely spend much of my time in here (that, and I have my own talk I'll be giving in here at 4:20 PM PDT :) ).

Emerging Topics started with a talk about biases and what we see and what we don't see by Ilari Henrik Aegerter (@ilarihenrik on Twitter). His talk covered biases and how we observe different things. Ilari showed a number of different pictures that had things hidden in them that are not obvious to the naked eye, and how it takes us a while to recognize when strange items appear, or how an image is difficult to re-interpret after we have made a prior identification. You can see more of Ilari's comments on his blog at http://www.ilari.com/.

The next talk in Emerging Topics came from Scott Allman and was about "Computers as Causers". This was a talk that described how computers actively drive events. Understanding how events cause other events is an important step in looking at the world and how systems allow us to see what is causing things to happen. Often, there are causes at work that we will never be able to describe; so many steps may take place that we have no way of categorizing them, yet we still have to take them into consideration. Interesting topic and ideas.

Claire Moss (@claireification) has been talking about Big Visible Testing, which is, as it describes, a way that all test efforts can be put into a format that makes the testing initiatives easy to view: any and all issues and blockers, what testers are actively doing, and what is ready to get started. By actively using the sprint board and kanban, the whole team can see what is happening and how everyone interacts. As the tester, performing testing and vocalizing your experiences helps to, again, make the testing efforts visible and transparent, which frankly is a beautiful thing :).

For those who want to follow along at home, check out the link at


After a quick break and a public service announcement from me about SummerQAmp and what it's all about (and how we need people to help develop content for it), I started up again with an Emerging Topics talk from Thomas Vaniotis about epistemology, helping to give some solid structure to what exactly that all means. This is kind of cool in the sense that he's deconstructing the ideas behind epistemology. The basic model is the one that Plato developed, where:

S knows that P is true if and only if:
1. S believes that P is true
2. P is true
3. S is justified in believing that P is true
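In the shorthand of epistemic logic, the tripartite model above is sometimes compressed into a single biconditional (a simplification; the "J" glosses over everything contested about what justification is):

```latex
K_S\,P \iff \left( B_S\,P \;\land\; P \;\land\; J_S\,P \right)
```

where $K_S\,P$ reads "S knows that P", $B_S\,P$ "S believes that P", and $J_S\,P$ "S is justified in believing that P".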

Note, while this model has held up for millennia, there are ways to counter it. Gettier devised some different ways of looking at this, constructing cases where all three conditions are met and yet we would hesitate to call the result knowledge. This is really interesting in that, while we talk about epistemology in many places, I think this is the first time I've heard it put into this format.

Ilari came back for an encore performance and talked about the ways in which we learn, but what was really cool was that it was a talk about how to coach and, more to the point, how to effectively read books and which books would be of value to testers. This should come as no surprise to anyone who reads my blog, but many of the best testing books I have read have almost nothing to do with testing. Well, not directly, in any event. Books on philosophy, business, motivation, creativity and history are just as effective, and sometimes even more so, than "New Title Dedicated to Software Testing". As an added bonus, Ilari gave away three copies of James Bach's "Secrets of a Buccaneer-Scholar" to three lucky participants.

Following lunch, we had the opportunity to hear a keynote speech from Tripp Babbitt, who was involved with the process of reforming and deploying information systems for the state of Indiana as it rolled out its modernization of welfare and temporary assistance projects. Tripp went through several examples of where the systems and the bureaucratic red tape made for a nightmare of navigating the system. The truth is, most large scale IT projects do not succeed. In his case, the Indiana FSSA saw timelines get worse, error rates get higher, and backlogs grow larger, too. Oh, and they were getting sued and seeing contracts get canceled. The problem is that so many of the wrong things are managed that the result is the wrong thing being done "righter" (and with agile, we do the wrong thing faster). Tripp walked us through the various challenges faced by large scale IT projects: scale, feature complexity, and other details that conspire to derail large projects. Tripp compared this to the work he is now doing with Vanguard, and how he was able to design a totally different system. By engaging with the people who do the work, they are able to build systems that will work at huge scale in the real world.

After the keynote, I jumped into Kristina Sontag's session on pair testing. Why would this interest me if I'm a lone tester? Because even though I work on my own, I have opportunities to interact with and get involved with a lot of different people in the organization. Any individual in my organization (developer, content curator, customer service, executive, product owner, designer, what have you) can be an effective testing pair. Everyone has unique viewpoints, unique approaches, and "fresh eyes" to observe and consider different options. I asked Kristina if they did something similar in their organization, and while yes, they did use the pair concept with other stakeholders, they made the approach less structured and offered a lot of coaching and suggestions while doing it (their session based testing approach has to date only been tester/tester).

Tony Bruce is one of the testers I have followed since I made the conscious decision to get involved in the overall testing community, and I've enjoyed seeing and reading his comments and thoughts over the past few years. Thus I was happy to sit in on his talk, "Talking About Testing in a Friendly Environment". This approach to getting testers to talk to each other works by using a brief format: a short talk (maybe 15 minutes), a short demo (again, about 15 minutes), and then some fun in the process. Meetups will usually start small, but if you are engaged, fun and consistent, then you will get people to come and participate.

Anand Ramdeo is focusing on how to put randomization into our tests and make randomization a more prominent part of our testing. This dovetails nicely into my talk and some of the details I'm going to cover, so I like the fact that we are back to back. Sorry if I'm not so talkative, but I'm about to go on, and I feel a little anxious (good anxious, not freak-out anxious ;) ). I'm recording Anand's talk, so I'll give him a better review after I finish my talk.

So I'm going to tip my hat a little bit early and share a bit of my talk with y'allz. I came in with one idea as to how I was going to do this talk, but after spending the weekend at Test Coach Camp, and having a chance to listen to and participate in a discussion with Cem Kaner and Ken Pier, I got a lot of interesting new angles that I will be working into my talk. Besides, I won't be able to write to you all while I am presenting, so here goes...:

There's a fair amount of hyperbole in my talk, but it's there to make a point. The problem we face today, in the way testing is being sold to organizations, is that there is a tribal war between three factions.

TDD/ATDD: Because we define our tests up front, and we write our code to meet the acceptance criteria and test all of the modules before and during each build, we don't really need to have dedicated testers.

Front End GUI Automation: With the benefit of [fill in the blank tools] we are able to get fast automation wins and create scripts that cover the important test cases we need. It's so simple, anyone can do it, therefore we can have an automated tester focus on writing scripts for us (and because of this, we don't need a dedicated tester).

Exploratory Testing: All these silly scripts and unit tests are great for a single module, but they don't really understand how to interact with a complex system; nothing is going to replace the value and the focus of a dedicated tester and how they can find problems and "explore" areas that nobody else can.

Did I lay it on thick enough? I sure hope so, because that makes it easier to make my primary point: these are all equally right and dead wrong. The fact is, it's not an either/or, it's an all-of-the-above, and many of these options work very well together and can be applied in a variety of ways. If you run a script the same way over and over again, and you then decide to run it in a random order to see what will happen, congratulations, you've just done exploratory testing, while using automation (computer aided testing) to help you do it. There will be more in the actual talk, so I hope you all will tune in and hear me deliver it in person (virtually, at least ;) ).
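To sketch that last idea (with the caveat that the step names here are stand-ins for real scripted actions, not an actual harness), shuffling an otherwise-fixed script is a one-liner in most languages, and seeding the shuffle keeps any surprising failure reproducible:

```ruby
# Run a fixed set of scripted steps in a random (but reproducible) order.
# Same script, new sequence, new chances to stumble on order-dependent
# bugs a fixed-order run would never hit.
STEPS = ["log in", "search for a show", "add to queue",
         "rate an episode", "log out"].freeze

def run_shuffled(steps, seed)
  order = steps.shuffle(random: Random.new(seed))
  order.each { |s| puts "running: #{s}" }
  order # return the order so a failure can be replayed with the same seed
end

run_shuffled(STEPS, 2012)
```

Logging (or returning) the seed and the resulting order matters: a random run that finds a bug is only useful if you can replay the exact sequence that triggered it.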

More to come, stay tuned.

Sunday, July 15, 2012

Day 2 of Test Coach Camp: Live Blog

Another day, another opportunity to learn from peers, friends and luminaries. Yesterday was a great deal of fun, and today, we have a number of other topics that are begging to be covered and heard, which will make for another very full day.

We started at 9:00 AM and dot-voted on topics, as well as adding additional topics that the participants wanted to cover. With that, we gathered the ideas together and made today's schedule.

An interesting note: since yesterday involved a lot of sit-down talking, this time around there's quite a bit of active content. Wade Wachs is demonstrating juggling as a model for coaching.

Ben Kelly proposed a talk that was titled "It Would Be Easier to Coach Testers if I Could Hit Them With a Stick". Sounds cheeky, but when we all realized who was proposing the talk, we understood that he meant to use Kendo as a model for coaching. Once we understood that, it quickly became one of the most anticipated talks (I know I'll be there :) ).

The participants also voted again and decided that they did want to hear about how to re-model Weekend Testing, so I'm going to hopefully get a chance to record that for a podcast, as well as to discuss a Scouting related topic, using EDGE to help coach and manage teams.

Session 1 for me is being facilitated by Philip McNeely, and it's titled "How to Coach Testers Back from the Brink". Philip was gracious to let me invade with my microphone and laptop to record this for a podcast. We have been having a terrific discussion on how to help testers who are dealing with burnout, disillusion or other issues affecting their performance. Helping them get more involved, engaged and excited to be dealing with testing again has also been a focus for this session.

For a lot of us, the highest-burnout aspect is the sheer repetitious nature of a lot of what we do. Having to be the bearer of bad news also often weighs on us, and we can get down on ourselves because we are just not often a welcome messenger. Topics such as bug triage, crisis situations, death marches, and distraction from not being engaged are other areas we need to deal with. Putting together a sustainable pace and having a realistic way of dealing with the stresses are all things we need to be aware of as we work to help get people re-energized and engaged.

The second session was mine, and I had a chance to refine and expand on the talk I gave last year at CAST, covering the stages of team development and using the EDGE model to mentor testers.

For those not familiar, EDGE is an acronym used in Scouting for a number of different disciplines: learning skills, teaching, leading, etc. EDGE stands for Explain, Demonstrate, Guide and Enable (or Empower, as suggested by Wade, and I think that works fine as well).

I also explained how the teams go through the stages of team development (Forming, Storming, Norming, Performing) and how the EDGE principles fit with all of that. In the beginning, a leader is practically a dictator, and as the team grows, learns and develops skills and aptitude, that leader moves from being the dictator, to being a teacher, to being an aide, and then ultimately getting out of the way.

The session I looked forward to attending was Ben Kelly's Kendo demonstration. Having participated in martial arts as a kid and young adult (Bok Fu Do and Aikido, respectively) I was curious to see how he was going to tie this into test coaching.

The idea that he wanted to make sure we understood was that there is a considerable amount of physical and muscle detail that needs to be applied. In addition, the rigor and the time it takes to develop the fundamental skills demand so much attention that those not willing to put in the time self-select out of the process. Testers do very much the same thing. The challenge we face is to see how we can help them get through the rigor and repetition without becoming brain-dead in the process. We have an opportunity to help make the rigor mean something, and much like in Kendo, regularly getting out and sparring with that rigor makes a lot of the difference between an engaged Kendo-ka and a disinterested one (as well as an engaged tester vs. a disinterested one).

Following lunch, I had a chance to sit in on a conversation with Ken Pier, Cem Kaner, Claire Moss, Philip McNeely, Doug Hoffman and Matt Barcomb for a session called "A vs. E! Huh?!" This one interested me because it's part of my presentation that I'm giving at CAST tomorrow. What's the debate?

It seems the debate is Exploratory Testing vs. Automation, as though it's an either/or situation. Many testers are led to believe that they can either be manual testers who are primarily exploratory, or automated testers who are programmers, and that there is a division between the two. Cem made an interesting point: all testing is automated, and no testing is automated, really. The dichotomy as presented is flawed and doesn't really exist.

The goal of exploratory testing is to focus on learning new things. In many ways, that's not something that can be extensively automated, because how do we ask new questions that we haven't even considered before without diving in and actually exploring? Through many tests we learn about emerging properties of the design, and in that process we may find several bugs. However, just because we found them while learning doesn't mean that automating those same tests will necessarily give us any additional benefit. Still, being able to run through those tests again lets us make sure the system is behaving the way we have learned (and thus now expect) it to. In other words, as long as we are learning and getting new information, what we are doing is exploratory; the fact that the process is computer assisted is a bonus.
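One way to picture "the system behaving the way we have learned" is to capture those learned expectations as a repeatable check. This is a minimal sketch with a hypothetical function; the normalization rule and the example inputs are invented for illustration:

```python
def normalize_username(raw):
    # Stand-in for behavior we "discovered" during an exploratory session:
    # in this hypothetical system, usernames are trimmed and lowercased.
    return raw.strip().lower()

# Expectations captured while exploring: input -> behavior we now expect.
learned_expectations = {
    "  Alice ": "alice",
    "BOB": "bob",
    "carol": "carol",
}

def check_learned_behavior():
    # Re-run the learned cases; any mismatch means the system no longer
    # behaves the way we learned it did.
    return [(raw, expected, normalize_username(raw))
            for raw, expected in learned_expectations.items()
            if normalize_username(raw) != expected]
```

The exploration did the learning; the check just confirms that what we learned still holds on the next build.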

The final session that I personally facilitated/presented was my questions and ideas about what to do with, or how we could improve/modify, Weekend Testing. I got some great feedback on a number of areas, including the actual Weekend Testing site, the way that we present information, how we announce sessions, and determining what our core competency and mission is (which, in my mind, is to be an effective coaching and mentoring ground for testers, both for newer testers to be mentored and for experienced testers to provide mentoring). I recorded this session and, believe me, I'll be going through it with a fine-tooth comb (not sure if it will become a podcast, but I hope to act on as many of the suggestions as I can).

Right now, we are doing a retro on the day: the things that worked and what we could improve for next time. My suggested improvement? Anyone who didn't attend this year, make a point to come the next time we schedule Test Coach Camp. I had a great time, and I definitely want to participate again.

Saturday, July 14, 2012

Live Blog: Notes from Test Coach Camp 2012

It has been a crazy couple of weeks for me, and those couple of weeks have had a lot to do with why I haven't been writing. The biggest reason this past week or so has been that I've been up at Scout Camp. I had a great time and had some interesting experiences that I will definitely be writing about, but that will have to wait. Today I find myself in a conference room in San Jose with some of the best brains in software testing. I'm at Test Coach Camp, which is an open space conference organized by Matt Heusser, Matt Barcomb and Doug Hoffman. I could probably write out every name of every test star in attendance, and rest assured, I will very likely do exactly that over the next couple of days :).

We started today at 9:00 AM, and I am taking a little time to discuss how an open space conference works. Unlike a traditional conference where sessions and topics are specifically defined, there are some simple "rules":

Everyone comes up with two topics.
We "dot vote" the sessions, and those that get the most votes become the session content.

What are our goals? We had a quick discussion around our table:

Markus Gaertner: How can we help train testers?
Ben Kelly: Good testers don't necessarily make good coaches.
Dave Liebreich: How can I help mentor younger testers?
Iain McCowatt: How can we context-switch between testing, management and coaching?
Ilari Henrik Aegerter: How can I build trust with my team, and coach people so that they can advance?
Cem Kaner: As a university professor, I've seen that most testing training is ineffective. I think the key is to design experiences and feedback for testers.

As for me, I'm focused on three primary areas. Since I'm a lone tester, I mostly manage myself and my own efforts in my day to day work. I don't actively coach in my day to day work. I do, however, coach in my role as a facilitator in Weekend Testing, as an instructor in BBST, and as a content curator for SummerQAmp. My goal for this conference is to learn and talk about some of the challenges I have in these three spheres.

The past hour we have pitched our ideas, and there have been a number of variations on themes: giving feedback, autodidactic learning, and mentoring younger testers. I've proposed two topics myself. We shall see if they make the cut:

1. Having been involved in Weekend Testing now for two years, and facilitating for almost the same amount of time, I have seen both the benefits and the limits of the Weekend Testing model. To add a little controversy, and to consider different approaches... what if we were to rip the current model of Weekend Testing out by its roots and start again? What could we do to redesign how we deliver it and our goals in coaching other testers?

2. Since becoming involved in SummerQAmp, I've been asking a question often: how do we change our approach when we are discussing Software Testing with teenagers and young adults, people who are still very much trying to figure out what they want to be when they grow up? Even if they decide not to be software testers when they grow up, they can learn about the field and get an appreciation of it that they can carry with them in whatever endeavors they pursue. In short: how can we teach this topic to what is a young and impressionable audience?

With the votes cast, and a schedule established, we had way more topics than we could cover, but some of the topics had common themes and we constructed sessions based around the comments everybody pitched. My proposed topic #1 didn't get enough votes (probably too specific), but proposition #2 blended in with several other ideas, and thus we will be doing a session on mentoring younger testers and those looking to consider testing as a career. More when we do that session.

The sessions themselves are self organizing; go where you want to go, as long as you want to go, and move over somewhere else if you feel you are not getting the value out of the session you think that you should.

For the first session that I chose to attend, I picked Christin Weidemann's "Being a Role Model", which dealt with a number of interesting topics and ideas about how we can get out of bad habits and bad behaviors. We discussed ideas such as giving and getting feedback, enthusiasm, motivation, our behavior, and how to be a role model in specific areas, among other testers as well as among non-testers.

Being a role model does not mean that we are an expert, it means that we show where we can learn, where we can teach, and where we can admit our ignorance (because we are all ignorant about *something*). As an added bonus, I've been recording this session, so it's very likely this will become a podcast in the not too distant future :).

During the second session, Anna Royzman and I decided to combine our proposals, so we had a discussion on Test Mentoring and Teaching of Interns.

My focus was to talk about the SummerQAmp initiative, and I shared some of the content that we had developed and structured for the "What Is Testing?" module (the introductory module for the SummerQAmp curriculum).

What was also cool was that I had a chance to get some peer review on the topics and concepts I was presenting, and got some neat feedback on how to present some of the ideas. Paul Carvahlo suggested how I could use an open space model to teach programming without having to actually open the game I was planning to present (Lightbot 2.0). My thanks to everyone for the feedback and suggestions.

Anna Royzman approached this same topic from the viewpoint of the hiring manager; how can we maximize the success of the learning of interns and younger testers?

We went through a variety of options, such as being able to have testers provide experience reports and give some indication as to how they have developed their ideas. In addition, providing exposure to other teams and giving them a chance to see a variety of contexts and approaches will help them to develop their ideas and approaches to problems.

We also discussed a focus on professionalism, helping them discover and see opportunities, and encouraging them to embrace those opportunities.

After lunch, we held a plenary session on tester games and using them as coaching tools. Here are some pictures. I can't talk about the games themselves (well, I could, but then I'd have to kill you ;) ).

The last segment I decided to record for today was a talk by Wade Wachs called "Auto Didactics Unite". It was based on the idea that we need to have the ability to teach ourselves before we are able to teach others. Wade was kind enough to let me record the conversations, so this will likely become a podcast as well.

And with that, we are bringing the fun for the day to a close. We have a dinner date in Saratoga, so with that, TESTHEAD is signing off for the day. We'll see what tomorrow brings for Day 2.

Monday, July 2, 2012

Giving New Life to Old Obsessions

I'm not entirely sure when it happened. I know why it happened: I was dating a girl in college who had an interest in drawing and in all things folk magic and fanciful. She knew that I had a great love for Native American traditions and history. She handed me a graphic novel some time in early 1987, shortly after my 19th birthday. This graphic novel, I'd come to learn, was one of the groundbreaking movements in the alternative comics scene in the late 70s and early 80s. This "comic" was ElfQuest, and with it, I was introduced to an amazing world built from the imaginations of Wendy and Richard Pini.

These stories consumed me, and I read them voraciously. I started with the original series of books (my girlfriend at the time had the four volumes that made up the original story), and from there, I discovered that there was a Marvel comic reprint of the series. I went to as many comic book stores as I could in town and surrounding cities, and over a period of several months, picked up all of the reprints. About this time, further stories started coming out from Wendy and Richard, starting with Siege at Blue Mountain and then, later, Kings of the Broken Wheel. By episode #2 of Siege at Blue Mountain, I knew I would follow this series wherever it led, for as long as it led. I wasn't much of a comic book collector, but this one just touched me, and made me want to get into it as deeply as I could.

For fifteen years, I followed the stories wherever they went. I followed the side arcs, the fragmentation, and suffered through a variety of artists who took over telling side stories and "alternative universe" adaptations. Some of them I liked tremendously, but in many ways, I still loved it most when a story arc or a plot line or a special was illustrated by Wendy. It was her series, and her original artwork, that hooked me, and for many years, I held her as the gold standard.

Around 2000, the monthly publications one could find in the local comic shops stopped. Along with that, I had developed other interests. I had a wife and three kids by this time, and so I allowed the "collecting" to stop. Besides, many of the stories were going to be wrapped up online anyway, so I'd just see where they went when they got posted.

Twelve years have passed since then. My older daughter, Karina, has had some interesting challenges in her life, one of them being that, at a young age, we discovered she had strabismus, farsightedness and astigmatism. She's had to wear some pretty strong glasses for much of her life, and it has limited her field of vision, even with glasses or contacts, to a depth of field from about three feet away to about 25 feet away. Rather than be frustrated with it or get angry about it, she adapted, developing a desire to work with things very close up. The net result is that she has become a very accomplished artist, and has become very good in a number of mediums. As she has had an interest in Japanese animation (fostered not just a little bit by her dad's interest in same), she's become accomplished in recreating many of the signature anime styles. Still, she said that she wished there was something different, something she could see that had more expression, more variety, and more stylistic difference. It was with this that I smiled and said "hey, I think I have something you might like".

With that, I went into the boxes I hadn't looked at for several years, and in them were over 200 issues of stories, ideas, and inspiration that had been locked away waiting for... what? I had read these many times, but I knew them so well, I hadn't reached for them (or really needed to reach for them) in ages. So they sat in these boxes for... again, what? Was I hoping to sell them someday? Of course not. Was I waiting to reread them again? With the ability to see the series in its entirety online, why would I dig out the comics to reread them when online was much more convenient? Yet I realized that, in my daughter, here was someone who could appreciate them right here and right now. What's more, she didn't have an "attachment" to Wendy and Richard Pini in the same way that I did. Thus, she could view the story from her own world view, and she could look at the other contributing artists from their own perspectives.

This is all well and good, but you all know this is TESTHEAD, and this is a testing blog, and there has to be a moral in here for testers somewhere, right?

Inspiration Strikes in Weird Ways - Had I not been dating my girlfriend at the time, it's likely I would never have come across this series. It amazed me for years afterwards how hard it was to find issues and how many places I had to look for it. It was an alternative title, and it was not produced in huge numbers. In fact, I still find there are two groups of people: people who have never heard of ElfQuest, and people who have not only heard of it, but love it and have actively proselytized about it to others. Ideas and concepts are ephemeral, and many things of great value only get passed on by direct word of mouth from one individual to another. Testing is much the same way. The obvious ideas don't always prove to be the good ones. Many times, the truly great insights come from seeking them out, and talking with individuals who have learned from experience what is valuable and what is not.

Don't Get Too Attached To a Way - For many years, I was a total Wendy Pini fanboy. I loved the amazing detail she put into her characters, and the stylistic choices she made. So much so, that I often recoiled when other artists drew side stories, and it diminished my enjoyment of their work. As I was looking at these artists with my daughter, I refrained from making "fanboy comments", and she showed me some interesting techniques the other artists used, and how she thought their work was really cool. I had put blinders on, in a way, and those blinders prevented me from seeing the value of their work. In our testing we also often put blinders on when we stick too close to a "way". We need to be ready to always question, always seek the new and different, and challenge what we already think we believe. I heard James Bach say once (paraphrasing), "I am a man of passionate convictions, but they are lightly held". Don't let dogma prevent you from seeing something amazing.

Children Can Have Very Different Contexts - As I was sharing these comics with my daughter, I was tempted to put them in a particular order, or direct her to the start of the story. Instead, though, I wanted to see how she would approach the books I'd given her. What she did was lay them out in categories based on title (OK, I gave her a little help in doing that). Rather than digging in from the start, she grabbed the first issue from each of the story arcs and leafed through each of them. When I asked her why, she said she wanted to get an idea about the world in which these stories take place. I chuckled to myself in that, were it me, I would have never thought to do that. Why? Because that's not how I like to order things. I jokingly pride myself on being a "grouper" with a top-down view of things, but when I read, I like continuity, so in that way, I'm very much a stringer and sequencer. Not Karina. She takes my grouping tendency to a whole new level (LOL!). She reminded me that, when I test, I can often get myself into too ordered a way of thinking, and forget the possibilities of de-focusing and taking an entirely different approach to things.

To my daughter Karina: I hope that these stories, these artists and these works will give you as much pleasure as they have given me over these past 25 years. Thank you for showing interest, and thank you for showing me a different way of looking at something I'd taken for granted for decades.