Thursday, April 19, 2012

STAR East Day 1 Recap

Yesterday was the first official day of the STAREAST Conference (not counting the training and tutorial days). I would have had this posted earlier, but I spent an entirely too-late night with a bunch of madcap testers... eh, what can you do ;)?

We started out the day with breakfast, getting registered, and me getting my name tag with a beautiful little attachment. It felt really good to see the tag that says "Speaker" on the bottom :).

First off, right out of the gate, we heard from Keith Klain of Barclays about how he has been developing the Global Test Center for Barclays Capital, and the areas they decided to work on to grow their testers. I found this very encouraging: a large corporation was actually looking at getting rid of metrics, bug counts and numbers that would seem logical but turn out not to be effective. The talk, "Leading Cultural Change in a Community of Testers", made the point that, to make such a change, there has to be a large-scale effort happening at the top level of the organization. One of the quotes that led off the discussion was from Dwight D. Eisenhower: "Leadership is the art of getting someone else to do something you want done - because he wants to do it."

They focused on developing honesty, integrity and accountability in their testers. In return, they opted to jettison needless Test Maturity Models, Metrics Programs, and arbitrary "Career Development" approaches that are based on numbers, but not on people. Some of the principles they espouse are:

• People start to ignore testing when it is no longer relevant (in short, make sure the work you are doing is relevant) 
• Being responsible sometimes means rocking the boat (you have to drive the change, and sometimes that change will upset people)
• No one has the market cornered on good ideas (you'd be surprised who can come up with an amazing insight or twenty ;) )
• Never stop asking why – question everything (making assumptions or accepting things too readily can lead to blind spots)
• Invest 80% of your energy in your top 20%... (daring, but it makes sense; the people who perform are going to be the ones open to learning and growing)
• Leadership = Simplification (actually, leadership often means clearing obstacles and then getting out of the way)
• Don’t take it personally (you may be right, you may be wrong, you may be overruled, just remember it's not about you, it's about the product, services and customers)
• Think first – then do! 

I liked the fact that Keith included the Start - Stop - Continue technique and what they would do in each category (it warmed my Wood Badge heart to see that ;) ):

• Stop: Thinking that the value of the test team is in anyone else's hands, and pretending "maturity"-driven test metrics will make improvements

• Start: Telling the team exactly what's expected of them, supported by systematic training of testing skills, test reporting and business alignment

• Continue: Driving out fear of failure by creating an environment that enables innovation and rewards collaboration through strategic objectives and constant feedback

Overall, this was a nice rallying cry to the troops. It was nice to see a senior member of the team showing focus on testers and testing, and point out that it's not just that we test, but that we develop meaningful skills and add meaningful value to the team.

Michael Bolton followed up with the second keynote of the day, titled "Evaluating Testing: The Qualitative Way". This was an interesting talk comparing the physical sciences to the social sciences, and the fact that quantitative examination with metrics and numbers makes sense when we are dealing with things like machining or physics, or other physical properties. Software isn't a true physical science; it's actually more of a social science, because we deal with the interactions and the feelings of people to inform our tests. Michael quoted Dr. Cem Kaner, who observed that oftentimes we don't get complete and fully fleshed-out answers, but instead get "partial answers that might be useful". Thus, we need to use different tools to evaluate software testing, or at least focus on different ways of performing our testing that allow for this social-sciences approach.

The key idea behind Qualitative Analysis is that Qualitative approaches are based on observation, making distinctions of categorization and classification as well as description and narration. By contrast, Quantitative approaches tend to assume that categorizations are accurate, and largely ignore associations with the object of observation. The key takeaway from this talk was that we needed to add the human element to our testing, and to do that well, we need to step away from the numbers and the left brain approaches, and get in and focus on what real people actually want and do (note: they are not always what we think they are). Great stuff from Michael as always (I've heard him do keynotes twice, and so far, I think he's two for two :) ).

Janet Gregory did a track session on Agile Testing practices, and since I had a chance to interact with Janet at the POST peer conference in Calgary, Alberta back in March, I was curious to hear a longer version of her ideas and approaches related to Agile Testing. One of the key areas Janet wanted to point out was that, unlike in traditional development organizations, where roles are separate and unique, in Agile teams roles are inter-related and overlap. It's not uncommon for the Programmer, Domain Expert and Tester roles to overlap, and in many cases reside in the same person. It's not necessary that every individual be able to cover all of those areas, as long as the organization can cover them with that level of overlap.

What this does tell us is that we are reaching a point of convergence for many of the people on these teams, so that a Tester is not just a tester. Key takeaways from this session were on how to implement Acceptance Test Driven Development and Behavior Driven Development: put your tests first and then code, so you know what's being covered. In addition, testers are NOT responsible for quality (the whole team is), programmers do not code alone (everyone helps them understand what to code), and the team needs the "right" roles and people (meaning we as testers may need to up our game to take part in these areas).
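To make the "tests first, then code" idea concrete, here's a minimal sketch of the test-first flow in Python's built-in unittest. The `discount_price` function and its rules are entirely hypothetical, invented for illustration; in practice the acceptance tests would come from the domain expert's own wording, and would be written before the production code below exists.

```python
import unittest

# In ATDD, tests like these are written first, capturing the team's shared
# expectations. The production code is then written to make them pass.
def discount_price(price, is_member):
    """Hypothetical rule: members get 10% off; price never goes below zero."""
    rate = 0.10 if is_member else 0.0
    return max(0.0, round(price * (1 - rate), 2))

class DiscountAcceptanceTest(unittest.TestCase):
    def test_member_gets_ten_percent_off(self):
        self.assertEqual(discount_price(100.0, is_member=True), 90.0)

    def test_non_member_pays_full_price(self):
        self.assertEqual(discount_price(100.0, is_member=False), 100.0)

if __name__ == "__main__":
    unittest.main()
```

The point isn't the discount rule itself; it's that the whole team (programmer, tester, domain expert) can read the tests and agree on what "done" means before any code is written.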

The next session was a talk about Weekend Testing, and it was delivered by, well, me :). I have written extensively about this topic so I won't recap the entire talk, but I will point out that the goal of this session was to encourage people to learn more about Weekend Testing and, in addition, learn how to bring the approach we used back to their own test teams. By all accounts, I think it went really well. One funny aside: apparently, because of my energy level, people thought I was a consultant selling my services. They missed the several times I mentioned that Weekend Testing is a free endeavor. They kept waiting for me to say how much I charged, and still didn't quite believe me when I finished the talk without saying anything about charging for my services. Maybe giving away the SideReel swag was to blame, I don't know.

I closed out the day by conducting an interview with Lanette Creamer for the STAREAST Virtual conference. I don't know if that interview was just "live" and sent out over the ether, or if it was recorded and will be available later. Once I know, I'll let you know.

The official day closed out with the ever-popular "Lightning Strikes the Keynote", in which 8-9 participants present their best ideas or most compelling content, one after another, and are only allowed 5 minutes each. It certainly kept the energy level high, and made for an enjoyable series of talks.

While the sessions are definitely valuable, I have to say that I have found the best interactions to come with just talking with the other participants at times outside of sessions. Hey, imagine that, going to a conference to Confer, what a concept :). I've been greatly enjoying my discussions with Matt Barcomb of Lean Dog Consulting. We've been able to discuss a lot of aspects of challenges that Lone testers face, and what we can do to better interact with our teams. Oh yes, there will be blog posts on this forthcoming :).

Testers and tester games go together like, well, pick your favorite comparison, but it's inevitable. I had the chance to work with James Bach, Griffin Jones and Michael Bolton on a few different challenges. Some were frustrating, each made me question how I approached things, and I was quite happy when I figured them out (and walked a couple of other testers through the exercises and saw their different approaches). In the end, I went to bed "way too late" but fully energized from the interactions. Now it's time to see what Day 2 has in store. See you soon!


David Garcia Romero said...

Thanks for sharing this information. It's great for the people who were not there. Enjoy it.

Bernice Niel Ruhland said...

Thanks Mike for taking the time to share all these wonderful notes! I attended a few sessions through the virtual conference - which were excellent! I definitely will watch some of them over again once the recorded versions are shared. Some of the sessions will make great journal club sessions!
