Well, I am in Seattle (Lynnwood, WA, to be more precise), and the start of the CAST conference is officially just a few minutes away. We had a bit of a celebratory get-together last night at a Thai restaurant near our hotel (the poor place probably didn't know what hit it, with 26 testing geeks pretty much taking over two-thirds of the restaurant). I had a great time sitting with Anne-Marie Charrett, Scott Allman, Fiona Charles and Paul Holland. We had a rather engaging discussion about ethics in testing, the little ways that we often cross that line each day without even knowing it, and how important it is to stay aware and alert in those instances. I also sat down and learned how to play Set, an interesting game of shapes, colors and shading, where you have to determine what constitutes a set within the rules of the game. Matt Heusser, Michael Bolton, Adam Yuret, Paul Holland and Ryan Freckleton made for a fun time of it.
To kick things off this morning, we had a gathering of the Miagi-do School of Software Testing at breakfast. Elena Houser, Markus Gaertner, Matt Heusser, Ajay Balamurugadas and I are all sitting at a table together, and Matt just made a big announcement to all of us... he is planning a peer conference specifically for test thought leaders and senior testers looking to make the move toward becoming consultants, with the idea that we can influence more development and understanding among testers who are just entering the field. The working title for the conference is "Test Coach Camp", but hey, don't hold to that; it may very well change. The details are, of course, preliminary, but as of right now the thought is that the conference will be held sometime in October 2012, leaning toward Orlando as a location. More to come on this as we get closer to it happening. Markus just presented Ajay with Jerry Weinberg's book "Becoming a Technical Leader", with a mock-up cover saying "Miagi-do: Best Practices for the Black Belt Level". I'd have to say I agree, this is a great title for this purpose :).
Note to all: if you would like to follow along with Michael Bolton's keynote this morning, it's being broadcast live.
Jon Bach has taken the stage and we have officially kicked off the conference. We are going through the administrivia, including an update on Cem Kaner's health. Cem said that there was an issue with the family, but he didn't give details. We are happy to hear everything is OK, but sad that he and Becky are not here.
Doug Hoffman and James Bach are speaking about the program ideas and how they came to be. For those not familiar with the way the program came together this year, there was no direct call for speakers; James got a little flak from Fiona Charles and others about the fact that there wasn't a direct call for people to present. This did, however, bring to the fore an idea: open a call for people to propose a talk, with a committee deciding who gets to speak. From that, the Emerging Topics track was born... and I'm actually going to be the first person to speak in that track :).
Paul Holland is explaining the three-card system that is used at CAST. The facilitator sees one of the cards being held up and then has the opportunity to ask certain questions. A green card represents a new question; a yellow card allows you to ask a question on the same thread; a red card allows you to jump to the top of the queue. It means "what I have to say is more important than anyone else in the room". This is a cool way to manage question threads, and it allows those participating to actually "confer" at their conference. What a concept :).
Michael Bolton took the stage for the first morning keynote and described a show that the CBC (Canadian Broadcasting Corporation) produces called "Ideas". It covers a broad range of topics. Often the shows are one-offs, but they occasionally do series; the program Michael highlighted was "How to Think About Science" (a program Michael has sent me to a few times for individual podcasts). One of those shows described the founding of the scientific method, and the rather wild fight that ensued to make that possible. This was made famous in a book called "Leviathan and the Air-Pump", and it raises a number of interesting hypotheses, among them the idea that we can measure something to get to an actual fact. The challenge is that while experimental tests can provide a lot of information, they do not really "prove" that something works. In fact, the unreliability of equipment and other factors causes many issues and makes the point that testing and experimentation, while helpful, can easily be gamed and tweaked to support either side's story.
Michael continues to show that the world is a complex, variable and messy place... there's lots of complexity, uncertainty, and challenges that will stymie any direct and "static" approach. Context-driven testing makes it clear that there will be messy complexity, and there will be times where what works in one case won't work somewhere else. There is so much detail in Michael's talk I can hardly keep up with it (LOL!). A great opening, and if you can't see it live, check it out later on blip.tv (link to be added later). I'm witnessing my first "card" Q&A session, and this is pretty cool to watch. So many questions, and the ability to have them follow a thread made for a really great discussion about whether or not it is valuable to switch our language to "I disagree" rather than saying "you're wrong". Some agree with this approach, some disagree, and it is interesting to hear so many follow-on comments. Also, I found it amusing that the first red card to be thrown at this conference was by James Bach :).
The first talk of the morning that I am attending is "Paths for Self-Education in Software Testing" with Markus Gaertner. There are several reasons why I chose this talk (Markus being my Miagi-do sensei notwithstanding :) ): this is a topic that is very near and dear to my heart. Most of us did not aspire to be software testers. I don't really know of anyone who as a kid or a teenager said "I want to be a software tester when I grow up" or "when I go to college I am going to study software testing". Nope, most of us got into the field because of a need, discovered we liked doing it, and then had to figure out how to do our jobs by various methods. Markus makes the point that most of us are going to have to be responsible for our own education. One of the key things to start with is feedback. Here's where writing a blog, writing articles, presenting talks at conferences, and social media interaction will be significant. The beauty of this is that there is often a one-to-one feedback level. The challenge is that sometimes you don't get *any* feedback in these areas. Another thing Markus recommends is to learn how to program. This has been part of my current reality in my new job. It's the first time that I have had to seriously look at programming and writing tests on a regular basis, and I can safely say this does provide a new appreciation for what the developers are doing. Some may also argue that this gets in the way of actively testing: while we focus on programming, we are not actively testing. I can see both sides of this argument, to tell the truth. Markus makes the point that there are two methods for looking at information and learning. There's the hypothesis approach, where we read about and focus on the analytical aspects. On the other end is synthesis, which focuses on the things that we actively do (weekend testing, testing dojos, Miagi-do, etc.).
The key point is that we don't learn just from study, and we don't learn just by practice, we need to do both in conjunction with one another, and then we will maximize our ability to work with and understand the material.
During lunch, we had a chance to talk about the Board of Directors election; Pete Walen, Matt Heusser and I talked about what we hoped to bring to the Board of Directors. Additionally, an email from Cem Kaner was read in. For the record, I think the entire group of candidates is awesome; if I make it, I will serve to my full ability, and if I don't, I will not feel bummed, because any and all of them will do a great job... but hey, if you want to vote for me, I'll not object :).
So I presented a talk in the Emerging Topics track. My talk was titled "Be Prepared: What Can Boy Scouts Teach Testers?". Those expecting a talk about tying knots or camping would be disappointed. Instead, I shared a bit of the training that we espouse and the emphasis on "EDGE", the way that teams develop. Just about every team goes through four stages of team development. This is actively taught in Scouting, and it's part of the advancement criteria for Scouts, so they learn this pretty early on. I thought it might be worthwhile to talk about this among testers, because there are many parallels. The four stages of team development are referred to as "Forming" (gathering people together so that they can accomplish a goal), "Storming" (where different ideas about the outcome or goal are expressed, and people try to figure out who wants to go which direction), "Norming" (where the team aligns toward the objective or goal) and then "Performing" (where everyone is working toward the objective). The acronym EDGE stands for "Explain, Demonstrate, Guide and Enable", and each step is associated with a different stage of team development. The point is that we need to pivot and change our approach to deal with each stage, and that external input can change those dynamics, taking a team that is performing well and busting it back down to an earlier stage.
Ajay Balamurugadas discussed weekend testing and the successes and challenges that they (we) face. I wanted to see this talk so that I could consider Weekend Testing from one of the founders' perspectives, and see how he has dealt with the challenges that we do here in the Americas. It's heartening to see that their challenges are our challenges, and vice versa; many of the issues they are dealing with ring true with our experiences. Some feedback from the audience was to make sure we actively consider the feedback from the participants and see if their expectations were met, and if not, why not. Also, it was interesting to hear some of the issues and challenges that testers themselves have with the format of the weekend testing approach. All in all, I think Ajay did a really good job.
Matt Heusser talked about the economics of test automation and how automation is good when it is part of a "balanced breakfast" (his preferred phraseology), and how in many cases the return on the investment is often not at all worth the amount of time we put in. Matt brought up the idea of "inattentional blindness" and said that automation by its very nature (especially the click-and-follow-to-get-a-value type) is akin to walking through a minefield. We may find bugs in the process of creating the tests, but in the future we'll likely never find another bug with those tests unless something changes in the front end, because we're just walking in the path of the mines we've already detonated. Those tests, unless they are specifically varied, will not help us find new bugs, because we are limited to the actual paths that we follow. I see the point, and I agree that automation is going to be part of the equation, but it should not be elevated to being the be-all and end-all of the test process. Oh, and it featured one of the most imaginative placements of "Keynes vs. Hayek, Round Two" I've yet seen (LOL!).
During one of the breaks, Matt decided to try me out to see if I was ready to test for a black belt. I won't say anything about the challenge, because that would diminish its usefulness for future students, but I felt that I did pretty well with it. During the debrief, I was told about the things they felt I did right, but also about some areas that, upon reflection, were fairly obvious and that I should have considered. Granted, this was outside of software and dealt with a situation I had not considered, but there's some really great information I gleaned from this challenge. I also determined I wasn't quite ready for the black belt challenge, and I'm OK with that. It shows me that there are a lot of areas I should still be looking at that I didn't consider, so there's definitely more for me to learn before I can really consider myself a black belt.
To close out the night, the Miagi-do team got together so that we could participate in the testing challenge. We asked Matt if he'd participate, and he reminded us that he was a conference organizer, and that if he was on a team, the team would be ineligible for the cash prize. We all said "we don't care, we want to see what the Miagi-do school can do together in real life", and with that, we were off to the races. It was a race, in that we pounded on the test application for really close to the full 4 hours that we were allotted. Markus Gaertner took on the role of "testmaster", and we did several 20-minute, tight and focused testing sessions. It was really fun, very exhausting, but incredibly cool to be working directly with so many testers that I previously knew by reputation only; now I had a chance to really get my hands dirty side by side with them, and it was an awesome time! Will we win? We don't know, and honestly, it doesn't matter (we're not eligible for the prize anyway, and the chance to work together was worth the time and energy all by itself). At the end of the day (10:00 PM), a group of us decided to go and find some place to eat and decompress. Nothing like 30 testers roaming the streets of Lynnwood at night (LOL!).
All in all an awesome first day, hope you enjoyed the stream of consciousness. More to come as I start talking about Day 2!
Thanks for the updates for those of us who can't be there. Hope you can continue the live blogging!
"No one ever says they want to be a software tester when they grow up."
For some of us, it starts at video game testing (which people blindly DO want to do for some odd reason...) and then ends with us getting sucked into software testing.