Monday, March 26, 2012
Somewhat Live from #STPCon
Today I'm reporting live from the Sheraton New Orleans Hotel. While I'm not going to be able to attend the entire conference, I had the opportunity to be here for the "pre-game show", which is where a number of participants put in some solid time (not to mention a lot of effort on the part of presenters) to put on workshops and tutorials for conference attendees.
We hear a lot about the track talks, keynotes and special events. Rarely do we get the opportunity to hear about the tutorials and workshops, and when we do, they are usually commented on by a single attendee several days later. My goal over the next several hours is to get in, examine the tutorials and workshops, and give you a feeling for each of them and the content they are covering.
Having just come off the POST workshop last week, I was curious to see what Lynn McKee and Nancy Kelln would have up their sleeves for today. Lynn and Nancy are, as some may know but many may not, almost always joined at the hip. You will rarely see one without the other, and this dynamic duo, whom I refer to as the "Calgary Wonder Twins", is presenting today on the details that go on behind the scenes of any given testing project. The fact is, we often have great intentions, and we believe we can offer tremendous value. However, behind the scenes of each project are the moving parts that either put our testing performance in a great light or hide it completely.
We struggle with expectations and with estimates that may or may not be accurate. We struggle with the value (or lack thereof) of metrics, and whether they are worthwhile or worthless. And in all cases, unless we are selling testing services, software testing does not make money for the organization. Save it? A great possibility. Safeguard it? Even more likely. Still, the simple fact is, we are an add-on to the essential business in many people's eyes. The code is what makes the money, but the quality of the code can have a tremendous impact on the ability of that code to actually make money (or build reputation, or add value; remember, not everyone writes code for monetary reasons). Lynn and Nancy will spend the better part of today giving the attendees the chance to consider these areas and get their hands dirty with the stage work necessary. We'll check in with our stage crew a little later to see how they are doing :).
A couple of rooms over, Doug Hoffman has a full house covering the topic of Exploratory Test Automation. For many, automation has some great benefits, but it also has a number of disadvantages. Automation is great at static tasks. It's also great for doing repetitive setup and teardown changes. It can confirm; it can state whether something exists or doesn't; it can give a log of what appears on the screen. Automation can see, but it can't feel anything. Automation doesn't have emotions, it doesn't have gut feelings, it can't detect a "code smell". That's up to humans, who use a very specific oracle... the human brain. We suffer, however, from the fact that most automation doesn't give us the opportunity to inspect and consider a broad range of adaptable opportunities.
To do that, we need exploration skills, and the ability to ask probing questions based on what we learn at any given moment. Humans are remarkably unreliable, slow, and inconsistent, but we have the ability to make broad mental leaps to consider issues and challenges. Automation is very fast, very reliable, and can be very consistent, but it's incredibly stupid. I mean stupid compared to the ability of the human brain to make deductions and consider new avenues. Exploratory Test Automation seeks to bridge that gap. There are limits to how effective this is, but there are processes that allow testers to do it in a limited capacity. We can create multiple paths to examine. We can set up multiple oracles in advance to examine lots of criteria. We can use a lot of random, pre-selected data in varying scenarios, and we can tie these pieces together to get a more complete picture than we can with a static, single-focus automated test. How do we effectively do this? Tune in a little later, and I will hopefully have some answers :).
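To make the idea concrete, here's a minimal sketch of my own (not material from Doug's workshop): randomized inputs run against a toy function, with several pre-selected oracles consulted on every result. The function and oracle names are all hypothetical, invented for illustration.

```python
import random

# Hypothetical system under test: a simple absolute-value implementation.
def system_under_test(x):
    return x if x >= 0 else -x

# Several independent oracles, each checking a different property of a result.
ORACLES = {
    "non-negative": lambda x, result: result >= 0,
    "magnitude preserved": lambda x, result: result in (x, -x),
    "idempotent": lambda x, result: system_under_test(result) == result,
}

def explore(trials=1000, seed=42):
    """Run randomized trials, consulting every oracle on every result."""
    rng = random.Random(seed)
    failures = []
    for _ in range(trials):
        x = rng.randint(-10**6, 10**6)
        result = system_under_test(x)
        for name, oracle in ORACLES.items():
            if not oracle(x, result):
                failures.append((name, x, result))
    return failures
```

No single oracle here is smart, but together they cover far more territory than a single hard-coded expected value would, which is the basic bargain of this style of automation.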
I took a step back to see what Bob Galen was saying about Agile Testing and, more importantly, how to effectively lead Agile teams. Teams are the ones that design and determine whether or not a project can be effectively organized and run. Agile teams have the ability to shape projects rapidly, to create products and services that can be modified quickly, and to pivot to focus on changing requirements. In some environments, especially when hardware development or embedded systems have to be created, it's a stretch to apply some of these techniques, but they have value and can be applied. Bob opened the session by having everyone post questions and opportunities for discussion on a whiteboard. The questions from the team are driving the initial discussion, so that they can focus on understanding the challenges for the group in attendance.
Some of the questions are unique to specific contexts, but many of the questions asked can span multiple disciplines, businesses and even development models. Whenever we get a group of testers together, we get a lot of "variations on a theme". We may think we have unique requirements, but in the big picture view, our needs are not so different from one another after all. What often changes are the specific details for the personality of the given team or product.
Anne-Marie Charrett was one of the first people I had the opportunity to interact with during my very first AST BBST Foundations course a couple of years ago. Much of my initial feedback and extended commentary on my strengths and the areas I could potentially modify and improve came from her. I consider her a great representative of the community, and was excited when I saw that her career development workshop was being offered at STPcon.
For those who are not familiar with Anne-Marie, she has an expertise in coaching testers. When we hear about coaching testers, we often consider that to be James Bach's or Michael Bolton's domain, but Anne-Marie is also a solid contributor to that area (she does private sessions with testers willing to work through challenges and get direct feedback, and she's super at this). For this session, Anne-Marie and Fiona have combined forces to give testers an opportunity to consider their career opportunities and where they might want to go with their careers. Sometimes we get bogged down in the details of our everyday lives, and over time, we get lost in the weeds. There's an old yarn that goes "meet the new boss... same as the old boss". Well, the same can be said for testers when they look for career changes or step into different companies. "It's totally different... just like the last testing gig I had".
The fact is, in today's work world, we have more opportunities to take our careers into our own hands. For millennia, there were often barriers to being mobile and dynamic in career choices. If you wanted to farm, you had to have land. To be an artisan, you needed tools. To be a blacksmith, you had to have a forge. To create a product for many people, you needed a factory. The barrier to change was that the means of production were expensive and frequently unportable. Today, with the confluence of the web, computing power and devices, anyone with a laptop and a smartphone has the ability to carry their "factory" anywhere. In this brave new and weird world, the ability to produce and create your product has never been closer. The upside is that we can be as dynamic and as portable as we would like to be. The downside is the challenge of managing those opportunities.
Anne-Marie and Fiona focused on different career areas to consider. Testers have the ability to be individual contributors (software testers), to lead teams (test managers), to take their options on the road (test consultants), or to help others develop their skills (coach and trainer). While there are many opportunities in just those areas, there are also many other avenues to consider and examine (writing is a cool opportunity, if I do say so myself). Matt Heusser just finished talking about making the move from individual contributor to roving test consultant, and the rugged realities one faces when making those changes. We've talked about this many times, but it's always interesting to see the perspectives that other people's questions help bring to that discussion. Don't you wish you were here to hear that ;)?
I decided to jump back to Lynn and Nancy to see where they were with regard to the behind-the-scenes details of testing projects. I came in on an exercise related to communication and discovering the explicit and hidden motives of teams. If I don't consider the fact that there are hidden motives, not only am I being somewhat naive, but I'm also missing a huge component of any project. When we act as advocates for testing teams and processes, we are much more likely to be able to communicate the benefits of our approaches and methods if, instead of hoping we are addressing the up-front expectations, we look to the hidden agendas each group has. The core of this process is communication and getting feedback.
Note, we can't just say we want feedback and blissfully stand around assuming it isn't going to come. Oh, believe me, we'll get it if we really want it. The key is that we have to be ready, willing and able to take the critiques we will receive. When we show we are ready and willing to react to and incorporate the feedback we are given, then something interesting happens: the stakeholders will trust us with more and more of the "hidden realities". As we show we are able to react to feedback, we will also be able to offer our own feedback and have it be more "personal". Again, it's the hidden world of motives, needs and insecurities that really shapes projects. Being willing to adapt and modify the approach based on what you learn and can incorporate will help tremendously with developing a rapport with "the stage crew".
Let's check back in with Doug and see where we stand with this wild and woolly world of Exploratory Test Automation. As many people already know, there is never a truly repeatable test. That may sound counter-intuitive, but it's true. The system state of any two machines will never be 100% the same, even if they theoretically have exactly the same hardware, software, driver configuration and environment variables. There are enough variations in even running two exact replicas of virtual machines that, after just a handful of tests, there is no way to guarantee that the environments will be 100% in sync. This isn't such a big deal in most cases, but yes, even minute voltage differences can make a difference in your results.
This isn't meant to be a downer; it's meant to show all of the aspects that we have to keep in mind when we create what are, on the surface, repeatable tests. To be able to bring a meaningful level of exploration to these tests, there is a need for a model that takes a lot of things into consideration. Our test inputs, precondition data, precondition program state and environmental aspects all need to be considered and, ideally, be part of our testing criteria. If that seems like a lot of stuff to have to maintain, you are right. It's also only half of the story. We also need to incorporate and express decisions on test results, post-condition data, post-condition program state and the state of our environment after our tests. Looking at all of these conditions and considering them is a lot of stuff to keep track of, but it's not impossible. The reason is, we do it all the time. Our "master oracle", i.e. our brain, does this for us to some degree every time we test and look over a system. We look at these things and we work with "test smell" and our tester's feelings to determine if we have something to be concerned about.
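As an illustration of that bookkeeping (my own sketch, not Doug's model verbatim): capture the pre-condition data and environment, run the test, capture the post-conditions, and hand the whole record to whatever oracles you have. The helper and field names here are invented for the example.

```python
import copy

def run_with_snapshots(action, inputs, program_state, environment):
    """Capture pre-condition state, run the test, capture post-condition state.

    Illustrative only: 'program_state' and 'environment' stand in for whatever
    data and environmental aspects a real harness would actually track.
    """
    before = {"state": copy.deepcopy(program_state),
              "env": copy.deepcopy(environment)}
    result = action(inputs, program_state)  # the test itself (may mutate state)
    after = {"state": copy.deepcopy(program_state),
             "env": copy.deepcopy(environment)}
    return {"inputs": inputs, "result": result, "before": before, "after": after}

# Example: an action that appends an item and returns the new count.
record = run_with_snapshots(
    action=lambda inputs, state: (state["items"].append(inputs["x"]),
                                  len(state["items"]))[1],
    inputs={"x": 7},
    program_state={"items": [1, 2, 3]},
    environment={"locale": "en_US"},
)

# Oracles can now compare pre- and post-condition data, not just the result.
assert record["result"] == 4
assert record["after"]["state"]["items"] == record["before"]["state"]["items"] + [7]
```

The point of the record structure is exactly what Doug describes: the direct result is only one of several things worth judging, and the pre/post snapshots let an oracle notice side effects a simple pass/fail check would miss.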
Over in Anne-Marie's talk, Jon Bach is talking about the test manager's relationship with the team, though his role has changed somewhat since he got to eBay. One of the things that Jon recommended when it came to doing one-on-ones with his testers was to use a two-pronged method. The first is the casual one-on-one in a conference room, but the second is at their workstation, where he can observe what they are doing and how they do it. This is not meant to be an "a-ha" or "gotcha", but more to see if there are areas where they could use some coaching and, more importantly, to see if they are actually able to be coached. I think this is really important, not just in the sense that we get to see what can be improved, but that we can also learn what they are doing that is exceptionally interesting or adds significant value to the team.
Many testers live in isolation, especially those of us who are lone wolves. Having someone sit with you, see what you really do, and offer feedback based on the way that you actually do things can be exceptionally helpful. Test managers are not overlords, at least the good ones aren't. The good ones are servant leaders, who look to see if they can help their teams and are working to solve authentic problems. Jon focused on the ability to help his teammates develop their reputations as testers. We have a lot of what-ifs that we need to explore, and the ability to focus on problems that matter helps the developers really appreciate the efforts that the testers are putting forth.
Jon used an amusing mimic of Marlon Brando's Vito Corleone saying "I have provided a favor... know that some day, I may ask for a favor in return". It's not literally what he means, but he's right. By creating credibility with the development team based on our efforts, we are able to develop a reputation that will allow us to call in on those times when we really need help from developers. When we work and focus on their most pressing issues and develop a reputation for doing solid work, we will be able to be in a position to get that help when we need it.
Following lunch, I came back in to see what Bob Galen's group was discussing, and had a chance to hear how various teams approach getting over the hurdles of Agile implementation and how to "sell the drama" to others. There has been a spirited discussion about how to deal with dysfunctional organizations. In some ways, we have a limited ability to impact this without using our strongest weapon, which is our feet. If we don't stand up for what we need and believe, then there is no real benefit or reason for a team to change or adapt. This reminds me of Jerry Weinberg's quote (paraphrased) that "we can change our organization or we can change our organization". That can feel like a rough statement, but in some ways, to quote Nick Lowe, "You've Gotta Be Cruel to Be Kind".
In this case, we may need to be willing to step out of a toxic environment, and encourage others to do the same. If enough people do it, one of two things will happen: the organization will ultimately change to address the criticisms, or it will adapt to the new reality. Another cool discussion point was that the development team could only work on three things at a time. This was meant to address a situation where the testers weren't feeling they were able to communicate with their development team. When the teams are only "allowed" to work on three items at a time, that drives collaboration and gets the teams to focus on working with each other. I liked this message and appreciated the idea. The implementation, well, that might take a while.
So what's Doug's group up to? I walked in to see a discussion going on about the challenges surrounding oracles and the way that oracles are implemented. With Exploratory Test Automation, you are not just dealing with a single oracle, but perhaps several (or maybe even dozens). There are a lot of costs associated with having multiple oracles. These costs are not just money to pay for something, but also opportunity costs and even the potential for litigation. Oracles can range anywhere from no oracle at all to dealing with multiple oracles. Remember, an oracle is something that allows a user to say anything from "this is working well" to "now wait, that can't be right!" An oracle is what basically tells you whether your program is doing what it says it is doing. When you run tests without an oracle, how can you tell that the application is failing? Or passing, for that matter? Even simple oracles can be effective. Without them, you may not be able to tell that anything but the most spectacular failures have happened.
I saw Adam Goucher walk in during lunch, and as I was curious about his recent tweets regarding his "How to be Awesome at Testing" talk and slides, when I found out he was going to be presenting this talk during Anne-Marie's workshop, I had to see it in person. I'm glad I did. The key takeaway from his talk, if you must have just one, is that "being amazing" is the end goal. As a tester, ignore 90% of the standard criteria. Your customer doesn't care if your product meets standards; they care if it's fantastic! Quality is important, but if you have to take a quality shortcut to make a customer happy, then do it! It may go against the grain, it may be different than what you envisioned, it may not even meet your standards of what should go out, but at the end of the day, your customer being happy with what you do is the key thing. Neat perspective :).
The challenge of context switching today has been tremendous. There are so many details I am leaving out, partly because I want to preserve some of the content for the attendees (they did pay for it, after all ;) ), but also because I wanted to see these workshops at an experiential level, and that's what each of these workshops offers. They are not just a day-long tutorial; they are a chance to get your hands dirty. They allow the participants to actually get into the guts of an idea, which is nearly impossible in an hour-long track session. These are intimate; they are a chance to explore not just your own ideas, but other people's ideas. Instead of just the speaker's context, you learn more about the other attendees' contexts, and those help inform your own world view. My thanks to the crew at Software Test Professionals for giving me the opportunity to participate today. I found these sessions to be very valuable and I learned a great deal, even with my bouncing in and out of the sessions. Shortly, I'll be saying farewell to the city of New Orleans, and I'll be gone 2,300 miles and probably 7 more hours until my day is done (sorry, I had to do that).