Monday, July 16, 2012

CAST 2012 is Underway: Day 1: Live Blog



Today is the first day of CAST, the Conference of the Association for Software Testing. After spending the weekend in San Jose, and with most of this week to be spent here as well, I feel like I'm on the road, even though I'm really not that far from home. Since the logistics requirements start early and run late, I've been given a hotel room for the week, so I'm now on day four away from home. Still, considering the learning and opportunities I've already had this weekend, I'm really grateful for the chance to learn, discover, and grow with my friends and peers.

With the start of the conference, I had a chance to make a pitch for BBST and the EdSIG meeting that's happening tonight. I'm going to do my best to include as many people who want to participate as possible; it may not be feasible to get everyone online in the meeting, but I'm certainly going to try.

There are a lot of different talks and track sessions going on, but I have to admit to having an affection for the Emerging Topics track, so I will very likely spend much of my time in here (that, and I have my own talk to give in here at 4:20 PM PDT :) ).

Emerging Topics started with a talk about biases, and about what we see and what we don't see, by Ilari Henrik Aegerter (@ilarihenrik on Twitter). His talk focused on biases and how we observe different things. Ilari showed a number of pictures with things hidden in them that are not obvious to the naked eye, illustrating how long it can take us to recognize when strange items appear, and how difficult an image is to re-interpret after we have made a prior identification. You can see more of Ilari's comments on his blog at http://www.ilari.com/.

The next talk in Emerging Topics came from Scott Allman and was about "Computers as Causers". This talk described how computers actively drive events. Understanding how events cause other events is an important step in looking at the world, and systems let us see what is causing things to happen. Often there are causes at work that we will never be able to describe; so many steps may take place that we have no way of cataloging them all, yet we still have to take them into consideration. Interesting topic and ideas.

Claire Moss (@claireification) has been talking about Big Visible Testing, which is just what it sounds like: a way to put all test efforts into a format that makes the testing initiatives easy to view, so everyone can see any issues and blockers, what testers are actively working on, and what is ready to be picked up. By actively using the sprint board and Kanban, the whole team can see what is happening and how everyone interacts. Testers performing their testing and vocalizing their experiences helps, again, to make the testing efforts visible and transparent, which frankly is a beautiful thing :).

For those who want to follow along at home, check out the link at

http://www.ustream.tv/channel/castlive

After a quick break and a public service announcement from me about SummerQAmp and what it's all about (and how we need people to help develop content for it), I started up again with an Emerging Topics talk from Thomas Vaniotis about epistemology, giving some solid structure to what exactly that term means. This is kind of cool in the sense that he's deconstructing the ideas behind epistemology. The basic model is the one that Plato developed, where:

S knows that P is true if and only if:
1. S believes that P is true
2. P is true
3. S is justified in believing that P is true

Note that, while this model has held up for millennia, there are ways to counter it. Gettier devised counterexamples in which someone holds a belief that is justified and true, yet still doesn't seem to count as knowledge, because the justification is only accidentally connected to the truth. This is really interesting in that, while we talk about epistemology in many places, I think this is the first time I've heard it put into this format.

Ilari came back for an encore performance and talked about the ways in which we learn. What was really cool was that it was a talk about how to coach and, more to the point, how to effectively read books and which books would be of value to testers. This should come as no surprise to anyone who reads my blog, but many of the best testing books I have read have almost nothing to do with testing. Well, not directly, in any event. Books on philosophy, business, motivation, creativity, and history are just as effective as, and sometimes even more effective than, "New Title Dedicated to Software Testing". As an added bonus, Ilari gave away three copies of James Bach's "Secrets of a Buccaneer-Scholar" to three lucky participants.

Following lunch, we had the opportunity to hear a keynote from Tripp Babbitt, who was involved with reforming and deploying information systems for the state of Indiana as it rolled out its modernization of welfare and temporary assistance programs. Tripp went through several examples of where the systems and the bureaucratic red tape made navigating the process a nightmare. The truth is, most large-scale IT projects do not succeed. In this case, the Indiana FSSA saw its timelines slip, its error rates climb, and its backlogs grow larger, all while getting sued and having contracts canceled. The problem is that when the wrong things get measured and managed, we just end up doing the wrong thing "righter" (and with agile, we do the wrong thing faster). Tripp walked us through the challenges faced by large-scale IT projects: scale, feature complexity, and other details that conspire to derail them. He compared this to the work he is now doing with Vanguard, and how he was able to help design a totally different kind of system. By engaging with the people who actually do the work, teams are able to build systems that hold up at huge scale in the real world.

After the keynote, I jumped into Kristina Sontag's session on pair testing. Why would this interest me if I'm a lone tester? Because even though I work on my own, I have opportunities to interact and get involved with a lot of different people in the organization. Any individual in my organization (developer, content curator, customer service, executive, product owner, designer, what have you) can be an effective testing pair. Everyone has unique viewpoints, unique approaches, and "fresh eyes" to observe and consider different options. I asked Kristina if they did something similar in her organization, and while they do use the pair concept with other stakeholders, they make that approach less structured and offer a lot of coaching and suggestions along the way (their session-based testing approach has, to date, only been tester/tester).

Tony Bruce is one of the testers I have followed since I made the conscious decision to get involved in the overall testing community, and I've enjoyed seeing and reading his comments and thoughts over the past few years. Thus I was happy to sit in on his talk, "Talking About Testing in a Friendly Environment". This approach to getting testers to talk to each other uses a brief format: a short talk (maybe 15 minutes), a short demo (again, about 15 minutes), and some fun in the process. Meetups will usually start small, but if you are engaged, fun, and consistent, people will come and participate.

Anand Ramdeo is focusing on how to put randomization into our tests and make it a more prominent part of our testing. This dovetails nicely into my talk and some of the details I'm going to cover, so I like the fact that we are back to back. Sorry if I'm not so talkative, but I'm about to go on, and I feel a little anxious (good anxious, not freak-out anxious ;) ). I'm recording Anand's talk, so I'll give him a better review after I finish my own.
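
To make the randomization idea a bit more concrete, here's my own rough sketch (not anything from Anand's talk): draw random test input from a seeded generator and log the seed, so any failure the randomness turns up can be replayed exactly. The create_user function below is a made-up stand-in for whatever your system under test actually exposes.

    import random
    import string
    import time

    def random_username(rng, length=12):
        """Build a random lowercase username from a seeded generator."""
        return "".join(rng.choice(string.ascii_lowercase) for _ in range(length))

    def check_create_user(create_user):
        """Exercise a (hypothetical) create_user function with a random name."""
        # Log the seed so an interesting or failing run can be replayed exactly.
        seed = int(time.time())
        print("random seed: %d" % seed)
        rng = random.Random(seed)

        username = random_username(rng)
        result = create_user(username)  # stand-in call into the system under test
        assert result == username, "expected the username we sent to come back unchanged"

    if __name__ == "__main__":
        # Trivial stand-in just to show the flow; swap in a real call for real testing.
        check_create_user(lambda name: name)

Logging the seed is the part that matters; randomness you can't replay just makes failures harder to chase down.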

So I'm going to tip my hat a little bit early and share a bit of my talk with y'allz. I came in with one idea as to how I was going to do this talk, but after spending the weekend at Test Coach Camp, and having a chance to listen to and participate in a discussion with Cem Kaner and Ken Pier, I picked up a lot of interesting new angles that I will be working into my talk. Besides, I won't be able to write to you all while I am presenting, so here goes:

There's a fair amount of hyperbole in my talk, but it's there to make a point. The problem we face today, in the way that testing is being sold to organizations, is that there is a tribal war between three factions.

TDD/ATDD: Because we define our tests up front, and we write our code to meet the acceptance criteria and test all of the modules before and during each build, we don't really need to have dedicated testers.

Front End GUI Automation: With the benefit of [fill in the blank tools] we are able to get fast automation wins and create scripts that cover the important test cases we need. It's so simple, anyone can do it, therefore we can have an automated tester focus on writing scripts for us (and because of this, we don't need a dedicated tester).

Exploratory Testing: All these silly scripts and unit tests may be great for a single module, but they don't really understand how to interact with a complex system. Nothing is going to replace the value and the focus of a dedicated tester and the way they can find problems and "explore" areas that nobody else can.

Did I lay it on thick enough? I sure hope so, because that makes it easier to make my primary point: these positions are all equally right and all dead wrong. The fact is, it's not either/or, it's all of the above, and many of these options work very well together and can be applied in a variety of ways. If you run a script the same way over and over again, and then decide to run it in a random order to see what will happen, congratulations: you've just done exploratory testing, while using automation (computer-aided testing) to help you do it. There will be more in the actual talk, so I hope you all will tune in and hear me deliver it in person (virtually, at least ;) ).
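
And to sketch that last point in code (again, my own illustration; the step functions are made-up placeholders for whatever an existing script already does): take the steps of a scripted flow that don't strictly depend on each other, shuffle them with a logged seed, and watch what the system does with the new sequence.

    import random
    import time

    # Hypothetical placeholders for steps an existing script already performs.
    def add_item_to_cart():
        print("adding an item to the cart")

    def apply_coupon():
        print("applying a coupon")

    def change_shipping_address():
        print("changing the shipping address")

    def update_quantity():
        print("updating the item quantity")

    def run_steps_in_random_order():
        """Run otherwise-independent scripted steps in a shuffled order.

        The automation does the driving; the tester watches for anything
        surprising, which is the exploratory part.
        """
        seed = int(time.time())
        print("shuffle seed: %d" % seed)  # log the seed so a surprising run can be repeated
        rng = random.Random(seed)

        steps = [add_item_to_cart, apply_coupon, change_shipping_address, update_quantity]
        rng.shuffle(steps)
        for step in steps:
            step()

    if __name__ == "__main__":
        run_steps_in_random_order()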

More to come, stay tuned.
