Tuesday, February 21, 2012
#STPSummit: Agile Transitions, Day 1
Today is day one of the Agile Transitions Online Summit that SoftwareTestPro.com is sponsoring. I had the pleasure of being asked to be one of the speakers for this online conference (I'll be doing my part on Thursday at 11:00 AM PST), but until then, I get to participate and hear the other speakers... and, in a greatly abridged and condensed manner, so can you :).
The first talk was given by Scott Barber, and kicked off the conversation with "What is Agile, Really?" There are so many answers, and an immediate reference to the original manifesto (I have a feeling a lot of people are using this; I know I am for my talk :) ). For those who want to refer to the manifesto, you are welcome to go to http://agilemanifesto.org. The important thing is to see that it says nothing about automation, lack of documentation, or even the presence or absence of testers.
The people and how they relate are what really matter. The phrase that I often hear is "we practice little-a agile rather than big-A Agile". Ultimately, "agility" is just a way of saying "we agree with the principles in the manifesto and will do our best to apply them". Testing is also important in the Agile world, contrary to some opinions. Testing may be different, it may be distributed, it may be done in different ways, but it is still needed, and it still matters. We also need to let go of the idea that software needs to be tested exhaustively to be successful. A key takeaway for me was that "good testing leads to business success, as quickly and cheaply as possible". That's not heresy, that's honesty. If you've ever heard Scott speak, you can probably already hear his voice and inflections with these statements :).
Talk 2 was given by Robert Walsh, and focused on the differences between "Agilists" and "Traditionalists". Note the quotes. This is by design, and is meant to show that these are perceptions of each group, and that these are how we often see these designations in broad-stroke fashion. Traditionalists are often seen using waterfall/iterative processes, where the output of one section of the process is the input to the next, and where, if things change, the whole process is jeopardized. By contrast, Agile allows for change and for options to be modified in each iteration. The scope is different for each approach.
Scheduling is another difference: in traditional projects, testing is done towards the end of development and is a distinct phase of the project. What often happens is that, if the development phase runs long, the testing phase is either curtailed, or it's bunched into "crisis mode" with lots of crunch time towards the end. More testing in a smaller space, but how good is the testing in these cases? By contrast, Agile schedules, while much tighter, are focused on smaller slices of functionality. Rather than a huge set of tests for a large round of features, there may be just one feature to look at, or even one aspect of a feature (or even just one story).
The approach to testing likewise differs in Traditional and Agile spaces. In traditional development, we may not even get access to the code until it is mostly completed. Once we do get the code, we need to walk through the various test scripts that have been designated (we may have some automation or computer-aided testing, but often we don't, and these are done manually). Changes late in the game run the risk of derailing the project. In Agile environments, change is much less threatening, since the slices of product are so much thinner. Time to test is much smaller, and so, if there are needed changes, they can be made without totally threatening a project's delivery.
Bugs and defects can be a real "crusher" when it comes to traditional products and projects. There's a lot of cycling and churning that happens when we have to deal with a large number of features delivered at the same time to be tested in bulk. Agile processes, with the slicing of stories and testing happening from the first iteration, help make sure that the more catastrophic bugs are found early and by numerous different mechanisms.
Talk 3 covered transitioning from Traditional to Agile testing, and was presented by Bob Galen after a couple of technical challenges (what would a webinar be without them ;)?). Bob addressed a number of myths and realities related to making the change from Traditional to Agile teams, and the first is that you must be highly technical to make the move (not true, though some programming and scripting will certainly be a plus). Automation is another myth: you don't need 100% automation to be agile. Start where you can, automate the mundane stuff, and expand where it makes sense. It also takes time, and there are variations in what automation actually means. It's also important to realize that testers are not solely responsible for automation; it needs to be a whole-team effort. Ideally, all of the tests developed should filter into the Continuous Integration steps and processes.
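To illustrate that last point, here's a minimal sketch of the kind of automated check a whole team might feed into CI (the `slugify` function and test names are my own hypothetical example, not something from the talk). Any CI server can gate a build on the suite's exit code:

```python
import unittest

def slugify(title):
    """Hypothetical function under test: turn a story title into a URL slug."""
    return "-".join(title.lower().split())

class SmokeTests(unittest.TestCase):
    """A tiny automated check intended to run on every commit."""
    def test_slugify_basic(self):
        self.assertEqual(slugify("Agile Transitions"), "agile-transitions")

# A CI step would run something like: python -m unittest test_slugify
# unittest exits non-zero on any failure, which is what fails the build.
```

The point isn't the specific framework; it's that once a test lives in a file the CI server runs, it's "filtered into" the pipeline with no extra ceremony.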
A huge myth is that there is no test planning or scripting in Agile. It's not true, though the approach is different. I can attest to this personally, as many of my plans go into our Tracker or are set up as shared docs. More often than not, though, they are discussed with the developers and agreed to on many of the stories. The emphasis is on "just enough process" and "just enough documentation" to do the job, not pounding out large documents that will never be read or referenced. Another myth is that all of the tests must be run all of the time within the sprint. That may be realistic at times, but at other times it may not be. It may be especially difficult with a legacy application with a lot of regression and tons of tests that would run for an extended period. The fact is, tests are always scaled on risk-based criteria. We want to run the mission-critical tests first and often. Running everything may be valuable, but there may be times, even with automation, when that is a challenge. It takes time to get things scaled and optimized.
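Here's a minimal sketch of what that risk-based selection could look like in practice (the tiers, test names, and `risk` decorator are my own illustration, not something presented in the talk):

```python
# Tag each test with a risk tier, then run only the tiers the sprint
# has time for, mission-critical first.
CRITICAL, HIGH, LOW = 0, 1, 2

def risk(level):
    """Decorator that attaches a risk tier to a test function."""
    def tag(fn):
        fn.risk = level
        return fn
    return tag

@risk(CRITICAL)
def test_login():
    assert True  # placeholder for a mission-critical check

@risk(LOW)
def test_tooltip_text():
    assert True  # nice-to-have; may be skipped under time pressure

def select(tests, max_level):
    """Keep tests at or below the risk cutoff, most critical first."""
    chosen = [t for t in tests if t.risk <= max_level]
    return sorted(chosen, key=lambda t: t.risk)

suite = [test_tooltip_text, test_login]
for t in select(suite, max_level=CRITICAL):
    t()  # when time is tight, only test_login runs
```

Real suites would hang this off a test runner's tagging mechanism rather than a hand-rolled decorator, but the shape is the same: the cutoff is a dial, not an all-or-nothing switch.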
Another myth I'm happy to see die is that testers are the "Quality Gate", the "last tackle on the field" mentality. Everyone needs to be responsible for quality, and a Whole Team view is much more appropriate. Another myth is that the hand-off from development to testers ends up happening late in the sprint, and that's just "the way it is". My own experience shows it doesn't have to be that way, but yes, it sometimes happens. It doesn't have to, though, especially if the work is split out as it's supposed to be, so that micro hand-offs occur. Having a good communication flow is important, and again, we come back to the conversation. Have those conversations. Don't focus on filing issues, focus on getting them fixed.
Many times, testers believe they are second-class citizens in Retrospectives, and that they don't have the ability to influence change or offer valid feedback. This is definitely false; it's something testers have a role in, but they have to make the effort to be part of the conversation. We can't expect to be heard if we don't make the effort.
So that's it for today! We'll pick this back up again tomorrow at 10:00 AM PST. Hope you'll join me again :).