Thursday, October 11, 2012

#ATONW: Agile Open Testing Northwest, Live from Portland

OK, I admit it: I thought I was going to have a travel day and take it easy, but amazingly, it turns out that a local group in Portland decided to piggy-back onto PNSQC and hold a dedicated Agile Testing day. Thus, here I am, hanging out mostly with people from Portland and thereabouts, with Matt, Ben, and Doc also in tow, for what looked like too good an opportunity to pass up.

Agile Open Testing Northwest, an open space conference, is happening right now.  The theme is "Agile Testing: How Are WE Doing It?" For those not familiar with open space, it has five principles:

Whoever shows up are the right people.
Whenever it starts, it starts.
Whenever it's over, it's over.
Wherever it happens is the right place.
Whatever happens is the only thing that could.

That chaotic aspect totally appeals to me. What's really cool is, I have no idea where this is going to go or what will be covered. If you want to come along for the ride, I welcome you to :).


---


Matt Heusser and Jane Hein joined forces to talk about "Refactoring Regression," or more specifically, how can we make regression testing more effective, less costly, and less time consuming? A good group of contributors shared some of their own challenges, especially with legacy development being converted to Agile. Some of the ideas we discussed and considered were:

Have our sprints include "Regress," meaning a sprint that's heavy on regression testing to ensure that we are covering our bases (and potentially automating as much as we realistically can, so that we limit our "eyeball essential tests" to the most critical steps).

Utilize Session Based Test Management to capture and see what we are really covering.

Fold as much of our regression testing as we can into our Continuous Integration process. Even if it adds time to our builds, it may ultimately save us time by leaving only the most critical human-interaction tests to be run manually during regression.

Ultimately, the key nugget to this is that we learn and adapt based on what we learned. During stabilization, are there any patterns that we can see that will help us with further development and testing? Where can we consolidate? Can we use tag patterns to help us divide and conquer (see the sketch below)? Interesting stuff, but we gotta move on...
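As a rough illustration of that tag-pattern idea, here's a minimal sketch assuming pytest; the markers, test functions, and the `calculate_total` stand-in are all hypothetical, not something from the session. The point is just that tagging lets CI run the critical slice on every build while the broader regression set runs less often.

```python
# test_checkout.py -- hypothetical example, not code from the session
import pytest


def calculate_total(subtotal, tax_rate):
    """Stand-in for the system under test."""
    return round(subtotal * (1 + tax_rate), 2)


@pytest.mark.regression
@pytest.mark.critical
def test_total_includes_tax():
    # Critical-path check: worth running on every CI build.
    assert calculate_total(subtotal=100.00, tax_rate=0.08) == 108.00


@pytest.mark.regression
def test_empty_cart_total_is_zero():
    # Broader regression coverage: fine to defer to a nightly run.
    assert calculate_total(subtotal=0.00, tax_rate=0.08) == 0.00
```

With the markers registered in pytest.ini, a CI build could run `pytest -m "regression and critical"` on every commit and the full `pytest -m regression` suite nightly, keeping the slow, broad coverage without paying for it on every check-in.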


---


Our next session, facilitated by Michelle Hochstetter, is "Bridging the Gap Between Dev and QA," looking at what is going on in our organizations to help each side of the equation level up, get more involved, and be more effective. Many development teams profess to be Agile, but when it comes to testing (outside of unit testing, mocks, stubs, TDD, ATDD, automated GUI testing, etc.), it's rare that the developers get involved that early. Often, what we see is a Scrum development team and a hand-off to testing after the story has been mostly completed. In the Scrum world, this is often called "Scrummerfall." When this happens, the test roles and the dev roles are often isolated from one another. How can we prevent that? Or if we can't entirely prevent it, how can we effectively minimize it?

One approach suggested was to have developers become effective and savvy with software testing skills, and likewise to put effort and emphasis into helping the software testers boost their development skills. Another was moving to a Kanban-style system where the team has one-piece flow (just one story at a time). We can also pair dev and test, though that can face challenges with status, role, and who can do what. We sometimes get into a situation where we "protect our fiefdoms." One word: stop protecting them (OK, that's three words). Developers are perfectly capable of doing quality testing; they may just not have the vocabulary or the experience with the skills. Same with testers: testers can very often code. They may not be full-stack developers, but they often understand the basics and beyond, can do effective work, and appreciate development and design skills. Leverage them. What's more, have the humility to recognize and appreciate the differences, and do the homework necessary to get to a point where respect can be earned.

An interesting question... how could we testers be of better value and use to our programmers? One area discussed was the idea that testers have to do a lot of useless testing ("useless" being a deliberately loaded term here; consider it repetitive, overdone testing on areas that programmers may not consider important or relevant). If the programmers have a lot of knowledge of the architecture (where on the blueprint did you make changes? If you were in room C, does it make sense for me to look at the door between rooms B and C?), then they can help us as testers understand where and how those changes are relevant. Additionally, we can give our programmers some of our exploratory testing tools and show them how to use them. Consider pairing developers and testers for both testing and programming maintenance. Encourage software testers to dive into code. Don't be afraid of developing bias. More information is more information. You may not use it all the time, but having it, and knowing where to look and how things work, can be very important and relevant.


---

Uriah McKinney and I led a combined session on "Building Teams: Will it Blend?" in which we looked at how we can build solid Agile development and testing teams, and specifically the blend of skills that lets us be as effective as possible. Very often, we make snap judgements about people based on very little information. Sometimes that snap judgement is very favorable, but it obscures details that could be seen as detrimental. In other cases, the snap judgement could be very negative, but further reflection reveals tremendous strengths in other areas. Considering the ways that we interview and hire, different organizations face unique pressures. There's a difference between "filling a req" and making sure you find a good cultural fit for a team.

One thing we often need to consider, especially with junior team members, is that they may not have the jargon bingo down. Being able to explain Session Based Test Management may be valuable for a team, or it may be more important to have someone who is adaptable and can be creative on the fly in a short amount of time. Don't penalize them for not knowing the term; engage them and see if they actually do what you are after. You may be surprised at how well they respond. We discussed this very thing with college interns.

We discussed misleading job titles and requirements. There's a frustration with a company advertising that they are looking for testers when in reality they are looking for programmers whose goal is to write and create testing frameworks and supporting tests. Let's be honest about what we are looking for, and let's call people what they are. Don't say you are looking for a quality assurance engineer when you are looking for a programmer who writes test frameworks.

When we hire our people, it's also important to have the team members work in the way that will be most effective for the team. That requires a level of communication, and that communication means a clear understanding that the team's mores and values need to be respected, and that respect needs to run through for everyone. You can't play by different rules for your development team as compared to UX, graphic design, Product Management, or testing. If the team has shared values, work to encourage everyone to be consistent in practicing and sharing those values.


---

Michael "Doc" Norton led a discussion about "Managing Automated Tests" and ways that we can get a handle of what can often seem like an out of control Leviathan. One of the main issues that makes this challenging is that we tend to focus on running "everything".  We have a zillion tests and they a;l have to be run, which means they have to take the time to be run, and the more frequently we need to run the full suite, the more time we consume. Thus, it's critical that we try to find ways to get a handle on which tests we run, when we run them, and under what external conditions. If the tests all run fantastically,  then we are OK minus the amount of time. If something breaks, then we have to figure out what and where. If it's a small isolated area, that won't take very long. If we break a large scale, laddered test with lots of configuration options and dependencies, that gets harder to determine the issue and how to fix the issue.

One of the bigger issues is the story tests that, over time, get moved into more generalized areas. These generalized tests keep growing and growing, and ultimately the tests either settle in as long-term regression test cases, or they become stale, the links break, and they become unusable. Fragile integration tests tend to get more and more flaky over time, so rather than waiting until they become a large mess, take some time to make sure that you prune your tests from time to time.

Many tests can be streamlined and reused by setting them up to be data-driven. With data-driven tests, only the data values and expected outputs (or other parameters) need to change. You can keep the rest of the test details the same, and only the data changes (it won't work for everything, but you would be surprised how many areas it does work in).
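As a minimal sketch of what that can look like, assuming pytest (the `apply_discount` function and the data rows are made up for illustration), the test body is written once and each data row becomes its own test case:

```python
# test_discounts.py -- hypothetical data-driven example
import pytest


def apply_discount(price, percent):
    """Stand-in for the code under test."""
    return round(price * (1 - percent / 100.0), 2)


# The test logic is written once; each row below becomes its own test case.
@pytest.mark.parametrize("price, percent, expected", [
    (100.00, 0, 100.00),   # no discount
    (100.00, 25, 75.00),   # typical discount
    (19.99, 100, 0.00),    # boundary case: everything free
])
def test_apply_discount(price, percent, expected):
    assert apply_discount(price, percent) == expected
```

Adding coverage then becomes a matter of adding rows, not writing new test code.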

Another danger is to have tests that are treated like a religious ceremony. This is especially tricky when it comes to suites that run repeatedly and almost never find anything. Perhaps one thing to consider is that some of those tests are not so relevant. Consider consolidating those tests or even moving them out to another area.

Automated tests are code, too, and they need to be treated like code. Being a quality analyst and being a programmer start to blend together at this point. Often, this gets some funny reactions, with developers and testers each thinking the other side has it easy. Coders think that testers have an easy job, and that if they just learned it, they'd be able to do it, too. Testers often think the same about programmers. Here's the thing: it took each of us close to 15 years to get good at our respective efforts, so a little respect is in order :).


---

The last session for the day was Janet Lunde's "Balancing Between Dev & QA on Automated Tests" (gee, can you tell where both my personal energy and my anxieties are today ;)?). Who owns the tests that are made for automation? Does the programming team? Does the testing team? Should there be a balance between them? Dale Emory said that he sees a lot of institutionalized "laissez faire" in the way that tests are implemented within many organizations. Often, tests are seen as just those things that testers run, and then there are some numbers and a "red light/green light" and we go on our way.

What if we could reframe these arguments? Who is the audience for the automated tests? What's the ultimate value of running them? Often, having the testers write the tests proves to be a bottleneck due to their experience level with coding. Ben Simo shared an example like this, in which the development team was asked to run a number of the tests on the testers' framework and critique it and offer suggestions. With that, the development team started using the framework, and started writing tests on it. This allowed a lot of bottlenecks to be cleared. More to the point, it allowed the developers to better understand the testers' point of reference. This helped flesh out their tools and their knowledge, so that the testers, later, would be able to take over the automation. At that point, they were in a much more stable place to write more tests more quickly.

In talking about the balance of automation, it has been amusing to see how much automation versus how much manual testing is done. A lot of manual testing isn't just desirable, it's necessary. From my own paper: automation is great at parsing data, piping inputs to outputs, looking for regular expressions, and creating logs. What it can't do is exercise curiosity, make a real sapient decision, or truly "look" at the software. If an automated test is looking for elements to appear on the screen, it will tell you the elements are there, but unless it's programmed to tell you whether the CSS rules are loaded, that file could be missing, the raw HTML could be displayed without styling, and the tests would still pass. The machine would miss the error; human eyes would not. Another great comment: "automation doesn't get frustrated." What that means is that automation doesn't get irritated at response time or at how much data needs to be entered. A real human will voice displeasure or frustration, and that's a valid piece of test data.
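To make that CSS example concrete, here's a rough sketch of the kind of extra check an automated script would need before it could catch an unstyled page (assuming Selenium WebDriver in Python; the URL and element ID are placeholders, not a real application):

```python
# css_smoke_check.py -- illustrative only; the URL and element ID are placeholders
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
try:
    driver.get("https://example.com/login")

    # The usual element-presence check: this can pass even if the page is unstyled.
    assert driver.find_element(By.ID, "login-button").is_displayed()

    # The extra check: did any stylesheet actually deliver CSS rules?
    loaded_rules = driver.execute_script(
        "return [].slice.call(document.styleSheets).reduce(function (n, s) {"
        "  try { return n + s.cssRules.length; } catch (e) { return n; }"
        "}, 0);"
    )
    assert loaded_rules > 0, "No CSS rules loaded -- the page is probably unstyled"
finally:
    driver.quit()
```

The first assertion is what most element checks do and would happily pass on a broken page; the second asks the browser whether any stylesheet rules actually loaded, which is the part a human eye catches for free.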

The key takeaway from the group was that "automated testing is a necessary item, but it does not replace real-world, active, dynamic testing." While we can take a lot of the mind-numbing, repetitive stuff out with automated testing, we can also use it to bring us to interesting places we might never have considered (see my presentation and my comments about taxi cab automation).


---


And with that, I think I'm done for the day. I have to get ready to close out this great conference day, take the MAX to the Portland Airport, and impatiently wait to get home to see my family :). Thanks for spending the last several days with me. I've learned a lot. I hope you have learned something through me, too :).
