Tuesday, April 1, 2014

Live From Seattle, it's #ALMForum: A TESTHEAD Live Blog

Good morning everyone. I'll be coming at you live from Seattle at various times of the day. This is a live blog, and as such, it's going to be stream of consciousness; it may contain mistakes, and it may also have gaps in logical flow. If you want to follow the real-time feed, an ability to handle ambiguity will help. If you can't handle a touch of ambiguity, wait until later in the day when I get a chance to clean things up a bit ;).

We start out with Scott Ambler (@scottwambler on Twitter) and a discussion of Disciplined Agile Delivery and how to scale Agile practices in larger organizations. Scott made a few points about the fact that Agile is a process with a lot of variations on the theme. Methodologies and methods are all nice, but each organization has to piece together for itself which of the methods will actually work. Scott has written a book called Disciplined Agile Delivery (DAD); the acronym is not an accident. Key aspects of DAD are that it is people first, goal driven, a hybrid approach, and learning oriented; it utilizes a full delivery lifecycle and tries to emphasize the solution, not just the software. In short, DAD tries to be the parent; it gives a number of "good ideas" and then lets the team try to grow up with some guidance, rather than under an iron hand.

Questions to ask: what variety of methods is used? What is the big picture? While we can look at a lot of terminology, and we can say that Scrum or agile processes are loose-form and just kind of happen, that's not really the case at all. Solution delivery is complex, and there's a lot of just plain hard reality that takes place. Most of us are not working on the cool new stuff. We're more commonly looking at adding new features or enhanced features to stuff that already exists. Each team will probably have different needs, and each team will probably work in different ways. DAD is OK with that.

Scott thankfully touched on a statement in the keynote that made me want to throw the "devil horns" and yell "right on!": there is no such thing as a best practice; there are good practices in some circumstances, and those same practices could be the kiss of death in another situation. Granted, for those of us who are part of the context-driven testing movement, this is a common refrain. The fact that this is being said at a conference that is not a testing conference per se brought a big smile to my face. The point is, there are many lean and agile options for all aspects of software delivery. The advice we are going to get is going to conflict at times, it's going to fit some places and not others, and again, that's OK.

Disciplined Agile Delivery comes down to asking questions around Inception (How do we start?), Construction (What is the solution we need to provide?), Transition (How do we get the software to our customers?) and Ongoing (What do we do throughout all of these processes?).

For years, we used to be individually focused. We all would do our "best practices" and silo ourselves in our disciplines. Agile teams try to break down those silos, and that's a great start, but there's more to it than that. Our teams need to work with other teams, and each team is going to bring its own level of function (and dysfunction). This is where context comes into play, and it's one of the ways that we can get a handle on how to scale our methods. While we like the idea of co-location, the fact is that many teams are distributed. Some teams are partially dispersed, others are totally dispersed (reminds me of Socialtext as it was originally implemented; there was no "home office" in the early days). Teams can range from small (just a few people) to medium (10-30 people) to large (we think 30+ is large; other companies look at anything less than 50 people as a small team). The key point is that there are advantages and disadvantages to any team size. Architecture may be handled by a full architecture team with representatives in each functional group. Product owners and product managers might also be part of an overarching team where representatives come from smaller groups and teams.

The key point to take away from this is that Agile transformations are not easy. They require work, they take time to put into place, there will be missteps, and there will be variations that don't match what the best-practices models represent. The biggest challenge is one of culture, not technology. Tools and scrum meetings are fairly easy; making these a real part of the flow and life of the business takes time, effort and consistent practice. Don't get too caught up in the tools doing everything for you. They won't. Agile/Scrum is a good starting point, but we need to move beyond this. Disciplined Agile Delivery helps us up our game and gets us on a firmer footing. Ultimately, if we get these challenges under control with a relatively small team, we can look to pulling this off with a large enterprise. If we can't get the small team stuff working, Agile scaling will be pretty much irrelevant.

My thanks to Scott for a great first talk, and now it's time to get up and see what else ALM forum has to offer.
-----

I'm going to be spending a fair amount of my time in the Changing Face of Testing track. I've already connected with some old friends and partners in crime. Mark Tomlinson and I are probably going to be doing a fair amount of cross commenting, so don't be surprised if you see a fair amount of Mark in my comments ;).

Jeff Sussna is taking the lead for us testers, talking about how QA is changing and how we need to change along with it. We're leaving industrialism (in many ways) and entering a post-industrial world, where we share not necessarily things, but experiences. We are moving from a number of old paradigms into new ones:

From products to services: locked-in mechanisms are giving way to experiences that speak to us individually. The mobile experience is one of the key places to see this. People who have negative experiences don't live with them; they drop the app and find something else.

From silos to infusion: being an information silo used to give a sense of job security. It doesn't any longer. Being able to interact with multiple organizations and to be adaptable is more valuable than being someone who keeps everything they know under lock and key.

From complicated to complex: complicated is predictable, it's bureaucratic, it's heavy. Complex is fragmented. It's independent, it doesn't necessarily follow the rules, and as such it's harder to control (if control is possible at all).

From efficient to adaptive: efficiency is only efficient when the process is well understood and the expectations are clearly laid out. Disruption kills this, and efficiency gives way when you can't predict what is going to happen. This is why adaptability is more valuable than mere efficiency. Learn how to be adaptive and efficient? Now you've got something ;).

The disruption that we see in our industry is accelerating. Huge leads and leverage that once took years to erode are eroding much faster. Disruption is not just happening, it's happening in far more places. Think about cloud computing. Why is it accelerating as a model? Is it because people are really all that interested in spinning up a bunch of Linux instances? No, not really. The real benefit is that we can create solutions (file sharing, resource options, parallel execution) where we couldn't before. We don't necessarily care about the structure of what makes the solution; we care that we can run our tests in parallel in far less time than it would take to run them on a single machine in serial. Dropbox is genius not because it's in the cloud, it's genius because any file I really care about I can get to anywhere, at any time, on any device, and I can do it with very little physical setup and maintenance (changes delivered in an "absorbable manner").
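
That parallel-execution point is easy to picture in code. Here's a minimal sketch using only the Python standard library; the suite commands are hypothetical placeholders, not anything from the talk:

```python
# Run independent test suites in parallel instead of serially.
# The suite commands below are hypothetical placeholders.
import subprocess
from concurrent.futures import ThreadPoolExecutor

test_suites = [
    ["pytest", "tests/test_login.py"],
    ["pytest", "tests/test_search.py"],
    ["pytest", "tests/test_billing.py"],
]

def run_suite(cmd):
    """Run one suite as a subprocess and return its exit code."""
    return subprocess.run(cmd).returncode

# Fan the suites out across workers (think: cloud instances),
# rather than running them one after another on a single box.
with ThreadPoolExecutor(max_workers=len(test_suites)) as pool:
    results = list(pool.map(run_suite, test_suites))

print("all green" if all(code == 0 for code in results) else "failures found")
```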


Think of Netflix and their "chaos monkey". They go in and turn instances off. They deliberately break stuff. They want to see what they might be able to find. "I don't always test my code, but when I do, I do it in production." That's supposed to be a joke, but believe it or not, there is a great benefit to testing in production. This is why I am very invested in using my company's product on their production servers, and looking at issues based on workflows I depend upon.

So what does this all mean for testers and testing? Does this mean that our role is being usurped? No, but it does mean our role is changing. Instead of having to babysit machines and be the isolated gatekeeper, we can test more intelligently and with a greater sense of adventure. We can also emphasize that testing goes beyond just performing scripted steps. We can also test more than just the code that we receive, when we receive it. We can test requirements. We can provoke questions. More to the point, we can be a feedback loop to the organization. If an organization believes in being truly adaptive, then it is, effectively, an environment that is friendly to QA.

Mark and I had a little fun considering some of the ramifications as presented, and since Mark said he has some debatable comments he'll be sharing in his talk, I'm going to hold off and not comment until then (stay tuned for further details ;) ).  Suffice it to say, testers are notorious for not necessarily agreeing across the board. That's also part of testing. If we agreed 100%, I'd be deeply worried about the state of our profession.

Testing covers a lot of areas. User testing validates usability. Unit tests can cover code functionality. But there's a lot of space in between those two areas that get so much attention, and there are lots of "ilities" in that space we need to be paying attention to.

Retros are a good opportunity to see what went well and what could go better, but the technique only works when it's done frequently enough and the feedback is substantive.

What we definitely need to get away from is "Discontinuous Quality". Let's stop talking about QA wagging the dog. Let's not save testing until the end, where we find problems and tell people about them, only to be told that we are the bottleneck stopping the organization from releasing. Instead, let's get to the party earlier. Let's check out ideas earlier. Let's understand what we are able to contribute, and contribute in as many places as we can. Ultimately, we are not delivering functionality, we are delivering the ability to help accomplish goals and objectives. How we do it is not nearly as important as the fact that we actually do it, and do it in a way that is both effective and adaptable.

For me, the most obvious thing I can change to help this is the term "QA". I do my best not to use that term at all if I can get away with it. If I'm asked if I'm in QA, I always answer "yes, I'm a software tester". We have to get out of the business of assuring quality, because we really can't do that. We can inform, we can evangelize, we can enlighten, but we really can't assure anything. What we can do is test, and weave a compelling story. Ultimately, the story is the most important thing we can deliver, as it's the narrative that really determines whether a solution goes out or doesn't.

-----

Ken Johnston (@rkjohnston) is talking about EaaSY, or "Everything as a Service, Yes!". Ken wants to help us see what the role of testing actually is. It's not really about quality assurance, but more about risk assessment and management. I agree with this, in the sense that, in the old school environments I used to work in, especially when I worked for a game publisher, when a bug shipped to production, unless it was particularly egregious, it was eternal. In the services world, and the services model, since software is much more pliable and much more manageable, there's no such thing as a "dated ship". We can update all the time, and with that, problems can be addressed much more quickly. With this model, we are less forced into slotted times. We can ship a bug fix the same day. We can release a new feature in a week where it used to take a quarter or a year.

EaaSY covers a number of parts that need to be in place to be effective:

Componentization: break out as much of the functionality from external dependencies as possible.

Continuous Delivery: requires continuous stability. It needs a targeted set of tests and an atomic level of development, and it is likely best suited to areas that can be deployed/fixed with a low number of people impacted by the change. The more mission-critical the area, the less likely a Continuous Delivery model will be the desired approach; not impossible, but probably not the best focus (IMO).

User Segmentation: when we think of how to deploy to users, we can use a number of methods. We can create concentric rings, with the smallest ring being the most risk-tolerant users, expanding out to larger sets of users; the farther out we get, the more risk-averse the users. Additionally, we can use tools like A/B testing to see how two groups of people react to a change as structured one way or another (structure A vs. structure B). This is a way to put a change into production, but have only a small group of people see it and react to it.
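
To make the concentric-ring and A/B ideas concrete, here's a small sketch of how deterministic user bucketing is often done; the ring names and percentages are my own illustration, not from Ken's talk:

```python
# Deterministic user bucketing: hash a stable user id so the same
# user always lands in the same ring. Names/percentages are illustrative.
import hashlib

RINGS = [
    ("ring0-insiders", 1),    # most risk-tolerant users (first 1%)
    ("ring1-beta", 10),       # next, broader group
    ("ring2-everyone", 100),  # most risk-averse users see it last
]

def bucket(key: str, modulus: int) -> int:
    """Stable 0..modulus-1 bucket derived from a SHA-256 hash."""
    return int(hashlib.sha256(key.encode()).hexdigest(), 16) % modulus

def ring_for(user_id: str) -> str:
    b = bucket(user_id, 100)
    for name, cutoff in RINGS:
        if b < cutoff:
            return name
    return RINGS[-1][0]

def ab_variant(user_id: str, experiment: str) -> str:
    """A/B split: salting the hash with the experiment name keeps
    experiments independent of each other."""
    return "A" if bucket(f"{experiment}:{user_id}", 2) == 0 else "B"

print(ring_for("user-12345"), ab_variant("user-12345", "new-layout"))
```

Because the bucket comes from a stable hash of the user id, the same user always lands in the same ring, which keeps the experience consistent between visits.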

Runtime Flags: layers can be updated independently. We can fork traffic through the production path: at key areas, data can be forked and routed through a different setup, and then rejoin the production flow (this is pretty cool, actually :) ). Additionally, code can be pushed, but "pushed dark", meaning it can be put in place and turned on at a later time.
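
Here's a toy sketch of the traffic-forking and "pushed dark" ideas; the flag store is just a dict, and all the function names are my own illustration:

```python
# "Pushed dark" code behind a runtime flag, plus forked (shadow) traffic.
# The flag store is a dict here; in practice it would be a config service
# you can flip without redeploying. All names are illustrative.
FLAGS = {"fork_checkout": True}

def legacy_checkout(cart):           # existing production path
    return {"total": sum(cart)}

def new_checkout(cart):              # new code: deployed, but dark
    return {"total": sum(cart)}

def checkout(cart):
    result = legacy_checkout(cart)   # the user is always served by prod
    if FLAGS["fork_checkout"]:       # fork a copy of the traffic...
        try:
            shadow = new_checkout(cart)
            if shadow != result:     # ...compare, then discard the shadow
                print("shadow mismatch:", shadow, "vs", result)
        except Exception as exc:     # shadow failures never reach the user
            print("shadow path failed:", exc)
    return result

print(checkout([10, 20, 5]))
```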

Big Data: five "Vs" (Volume, Variety, Velocity, Verification, Value). These need to be considered for any data-driven project. The better each of these is, the more likely we will be successful in utilizing big data solutions.

Minimum Viable Product: Ken called up Seth Eliot's "Big Up Front Testing" (BUFT) and said "say no to BUFT". With a minimum viable product, we need to scale our testing to the point where we have an MVP and testing appropriate to the scale of that MVP. Additionally, there are options where we can test in production (not at full scale, of course).

Overall, this was a very interesting approach and set of ideas. Many of the ideas and approaches described sound very similar to activities we are already doing at Socialtext, but it also shows me areas where we can do better.

-----

James Whittaker (@docjamesw) is doing the next plenary session, called "A Future Worth Wanting". First we start with our own devices and our own apps; we own them, they're ours, but they aren't particularly useful if they don't connect to a data source somewhere (call it the web and the cloud for simplicity). James is making the point that there's a fair amount of stuff in between that we are not including. The web browser is one of these middle points. The app store is another. We know what to do and how to do it, and we don't give it much thought. Could we be doing better?

Imagine getting an email about an event, then having to research where the event is, how much the tickets are, and how to handle the transaction. What if the email itself exposed those details as "entities", and we could use those entities to find information and perform transactions based on them? Frankly, this would be cool :).

What about our calendar? We are planning to do something, some kind of activity that we need to be time-focused for. What do we naturally do? We jump to a browser and go figure out what we need. What if our calendar could use those entity relationships and do the search for us, or better yet, return what has already been searched for based on the calendar parameters? Think of writing code: wouldn't it be cool to find a library that could expand on what you are doing, or do what you are hoping to do?

The idea here is to be able to track "entities" to "intents", and execute those intents. Think about being able to call up a fact-checking app in PowerPoint and, based on what you type, get a return specific to your text entry. Again, very interesting. The key takeaway is that our apps, our tools, our information needs are getting tailored to exactly the data we want, from the section of the web or cloud that we actually need.
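
As a toy illustration of tracking entities to intents (entirely my own sketch, not anything James showed):

```python
# Toy entity-to-intent dispatch: pull an entity out of text, look up a
# handler for the intent, and execute it. Entirely illustrative.
import re

def buy_tickets(event):
    return f"searching ticket vendors for {event}..."

def schedule_event(event):
    return f"adding {event} to the calendar..."

INTENT_HANDLERS = {"buy": buy_tickets, "schedule": schedule_event}

def handle(text):
    match = re.search(r"\b(buy|schedule)\s+(.+)", text, re.IGNORECASE)
    if not match:
        return "no intent recognized"
    intent, entity = match.group(1).lower(), match.group(2)
    return INTENT_HANDLERS[intent](entity)

print(handle("Buy Pearl Jam tickets for Friday"))
```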

This isn't a new concept, really. This is the concept of "agents" that's been talked about for almost two decades. The goal we want is to be able to have our devices, our apps, our services, etc, be able to communicate with each other and tell us what we need to know when we need to know it. It's always been seen as a bit of a pipe dream, but every week it seems like we are getting to see and know more examples that make that pipe dream look a little less far fetched.

Goals we want to aim for:

- Stop losing the stuff we've already found
- Localize the data and localize the monetization
- Apps can understand intent, and if they don't, they should. Wouldn't it be great if based on a search or goal, we can download the appropriate apps directly?
- Make it about me, not my device


Overall, these are all cool ideas, and yes, these are ideas I can get behind (a bit less branding, but I like the sentiment ;) ).

-----

Alexander Podelko (@apodelko) wants us to see a "bigger picture" when it comes to load testing. There's a lot of terminology that goes into load testing, and the terms are often used interchangeably, but not always. The most common image we have of load testing (and yes, I've lived this personally) is the last minute before deployment: we put some synthetic tests together in our lab, try to run a bunch of connections, see what happens, then call it a day and push to production. As you might guess, hilarity ensues.
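
For flavor, here's roughly what that last-minute "run a bunch of connections" test looks like as code; a minimal sketch, with the target URL and numbers as placeholders:

```python
# The naive, last-minute load test: hammer one URL with N concurrent
# workers and eyeball the timings. URL and numbers are placeholders.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://staging.example.com/"   # placeholder target
TOTAL_REQUESTS = 100
CONCURRENCY = 20

def hit(_):
    start = time.time()
    with urllib.request.urlopen(URL, timeout=10) as resp:
        resp.read()
    return time.time() - start

with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    timings = list(pool.map(hit, range(TOTAL_REQUESTS)))

print(f"avg {sum(timings)/len(timings):.3f}s, worst {max(timings):.3f}s")
```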

The problem with this approach is not just the lateness, or the inability to really match our environment, but that we miss a lot of stuff. There are a lot of options in load testing that can give us a broader picture (as the talk suggests). Another issue is that each load testing tool has limitations to what it can cover, as well as differences in the robustness that the various tools provide (as you might guess, JMeter does not solve every load testing problem... I know, contain your shock and dismay ;) ).

As Alexander points out quite appropriately, web sites were simple for only a very brief window of time. They are expanding to be more complex and less controllable through the standard, simple tools that would cover everything in one place. There are a variety of tools that can be used, ranging from open source to commercial. The more complicated the system, the less likely one tool will be able to answer all the needs.

Overall, load testing looks to have some of the broadest challenges for the systems meant to be tested, at least if we want to create load that is not completely synthetic and generally meaningless. Making load tests that are complex, heterogeneous, and indicative of real-world traffic is possible, but the more unique and real-world the traffic you wish to emulate, the more difficult it is to actually provide that simulated traffic.


-----

In the mid afternoon, they held a number of Birds of a Feather sessions to provide some more interactive conversations, and one of them was specifically about how to use Git. Granted, I'm pretty familiar with Git, but I always appreciate seeing how people use it and seeing different ways to use it that I may not have considered.

One of the tools they used for the demonstration was "Learn Git Branching", which displays a graphical representation of a variety of commits, and shows what commands actually do when they are run (git commit, git merge, git rebase, etc.).

-----

The last session of the day is being delivered courtesy of Allan Wagner, and the focus is on continuous testing, or why we would want to consider doing continuous testing. Labor costs are getting higher even with outsourcing options considered, test lab complexity is increasing, and the amount of testing required keeps growing and growing. OK, so let's suppose that Continuous Testing is the approach you want to go with (I hope it's not the only approach, but cool, I can go with it for this paradigm). Where do you start?

For testers to be able to do continuous testing, they need:

- production-like test environments (realistic and complete)
- automated tests that can run unattended
- orchestration from build to production which is reliable, repeatable and traceable

One very good question to ask is "how much time do you spend doing repetitive set-up and tear-down of your test environments?" In my own environment, we have gotten considerably better in this area, but we still spend a fair amount of time setting up our test environments. I'm not entirely sure that, even with service virtualization, there would be a tremendous amount of time saved for doing spot visual testing. While I do feel that having automated tests is important, I do not buy into the idea that automated-only testing is a good idea. It certainly is a big plus and a necessary methodology for unit tests, but beyond that, trying to automate all of the tests seems to fall under the law of diminishing returns. I don't think that's what Allan is suggesting, but I'm adding my commentary just the same ;).
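
On the set-up/tear-down question, this is the kind of automation that helps. A small sketch using a pytest fixture, where the environment object is a hypothetical stand-in for whatever your lab provisioning actually looks like:

```python
# Automating repetitive set-up/tear-down with a pytest fixture. The
# Environment class is a hypothetical stand-in for real provisioning.
import pytest

class Environment:
    def start(self):
        print("provisioning test environment...")  # e.g., spin up services
    def stop(self):
        print("tearing the environment back down...")

@pytest.fixture
def env():
    environment = Environment()
    environment.start()   # set-up happens automatically, per test
    yield environment
    environment.stop()    # tear-down runs even if the test fails

def test_placeholder(env):
    assert env is not None  # stand-in for a real check against the env
```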

Service Virtualization looks to create, as its name describes, the ability to make elements that are unavailable available for testing. It relies on mocks and stubs, where you can simulate the transactions rather than try to configure big-data hardware or front-end components that don't yet exist for our applications.

Virtual Components need to fit certain parameters. They need to be simple, non-deterministic, data-driven, using a stateful data model, and have functionality where we can easily determine their behavioral aspects.

The key idea is that, as development continues, the virtual components are replaced with the real components, and we can start looking at additional pieces of later functionality. In other cases, the virtualized components may simulate a third-party service that would be too expensive to have as a regular part of the development environment.
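
A sketch of that swap in miniature; the payment service here is my own illustration, not an example from Allan's talk:

```python
# The application codes to an interface; a cheap virtual component stands
# in early, and the real service is substituted once it exists or becomes
# affordable to run. All names are illustrative.
class VirtualPaymentService:
    """Simulated third-party service: canned, data-driven responses."""
    def charge(self, amount):
        return {"status": "approved", "amount": amount}

class RealPaymentService:
    """The real integration, swapped in later in development."""
    def charge(self, amount):
        raise NotImplementedError("real third-party call goes here")

def place_order(payment_service, amount):
    # The caller neither knows nor cares which implementation it got.
    return payment_service.charge(amount)

# Early in development, tests exercise the virtual component:
print(place_order(VirtualPaymentService(), 42.00))
```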

Allan made the point in his talk that Continuous Testing is not intended to be the be-all and end-all of your testing; it is meant to be a way to perform automated testing as early and as focused as possible, so that the drudge work of set-up, tear-down, configuration changes and all of the other time-consuming steps can be automated as much as possible. This frees the thinking testers to do the work that really matters, which is to perform exploratory testing and genuinely think. That's always a positive outcome :).

-----

From here, it's a reception, some drinks, and some milling about, not to mention dinner and chilling with the attendees. I'll call this a day at this point and let you all take a break from these updates, at least for today. Tomorrow, I'm going to combine two events: I'll be taking part in SEALEAN (a Lean Coffee event) and then picking up with the ALM Forum conference again after that. Have a good night, testing friends, see you tomorrow morning :).

End of Entry: 04/01/2014: 05:20 p.m. PDT
