Wednesday, April 6, 2016

Continuous Testing: Live from #STPCON

Continuous Testing is one of the holy grails of deployment. We've got the continuous integration piece down, for the most part. Continuous testing, of course, fits into that paradigm; the integration piece isn't helpful if changes break what's already there and working. From my own experience, the dream of push-button build, test, and deploy feels close at hand, but somehow there's enough variance that we don't quite get there 100%. That's at an organization that deploys a few times a week. Now imagine the challenge if you are at an organization that deploys multiple times every day.

Neil Manvar is describing his time at Yahoo! working on their mail tool, and some of the challenges they faced getting the automated testing pieces to harmonize with the integration and deployment steps. One of the ways they dealt with moving from a more traditional waterfall development approach to an Agile implementation was to emphasize more manual testing, more often. Additionally, the development team aimed to help with the testing itself. A plus in that there were more people testing, but a minus in that programmers weren't programming while they were testing. Later, the brute-force release approach became too costly to continue, so the next step was to set up a CI server running Selenium tests against an internal Grid, with Jenkins coordinating the build and test steps. Can you guess where we might be going from here ;)?
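The talk didn't get into implementation details, but for anyone unfamiliar with the pieces involved, here's a minimal sketch of the kind of test a CI job would run against a Selenium Grid. The hub address, application URL, and element id are hypothetical placeholders, not Yahoo's actual setup.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

GRID_HUB = "http://selenium-hub.internal:4444/wd/hub"  # hypothetical hub address

def test_inbox_is_visible():
    # A CI build step (e.g., driven by Jenkins) would run this test on every
    # build and fail the build if the assertion doesn't hold.
    options = webdriver.FirefoxOptions()
    driver = webdriver.Remote(command_executor=GRID_HUB, options=options)
    try:
        driver.get("https://mail.example.com")  # placeholder application URL
        assert driver.find_element(By.ID, "inbox").is_displayed()
    finally:
        driver.quit()
```

The point of the Grid is that the test itself doesn't care which machine or browser it lands on; the hub farms the session out to whatever node is available.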

Yep: unreliable tests, the need to rework and maintain brittle tests, limited testability baked into the product, etc. Thus, while the automation was a step towards the goal, there was still so much new feature work happening that the automation efforts couldn't keep up (hence a reliance on even more manual testing). A plus from this was that the test teams and the development teams started talking to each other about ways of writing robust and reliable tests, and about providing the infrastructure and tooling to make that happen.
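One common source of that brittleness, and the kind of thing those conversations tend to surface, is timing: tests that sleep for a fixed interval and hope the page is ready. As a rough sketch (the locator here is a made-up example), an explicit wait polls for a condition instead, which cuts down on flakiness:

```python
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

def wait_for_inbox(driver, timeout=10):
    # Brittle version: time.sleep(5) and hope the element has rendered.
    # More robust version: poll until the element is actually visible,
    # failing with a TimeoutException only after `timeout` seconds.
    return WebDriverWait(driver, timeout).until(
        EC.visibility_of_element_located((By.ID, "inbox"))  # hypothetical locator
    )
```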

This led to a focus on continuous delivery, rapid iteration, and allowing for the time to develop and deploy the needed automated tests. In addition, the new management mandate was regular delivery of software, not over a period of weeks, but multiple deploys per day. What helped considerably was that senior management gave the teams the time and bandwidth to implement the automation, add health checks and maintenance steps, iron out the deployment process, and ultimately enforce discipline around pull requests and merges, so that the deployment pipeline (build, test, and deploy) would be as hands-off as possible. Updating incrementally also improved the stability of the product considerably (which I can totally appreciate :) ).
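The talk mentioned health checks as one ingredient of that hands-off pipeline without going into specifics, but a minimal sketch of a post-deploy check, with a hypothetical endpoint URL, could be as simple as this:

```python
import sys
import urllib.request

HEALTH_URL = "https://mail.example.com/health"  # hypothetical health endpoint

def check_health(url: str, timeout: float = 5.0) -> bool:
    # True only if the endpoint answers with HTTP 200 within the timeout.
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

if __name__ == "__main__":
    # A pipeline stage could run this right after a deploy and halt the
    # rollout (or page someone) on a non-zero exit code.
    sys.exit(0 if check_health(HEALTH_URL) else 1)
```

The value isn't in the script itself so much as in wiring it into the pipeline, so that a bad deploy stops itself instead of waiting for a human to notice.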

Ok, so this is all interesting, but what were the long-term effects of making these changes? Ultimately, it allowed QA to expand their skill set and took the busywork (or a large amount of it) off their plates so they could focus on more interesting problems. Developers were able to emphasize development, including unit tests and more iterative improvement. Accountability was easier to implement; it became much clearer where issues were introduced, and by whom. Additionally, a new standard of quality was established: features were delivered more quickly, uptime improved, and overall satisfaction with the product improved (and with it, revenue, which is always a nice plus ;) ).
