Sunday, November 28, 2010

BOOK CLUB: How We Test Software at Microsoft (3/16)

Well, I overestimated the amount of time I'd have to put this together (traveling for the holidays left me little time to write up this review, and for a relatively short chapter, it's rather meaty :) ), so this is appearing a little later than intended.

Chapter 3. Engineering Life Cycles

In this chapter, Alan compares engineering methodologies to cooking. The ideas that work for a single person need to be modified for a family, and what works for a family requires more specific discipline when cooking for 100 (the measurements for ingredients need to be more precise). Software is the same: the conditions that work or are acceptable for a small group require a different approach when building for a large audience.


Software Engineering at Microsoft

Microsoft doesn't use just one model when creating software. Time to market, new innovation, or unseating a competitor may each call for a different approach. Testers need to understand the differences between common engineering models, and how to work within each, to be successful.



Waterfall Model

The Waterfall Model is an approach to software development where the end of one phase coincides with the beginning of the next: requirements flows into program design, which flows into implementation/coding, which flows into testing, and then into maintenance. One advantage is that when you begin a phase, ideally, each previous phase is complete. Another benefit is that it requires design to be completed before coding starts. A disadvantage is that it doesn't really allow phases to repeat: if issues are found in testing, going back to the design stage can be difficult if not impossible. That's not how its originator, Winston Royce, planned it.

"An interesting point about waterfall is that the inventor, Winston Royce, intended for waterfall to be an iterative process. Royce’s original paper on the model discusses the need to iterate at least twice and use the information learned during the early iterations to influence later iterations. Waterfall was invented to improve on the stage-based model in use for decades by recognizing feedback loops between stages and providing guidelines to minimize the impact of rework. Nevertheless, waterfall has become somewhat of a ridiculed process among many software engineers—especially among Agile proponents. In many circles of software engineering, waterfall is a term used to describe any engineering system with strict processes." - Alan Page

In many ways, the waterfall model is what was used in the early part of my career. I don't think I've ever been on a project where true and strict adherence to the waterfall model was practiced. We used a variation that I jokingly refer to as the "waterfall/whirlpool". Usually we would go through several iterations of the coding and testing to make sure that we got the system right (or as close to right as we could make it) so that we could ship the product.


Spiral Model

In 1988, Barry Boehm proposed a model based on the idea of a spiral. Spiral development is iterative, and contains four main phases: determine objectives, evaluate risks, engineering, and planning the next iteration.


• Determine objectives: Focus on and define the deliverables for the current phase of the project.


• Evaluate risks: What risks do we face, such as delays or cost overruns, and how can we minimize or avoid them completely?


• Engineering: The actual heavy lifting (requirements, design, coding, testing).


• Planning: Review the project, and make plans for the next round.

This is more akin to what I do today. It sounds almost agile-like, and it's driven mostly by customers who need something special and our willingness to listen and provide it. It's not as rigid as a true waterfall project, but it doesn't quite have the hallmarks of an agile team. Still, it's a functional approach and it can work well with small teams.


Agile Methodologies

Agile methodologies are widely used today. There are many different approaches to Agile, but the following attributes tend to be present in all of them:


  • Multiple, short iterations: Deliver working software frequently through "sprints".
  • Emphasis on face-to-face communication and collaboration: Fewer walls, more direct communication and sharing of effort.
  • Adaptability to changing requirements: When the requirements need to change, the short iterations make it possible for quick adjustment and the ability to include small changes in each sprint.
  • Quality ownership throughout the product: An emphasis on test-driven development (TDD) and pervasive unit testing, so that developers write specific tests to ensure that their code does what they claim it does (see the sketch below).



The key idea behind Agile is that the development teams can quickly change direction if necessary. With a focus on "always working" code, each change can be made in small increments and the effects can be known rapidly. The goal of Agile is to do a little at a time rather than everything at once.
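
To make the TDD bullet above concrete, here's a minimal sketch of a test-first unit test using Python's built-in unittest module. The apply_discount function and its behavior are hypothetical, invented purely for illustration; in TDD, the tests below would be written first and the function then written to make them pass.

import unittest


def apply_discount(price: float, percent: float) -> float:
    """Hypothetical function under test: reduce price by percent."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


class ApplyDiscountTests(unittest.TestCase):
    # In TDD, these specific tests document what the code claims to do.

    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(19.99, 0), 19.99)

    def test_out_of_range_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(50.0, 150)


if __name__ == "__main__":
    unittest.main()

Because every change ships with tests like these, the code stays "always working": any small increment that breaks an earlier claim fails immediately.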


This is the direction my company is looking to head for future product development, but we still have a scattering of legacy product code that won't be able to be fully integrated into agile practices. We have done a couple of iterations with our most recent products using an agile approach, and so far it looks promising.


Other Models

There are dozens of models of software development. There isn't a single best model, but understanding the model in use, and creating software within its bounds, can help you create a quality product.

Milestones

The milestone schedule establishes the timeline for the project and the key dates when project deliverables are due. The milestone model makes clear that specific, predefined criteria must be met. The criteria typically include items such as the following:


  • "Code complete" on key functionality: all functionality is in place, if not fully tested
  • Interim test goals accomplished: Verify code coverage or test case coverage goals are met.
  • Bug goals met: We determine that there are no catastrophic bugs in the system.
  • Nonfunctional goals met: Perhaps better stated as "para-functional" testing, where such things as usability, performance, load and human factors testing have been completed.
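
As referenced in the interim test goals item above, a coverage gate can be checked mechanically. The following is a minimal sketch, not Microsoft tooling: the 80 percent target and the JSON report format are assumptions invented for illustration.

import json
import sys

COVERAGE_TARGET = 80.0  # assumed threshold; real goals are set per milestone


def coverage_gate_met(report_path: str) -> bool:
    """Compare measured line coverage against the milestone target.

    Assumes a report like {"line_coverage": 83.4}; a real report
    (from coverage.py, a build lab, etc.) would differ in shape.
    """
    with open(report_path) as f:
        report = json.load(f)
    actual = report["line_coverage"]
    print(f"line coverage: {actual:.1f}% (target {COVERAGE_TARGET:.1f}%)")
    return actual >= COVERAGE_TARGET


if __name__ == "__main__":
    # Exit nonzero so a build script can block the milestone on failure.
    sys.exit(0 if coverage_gate_met("coverage.json") else 1)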



Milestones give developers and testers a chance to ask questions about the product and determine how close to "go" they really are. Milestone releases should be an opportunity to evaluate an entire product, not just standalone pieces.


The Quality Milestone

This is a side story Alan uses to talk about a topic often referred to as "technical debt," and he points to Matt Heusser and his writings on the subject (xndev.blogspot.com). The key takeaway is what happens when a large number of bugs is deferred until the next release, or shortcuts are taken to get something out that works but doesn't fulfill 100% of what it should. That shortcut will surely come back and rear its ugly head in the form of technical debt, which means you are betting on your organization's "future self" to fix the issues, much the way individuals expect their "future selves" to pay for their car, student loans, or credit card balances. Technical debt and consumer debt share a very big danger... just how reliable is your future self? We all love to believe we will be better able to deal with the issues in the future, but how often has that proven to truly be the case? Alan argues that the Quality Milestone is a midway point between dealing with everything now and putting yourself or your organization at the mercy of the "future self."


Agile at Microsoft and Feature Crews

Alan states that Agile methodologies are popular within Microsoft, and that their popularity is growing. Agile, however, is best suited to smaller teams (around 10 people or so), while Microsoft has a number of large initiatives with thousands of developers. To meet this challenge, Microsoft scales Agile practices to large teams by using what it calls "feature crews".

A feature crew is designed to meet the following goals:


  • It is independent enough to define its own approach and methods.
  • It can drive a component through definition, development, testing, and integration to a point that shows value to the customer.


As an example, for the Office 2007 project, there were more than 3,000 feature crews.

The feature crew writes the necessary code, publishes private releases, tests, and iterates while the issues are fresh. When the team meets the goals of the quality gates, it migrates its code to the main product source branch and moves on to the next feature. Code grows into functions, functions grow to become features, and features grow to become a project. Projects have defined start and end dates with milestone checkpoints along the way. At the top level, groups of related projects become a product line. Microsoft Windows is a product line; Windows 7 is a project within that product line, with hundreds of features making up the project.
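
The chapter doesn't show what a quality gate looks like in code, but the shape of the idea is simple: a set of named pass/fail checks that must all succeed before a crew's code moves to the main branch. This sketch is only an illustration; the gate names and status fields are invented, not Microsoft's actual gates.

def tests_pass(status: dict) -> bool:
    return status["failing_tests"] == 0


def no_blocking_bugs(status: dict) -> bool:
    return status["blocking_bugs"] == 0


def code_reviewed(status: dict) -> bool:
    return status["unreviewed_changes"] == 0


QUALITY_GATES = [tests_pass, no_blocking_bugs, code_reviewed]


def ready_to_integrate(status: dict) -> bool:
    """A feature crew integrates only when every gate passes."""
    failed = [gate.__name__ for gate in QUALITY_GATES if not gate(status)]
    for name in failed:
        print(f"gate failed: {name}")
    return not failed


# Example: one blocking bug is enough to hold the feature back from main.
print(ready_to_integrate(
    {"failing_tests": 0, "blocking_bugs": 1, "unreviewed_changes": 0}))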


Process Improvement

Most testers at one point or another come into contact with Dr. W. Edwards Deming's PDCA cycle. PDCA stands for "Plan, Do, Check, Act":


  • Plan: Plan ahead, analyze, establish processes, predict results.
  • Do: Execute on the plan and processes.
  • Check: Analyze the results (note that Deming later changed the name of this stage to "Study" to be more clear).
  • Act: Review all steps and take action to improve the process.


Simple enough, right? Common sense, you might say. Perhaps, but simple can be very powerful if implemented correctly. Alan uses the example of issues found by testers late in the cycle that could have been caught with better code reviews. The group plans a process around code reviews, then performs those reviews during the next milestone. Over the course of the next round of testing, the group monitors the issue tracker to see whether the changes have made an impact on the issues reported in the later stages of testing. Finally, they review the entire process, metrics, and results to see if the changes made enough of a difference to become standard practice.
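
The "Check" (or "Study") step in that example boils down to a before-and-after comparison. Here's a tiny sketch of what that measurement might look like; the bug counts are invented, and in practice they would come from the issue tracker.

# Hypothetical data: bugs found late in the cycle, before and after
# code reviews were introduced during the milestone.
bugs_found_late = {
    "milestone_before_reviews": 42,
    "milestone_after_reviews": 17,
}

before = bugs_found_late["milestone_before_reviews"]
after = bugs_found_late["milestone_after_reviews"]
reduction = (before - after) / before * 100

print(f"late-cycle bugs: {before} -> {after} ({reduction:.0f}% reduction)")
# The "Act" step then decides whether a reduction like this justifies
# making code reviews standard practice.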

Microsoft and ISO 9000

Oh Alan, I feel for you! As a young tester, I watched Cisco spend a few years standardizing everything to meet ISO 9000 requirements, and honestly, I wonder if it was wholly worth the investment. Granted, Cisco is one of the biggest tech companies in the world, and at the time it was going through this certification process, it was doubling every year if not more. Still, in many ways, I think the compliance was a trade-off that hampered Cisco's then-legendary ability to innovate in a nimble and rapid fashion, and testing became a very heavy, process-oriented affair. I can't speak for their current processes, as I've been away from them for a decade now, but I think they managed to find ways to meet the letter of the ISO 9000 certifications and still do what was necessary to be innovative and adapt as needed. From Alan's description, it sounds like Microsoft has done the same thing.


Shipping Software from the War Room

Is a product ready to be released? Does the product meet requirements? At Microsoft, these decisions are analyzed in the "War Room." This seems to be a pretty common metaphor; it's a phrase I've heard at just about every company I've worked with, so it's not just a Microsoft phenomenon. The idea is that a "war team" meets throughout the product cycle and examines product quality during the release cycle. The war team decides which features get approved or cut, which bugs get fixed or punted, whether a team or teams need more people or resources, and whether to stick to or move the release date. At Microsoft, the war team is typically made up of one representative from each area of the product.

Alan uses the following suggestions to make the most of any war room meeting:


  • Ensure that the right people are in the room. Missing representation is bad, but too many people can be just as bad.
  • Don’t try to solve every problem in the meeting. If an issue comes up that needs more investigation, assign it to someone for follow-up and move on.
  • Clearly identify action items, owners, and due dates.
  • Have clear issue tracking—and address issues consistently. Over time, people will anticipate the flow and be more prepared.
  • Be clear about what you want. Most ship rooms are focused and crisp. Some want to be more collaborative. Make sure everyone is on the same page. If you want it short and sweet, don’t let discussions go into design questions, and if it’s more informal, don’t try to cut people off.
  • Focus on the facts rather than speculation. Words like "I think," "It might," "It could" are red flags. Status is like pregnancy—you either are or you aren’t; there’s no in between.
  • Everyone’s voice is important. A phrase heard in many war rooms is "Don’t listen to the HiPPO"—where HiPPO is an acronym for highest-paid person’s opinion.
  • Set up exit criteria at the beginning of the milestone, and hold to them. Set the expectation that quality goals are to be adhered to.
  • One person runs the meeting and keeps it moving in an orderly manner.
  • It’s OK to have fun.


The next installment will deal with Chapter 4, and that will be posted on Tuesday.
