Sure, we've all heard about the principle of the Three Amigos meeting. Many of us may actually participate regularly in those meetings (it's a core part of every story kickoff where I work), but do we really utilize these meetings in the best way? When I saw that George Dinwiddie and Stephan Kämper were presenting a workshop on this topic, I had a good feeling that this would be time well spent.
What do the Three Amigos mean to you? In my organization, the three amigos are fairly well and consistently represented: a programmer, a product owner and a tester. Each kickoff includes a review of the story in question, where we voice questions and concerns. It's a time to talk about basic implementation, but not specifics. It's a time to ask testability questions, and to see what assumptions have and have not been made. After we have all had our say, we consider the story kicked and away we go. That's fairly typical for us. It's worked pretty well, but could we do better?
Though I think we have done a pretty good job articulating the stories and their details, we might be more prone to rubber stamping when we get the same people together too often. I'm not saying this is deliberate or in some way nefarious, but it's a natural by-product of how we work together and get to know how each other works. Occasionally, we can all miss something because we've gotten a little too comfortable with the process.
Several questions to consider when it comes to the Three Amigos approach are as follows:
- What might happen if you have fewer or many more than three participants?
- What is different about the viewpoints of the business, the programmer and the tester?
- What other roles/points of view might be useful, and why?
- What could you do if one of the roles was not available? What are the dangers to avoid?
- How should decisions be made? Based on merit? Democratic? Other? Why?
- Should the Amigos document something? Why or why not? What? How?
One of the more telling exercises we shared involved communicating requirements "blind". Three easels were set up, one with a diagram of the requirements (an overlapping triangle, square and circle). The goal was for one participant to describe the shapes and their placement, communicating the requirements effectively enough for the others to reproduce them. As you might guess, no one ended up with the same image. When all was said and done, the lesson was clear: communicating requirements is hard.
We are on a break at the moment, so more when I get back (this is the first part of a two-parter :) ).
In the second part of the hour, we are looking at a Global Parking example. This is significantly more involved than the simple example we tried before the break. As we worked through the document and discussed its requirements and semantics, we started considering a broad range of questions. Among them were:
- What was our Shared Understanding?
- Did we use a Common Vocabulary? (short answer: no ;) )
- Could we describe the rationale of our decisions?
- What are the "rules" of the system?
- Do we have examples that illustrate those rules?
- What assumptions do we have based on these discussions? What questions developed from reviewing the document?
It's been interesting to me to see how many of the interactions with my team in similar situations were much smoother because we had a shared set of tacit knowledge. I could imagine our meetings being much more involved if we were to bring someone in cold to one of the roles, someone who had never been part of the process, precisely because they don't have the shared history of the team to draw upon. That's a powerful undercurrent, and it should not be underestimated. This exercise took significant time to clarify meaning and make sure we understood each other and our assumptions. I can see that friction being much lower in teams that are more in sync with one another.
One thing that was very helpful is the idea of "Example Mapping", where we took the basis of the story, determined the rules behind the story requirements, and created individual examples that mapped to each of the rules. By doing this, we were able to block out the needs of the app, and surface questions that developed as we blocked out the examples and mapped them to the rules. This is a technique I have never used before, or at least not in this manner, and it looks like something that would be tremendously helpful as preparation for a Three Amigos meeting. As a tester, it could be very helpful for confirming whether the rules are essential, or whether some things are out of scope. If you notice you have a lot of examples, maybe the story is too big. Could you split the story into one story for each rule? How about one story for each example?
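To make the shape of Example Mapping concrete, here's a minimal sketch of the story/rules/examples/questions structure. The parking story, rules, examples and question below are all invented for illustration (the workshop's actual Global Parking document isn't reproduced here), and the "too many examples" threshold is just a placeholder heuristic.

```python
# A toy Example Map: one story, rules (blue cards), examples that
# illustrate each rule (green cards), and open questions (red cards).
# All specifics here are hypothetical, not the workshop's document.

story = "Charge drivers for parking"

example_map = {
    "First 30 minutes are free": [
        "Park 20 minutes -> pay nothing",
        "Park 31 minutes -> pay for the stay",
    ],
    "Lost ticket pays the daily maximum": [
        "Lose ticket after 1 hour -> pay daily max",
    ],
}

questions = ["Does the free period apply on holidays?"]

# Heuristic from the workshop discussion: lots of examples may mean
# the story is too big and could be split (one story per rule?).
total_examples = sum(len(ex) for ex in example_map.values())
if total_examples > 6:
    print(f"'{story}' may be too big: {total_examples} examples")
else:
    print(f"'{story}': {len(example_map)} rules, "
          f"{total_examples} examples, {len(questions)} open questions")
```

The point isn't the code itself; it's that every rule should earn at least one example, and anything nobody can write an example for becomes a question card for the Three Amigos.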
This Example Mapping is a method of "testing before development", or as Jon Bach once described it to me, an example of "provoking requirements". He doesn't mean that he's being deliberately confrontational, but that he's looking to talk out the requirements and make sure that the requirements, as presented, are both understood and communicated as we think they are.
The last part of the workshop was an example showing how to use Cucumber to automate a test scenario. Though this workshop is not a Cucumber workshop, it's a good follow-through, describing how the process could be used to generate test steps and the automation that could flow from the stated requirements.
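For readers who haven't seen the mechanism, here's a toy sketch (not actual Cucumber, which uses Gherkin feature files with Ruby or Java step definitions) of the core idea: each plain-language Given/When/Then step is matched against a registered pattern and bound to code. The parking rule and dollar amounts are invented for the example.

```python
import re

# Registry of (pattern, handler) pairs: the essence of how
# Cucumber-style tools bind plain-language steps to automation.
steps = []

def step(pattern):
    def register(fn):
        steps.append((re.compile(pattern), fn))
        return fn
    return register

state = {}

@step(r"Given a car parked for (\d+) minutes")
def parked(minutes):
    state["minutes"] = int(minutes)

@step(r"When the driver checks out")
def check_out():
    # Hypothetical rule: first 30 minutes free, then $2 per started hour.
    extra = max(0, state["minutes"] - 30)
    state["charge"] = 2 * (-(-extra // 60)) if extra else 0

@step(r"Then the charge is \$(\d+)")
def charge_is(amount):
    assert state["charge"] == int(amount), state["charge"]

scenario = [
    "Given a car parked for 90 minutes",
    "When the driver checks out",
    "Then the charge is $2",
]

for line in scenario:
    for pattern, handler in steps:
        match = pattern.fullmatch(line)
        if match:
            handler(*match.groups())
            break
    else:
        raise ValueError(f"No step matches: {line}")
```

What makes this valuable as a follow-through from Example Mapping is that each green-card example can become a scenario like the one above, so the requirements conversation and the automation share the same words.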