Eric Ries wrote an interesting book called "The Lean Startup" back in 2011, and it has become quite the buzzword du jour. Thomas Vaniotis has decided that he wants to get beyond the buzzwords, actually discuss Lean Startup, and make the case that Lean Startup is all about testing.
Thomas recently made the shift from Technical Tester to Product Manager, working with a "Lean Startup", and he states categorically that he is doing as much testing as ever, if not more. Thomas uses Eric Ries' definition of a startup: "a human institution designed to create a product or service within an environment with extreme uncertainty". "Lean" is the idea that we want to focus on essential value and remove waste from the system wherever possible. Put the two concepts together and you get "Lean Startup". This means that organizations that embrace this approach are going to look rather different from one another. It also means that what is considered value and what is considered waste will differ in each organization. Testers, this means that context matters a LOT!
Waste can vary. Overproduction is a real issue in a literal product factory, but it's also visible in software as features that are not used or have no value. Code needs to be maintained, tests need to be run, refactoring needs to take these empty features into account, and other features that may be more important don't get made because time is being spent on stuff that is not relevant. Idle machines are a reality in factories, and backlogs of features waiting to move forward are every bit as bad. I see this when we have a number of stories in DevComplete but no testers with open cycles to work on them. What happens? They sit there until they can be picked up and addressed. Over time, this lag can be significant; getting a feature from PM proposal to shipped can take days, but sometimes it can take months or even years. In all, identifying waste takes time and talent to recognize, and it can be a real struggle to eradicate it once it builds up.
There's waste in programming, there's waste in design, there's waste in testing, and there's waste in release. It just happens. The goal is not to eliminate waste entirely, but it is important to look for where it occurs and see what can be minimized. Testing is part of both the waste production and the waste prevention culture. It's a solid part of what we do as testers, not just to find problems but also to find inefficiencies that we can work on. How do we do that? We can do it with Validated Learning, which means we subject our hypotheses to scrutiny, which in turn opens us up to failure. It's possible our investigation may disprove our hypothesis. Our experiment may show that we are wrong in our assumptions, but learning that also helps us expose and remove waste, even if the waste we remove is faulty assumptions of our own.
One of the ways we can help drive the learning is with a "Minimum Viable Product". This is an ideal vehicle for applying the "Build-Measure-Learn" loop. By building the smallest possible product, we can learn if our ideas are sound, if our product meets the need, and if the data we receive supports the notion that we are on the right track. Testers do this all the time, even with things that aren't an MVP. New ideas are spun out from each cycle of this process. If this looks a lot like exploratory testing, indeed, it is :).
Thomas used a number of interesting examples (Zappos, Dropbox, etc.) and showed how they made their minimum viable products. For Dropbox, it was a video explaining why they had a solution for something most people hadn't figured out was even a problem. Additionally, Dropbox did it in a way that everyone could understand... it's a folder. That's it! For most people, that's all they need to know or deal with, and it's proven to be wildly successful. Food on the Table initially designed their system around a single customer. By working specifically with that one customer, they discovered what they needed to do to refine and create the system that would ultimately develop, and with each new family they added, they refined it even more, including automating a lot of the processes.
When we make MVPs, we need to measure the effectiveness of our efforts. In short, we need actionable metrics. What is the data telling us about our product, and are we accurately measuring something that is relevant? Does our vanity influence this choice of metrics? It certainly can. Case in point, I love seeing the hits on my blog each day. Yes, I pay attention. The problem is, hits alone tell me very little. They may mean people come to see my page, through some means, but do they mean people read the whole post? Does it mean they liked what they read? Does it mean they shared the link with someone else? Hit count won't tell me any of that, but it sure sounds good to say "hey, I received thousands of hits when I posted this". That's a vanity metric. It feels good, but it doesn't really tell me much, and it certainly doesn't guide me to action.
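To make the vanity-versus-actionable distinction concrete, here's a minimal sketch using made-up visit data (the cohort names, the "saw_new_layout" flag, and the subscription outcomes are all hypothetical, not from the talk). The same records yield a raw hit count, which feels good but guides nothing, and a per-cohort conversion rate, which actually tells us whether a change moved readers to act:

```python
# Hypothetical visit records: (cohort, saw_new_layout, subscribed).
visits = [
    ("week1", False, False),
    ("week1", False, True),
    ("week1", False, False),
    ("week2", True, True),
    ("week2", True, True),
    ("week2", True, False),
]

# Vanity metric: raw hit count. Six hits... of what, exactly?
hit_count = len(visits)

def conversion_rate(rows):
    """Share of visits that ended in a subscription."""
    return sum(1 for _, _, subscribed in rows if subscribed) / len(rows)

# Actionable metric: compare conversion before and after the change.
old_rate = conversion_rate([v for v in visits if not v[1]])
new_rate = conversion_rate([v for v in visits if v[1]])

print(hit_count)           # 6 -- impressive-sounding, but not actionable
print(old_rate, new_rate)  # ~0.33 vs ~0.67 -- this tells us what to do next
```

The point isn't the arithmetic; it's that an actionable metric ties a number to a decision, while a vanity metric just ties it to a feeling.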
Once you have data that tells you something isn't working, you need to change. The term for this is "pivot", and pivoting is much harder with large and bulky products; MVPs are easier to pivot with. Additionally, pivoting isn't just doing something different; it's helping people see a pain point they don't know about and that you are ready to address. Here, testing can help the organization not just confirm quality, but also see avenues to pivot into.
Lean Startups are often associated with Continuous Delivery. The reasoning is that, if we can deploy more frequently, releases themselves become much smaller and more manageable. When an organization gets to pushing multiple times in a day, releases can literally be a single story or a single bug fix. The turnaround time, instead of being counted in days or weeks, could be counted in hours, or even minutes. This approach doesn't minimize testing, it exposes it as even more relevant and necessary.
While MVPs are a starting point, the fact is, the product needs to mature and quality needs to continually improve. The Lean approach makes that process easier to manage, because the experiments are smaller and a broken build is quicker to fix when changes are small and incremental. Regardless, some experiments fail, and the line needs to be stopped at times. Be ready for that.
Time to put on my facilitator hat. Thanks, Thomas for a great talk, and now let's see what the audience has to say :).