Monday, October 9, 2017

Lean Startup Lessons: #PNSQC Live Blog



Wow, a full and active day! We have made it to our last formal track talk for today, and what's cool is that I think this is the first time I've seen Lee Copeland give a talk when he wasn't also physically running the conference.

Lee discussed the ideas behind Lean Startup, from Eric Ries's 2011 book of the same name. In lean startups, classical management strategies often don't work.

The principles of Lean Startup are:

Customer Development: In short, cultivating a market for your product so that sales and revenue continue, and so that the relationship with customers keeps developing.

Build-Measure-Learn Loop: Start with ideas about a product, build something, put it in front of customers to measure their interest, then learn what they like and don't like and improve the ratio of likes to don't-likes. Part of this is also the concept of "pivoting", as in "should we be doing something else?"

Minimum Viable Product: Zappos didn't start with a giant warehouse; they reached out to people, found out what they might want to order, then physically went out to buy the shoes people ordered and shipped them.

Validated Learning: "It isn't what we don't know that gives us trouble, it's what we know that ain't so" - Will Rogers
"If I had asked people what they wanted, they would have said faster horses." - Henry Ford (allegedly)

One Metric That Matters: This measure may change over time, but it's critical to know what it is, because if we don't know it, we're not going to succeed.

So what Quality lessons can we learn here?

Our Customer Development is based on who consumes our services: managers, developers, users, stakeholders. Who is NOT a customer? The testing process. We get this wrong a lot of the time. We focus on the process, but the process is not the customer and it doesn't serve the customers. With this in mind, we need to ask: what do they need? What do they want? What do they value? What contributes to their success? What are they willing to pay for?

The Build-Measure-Learn Loop basically matches up with Exploratory Testing.

Minimum Viable Product: We tend to try to build testing from the bottom up, but maybe that's the wrong approach. Maybe we can write minimum viable tests, too: cover what we have to, as much as we have to, and add to it as we go.
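
To make that concrete, here's a small sketch (my own, not Lee's) of what a minimum viable test suite might look like as Python pytest-style test functions. The shop module and its functions are hypothetical stand-ins; the point is just that we start with the handful of checks the customer actually depends on and grow from there:

    # A hypothetical "minimum viable" test suite: only the checks the
    # customer would actually pay for, nothing speculative yet.
    from shop import search, checkout  # hypothetical module under test

    def test_search_returns_results_for_known_product():
        # Customers have to be able to find a product they know exists.
        results = search("running shoes")
        assert len(results) > 0

    def test_checkout_charges_the_listed_price():
        # Customers have to be charged what the page said they'd pay.
        order = checkout(item_id="sku-123", quantity=1)
        assert order.total == order.listed_price

    # Everything else -- edge cases, load, localization -- gets added
    # later, one Build-Measure-Learn loop at a time.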

Validated learning equals hypotheses and experiments to confirm/refute hypotheses. In short, let's get scientific!

How about the One Metric That Matters? Whatever it is, it definitely does not include vanity metrics. What does the number of planned test cases mean? How about test cases written? Test cases executed? Test cases passed? Can we actually say what any of those things mean? Really? How about a metric that measures the success of your core business? How do these numbers relate to the quality of the software? Is there an actual cause-and-effect relationship? Do the metrics listed lead to or inform next actions? Notice I haven't actually identified a metric that meets those criteria, and that's because it changes. There are a lot of good metrics, but they are only good in their proper context, and that means we have to keep looking at where we are, which metric matters, and what really matters when.
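
As a contrast to those vanity metrics, here's a small, purely illustrative sketch (again mine, with made-up numbers) of one candidate metric that at least points toward cause and effect: the rate of defects that escape to production per release. Whether it's the One Metric That Matters for you depends entirely on your context, which is exactly the point:

    # Hypothetical release data: defects found before vs. after shipping.
    releases = {
        "1.0": {"found_in_test": 42, "escaped_to_production": 9},
        "1.1": {"found_in_test": 37, "escaped_to_production": 4},
        "1.2": {"found_in_test": 51, "escaped_to_production": 2},
    }

    for version, counts in releases.items():
        total = counts["found_in_test"] + counts["escaped_to_production"]
        escape_rate = counts["escaped_to_production"] / total if total else 0.0
        # Unlike "test cases executed", a falling escape rate points at
        # something customers actually feel -- though even this number
        # only matters in context, and only until it stops mattering.
        print(f"{version}: escaped-defect rate = {escape_rate:.0%}")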




