Thursday, November 16, 2017

Good Ideas Are Spawned From Bad or Failed Ideas

My elder daughter and I have been working on a number of small projects together over the past few months, and every once in a while she'll say something like, "Wow, how did you know what to do in that situation? We hadn't faced it before, and you quickly figured out a way to deal with it. How is it so easy for you? Why can't I do that?"

I chuckled a little and then told her a maxim that I've used a bunch (and I'm sure I've said it here at some point, too)... "Good ideas/solutions come from good judgment. Good judgment comes from experience. Experience comes from bad ideas/solutions/judgment." In short, she sees that I can come in, consider an idea, and implement it (in some things; I do have my limits, and they are legion). What she hasn't seen is the countless times I've tried similar things, failed at them, regrouped, reconsidered, remeasured, tried again, and repeated this process until I happened upon something that worked.

As a Scout leader, I've had the benefit of decades of teaching a few generations of kids how to do some basic stuff (note, I said basic, not easy or simple). We teach how to tie a handful of knots. We teach some basic cooking techniques. We teach how to handle items like an ax, a knife, and a saw. We teach how to safely use fire. We teach some basic wilderness survival tips. Each time through this process, I witness a similar "wave". At first there's an excitement level, but that quickly gives way to a mild boredom. Seriously? This is such a big deal? Snooze! Still, I push on, demonstrate what I can, and encourage them to practice what I am showing them. A hallmark of our Scouting year takes place about three months in: the "silent campout". Not silent in the sense that there's no talking or interaction, but silent in that the leaders (i.e., me and the other adults) make it a point not to discuss any of the campout particulars with the troop. They have their campsite, we have ours. The two are within eyesight of each other, and we reserve the right/authority to intervene if a situation is deemed unsafe. Outside of that, we let them pick the camp area, bring in all needed items, and then we leave them to it. They construct the camp, they cook their meals, they clean, they tend fires, and they do all of the other things that we have taught them over the previous few months.

Each time, the outcome has been similar. The bored expressions often give way to genuine concern or, in some cases, panic. Wait, what was I supposed to do at this point? Did I pack what we needed? Did I cook that long enough? Am I going to be able to properly contain the fire? You get the idea. They make mistakes, they get frustrated, and then they approach the problem(s) from different angles. They confer. They discuss options. They experiment. Some of those experiments fail, but some succeed. They note the ones that were successful. The next morning: fewer mistakes, less frustration, and almost no panic. The process, while ragged, gets smoother and more refined. Almost to a person, this experience makes for a change of attitude, and when we talk about "the basics" afterward, they are not so jaded and bored. They realize that basic stuff is often hard to physically do in a regular and smooth manner. Like everything, it takes actual practice and it takes working through frustration. Do it enough, and you start to actually get good at those basics, and then you forget that there was a learning curve at all.

My point today is that, too often, we approach aspects of what we do (testing, coding, administration, learning new stuff, getting out of our comfort zone) with the same mindset. We start out enthusiastic, we get bored and jaded, and then we panic when what was supposed to be so simple doesn't turn out to be. It's OK to feel these things. In fact, it's necessary. Over time, as we stumble, learn, practice, and perfect, we too might forget exactly what it takes to do basic things and make them look easy. May I encourage you not to forget? You never know who may be watching and feeling discouraged because they can't seem to "get it". We've been there; we know how that feels. Let's make sure we remind others that basic doesn't necessarily mean easy, and that good ideas/solutions often come from bad ideas/attempts.


Tuesday, November 14, 2017

The Frustration of "Too Much Choice"

Hello, Internet world. My name is Michael. I'm a serial collector of informational tidbits.

"Hi, Michael!"

Seriously, I recently went through my collections and realized something both frustrating and enlightening. I am a huge fan of Bookmarking and Favoriting (Liking on Twitter now, but I still think of it as Favoriting). In my world, "Favoriting" serves a specific purpose. It's not so much to say "hey, I want to show you I like what you've posted" (though I do that from time to time) but to say "this is something I don't have the time to look at right now, but I really want to look at it later". I subscribe to lots of services that send me emails with cool tips and tricks for testing, coding, and administering stuff. I have a digital library with hundreds of titles on all sorts of topics. I have categorized listings of websites, forums, and other services that are there to help me learn and do things better and easier.

The thing is, when I get up in the morning and I scan my Inbox, most of the time I just delete the notifications, unless there's something that really piques my interest.

Those links? Rarely visited.
That list of Favorites (Likes) on Twitter? Rarely reviewed.
That massive list of books? It's so big that most titles hide in plain sight.

I remember Cem Kaner saying at one point that having information doesn't necessarily mean it will be useful to you at that moment, but being able to reference it and know about it, or where to find it, is of value. Thus, for many of us, resources are just that: raw lumps that are there when and if we need them. But we have to understand what we have access to and when that access is relevant.

For me, I struggle with too much choice. If there are too many options, I simply get overwhelmed and never make a decision. It's all clutter. It's a challenge to organize it. I have a couple hundred CDs and whenever I go on a road trip, I spend a ridiculous amount of time trying to pick something to listen to. Often, I give up and listen to the podcast I downloaded on my phone. Oh, that's another thing, which podcast to listen to and when? So many choices, so many options, but do I really have time for a deep dive? Have I truly let that one podcast build up to ten unlistened episodes? Yikes! When am I going to find the time to listen to that? Since my phone has a limited amount of storage, I tend to be a little more deliberate with what goes on it and I cycle what I download, so I have fewer choices. The net result is that I actually listen to what I put on the phone.

As I've stated in this blog before, I don't write about these things because I'm particularly good at them. I write about them because I'm particularly terrible at many of them but want to do better. Thus, I'm trying my best to constrain the things that overwhelm me. Yes, I belong to a service that lets me download a free ebook every day. Many (perhaps most) of those books are "someday maybe" propositions that tend to crowd out the books that are actually immediately relevant. Therefore, I'm trying something different. Each week, I'm going through a particular category of expertise or skill I need to understand or become more proficient with. I'm taking a Marie Kondo approach: looking at the resources I've collected and taking some time to categorize them into "immediately relevant", "relevant later", and "someday maybe". My goal is to locate the items that are immediately relevant and then focus on those for a short period of time.

In other words, I'm putting a physical constraint on the information I have and use, not to block out all of the resources I have, but to meaningfully work on the ones that can be most effective here and now. It's great that I have books that will help me master a particular technology, but if I'm just learning about it or trying to get beyond the Advanced Beginner stage, do I really need to deal with topics that relate to mastery? No. Yet just by their being in my line of sight, I lose focus and my attention wanders. I do something similar regarding other endeavors in my office. I have a lot of interests, and it's tempting to have a variety of things out and ready to use. The net result, though, is that I dabble in lots of things and don't put any appreciable time into the areas that are most important. Frequently I end up dealing with what's urgent or pressing, and that's great for the moment, but it can leave me lacking in areas that are important but not urgent.

I'm not sure if this is going to be helpful to anyone else, but it's currently helping me. Take some time to block out the items you want to work on and need to work on, and then think of the things that will directly help you meet those goals in the very near term. If they don't, don't delete them; perhaps put them in a place where you know they will come in handy later, and try to set a hard date for when "later" might be. If you can't do that, put them in the "someday maybe" container. The ability to pick and choose is wonderful, but sometimes it helps a lot to limit what can be picked so that you actually make a choice and move forward with it :).

Wednesday, October 11, 2017

Machine Learning Part 2 With Peter Varhol: The Testing Show

As has become abundantly clear to me over the last several weeks, I could be a lot more prolific with my blog posts if I were just a little bit better and more consistent with self-promotion. Truth be told, a lot of time goes into editing The Testing Show. I volunteered a long time ago to do the heavy lifting for the show editing because of my background in audio editing and audio production from a couple decades back. Hey, why let those chops go to waste ;)? Well, it means I don’t publish as often since, by the time I’ve finished editing a podcast, I have precious little time or energy to blog. That is unless I blog about the podcast itself… hey, why not?


So this most recent episode of The Testing Show is “Machine Learning, Part 2” and features Peter Varhol. Peter has had an extensive career and has also done a prodigious amount of writing. In addition, he has a strong mathematical background which makes him an ideal person to talk about the proliferation of AI and Machine Learning. Peter has a broad and generous take on the current challenges and opportunities that both AI and Machine Learning provide. He gives an upbeat but realistic view of what the technologies can and cannot do, as well as ways in which the tester can both leverage and thrive in this environment.




Anyway, I’d love for you to listen to the show, so please either go to the Qualitest Group podcast page or subscribe via Apple Podcasts. While you’re at it, we’d love it if you could leave us a review, as reviews help bubble our group higher in the search listings and help people find the show. Regardless, I’d love to know what you think and comments via this page are also fine.

Tuesday, October 10, 2017

Seek JOY: #PNSQC Live Blog

Wow, we're at the end of the day already? How did that happen? Part of it was that I started a few conversations with people that cut into talks being delivered, but as is often the case, those discussions take priority; they can be the most important conversations you have at a conference.

Long story short, we are at the closing keynote for the main two-day conference. Rich Sheridan of Menlo Innovations believes that we can do work that we care about, and that we can have joy in the work we do and in the workplaces we move in. Rich shared his story: how he came up in the world of computers and computing starting in the early seventies, how the profession he loved was starting to sap the life out of him, and how he was contemplating leaving the industry entirely. He was experiencing the chaos of the industry: issues, bugs, failed projects, blown deadlines, lack of sales, and all of the fun stuff any of us who have worked in tech recognize all too well. Chaos often ends up leading to bureaucracy, where we go from not being able to get anything done to not being able to get anything started.

The point Rich wants to impart is that joy is what all of us hope for in most of the things we do. We often sense it in some form, but it remains nebulous to us. Additionally, jobs and companies cannot guarantee our success or our happiness. We have to take an active role and be willing to make it happen for ourselves as we endeavor to make it work for others.

Why joy? Joy is service to others and being able to see the reaction to that service. It's why we do the work that we do. We want to see our work out in the world. We want to see it get a response. We want to see people react to it. And we want to have that moment that swells up inside of us, that cheers us and makes us jump for (wait for it!) joy.

It's one thing to say that you want to build a joyful career, but it requires human energy. In the work environments I have enjoyed the most, the work itself has almost always been secondary. What made the work enjoyable? The people and the interactions with those people; that's what makes for memorable experiences.

One of the most important things in fostering joy is trust. We have to trust one another. Trust allows us to be open and frank. We can get into hard discussions and deal with conflict in a positive manner. When we can debate issues with trust and consideration, while still being committed to getting those issues resolved, we can deal with the hard stuff and still be positive and remain friends.

Rich describes his office as a hodgepodge of machines, but the most astounding aspect is that no one has their own computer. People change pairs every five days, and with each move they work on different machines. That means there is no such thing as "it works on my machine", because there is no dedicated machine for anybody.

Simplicity goes a long way toward helping develop joy. Complexity for its own sake sucks the life out of us, and Rich showed the way they manage their work. It's all on paper. It's all based on the size of commitment, and it's all based on creating meaningful conversations. Plans are then put on the wall for all to see, so that everything is completely transparent. Honesty matters, and with transparency comes honesty. With honesty and a willingness to express it, empathy comes into play. With empathy, we can see what others see and feel what others feel. To borrow from Deming, as Rich did, "All anyone wants is to be able to work with pride." With that pride comes joy.

Rich Sheridan is the author of "Joy, Inc." and it is scheduled to be released very soon. Yep, I want a copy :).

And with that, the two-day technical program for PNSQC is over. Tomorrow will be another work day for me, but I think it's safe to say that I'll be buzzing with the kinetic energy these past two days have provided. Thank you, PNSQC team, for putting on a great event. Thank you, speakers, for sharing your insights and experiences. Thank you, participants, for coming out and taking part, and especially, thank you to everyone who came to my talk and offered positive feedback (and constructive criticism, too). I wish everyone safe travels back to their destinations, and if you are here for the extended workshop day, I hope you enjoy it and bring great insights back to your teams.

TESTHEAD out!!!

Off the Cuff Accessibility and Inclusive Design: #PNSQC Live Blog

And here I was wondering what I was going to say to give a highlight and wrap-up for my own talk that I gave yesterday.

The PNSQC Crew beat me to it. They asked me to give a video summary of my talk and some of my impressions of Accessibility and Inclusive Design.

This video chat was unrehearsed, so if you are used to my nice and polished podcasts, where I seem to speak super smoothly and without flubs, today you get to see the real me: no edits, no do-overs, and a much more realistic representation of how I talk most of the time, hand flails and all.


To quote Ms. Yang Xiao Long... "So... that was a thing!"

Product Ecology and Context Analysis: #PNSQC Live Blog

Ruud Cox is an interesting dude who has done some interesting stuff. To lead off his talk, he described a product that he was involved with testing. A deep brain stimulator is effectively a pacemaker for the brain. Ruud described the purpose of the device, the requirements, and issues that have come to light during the procedures. I've worked on what I consider to be some interesting projects, but nothing remotely like this.


Ruud was responsible for organizing testing for this project, and he said he immediately realized that the context for the project was difficult to pin down. In his words, the context was a blur. However, by stepping back and identifying who the users of the product are (both the doctors and medical staff performing the procedures and the individuals having the procedures performed), he could begin to bring that blur into focus.

One tool that Ruud found helpful was creating a context diagram. In the process, he was able to sketch out all of the people, issues, and cases where the context was applicable, and he saw that the context was somewhat fluid. This is important because, as you build out the diagram and learn about what people value, you start to see that context itself shifts and that the needs of one user or stakeholder may differ from, or even conflict with, those of another.



Patterns, and giving those patterns meaning, are individual and unique. Ruud points out that, as he was making his context diagram, he started to see patterns: areas where certain domains were in control, and other domains with different rules and needs. Our brains are "belief engines", meaning we often believe what we see, and we interpret what we see based on our mental model of the world. Therefore, the more actively we work with diagramming context, the more we understand the interactions.

Ruud refers to these interactions, and the way we perceive them, as "Product Ecologies". As an example, he described a person being asked if they can make an apple pie. Once the person says yes, they then look at the factors they need to consider to make the pie. The Product Ecology considers where the apples come from, what other ingredients are needed and where they come from, and the tools and methods necessary to prepare and combine everything into an apple pie suitable for eating. In short, there's a lot that goes into making an apple pie.

Areas that appear in a context analysis need to be gathered and examined with regard to their respective domains. Ruud uses the following approach to gathering factors:


  • Study available artifacts
  • Interview domain experts
  • Explore existing products
  • Hang out at the coffee machine (communicate with people, basically)
  • Make site visits
Another way to get insight into your context is to examine the value chain. Who benefits from the arrangements, and in what way? Who supplies the elements necessary to create the value? Who are the players? What do they need? How do they affect the way a product is shaped and developed over its life?

User scenarios try to diagram the various ways a user might interact with a product. The more users we have, the more unique scenarios we will accumulate, and the more likely we are to discover areas that are complementary or contradictory. Ruud showed some examples of a car park with lighting that was meant to come on when the car park was in use and go dark when it was not. As he was diagramming out the possible options, he realized that the plane and angle of the pavement affected the way the lights were aligned, how they turned on or off, or even whether they turned on or off at all.

Currently, Ruud is working with ASML, whose machines create tiny circuit elements on chips. One of the factors he deals with is that it can take months for a single wafer to make its way through the scanner and fabrication process. Testing a machine like this must be a beast! Gathering factors and requirements is likewise a beast, but it can be done once the key customers have been identified.

Owning Quality in Agile: #PNSQC Live Blog Day 2

Good morning everyone! I actually had a full night's sleep and no need to rush around this morning, so that was fantastic. I should also mention that I was really happy with the choice to go to Dar Salam restaurant last night as part of the Dine Around Town event. Great conversation and genuinely amazing food.

This morning, the theme for 2018 was announced. It will be "On The Road to Quality" and I am already thinking about what I'd like to pitch for a paper and talk next year. No guarantee it will be accepted, but I'm looking forward to the process anyway.


The first talk this morning is "Who Owns Quality in Agile?" by Katy Sherman, and she started out with a number of questions for us. Actually, she started the morning with a dance session/audience participation event. No, I'm not kidding :).

Katy shared the example of her own company and the fact that her organization had two competing interests. In some cases, testers felt they didn't have enough time to test. On the other hand, developers felt like they had too much time with nothing to do. One of the challenges is that there is still a holdover from older times as to what quality means. Is it conformance to requirements? Or is it something else? How about thinking of quality as meeting both the explicit and implied needs of the customers? We want conformance, we want user satisfaction, and we want our applications to perform well, even if customers don't know what performing well even means. Ultimately, what this means is that quality cannot be achieved through testing (or at least not through testing alone).

In some ways, there's still an us-vs.-them attitude when it comes to being a developer or a tester on an Agile team. In my view, silos are unhealthy and perpetuate problems. There's an attitude that testing is extra, that having a dedicated tester is a frustrating overhead, and wouldn't it be great if we could do away with it? Some developers don't want to test. Some testers struggle with the idea of becoming programmers. Many of us are in between. As a tester who has reported to both test managers and development managers, I've learned there's a need to be direct, to show what you can bring to the table, and to express issues and concerns in a way each audience can understand (perhaps not in exactly the same words, but confident that both can be taught to understand the issues). In my own work, I have determined that "being a tester" alone isn't a realistic option at my company. There is too much work and there are too few people to do it, so everyone needs to develop their skills and branch into avenues that may not necessarily match their specific job title. Consider the idea that we are all software engineers. Yes, I realize that engineer is a term that carries baggage, but it works for the current discussion.



There is a lot of possible cross-training and learning that can happen. Developers being taught how to test is as important as testers being taught how to code. Perhaps it is even more important. Testers can learn a lot about the infrastructure. Testers can learn how to deploy environments, how to perform builds, how to set up a repository for code checkout, and yes, the old perennial favorite, how to automate repetitive tasks. Please do not believe that more automation will eliminate active human testing. Yes, it may well end a lot of low-level testing, and it will eliminate a number of jobs focused on that very low level, but testers who are curious and engaged in learning and understanding the product are probably the least likely to be turned out. I'm echoing Alan Page and Brent Jensen of A/B Testing at this point, but if testers are concerned about their future at their current company, perhaps a better question to ask would be "What value am I able to bring to my company, and in what areas am I capable of bringing it?"

One of the key areas where we can make improvements is in the data sets and the data we use, both to set up tests and to gather and analyze the output. Data drives everything, so it pays to be able to drive the testing with data and to analyze the data the testing produces (yes, this is getting meta, sorry about that ;) ). By getting an understanding of the underlying data and learning how to manipulate it (whether directly or through programmatic means), we can help shape our approaches to testing as well as our predictions of application behavior.
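As a quick aside from me (my own sketch, not something from Katy's talk), here's the kind of thing I mean by letting data drive the testing. With pytest's parametrize, the test data lives in a table that drives the checks, so extending coverage means editing data, not test logic. The discount() function and the values are hypothetical stand-ins:

```python
# A minimal data-driven test sketch (hypothetical example).
import pytest


def discount(order_total: float) -> float:
    """Apply a 10% discount to orders of $100 or more (stand-in logic)."""
    return order_total * 0.9 if order_total >= 100 else order_total


# The data table drives the tests; to extend coverage, add rows.
CASES = [
    (50.00, 50.00),    # below threshold: no discount
    (100.00, 90.00),   # at threshold: discount applies
    (250.00, 225.00),  # well above threshold
]


@pytest.mark.parametrize("total,expected", CASES)
def test_discount(total, expected):
    assert discount(total) == pytest.approx(expected)
```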

Is QA at a crossroads? Perhaps, yes. There is a very real chance that the job structure of QA will change in the coming years. Let's be clear, testing itself isn't going anywhere. It will still happen. For things that really matter, there will still be testing, and probably lots of it. Who will be doing it is another story. It is possible that programmers will do a majority of the testing at some point. It's possible that automation will become so sophisticated and all-encompassing that machines will do all of the testing efforts. I'm highly skeptical of the latter projection, but I'm much less skeptical of the former. It's very likely that there will be a real merging of Development and QA. In short, everyone is a developer. Everyone is a tester. Everyone is a Quality Engineer. Even in that worldview, we will not all be the same and do exactly the same things. Some people will do more coding, some will do more testing, some will do more operations. The key point will be less of a focus on the job title and a rigorously defined role. Work will be fluid, and the needs of the moment will determine what we are doing on any given day. Personally, that's a job I'd be excited to do.




This is a 90-minute talk, so I'll be adding to it and updating it as I go. More to come...

Monday, October 9, 2017

Lean Startup Lessons: #PNSQC Live Blog



Wow, a full and active day! We have made it to our last formal track talk for today, and what's cool is that I think this is the first time I've been in a talk given by Lee Copeland where he hasn't been physically running a conference.

Lee discussed the ideas behind The Lean Startup, a book Eric Ries wrote back in 2011. In lean startups, classical management strategies often don't work.

The principles of Lean Startup are:

Customer Development: In short, cultivating a market for your products so that you can continue to see sales and revenue, and developing that relationship along the way.

Build-Measure-Learn Loop: Start with ideas about a product, build something, put it in front of customers to measure their interest, then learn what they like or don't like and improve the ratio of likes to dislikes. Part of this is also the concept of "pivoting", as in "should we do something else?"

Minimum Viable Product: Zappos didn't start with a giant warehouse; they reached out to people, found out what they might want to order, and then physically went to get the shoes people ordered and sent them out.

Validated Learning: "It isn't what we don't know that gives us trouble, it's what we know that ain't so" - Will Rogers
"If I had asked people what they wanted, they would have said faster horses." - Henry Ford (allegedly)

One Metric That Matters: This measure may change over time, but it's critical to know what this metric is, because if we don't know it, we're not going to succeed.

So what Quality lessons can we learn here?

Our Customer Development is based on who consumes our services: managers, developers, users, stakeholders. Who is NOT a customer? The testing process. We get this wrong a lot of the time. We focus on the process, but the process is not the customer and it doesn't serve the customer(s). With this in mind, we need to ask: "What do they need? What do they want? What do they value? What contributes to their success? What are they willing to pay for?"

The Build-Measure-Learn Loop basically matches up with Exploratory Testing.

Minimum Viable Product: We tend to try to build testing from the bottom up, but maybe that's the wrong approach. Maybe we can write minimum viable tests, too. Cover what we have to, as much as we have to, and add to it as we go (see the sketch below).
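To make that a bit more concrete, here's my own back-of-the-napkin sketch (not something Lee showed) of what a minimum viable test might look like: one check against the behavior the customer depends on most, with everything else added only as we learn what matters. The URL is a hypothetical placeholder:

```python
# A "minimum viable" smoke test: start with the one behavior that
# matters most, and grow the suite as the product proves what else does.
# The URL below is a hypothetical placeholder.
import urllib.request


def test_home_page_responds():
    # If this fails, no deeper testing matters yet.
    with urllib.request.urlopen("https://example.com/", timeout=10) as resp:
        assert resp.status == 200
```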

Validated learning equals hypotheses and experiments to confirm/refute hypotheses. In short, let's get scientific!

How about the One Metric That Matters? What it definitely does not include is vanity metrics. What does the number of planned test cases mean? How about test cases written? Test cases executed? Test cases passed? Can we actually say what any of those things mean? Really? How about a metric that measures the success of your core business? How do these relate to the quality of the software? Is there an actual cause-and-effect relationship? Do the metrics listed lead to or inform next actions? Notice I haven't actually identified a metric that meets those criteria, and that's because it changes. There are a lot of good metrics, but they are only good in their proper context, and that means we have to consistently look at where we are, what our one metric that matters is, and what really matters when.





Gettin' Covey With It: #PNSQC Live Blog

This talk is called "7 Habits of Highly Effective Agile", a play on the title of the Stephen Covey book, hence my silly post title. Yes, I'm getting a tad bit punchy right now, but I am lucky in that I get to have an energy boost. Nope, no food or drink or anything like that. I get to have Zeger Van Hese next to me doing his sketch notes.

I've waited years to actually see him do this. How am I going to concentrate on doing a live blog with that level of competition (LOL!)?

Seriously though, "Seven Habits" is a book from the 80s that is based on the idea of moving from dependence to interdependence, with a midpoint of independence. For those wondering about the original Covey seven habits, they are:


  1. Be Proactive
  2. Begin With the End in Mind
  3. Put First Things First
  4. Think Win/Win
  5. Seek First to Understand, Then to Be Understood
  6. Synergize
  7. Sharpen the Saw


The Agile Seven Habits break down to:

  1. Focus on Efficiency and Effectiveness (make sure that your processes actually help you achieve these, such as cutting down technical debt, providing necessary documentation, etc.)
  2. Treat the User As Royalty (understand your customer's needs, and what you are providing for them)
  3. Maintain an Improvement Frame of Mind (adapt, develop, experiment, learn, be open, repeat)
  4. Be Agile, Then Do Agile (we have to internalize and actively live the purpose before we can do the things to get the results. Communicate, collaborate, learn, initiate, respond)
  5. Promote a Shared Understanding (develop a real team culture, break away from rigid roles, stop thinking "it's not my job", learn from each other, wear different hats and take on different tasks, treat pairing as a natural extension, determine what is important)
  6. Think Long Term (develop a sustainable process, pacing that's doable and manageable, making habits that stick and persisting with them)

Well, those are my thoughts, and here are Zeger's. His are much cooler looking ;):








Customer Quality Dashboard: #PNSQC Live Blog

OK, the late-night flight arrival into Portland and the early morning registration are catching up to me. I will confess that I'm hunting down Diet Coke and alertness aids of the non-coffee-or-tea variety. If I seem a little hazy or less coherent, well, now you know why ;).

In any event, it's time for John Ruberto's "How to Create a Customer Quality Dashboard", and he started with a story about losing weight and a low-fat diet he was on for a while. The goal was to get less than 20% of his calories from fat. He soon realized that having a couple of beers and eating pretzels helped make sure the fat calories consumed stayed under 20%; add enough nearly fat-free calories and the percentage drops even as the total climbs (for example, 500 fat calories out of 2,000 total is 25%, but the same 500 out of 2,600 is about 19%). Effective? Well, yes. Accurate? In a way, yes. Helpful? Not really, since the total calories consumed went well above those needed for weight loss, but the fat calories were under 20% all the time ;).

This story helps illustrate that we can measure stuff all day long, but if we aren't measuring the right things in context, we can be 100% successful in reaching our goals and still fail in our overall objective.

To create a usable and effective dashboard, we need to set goals that are actually in alignment with the wider organization, focus on what's most important to stakeholders, provide a line of sight from metrics to goals, and build a more comprehensive view of our goals.

Let's put this into the perspective of what might be reported to us by our customers. What metrics might we want to look at? What does the metric tell us about our product? What does the metric not tell us about our product?

Some things we might want to consider:

Process Metrics vs. Outcomes: Think lines of code per review hour vs. defects found per review.
Leading Indicators vs. Lagging Indicators: Think code coverage vs. delivered quality.
Median vs. Average: Think median page load vs. average page load; an average can be skewed by a few outliers (see the sketch after this list).
Direct Measures vs. Derived Measures: Think total crashes vs. reported crash codes.
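To show why that median vs. average distinction matters, here's a quick sketch of my own (the numbers are made up): one pathologically slow page load drags the average far away from what the typical user experiences, while the median barely moves.

```python
# One outlier skews the mean; the median stays representative.
# The page-load times below are made-up illustration data.
from statistics import mean, median

load_times_ms = [120, 125, 130, 135, 140, 9000]  # one pathological load

print(f"mean:   {mean(load_times_ms):.1f} ms")    # 1608.3 ms: looks terrible
print(f"median: {median(load_times_ms):.1f} ms")  # 132.5 ms: the typical experience
```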

There are a lot of potential issues that can cause us problems over time. One is gaming the system, where we set up metrics that we can easily achieve or otherwise configure in a way that is not necessarily supported by reality. See the example of the fat percentages: we could adjust our total intake so our fat calories stayed below 20%.

Confirmation Bias: A preconceived notion of what things should be, which leads us to see or favor results that support that notion.

Survivor Bias: The act of treating the surviving instances of a group as though they represent the whole of the group.

Vanity Metrics: Measuring things that are easily manipulated.