Wednesday, October 11, 2017

Machine Learning Part 2 With Peter Varhol: The Testing Show

As has become abundantly clear to me over the last several weeks, I could be a lot more prolific with my blog posts if I were just a little bit better and more consistent with self-promotion. Truth be told, a lot of time goes into editing The Testing Show. I volunteered a long time ago to do the heavy lifting for the show editing because of my background in audio editing and audio production from a couple decades back. Hey, why let those chops go to waste ;)? Well, it means I don’t publish as often since, by the time I’ve finished editing a podcast, I have precious little time or energy to blog. That is unless I blog about the podcast itself… hey, why not?

So this most recent episode of The Testing Show is “Machine Learning, Part 2” and features Peter Varhol. Peter has had an extensive career and has also done a prodigious amount of writing. In addition, he has a strong mathematical background which makes him an ideal person to talk about the proliferation of AI and Machine Learning. Peter has a broad and generous take on the current challenges and opportunities that both AI and Machine Learning provide. He gives an upbeat but realistic view of what the technologies can and cannot do, as well as ways in which the tester can both leverage and thrive in this environment.

Anyway, I’d love for you to listen to the show, so please either go to the Qualitest Group podcast page or subscribe via Apple Podcasts. While you’re at it, we’d love it if you could leave us a review, as reviews help bubble our group higher in the search listings and help people find the show. Regardless, I’d love to know what you think and comments via this page are also fine.

Tuesday, October 10, 2017

Seek JOY: #PNSQC Live Blog

Wow, we're at the end of the day already? How did that happen? Part of it was the fact that I started a few conversations with people that cut into talks being delivered, but as is often the case, those discussions can take priority and can often be the most important conversations you have at a conference.

Long story short though is that we are at the closing keynote for the main two-day conference. Rich Sheridan of Menlo Innovations believes that we can do work that we care about, and that we can have joy in the work that we do and in the workplaces we actively move in. Rich shared the story of how he came up in the world of computers and computing starting in the early seventies, and how the profession he loved was starting to sap the life out of him, to the point where he was contemplating leaving the industry entirely. He was experiencing the chaos of the industry: issues, bugs, failed projects, blown deadlines, lack of sales, and all the fun stuff any of us who have worked in tech recognize all too well. Chaos often ends up leading to bureaucracy, where we go from not being able to get anything done to not being able to get anything started.

The point Rich wants to impart is that joy is what all of us hope for in most of the things that we do. We sense it in some form, but it's often nebulous to us. Additionally, jobs and companies cannot guarantee our success or our happiness. We have to take an active role in it and be willing to make it happen for ourselves as we endeavor to make it work for others.

Why joy? Joy is service to others and being able to see the reaction to that service. It's why we do the work that we do. We want to see our work out in the world. We want to see it get a response. We want to see people react to it. And we want to have that moment that swells up inside of us, that cheers us and makes us jump for (wait for it!) joy.

It's one thing to say that you want to build a joyful career, but it requires human energy. In most of the work environments that I have enjoyed the most, the work has almost always been secondary. What made the work enjoyable? The people and the interactions with those people are what makes for memorable experiences.

One of the most important things to foster joy is trust. We have to trust one another. Trust allows us to be open and frank. We can get into hard discussions and deal with conflict in a positive manner. When we can debate issues with trust and consideration, while still being committed to getting them resolved, we can tackle the hard issues and still be positive and remain friends.

Rich describes his office as a hodgepodge of machines, but the most astounding aspect is that no one has their own computer. People change pairs every five days, and with each move, they move onto other machines. That means there is no such thing as "it works on my machine," because there is no dedicated machine for anybody.

Simplicity goes a long way toward helping develop joy. Complexity for its own sake sucks the life out of us, and Rich showed the way they manage their work: it's all on paper, it's all based on the size of the commitment, and it's all based on creating meaningful conversations. Plans are then put on the wall for all to see, so that everything is completely transparent. Honesty matters, and with transparency comes honesty. With honesty and a willingness to express it, empathy comes into play. With empathy, we can see what others see and feel what others feel. To borrow from Deming, as Rich did: "All anyone wants is to be able to work with pride." With that pride comes joy.

Rich Sheridan is the author of "Joy, Inc." and it is scheduled to be released very soon. Yep, I want a copy :).

And with that, the two-day technical program for PNSQC is over. Tomorrow will be another work day for me, but I think it's safe to say that I'll be buzzing with the kinetic energy that these past two days have provided. Thank you, PNSQC team, for putting on a great event. Thank you, speakers, for sharing your insights and experiences. Thank you, participants, for coming out, and especially, thank you to everyone who came to my talk and offered positive feedback (and constructive criticism, too). I wish everyone safe travels back to their destinations, and if you are here for the extended workshop day, I hope you enjoy it and bring back great insights to your teams.


Off the Cuff Accessibility and Inclusive Design: #PNSQC Live Blog

And here I was wondering what I was going to say to give a highlight and wrap-up for my own talk that I gave yesterday.

The PNSQC Crew beat me to it. They asked me to give a video summary of my talk and some of my impressions of Accessibility and Inclusive Design.

This video chat was unrehearsed, so if you are used to my nice and polished podcasts, where I seem to speak super smoothly and without flubs, today you get to see the real me: no edits, no do-overs, and a much more realistic representation of how I talk most of the time, hand flails and all.

To quote Ms. Yang Xiao Long... "So... that was a thing!"

Product Ecology and Context Analysis: #PNSQC Live Blog

Ruud Cox is an interesting dude who has done some interesting stuff. To lead off his talk, he described a product that he was involved with testing. A deep brain stimulator is effectively a pacemaker for the brain. Ruud described the purpose of the device, the requirements, and issues that have come to light during the procedures. I've worked on what I consider to be some interesting projects, but nothing remotely like this.

Ruud was responsible for organizing testing for this project, and he said he immediately realized that the context for this project was difficult to pin down. In his words, the context was a blur. However, by stepping back and addressing who the users of the product are (both the doctors and medical staff performing the procedures and the individuals having the procedures performed), he was able to start bringing that context into focus.

One tool that Ruud found helpful was to create a context diagram; in the process, he was able to sketch out all of the people, issues, and cases where the context was applicable, and to see that the context was somewhat fluid. This is important because as you build out and learn about what people value, you start to see that the context itself shifts, and that the needs of one user or stakeholder may differ from, or even conflict with, another's.

Patterns, and giving those patterns meaning, are individual and unique. Ruud points out that, as he was making his context diagram, he started to see patterns: areas where certain domains were in control, and other domains that had different rules and needs. Our brains are "belief engines," meaning we often believe what we see, and we interpret what we see based on our mental model of the world. Therefore, the more actively we work with diagramming context, the more we understand the interactions.

Ruud refers to these interactions, and the way that we perceive them, as "Product Ecologies". As an example, he showed how a person might be asked if they can make an apple pie. Once the person says yes, they then look at the factors they need to consider to make the pie. The Product Ecology considers where the apples come from, what other ingredients are needed and where they come from, and the tools and methods necessary to prepare and combine everything into an apple pie suitable for eating. In short, there's a lot that goes into making an apple pie.

Areas that appear in a context analysis need to be gathered and looked at with regard to their respective domains. Ruud has the following approach to gathering factors:

  • Study artifacts which are available
  • Interview Domain experts
  • Explore existing products
  • Hang out at the coffee machine (communicate with people, basically)
  • Make site visits
Another way to get insights into your context is to examine the value chain. Who benefits from the arrangements, and in what way? Who supplies the elements necessary to create the value? Who are the players? What do they need? How do they affect the way that a product is shaped and developed over its life?

User scenarios try to diagram the various ways that a user might interact with a product. The more users we have, the more unique scenarios we will accumulate, and the more likely we are to discover areas that are complementary and contradictory. Ruud showed some examples of a car park with lighting that was meant to come on when in use and go dark when not. As he was diagramming out the possible options, he realized that the plane and angle of the pavement affected the way the lights were aligned, how they turned on or off, or even whether they turned on or off at all.

Currently, Ruud is working with ASML, which makes machines that create tiny circuit elements on chips. One of the factors he deals with is that it can take months to produce a single wafer in a scanner and fabricator. Testing a machine like this must be a beast! Gathering factors and requirements is likewise a beast, but it can be done once the key customers have been identified.

Owning Quality in Agile: #PNSQC Live Blog Day 2

Good morning everyone! I actually had a full night's sleep and no need to rush around this morning, so that was fantastic. I should also mention that I was really happy with the choice to go to Dar Salam restaurant last night as part of the Dine Around Town event. Great conversation and genuinely amazing food.

This morning, the theme for 2018 was announced. It will be "On The Road to Quality" and I am already thinking about what I'd like to pitch for a paper and talk next year. No guarantee it will be accepted, but I'm looking forward to the process anyway.

The first talk this morning is "Who Owns Quality in Agile?" by Katy Sherman and she started out with a number of questions for us. Actually, she started the morning with a dance session/audience participation event. No, I'm not kidding :).

Katy shared the example of her own company and the fact that her organization had two interests pulling in opposite directions. In some cases, testers felt they didn't have enough time to test. On the other hand, developers felt like they had too much time with nothing to do. One of the challenges is that there is still a holdover from older times as to what quality means. Is it conformance to requirements? Or is it something else? How about thinking of quality as meeting both the explicit and implied needs of the customers? We want conformance, we want user satisfaction, and we want our applications to perform well, even if our customers don't know what performing well even means. Ultimately, what this means is that quality cannot be achieved through testing (or at least not through testing alone).

In some ways, there's still an us-vs-them attitude when it comes to being a developer or a tester on an Agile team. In my view, silos are unhealthy and perpetuate problems. There's an attitude that testing is extra, that having a dedicated tester is frustrating overhead, and wouldn't it be great if we could do away with it? Some developers don't want to test. Some testers struggle with the idea of becoming programmers. Many of us are in between. As a tester who has reported to test managers and to development managers, I've found there's a need to be direct, to show what you can bring to the table, and to express issues and concerns in a similar way to each audience (perhaps not exactly the same, but feel confident that both can be taught to understand the issues). In my own work, I have determined that "being a tester" alone isn't a realistic option at my company. There is too much work and there are too few people to do it, so everyone needs to develop their skills and branch into avenues that may not necessarily match their specific job title. Consider the idea that we are all software engineers. Yes, I realize that "engineer" is a term that has baggage, but it works for the current discussion.

There is a lot of possible cross-training and learning that can happen. Developers being taught how to test is as important as testers being taught how to code. Perhaps it is even more important. Testers can learn a lot about the infrastructure. Testers can learn how to deploy environments, how to perform builds, how to set up a repository for code checkout, and yes, the old perennial favorite, how to automate repetitive tasks. Please do not believe that more automation will eliminate active human testing. It is true that it may well end a lot of low-level testing and eliminate a number of jobs focused on that very low level, but testers who are curious and engaged in learning and understanding the product are probably the least likely to be turned out. I'm mimicking Alan Page and Brent Jensen of A/B Testing at this point, but if testers are concerned about their future at their current company, perhaps a better question to ask would be "what value am I able to bring to my company, and in what areas am I capable of bringing it?"

One of the key areas where we can make improvements is in the data sets and the data we use, both to set up tests and to gather and analyze the output. Data drives everything, so we need to be able to drive the testing by data and analyze the resulting data produced (yes, this is getting meta, sorry about that ;) ). By understanding the underlying data and learning how to manipulate it (whether directly or through programmatic means), we can help shape the approaches to testing as well as the predictions of application behavior.

Is QA at a crossroads? Perhaps yes. There is a very real chance that the job structure of QA will change in the coming years. Let's be clear, testing itself isn't going anywhere. It will still happen. For things that really matter, there will still be testing, and probably lots of it. Who will be doing it is another story. It is possible that programmers will do a majority of the testing at some point. It's possible that automation will become so sophisticated and all-encompassing that machines will do all of the testing. I'm highly skeptical of the latter projection, but I'm much less skeptical of the former. It's very likely that there will be a real merging of Development and QA. In short, everyone is a developer. Everyone is a tester. Everyone is a Quality Engineer. Even in that worldview, we will not all be the same and do exactly the same things. Some people will do more coding, some people will do more testing, some people will do more operations. The key point will be less of a focus on the job title and a rigidly defined role. Work will be fluid, and the needs of the moment will determine what we are doing on any given day. Personally, that's a job I'd be excited to do.

This is a 90-minute talk, so I'll be adding to it and updating it as I go. More to come...

Monday, October 9, 2017

Lean Startup Lessons: #PNSQC Live Blog

Wow, a full and active day! We have made it to our last formal track talk for today, and what's cool is that I think this is the first time I've been in a talk with Lee Copeland where he hasn't been physically running a conference.

Lee discussed the ideas behind Lean Startup, a book written back in 2011 by Eric Ries. In lean startups, classical management strategies often don't work.

The principles of Lean Startup are:

Customer Development: In short, cultivating a market for their products so that they can continue to see sales and revenue, and to help develop that relationship

Build-Measure-Learn Loop: Start with ideas about a product, build something, put it in front of customers to measure their interest, then learn about what they like or don't like and improve the ratio of likes to don't-likes. Part of this is also the concept of "pivoting", as in "should we do something else?"

Minimum Viable Product: Zappos didn't start with a giant warehouse. They reached out to people to learn what they might want to order, then physically went and got the shoes people ordered and shipped them out.

Validated Learning: "It isn't what we don't know that gives us trouble, it's what we know that ain't so" - Will Rogers
"If I had asked people what they wanted, they would have said faster horses." - Henry Ford (allegedly)

One Metric That Matters: This measure may change over time, but it's critical to know whatever this metric is because if we don't know it, we're not going to succeed.

So what Quality lessons can we learn here?

Our Customer Development is based on who consumes our services: managers, developers, users, stakeholders. Who is NOT a customer? The testing process. We get this wrong a lot of the time. We focus on the process, but the process is not the customer and it doesn't serve the customer(s). With this in mind, we need to ask "what do they need? what do they want? what do they value? what contributes to their success? what are they willing to pay for?"

The Build-Measure-Learn Loop basically matches up with Exploratory Testing.

Minimum Viable Product: We tend to try to build testing from the bottom up, but maybe that's the wrong approach. Maybe we can write minimum viable tests, too. Cover what we have to as much as we have to, and add to it as we go.
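As a sketch of what a "minimum viable test" suite might look like (the `checkout_total` function and its rules are entirely made up for illustration), start with one test for the behavior the business cannot live without, add the next-most-important edge, and grow from there:

```python
# A minimum-viable test suite: cover the critical path first, add as we go.
# checkout_total is a hypothetical stand-in for whatever your critical path is.

def checkout_total(prices, tax_rate):
    """Toy critical-path function: total an order with tax applied."""
    return round(sum(prices) * (1 + tax_rate), 2)

def test_checkout_happy_path():
    # The single must-not-break behavior: a normal order totals correctly.
    assert checkout_total([10.00, 5.50], tax_rate=0.10) == 17.05

def test_checkout_empty_cart():
    # Next-most-important edge: an empty cart totals to zero, not an error.
    assert checkout_total([], tax_rate=0.10) == 0.0

test_checkout_happy_path()
test_checkout_empty_cart()
```

Two tests is nowhere near "done", but it is viable: the riskiest behavior is covered on day one, and every later addition is driven by what we learn about the product.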

Validated learning equals hypotheses and experiments to confirm/refute hypotheses. In short, let's get scientific!

How about the One Metric That Matters? What it definitely does not include are vanity metrics. What does the number of planned test cases mean? How about test cases written? Test cases executed? Test cases passed? Can we actually say what any of those things mean? Really? How about a metric that measures the success of your core business? How does it relate to the quality of the software? Is there an actual cause-and-effect relationship? Do the metrics listed lead to or inform next actions? Notice I haven't actually identified a metric that meets those criteria, and that's because it changes. There are a lot of good metrics, but they are only good in their proper context, and that means we have to consistently look at what we are doing, what that one metric that matters is, and what really matters when.

Gettin' Covey With It: #PNSQC Live Blog

This talk is called "7 Habits of Highly Effective Agile", a play on the Stephen Covey book title, hence my silly title. Yes, I'm getting a tad bit punchy right now, but I am lucky in that I get to have an energy boost. Nope, no food or drink or anything like that. I get to have Zeger Van Hese next to me doing his sketch notes.

I've waited years to actually see him do this. How am I going to concentrate on doing a live blog with that level of competition (LOL!)?

Seriously though, "Seven Habits" is a book from the 80s that is based on the idea of moving from dependence to interdependence with a midpoint of independence.  For those wondering about the original Covey Seven Habits, they are:

  1. Be Proactive
  2. Begin With the End in Mind
  3. Put First Things First
  4. Think Win/Win
  5. Seek First to Understand, Then to Be Understood
  6. Synergize
  7. Sharpen the Saw

The Agile Seven Habits break down to:

  1. Focus on Efficiency and Effectiveness (make sure that your processes actually help you achieve these, such as cutting down technical debt, providing necessary documentation, etc.)
  2. Treat the User As Royalty (understand your customer's needs, and what you are providing for them)
  3. Maintain an Improvement Frame of Mind (adapt, develop, experiment, learn, be open, repeat)
  4. Be Agile, Then Do Agile (we have to have internalized and actively live the purpose before we can do the things to get the results. Communicate, collaborate, learn, initiate, respond)
  5. Promote a Shared Understanding (develop a real team culture, break away from roles, stop thinking it's not my job, learn from each other, wear different hats and take on different tasks, pairing as a natural extension, determine what is important)
  6. Think Long Term (develop a sustainable process, pacing that's doable and manageable, making habits that stick and persisting with them)

Well, there are my thoughts, and here are Zeger's. His are much cooler looking ;):

Customer Quality Dashboard: #PNSQC Live Blog

OK, the late night flight arrival into Portland and the early morning registration is catching up to me. I will confess that I'm hunting down Diet Coke and alertness aids of the non-coffee or tea types. If I seem a little hazy or less coherent, well, now you will know why ;).

In any event, it's time for John Ruberto's "How to Create a Customer Quality Dashboard", and he started with a story about losing weight and a low-fat diet he was on for a while. The goal was to get less than 20% of his calories from fat. He soon realized that having a couple of beers and eating pretzels helped make sure that the fat calories consumed were less than 20%. Effective? Well, yes. Accurate? In a way, yes. Helpful? Not really, since the total calories consumed went well above those needed for weight loss, but the fat calories were under 20% all the time ;).

This story helps illustrate that we can measure stuff all day long, but if we aren't measuring the right things in context, we can be 100% successful in reaching our goals and still fail in our overall objective.
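The arithmetic behind the beer-and-pretzels loophole is worth making concrete. Here's a quick sketch (all numbers invented for illustration) of how piling on low-fat calories drives the fat percentage under the 20% goal while the total calorie count balloons:

```python
# Hypothetical daily intake, for illustration only.
fat_calories = 400      # calories from fat in a day's meals
other_calories = 1300   # non-fat calories from those same meals

def fat_pct(fat, total):
    """Percent of total calories that come from fat."""
    return 100.0 * fat / total

total = fat_calories + other_calories
print(round(fat_pct(fat_calories, total), 1))  # ~23.5 -- over the 20% goal

# Add a couple of beers and some pretzels: ~500 nearly fat-free calories.
other_calories += 500
total = fat_calories + other_calories
print(round(fat_pct(fat_calories, total), 1))  # ~18.2 -- goal "met", total calories way up
```

The fat metric improved without the fat intake changing at all, which is exactly the measurement trap the talk is about.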

To create a usable and effective dashboard, we need to set goals that are actually in alignment with the wider organization, focus on what's most important to stakeholders, provide a line of sight from metrics to goals, and build a more comprehensive view of our goals.

Let's put this into the perspective of what might be reported to us from our customers. What metrics might we want to look at? What does the metric tell us about our product? What does the metric not tell us about our product?

Some things we might want to consider:

Process Metrics vs. Outcomes: Think lines of code per review hour vs. defects found per review.
Leading Indicators vs. Lagging Indicators: Think code coverage vs. delivered quality.
Median vs. Average: Median page load vs. average page load. An average can be skewed by outliers.
Direct Measures vs. Derived Measures: Total crashes vs. reported crash codes.
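The median-vs-average distinction is easy to see with a toy data set (the page-load times below are invented): one pathological request drags the average far above what a typical user experiences, while the median barely notices.

```python
from statistics import mean, median

# Hypothetical page-load times in seconds; the 30 s request is one bad outlier.
page_loads = [0.8, 0.9, 1.0, 1.1, 1.2, 30.0]

print(mean(page_loads))    # ~5.83 -- skewed far above the typical experience
print(median(page_loads))  # ~1.05 -- what the "middle" user actually saw
```

If your dashboard only reported the average, you'd conclude pages take almost six seconds to load, when five out of six users saw them in about one.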

There are a lot of potential issues that can cause us problems over time. One is gaming the system, where we set up metrics that we can easily achieve or otherwise configure in a way that is not necessarily supported by reality. See the example of the fat percentages: the total intake could be adjusted so the fat calories stayed below 20%.

Confirmation Bias: A preconceived notion of what things should be, which leads us to see or favor results that support that version of reality.

Survivor Bias: The act of seeing the surviving instances of an aspect or issue as though it's the whole of the group.         

Vanity Metrics: Measuring things that are easily manipulated.


Keep Your Project off the Autopsy Slab: #PNSQC Live Blog

All right, my talk is over, and from what I can tell from the comments back it seems to have gone over well :). Lunch was good and we got to get into the 35th Anniversary cake. The committee also put out cupcakes and I'm pretty sure this had to have been put out there as a joke:

I am happy to hear that no Vegans were harmed in the making of these cupcakes ;).

The first talk of the afternoon is about Pre-Mortems, with the subtitle "Keep Your Project off the Autopsy Slab". To make it simple, there are three basic things to consider in a pre-mortem:

1. Something has failed.
2. You and a group have determined and found the issue.
3. Steps are put in place so that the issue doesn't happen again.

Put simply, projects frequently fail, and they fail for predictable reasons. Sometimes the reasons are event-based: a person quits, the schedule changes, or an integration is needed. Sometimes they're people-related: a skill set is missing, or someone is not dedicated to the project. At times the culture works against us, such as by fostering overt competitiveness or being overly skeptical. Additionally, the goal to get it out there can override the willingness and ability to do the job right. Technology can be a problem, as when external third-party software elements seem great on paper but we end up struggling to use them.

One additional issue that can sabotage a project is a lack of knowledge, specifically with the idea of "unknown unknowns" as in "we don't know it and we don't even know we don't know it".

If we are looking at a pre-mortem as a way to get a feel for where things are likely to go, we first have to identify as many risks as possible, but just as important is determining if, and how, it will be possible to mitigate those risks. Mitigation is an oddball term, in that it's not entirely clear as to what we are doing. Are we minimizing risk? Reducing risk? In general, it's lessening the impact of risk as much as possible.

There are eleven ideas to consider with pre-mortems:

  1. What is the scope?
  2. How do we determine the logistics?
  3. What instructions do you send?
  4. How can you imagine the project failing?
  5. What are the reasons for potential failure (brainstorm)?
  6. Now, let's create a master list (consolidate, de-duplicate and clarify).
  7. What items can we remove from the list due to low impact (filtering)?
  8. What reasons of potential failure could be real risks to the real project?
  9. What does the group consider important? Reorder based on importance (VOTE).
  10. Action Items to mitigate the top risks.
  11. Action items need to actually be done. They are the whole point of the pre-mortem.

The Good, The Bad and the Ho-Hum: #PNSQC Live Blog Continues

One of my favorite parts of coming to PNSQC is the fact that it has a high rate of recidivism. A lot of the attendees have been several times and it's great fun getting back in touch and catching up between sessions. I met my friend Bill here at PNSQC in 2010, and what we find hilarious is that, even though we both live in the Bay Area, it seems we only see each other here at PNSQC ;).

Heather Wilcox leads off the breakout track talks with "Four Years of Scrum: The Good, The Bad and the Ho-Hum". For the record, I currently work in a Kanban shop, but I worked in a Scrum shop from 2011-2012. Prior to that, none of the companies I worked for could be considered Agile (not for lack of trying at the last company I worked for, up through 2010). What I've experienced is that each company does Agile just a little bit differently, and in many cases, they think they are doing Agile when they really aren't.

Heather described how, in the early part of their move to Agile, they were often performing the rituals of Agile but not really sharing those rituals with other teams. Having experienced that in a small way myself, I can say it's not often a recipe for success, especially since you have to explain Agile principles in a non-Agile space. That sounds like it wouldn't be much of a big deal, but you would be surprised.

What Heather described was the way they implemented and refined their practices, and in some cases they determined that they were going to do things that were non-standard, and that they were OK with that. Her team does things that may be considered unorthodox: their teams are somewhat siloed, they use various Agile methodologies, and some teams do formal Scrum while others wing it on a theme of Scrum.

Her direct teams use standups, planning meetings, demos, retros, and chartering, with a focus on "The Goal": respect and preserve the Scrum ceremonies while being as mindful of time as possible. I like that Heather uses the term "Scrum ceremonies" so freely because, in many ways, we don't want to admit that we are somewhat beholden to "rituals". They only demo when they have something interesting to show. Retros are done with the same rules and techniques each time, so the team knows what to expect and can be prepared. As long as it works, that will be what they do. If it stops working, they will change the process, so long as the change gets them closer to the overall goal.

Does Scrum Always Work?

To put it simply, no. When being predictable but not precise is important, or when there are small changes being made that can be accomplished in a sprint or two, it works very well (and that tends to correlate with my own experience). When dealing with a hard stop date, or with a large project that just doesn't seem to break down into smaller pieces, Scrum itself starts to break down. Regardless, her teams have determined that points matter, and that those points are vital to being consistent. Their planning improves considerably when points are adhered to. Points also help them avoid massive stories. In short, Scrum is a technique, a planning approach, a methodology, but it is not gospel and it need not be ironclad. There is a discipline requirement, and it can be uncomfortable, but it allows for flexibility if done correctly.


Just to let you all know, it will be a while until my next update because I'm delivering my talk next :). I will do a "somewhat summary" of that after the conference.