Tuesday, October 9, 2018

Talking About Quality - a #PNSQC Live Blog

Kathleen Iberle is covering a topic dear to my heart at the moment. As I'm going through an acquisition digestion (the second one: my little company was acquired, and that acquiring company has recently been acquired itself), I am discovering that the words we are used to and the way we defined quality are not necessarily in line with how the new acquiring company defines them. Please note, that's not a criticism; it's a reality, and I'm sure lots of organizations face the same thing.

In many ways, there are lots of conversations we could be having, and at times there are implicit requirements that we are not even aware we have. I consider that an outside dependency: if I don't know I need to do something, I can't do it until someone gives me the knowledge I need to act on it. There's a flip side to this as well: the implicit requirement where there's something I do all the time, so often that I don't even think about it, but someone else has to replicate what I am doing. If I can't communicate that there is a step they need to do, because I jump through it so fast that I never write it down, can I really be mad when someone doesn't know how to do that step?

Many of us are familiar with the idea of SMART goals: Specific, Measurable, Achievable, Relevant, and Time-based. This philosophy also helps us communicate requirements and needs for products. Taking the time to make sure our goals and our stories measure up to the SMART model is a good investment.
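As a rough illustration (this is my own sketch, not something from the talk), you can make the SMART criteria explicit by turning a story into a structured record and checking that every field is actually filled in. The field names and the `Story`/`is_smart` helpers here are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Story:
    """A user story annotated with SMART-goal fields (illustrative names)."""
    specific: str      # what exactly changes, e.g. "login page shows an error on bad password"
    measurable: str    # how we verify it, e.g. "error text visible within 1 second"
    achievable: bool   # the team has agreed it fits in the iteration
    relevant: str      # which product goal it serves
    time_based: str    # target sprint or deadline

def is_smart(story: Story) -> bool:
    """A story passes only when every SMART field is filled in (and 'achievable' affirmed)."""
    return all([story.specific, story.measurable, story.achievable,
                story.relevant, story.time_based])
```

A story with an empty `measurable` field fails the check, which is exactly the kind of gap that otherwise surfaces much later as an argument about whether the work is done.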

An interesting distinction that Kathleen is making is the difference between a defect and technical debt. A defect is a failure of quality outside the organization (i.e. the customer sees the issue). Technical debt is a failure of quality internal to the team (which may become a defect if it gets out into the wild).

An approach that hearkens back to older waterfall testing models (think the classic V model) is the idea of each phase of development and testing having a specific gate in the process. Those gates tend either to be ignored (or used only at the very end of the process), or to be given too much attention out of scope or context for the phase in question. Breaking stories up into smaller atomic elements can help improve this, because the time from initial code to delivery can be very short. Using terms like "Acceptance Test," "Definition of Done," "Spike," and "Standard Practice" can help us nail down what we are looking at and when. I have often used the term "Consistency" when looking at possible issues or trouble areas.
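One way to make a gate like "Acceptance Test" concrete is to express the story's acceptance criteria as executable tests, so the gate is checked every time the code runs rather than once at the end. This is a minimal sketch of my own; the `checkout` function, its discount rule, and the test names are all hypothetical:

```python
from typing import Optional

def checkout(cart_total: float, discount_code: Optional[str] = None) -> float:
    """Hypothetical production code: the story says 'SAVE10' takes 10% off."""
    if discount_code == "SAVE10":
        return round(cart_total * 0.9, 2)
    return cart_total

# The acceptance tests ARE the gate: the story isn't "done" until these pass.
def test_acceptance_discount_applied():
    # Acceptance criterion from the story: SAVE10 reduces the total by 10%.
    assert checkout(100.00, "SAVE10") == 90.00

def test_acceptance_no_discount_by_default():
    # Acceptance criterion: with no code, the customer pays full price.
    assert checkout(100.00) == 100.00
```

Because each small story carries its own acceptance tests, the gate travels with the work instead of piling up at the end of a long phase.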

Spikes are valuable opportunities to gain information and to determine whether there is an objective way to measure the quality of an element or a process. They are also a great way to figure out whether we need to tool up or build skills we don't yet have in order to be effective.
