So this is fun. Lalit and I have known each other for years. We have attended conferences together. I've been interviewed by Lalit for Tea Time With Testers. We've worked together within the Association for Software Testing on a variety of initiatives. Having said all that, I think this is the first time I've actually seen/heard Lalit speak :).
It's interesting to realize that with all of the technological advances we have had over the past thirty years (probably longer, but I've only been in the game for three decades), we still have to pick from the speed, cost, and quality triangle (you know it: "Fast, Cheap, Good. Pick two!"). If any shift is going to happen, it typically happens at the "Good" corner, meaning that when pressure comes into the situation, the quality side is the one that ends up bending. Granted, often that means we get "good enough," and for many people, that is sufficient.
The irony is that we don't have to settle for "good enough," but avoiding that compromise requires upfront planning and allocated resources to make sure that quality is reinforced. It comes down to people being motivated to provide not just good testing but a mindset that values testing beyond the busywork of running automation and declaring that testing has been performed.
We had a little discussion about apps we actually like using. Recently, due to life circumstances, I've become more familiar with healthcare apps. My company's insurance provider has gone big on telemedicine recently, and to that extent, it seems they have either found out I'm a tester or I just tick off a lot of boxes for them, because they have thrown just about every app and tool my direction to manage healthcare and treatment options from a digital perspective. Some of these apps have been really helpful, and some of them have been... less than desirable, to say the least. To be clear, these apps are not developed by the insurance provider; they are either partnerships or investments that my insurance provider has made and encourages us to use. It's interesting to compare them and see what makes some of them "quality" products versus not-so-good quality. Also, some apps have great quality in some areas while being less good in others.

A perfect example of this is an app I'm involved with that focuses on weight management, specifically for people who are at risk for Type 2 Diabetes (which family history points to me being, so I'm part of their initiative for that reason). In areas where real human interaction takes place, it's great. However, they have recently made decisions to limit website updates and to manage almost exclusively via their mobile app. In one way, this makes sense, as we are more likely to be within arm's reach of our phones at meal breaks than our computers. Still, I type way faster on a computer than I do on my phone, so invariably, their change has resulted in my updates being delayed, sometimes by days, because it's less convenient for me to enter the details. I'm curious if anyone on their team even brought up this possibility.
Lalit emphasized three "P" areas of quality consideration: a Project aspect, a People aspect, and a Product aspect. He additionally emphasized the four "E"s of quality: Enable, Engage, Execute, and Evaluate, with the 4Es applying to each of the 3Ps. Granted, each of these elements has a context based on where it is applied, and there are biases that come into play. Lalit illustrated this with the "Parable of the Elephant," where our limitations often constrain our vision and our view of any one aspect of something (I had a good laugh realizing that Lalit chose Dieter F. Uchtdorf's telling of this story. It took me a minute, but I kept thinking, "wait, I know that voice" (LOL!)).
Over time, we need to be open to learning new things, incorporating more knowledge about requirements, and translating that into concrete actions we can use. As testers, we may be doing a lot of work and yet not actually be aligned with the real business issues or challenges. As I love to borrow from Stephen Covey, how infuriating is it to realize we've climbed a significantly tall ladder only to find we've placed it against the wrong wall?
We know that we cannot engineer quality, at least not in a literal sense. What we can do is preserve as much of the product's intended integrity as possible and take steps to make sure we are learning and focusing on the areas that help us create the best product we can. To that end, quality experience sessions can help inform how a product is being used and what can be done going forward. Ideally, these considerations are made early in the life of the product, or as it is being developed. For that to be effective, it requires people with a focus on quality to ask questions and experiment with requirements early on. The later this happens, the less value it will have for the design, and it can be very demoralizing to realize the "right ladder, wrong wall" problem is happening after we've climbed quite a bit.
To wrap this up, if you need a simple thing to consider and practice, "test early, test small, and test continuously" is a pretty good approach. Apply it to all of the areas you interact with, and if you find it valuable, share the approach and help it spread through the organization.