Tuesday, October 11, 2022

Value Streams, Quality Engineering And You: a #PNSQC2022 Live Blog



Wow, it's been an eventful couple of days, but we have reached the end of the formal talk phase of the conference. This is the last talk before the festivities that follow, when we get to go out and have fun in and around Portland. Between moderating talks and being called in to pinch-hit for a session, this has definitely been a busy conference for me. Still, all good things must come to an end, and with that... 

Kaushal Dalvi, UKG






Today we have a new "D-D" to add to our list: Value Stream Driven Development. So what does that actually mean? We already have a vast proliferation of development methodologies, so what does VSDD add to the DD nomenclature? Better yet, what is our value stream to begin with? Basically, our software's availability, robustness, performance, resilience, and security all add to the value stream, and anything that affects any of those aspects can degrade it. Thus, if we are practicing Value Stream Driven Development, we are aiming to make sure that any change, update, or modification effectively adds to the overall value of our offerings. Additionally, as Lean Engineering concepts point out, we also want to eliminate waste wherever we can. 


When we take on a new approach, library, or framework, we can often be enticed by "the new shiny". I get this. Tools are awesome, they are fun, and they are nifty to learn. However, there are costs associated with these tools and changes. We have to ask ourselves what we actually gain by using or implementing these tools, libraries, or changes. Can we articulate effectively what we are doing? Does what we do benefit the entire organization? If not, can we explain why we are doing what we are doing and how those changes will benefit the rest of the organization?


Value is a subjective term. We could say anything that makes us money adds value. We could say anything that saves us time adds value. Additionally, anything that makes our product safer, more resilient, or perform better could be interpreted as adding value. Also, what may be seen as valuable to one part of the organization may be seen as less valuable to another part. What is valuable to the organization may be negligible to the customer or even detrimental. Thus value is context-dependent. 


The Lean principles fall into these five areas:


- Specify value from the standpoint of the end customer by product family.

- Identify all the steps in the value stream for each product family, eliminating whenever possible those steps that do not create value.

- Make the value-creating steps occur in a tight sequence so the product will flow smoothly toward the customer.

- As flow is introduced, let customers pull value from the next upstream activity.

- As value is specified, value streams are identified, wasted steps are removed, and flow and pull are introduced, repeat this process again and continue it until a state of perfection is reached in which perfect value is created with no waste.


(Womack and Jones, 1996)


This is a great reminder to help us focus on making sure we make the main thing... "the main thing". By making sure our efforts specifically target value-add, we are better able to implement the five Lean principles and make them meaningful and actionable. 
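Kaushal's talk wasn't a coding talk, but the second principle (identify the steps, eliminate the ones that don't create value) maps nicely onto a toy value-stream map. Here's a minimal Python sketch, with step names and hours that are entirely made up on my part, that computes flow efficiency (value-add time over total lead time) and flags the waste candidates:

```python
from dataclasses import dataclass

@dataclass
class Step:
    name: str
    hours: float      # lead time spent in this step
    value_add: bool   # does this step create value the customer cares about?

# A hypothetical delivery pipeline, mapped as a value stream.
stream = [
    Step("write feature", 16, True),
    Step("wait for code review", 24, False),
    Step("code review", 2, True),
    Step("wait for test environment", 48, False),
    Step("test and harden", 8, True),
    Step("deploy", 1, True),
]

total = sum(s.hours for s in stream)
value_add = sum(s.hours for s in stream if s.value_add)

print(f"Flow efficiency: {value_add / total:.0%}")
for s in stream:
    if not s.value_add:
        print(f"Waste candidate: {s.name} ({s.hours}h)")
```

Even in a toy like this, the waiting states dwarf the working states, which is usually where the waste hides.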


Software Quality As It Relates To Data: a #PNSQC2022 Live Blog

Well, sorry I've been quiet... I was asked to give an impromptu conference talk since the scheduled speaker couldn't attend. Fortunately, I had a number of talks downloaded to my laptop, so I was able to pick one from a few years back. It wasn't new material, but hey, I had it :). So yeah, here's something any and all conference speakers should consider... keep an archive of your talks available on your system or quickly retrievable from the cloud. You never know when you might be asked to give a talk on short notice.

Natasha Nicolai




Back to today's other festivities (woo!)... 

How much thought do we give to Data Management and Security? What happens to our data as we are trying to perform workflows? Where does our data go on its journey? At what point is our data standing in the line of fire or in a position to be compromised, stolen, or tainted?

Natasha Nicolai is discussing ways in which we can better manage and maintain our data and how that data is accessed, modified, deleted, and secured in the process of us doing our work. 

Odds are that most organizations at this point are not using a monolithic data model, where everything lives in one place, creating a single point of failure where one exploited vector could bring the whole system down or compromise all of the data.

I'm somewhat familiar with this by virtue of frequently testing data transformations. Most of these transformations are performed on actual live customer data. That means I have to be exceptionally careful with it: I have to make sure it cannot fall into the wrong hands, and I also have to make sure that none of the interactions I perform will corrupt or modify it.
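One tactic I'll mention here (mine, not necessarily Natasha's) is to pseudonymize sensitive fields before live data ever reaches a test environment, so the shape of the data survives but the identities don't. A rough Python sketch, with hypothetical field names:

```python
import hashlib

SENSITIVE_FIELDS = {"name", "email", "ssn"}  # hypothetical field names

def pseudonymize(record: dict, salt: str) -> dict:
    """Replace sensitive values with stable, irreversible tokens so data
    transformations can still be tested against realistically shaped data."""
    masked = {}
    for key, value in record.items():
        if key in SENSITIVE_FIELDS:
            digest = hashlib.sha256((salt + str(value)).encode()).hexdigest()
            masked[key] = digest[:12]  # stable token, not reversible
        else:
            masked[key] = value
    return masked

customer = {"name": "Jane Doe", "email": "jane@example.com", "plan": "pro"}
print(pseudonymize(customer, salt="per-environment-secret"))
```

The same input always maps to the same token, so joins and deduplication in the transformations still behave, but a leaked test database tells an attacker nothing.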

Natasha is sharing a variety of strategies to make sure data stays protected in production environments, and specifically in cloud environments like AWS. She makes the case that the data that flows through our apps, and whatever is made visible, should be gated by explicit permissions to do exactly that. She refers to these steps and gates as "data pillars": we allow visibility to just those who need to see the data and hide/protect it from everyone who does not. The idea of "data lakes" is, again, a way to maintain data integrity but also to store data and pack it away so it cannot be accessed when it isn't meant to be.
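To make the least-privilege idea concrete: the snippet below is my own assumption of how this often lands in AWS, not anything Natasha showed. It uses boto3 to block all public access to an S3-backed data lake and then grant read access to a single (hypothetical) analytics role; everyone else is implicitly denied:

```python
import json
import boto3

s3 = boto3.client("s3")
BUCKET = "example-data-lake"  # hypothetical bucket name

# Deny public access outright, regardless of any object ACLs.
s3.put_public_access_block(
    Bucket=BUCKET,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# Allow reads only to one analytics role (hypothetical account/role ARN).
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::123456789012:role/analytics-reader"},
        "Action": ["s3:GetObject"],
        "Resource": f"arn:aws:s3:::{BUCKET}/*",
    }],
}
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```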

There's a lot here that, I must confess, I have limited exposure to, but I'd definitely be interested in learning more about these data security options.


Digitizing Testers: A #PNSQC2022 Live Blog with @jarbon


I must confess, I usually smile any time I see that Jason Arbon is speaking. I may not always agree with him but I appreciate his audacity ;). 

I mean, seriously, when you see this in a tweet:

I’m sharing perhaps the craziest idea in software testing this coming Tuesday. Join us virtually, and peek at something almost embarrassingly ambitious along with several other AI testing presentations.


You know you're going to be in for a good time.

Jason Arbon 





I'm going to borrow this initial pitch verbatim:

Not everyone can be an expert in everything. Some testers are experts in a specific aspect of testing, while other testers claim to be experts. Wouldn’t it be great if the testing expert who focuses on address fields at FedEx could test your application’s address fields?  So many people attend Tariq King’s microservices and API testing tutorials–wouldn’t it be great if a virtual Tariq could test your application’s API? Jason Arbon explores a future where great testing experts are ultimately digitized and unleashed will test the world’s apps–your apps.  

Feeling a little "what the...?!!"? That's the point. Why do we come to conferences? Typically, it's to learn things from people who know a thing or three more than we do. Of course, while we may be inspired to learn something or to dig deeper, odds are we are not going to develop the same level of expertise as, say, Tariq King when it comes to using AI and ML in testing. For that matter, maybe people look at me and see "The Accessibility and Inclusive Design Expert" (yikes!!! if that's the case, but thank you for the compliment). Still, here's the point Jason is trying to make... what if, instead of learning from me about Accessibility and Inclusive Design, *I* did your Accessibility and Inclusive Design testing? Granted, if I were a consultant in that space, maybe I could do that. However, I couldn't do that for everyone... or could I?

What if... WHAT IF... all of my writings, my presentations, my methodologies, and my approaches were gathered, analyzed, and applied to some kind of business logic and data model construction? Then, by calling on all of that, you could effectively plug in all of my experience to actually test your site for Accessibility and Inclusive Design. In short, what if you could purchase "The Michael Larsen AID" testing bot and plug me into your testing scripts? Bonkers, right?! Well... here's the thing. Once upon a time, if someone had told me that I could effectively own a Mesa Boogie Triple Rectifier tube amp and a pair of Mesa 4x12 cabinets loaded with Celestion Vintage 30s, select them as virtual instruments with impulse responses, and get a sound indistinguishable from the real thing, I'd have called it impossible. Ten years ago, it was. Today? Through Amplitube 5, I literally own that setup, and it works stunningly well.

Arguably, taking what I've written about Accessibility and Inclusive Design and compartmentalizing it as a "testing persona" is probably a lot easier than creating a virtual tube amp. I'm not saying the results would be an exact replica of what I do when I test... but I think the virtual version of me could reliably be called upon to do what I've said I do, or at least what I espouse when I speak. Do you like my overall philosophy? Then maybe the core of it could be written into logic so that my overall philosophy could be applied to your application.
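To make the thought experiment a little concrete, here's the flavor of what "written into logic" might look like at its absolute most trivial: a couple of heuristics encoded as rules (standard library only; the rules are deliberately oversimplified and entirely my own sketch). Real accessibility testing goes far beyond static checks like these, which is exactly the point I make next:

```python
from html.parser import HTMLParser

class AccessibilityBot(HTMLParser):
    """A cartoonishly simplified "tester persona": two of the many
    heuristics an accessibility advocate might encode as rules."""

    def __init__(self):
        super().__init__()
        self.findings = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and not attrs.get("alt"):
            self.findings.append("img without alt text")
        if tag == "input" and not (attrs.get("aria-label") or attrs.get("id")):
            self.findings.append("input with no hook for a label")

bot = AccessibilityBot()
bot.feed('<img src="logo.png"><input type="text">')
print(bot.findings)  # ['img without alt text', 'input with no hook for a label']
```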

I confess the idea of loading up the "Michael Larsen AID" widget cracks me up a bit. Sure, for it to be effective, I could run in the background, look at stuff, and give you a yes/no report. However, that skips over a lot of what I hope I'm actually bringing to the table. When I talk about Accessibility and Inclusive Design, only a small part of it is my raw testing effort. Sure, that's there, and I know stuff, but what I think makes me who and what I am is my advocacy: my frenetic energy in getting into people's faces and advocating about these issues. Me testing is a dime a dozen. Me advocating, and explaining the pros and cons of why your pass might actually be a fail, is where I can really be of benefit. Sure, I could work in the background, but I'd rather be present, like the Doctor as we remember him on Star Trek: Voyager.

Thanks, Jason. This is a fun and out-there thought experiment. I must confess the thought of buying me as a "Virtual Instrument" both cracks me up and intrigues me. I'm really curious to see if something like this could come to be. Still, I think you may be able to encapsulate and abstract my core knowledge base, but I'd be surprised if you could capture my advocacy. If you want to try, I'm game to see if it could be done ;).

Does Low Code Mean Low Testing? A #PNSQC2022 Live Blog



There has been an increase in software development and deployment options referred to as "low code" or "no code". What this usually means is that the development tools in question have created systems and abstractions that either hide the code or minimize the amount of new code that needs to be written, relying instead on built-in methods and implementations. Intriguing, but does that methodology limit our ability to test or interact with these systems?

Jan Jaap Cannegieter









Jan Jaap Cannegieter argues that many of these systems do have benefits and offer ways of seeing how a system is pieced together. By dragging and dropping elements that have already been constructed and implemented, lots of reusable pieces can be put together more like Lego blocks than by writing individual code blocks and methods. The idea of reuse and repurposing is not new; animation studios have been doing this for decades. Heck, Hanna-Barbera was famous for reusing whole blocks of animation and repurposing them in different scenes (ever notice that The Brady Kids and The Archies have the exact same movements when they perform their "songs" on their respective cartoons? ;) ). 

The idea and benefit of low-code platforms is that they have four layers: processes, screen flows, business logic, and data model. The top two are likely the easiest to plug together, while the bottom two are probably the most challenging to implement and make reusable and pluggable. However, I would assume that if the business logic and data model were "understood", then everything would plug together more easily. I can't help but ask... who tests the business logic and the data model? How do we know they are correct? With code, we can dig in and figure out whether the implementation is effective, whether we are missing key areas, or whether there are areas someone can exploit. If these are abstracted away, I'd argue those areas become harder to test, because we cannot verify (or would have difficulty verifying) what the business logic actually is. That's not to say we can't test the logic gates, but I feel we leave a lot on the table that doesn't get properly examined.
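Suppose, for example, that a low-code app publishes a discount rule behind a REST endpoint. None of this is a real product's API; it's a hypothetical sketch of how I'd probe the business-logic layer from the outside when the implementation itself is hidden:

```python
import requests  # third-party: pip install requests

BASE_URL = "https://lowcode.example.com/api"  # hypothetical endpoint

def quote_discount(order_total: float) -> float:
    """Ask the platform's business-logic layer to price an order."""
    resp = requests.post(f"{BASE_URL}/discount", json={"total": order_total})
    resp.raise_for_status()
    return resp.json()["discount"]

# Black-box checks against the rule we *think* is configured:
# 10% off orders of 100.00 or more, nothing below, boundaries included.
def test_discount_rule():
    assert quote_discount(99.99) == 0.0
    assert quote_discount(100.00) == 10.00
    assert quote_discount(250.00) == 25.00
```

I can confirm the observable behavior this way, but notice what I still can't do: see whether the rule is implemented sensibly, or what else it touches. That's the part I worry gets left on the table.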

Anyway, I picked this talk because I specifically do not have a lot of experience with this topic or these tools. Does my skepticism hold up to scrutiny? I honestly do not know, but I'm curious to explore more of these options so I can see if I'm right or wrong. Let's just say the jury's out at the moment, but I freely confess my biases and doubts ;).