
Wednesday, October 16, 2024

What Are We Thinking — in the Age of AI? with Michael Bolton (a PNSQC Live Blog)

In November 2022, the release of ChatGPT brought the world of the Large Language Model (LLM) to prominence almost overnight. With its uncanny ability to generate human-like text, it quickly led to lofty promises and predictions. The capabilities of AI seemed limitless—at least according to the hype.

In May 2024, the release of GPT-4o further fueled both excitement and skepticism. Some hailed it as the next leap toward an AI-driven utopia. Others, particularly those in the research and software development communities, took a more measured view. The gap between magical claims and the real-world limitations of AI was becoming clearer.

In his keynote, "What Are We Thinking — in the Age of AI?", Michael Bolton challenges us to reflect on the role of AI in our work, our businesses, and society at large. He invites us to critically assess not just the technology itself, but the hype surrounding it and the beliefs we hold about it.

From the moment ChatGPT debuted, AI has been the subject of immense fascination and speculation. On one hand, we've heard the promises of AI revolutionizing software development, streamlining workflows, and automating complex processes. On the other hand, there have been dire warnings about AI posing an existential threat to jobs, particularly in fields like software testing and development.

Those of us in the testing community may feel weirdly called out. AI tools that can generate code, write test cases, or even perform automated testing tasks raise a fundamental question: Will AI replace testers?

Michael’s being nuanced here. While AI is powerful, it is not infallible. Instead of replacing testers, AI presents an opportunity for testers to elevate their roles. AI may assist in certain tasks, but it cannot replace the critical thinking, problem-solving, and creativity that human testers bring to the table.

One of the most compelling points Bolton makes is that testing isn't just about tools and automation—it's about mindset. Those who fall prey to the hype of AI without thoroughly understanding its limitations risk being blindsided by its flaws. The early testing of models like GPT-3 and GPT-4o revealed significant issues, from hallucinations (where AI generates false information) to biases baked into the data the models were trained on.

Bolton highlights that while these problems were reported early on, they were often dismissed or ignored by the broader community in the rush to embrace AI's potential. But as we've seen with the steady stream of problem reports that followed, these issues couldn't be swept under the rug forever. The lesson? Critical thinking and skepticism are essential in the age of AI. Those who ask tough questions, test the claims, and remain grounded in reality will be far better equipped to navigate the future than those who blindly follow the hype.

We should consider our relationship with technology. As AI continues to advance, it’s easy to become seduced by the idea that technology can solve all of our problems. Michael instead encourages us to examine our beliefs about AI and technology in greater depth and breadth.

- Are we relying on AI to do work that should be done by humans?
- Are we putting too much trust in systems that are inherently flawed?
- Are we, in our rush to innovate, sacrificing quality and safety?

Critical thinking, and actually practicing it, is more relevant than ever. As we explore the possibilities AI offers, we must remain alert to the risks. This is not just about preventing bugs in software—it's about safeguarding the future of technology and ensuring that we use AI in ways that are ethical, responsible, and aligned with human values.

Ultimately, testers have a vital role in this new world of AI-driven development. Testers are not just there to check that software functions as expected; this is our time to step up and be the clarions we claim we are. We are the guardians of quality, the ones who ask "What if?" and probe the system for hidden flaws. In the age of AI, we need to be and do this more than ever.

Michael posits that AI may assist with repetitive tasks, but it cannot match the intuition, curiosity, and insight that human testers bring to the job.

It’s still unclear what the AI future will hold. Will we find ourselves in an AI-enhanced world of efficiency and innovation? Will our optimism give way to a more cautious approach? We don't know, but to be sure, those who practice critical thinking, explore risks, and test systems rigorously will have a genuine advantage.

The Test Automation Blueprint: A Case Study for Transforming Software Quality with Jeff Van Fleet (a PNSQC Live Blog)

Today, delivering high-quality software at speed isn't just a goal, it's a necessity. Whether you are a small Agile team or a huge corporation, creating a streamlined, efficient testing process can dramatically reduce costs and accelerate time to market. But how do you actually achieve that transformation? Jeff Van Fleet, President and CEO of Lighthouse Technologies, goes into depth with some practical tips and proven principles to guide organizations toward effective test automation.

One of the most important steps in transforming your organization’s approach to test automation is engaging your leadership team. Test automation initiatives often require significant investment in tools, training, and process changes—investments that can only happen with leadership support. Jeff highlights the importance of showing clear ROI by presenting leaders with real-time reporting dashboards that demonstrate how automation accelerates delivery and improves quality.

These dashboards provide visibility into the success of the test automation effort, making it easy for leadership to see the value in continuing to invest. Data-driven views and knowledge keep leadership engaged and committed to long-term quality improvement.

It's a big leap from manual testing to automation. I know, I've been there! Many manual testers may feel apprehensive about making that transition. However, Jeff emphasizes that with the right training and support, manual testers can successfully transition to automation and get fully involved in the new process. Lighthouse Technologies focuses on equipping testers with the tools, skills, and confidence to tackle automation.

We have to approach this training with empathy and patience. Many manual testers bring invaluable domain expertise, which, when combined with automation skills, can significantly enhance the quality of the testing process. Investing in your existing team, instead of sidelining them, can transform teams and build a strong, motivated automation workforce.

We've pushed the idea of shift-left testing for a while now. Many organizations are eager to adopt it, but few know how to implement it effectively. Moving testing earlier in the development cycle helps catch bugs before they snowball into more complex, costly issues.

By collaborating closely with developers to improve unit testing, teams can identify and address defects at the code level, long before they reach production. 

One of the challenges teams face is trying to implement automation while managing in-flight releases. Jeff offers practical strategies for balancing catch-up automation (automating legacy systems or current processes) with ongoing development work. His advice: start small, automate critical paths first, and build incrementally. This allows teams to gradually integrate automation without derailing existing release schedules.

Engaging with developers is another critical component of successful test automation. Often, there’s a disconnect between QA and development teams, but Lighthouse Technologies’ approach bridges that gap by partnering closely with developers throughout the testing process. By working together, developers and testers can create more effective test cases, improve unit test coverage, and ensure that automated tests are integrated seamlessly into the CI/CD pipeline.

For organizations looking to embrace test automation, the key takeaway is that it’s not just about tools—it’s about people, processes, and leadership. By following these principles, teams can accelerate their test automation efforts and create a culture of quality that drives both speed and innovation.

When Humans Tested Software (AI First Testing) with Jason Arbon (a PNSQC Live Blog)

Are we at the edge of a new era in software development—an era driven by Generative AI? Will AI fundamentally change the way software is created? As GenAI begins to generate code autonomously, with no developers in the loop, how will we test all this code?

That's a lot of bold questions, and if I have learned anything about Jason Arbon over the years, bold is an excellent description of him. To that end, Jason suggests a landscape where AI is set to generate 10 times more code at 10 times the speed, with a 100-fold increase in the software that will need to be tested. The truth is that our traditional human-based testing approaches simply won't scale to meet this challenge.

Just like human-created code, AI-generated code is not immune to bugs. As GenAI continues to evolve, the sheer volume of code it produces will surpass anything we’ve seen before.  Think about it: if AI can generate 10 times more code, that’s not just a productivity boost—it’s a tidal wave of new code that will need to be tested for reliability, functionality, and security. This surge is not just a matter of speed; it’s a "complexity crisis". Modern software systems, like Amazon.com, are far too intricate to be tested by human hands alone. According to Jason, AI-generated code will require AI-based testing. Not just because it’s faster, but because it’s the only solution capable of scaling to match this growth.

The current approach to software testing already struggles to keep pace with traditional development cycles. In the future, with the explosion of AI-generated code, human-based testing methods will fall short unless we somehow increase the number of software testers tenfold (I'm skeptical of that happening). Manual testing absolutely will not keep up, and automated testing as we know it today won't scale to the increasing volume and complexity of AI-generated systems.

What’s more, while GenAI can generate unit tests, it can’t test larger, more complex systems. Sure, it can handle individual components, but it stumbles when it comes to testing entire systems, especially those with many interdependencies. Complex applications, like enterprise-level platforms or global e-commerce sites, don’t fit neatly into a context window for GenAI to analyze. This is where Jason says the need for AI-based testing becomes critical.

The future isn't just about AI generating code—it's about AI testing that AI-generated code. According to Jason, AI-based testing is the key to addressing the 100X increase in software complexity and volume. Only AI has the ability to scale testing efforts to match the speed and output of Generative AI.

AI-first testing systems should be designed to:

- Automate complex testing scenarios that would be impossible for traditional methods to cover efficiently.
- Understand and learn from system behaviors, analyzing patterns and predicting potential failures in ways that humans or current automated tools cannot.
- Adapt and evolve, much like the AI that generates code, enabling continuous testing in real time as software systems grow and change.

As Jason points out, AI is not a fad or a trend; it's the only way forward. As we move into an era where Generative AI produces vast amounts of code at breakneck speed, AI-based testing will be how we help ensure that the software we create tomorrow is reliable, functional, and secure.

Tuesday, October 15, 2024

Humanizing AI with Tariq King (a PNSQC Live Blog)

I've always found Tariq's talks to be fascinating and profound, and this time around we're going into some wild territory.

AI is evolving, and with each new development, it’s becoming more "human". It’s not just about executing tasks or analyzing data—it’s about how AI communicates, adapts, and even imitates.

So AI is becoming human... in how it communicates. That's a big statement but with that qualifier, it is more understandable. AI is no longer a cold, mechanical presence in our lives. Today’s AI can respond based on context, understanding the tone of requests and adjusting replies accordingly. It can mimic human conversation, match our language, and create interactions that feel amazingly real. Whether you’re chatting with a customer service bot or getting personalized recommendations, AI can engage with us in ways that were once the domain of humans alone.

Okay, so if we are willing to say that AI is "becoming human", how should we shape these interactions? What should the boundaries be for AI communication, and how do we ensure it serves us, rather than replaces us?

Beyond just communication, AI is showing remarkable creativity. AI can now write stories, compose music, and generate art, ranging from wild and weird to quite stunning (I've played around with these tools for several years, and having personally watched these capabilities develop, they have indeed become formidable and impressive). What once seemed like the exclusive realm of human creativity is now being shared with machines. AI is no longer just a tool—it's being used as a collaborator that can generate solutions and creative works that blur the line between human and machine-generated content.

Tariq points out that this raises some significant questions. Who owns AI output? How do we credit or cite AI authorship? How do we confirm the originality of works? Perhaps more to the point, as AI generates content, what is the human role in the creative process? And how do we ensure that the human element remains at the forefront of innovation?

AI is getting better and better at convincingly imitating humans. But there's a caveat: AI is prone to hallucinations, meaning it can produce plausible and relatable material that feels right for the most part but may be wrong (and often is wrong). I have likened this in conversations to having what I call the "tin foil moment". If you have ever eaten a food truck burrito (or any burrito to go, really) you are familiar with the foil wrapping. That foil wrapping can sometimes get tucked into the folds and rolls of the burrito. Occasionally, we bite into that tin foil piece and once we do, oh do we recognize that we have done that (sometimes with great grimacing and displeasure). Thus, when I am reading AI-generated content, much of the time I have that "tin foil" moment, and that takes me out of believing it is human (and often stops me from being willing to read what follows, sadly).

The challenge here is not just humanization; it's critical oversight, so that AI does what we want it to do and doesn't go off the rails. How do we prevent AI from spreading misinformation? And how can we design systems that help us discern fact from fiction in a world where AI-generated content is increasingly common?

Okay, so we are humanizing AI... which raises a question: "Is this something we will appreciate, or is it something that we will fear?" I'm on the fence a bit. I find a lot of the technology fascinating, but I am also aware that humanity is subject to avarice and mendacity. Do we want AI to be subject to it as well, or worse, actively practice it? What unintended consequences might we see or incur?

For some of you out there, you may already be thinking of an abstract idea called "AI Governance": the act of putting guardrails and safety precautions around AI models so that they perform as we want them to. This means setting clear ethical guidelines, establishing robust oversight mechanisms, and working to ensure that AI is used in ways that benefit society. More to the point, we need to continuously monitor and work with AI to help ensure that the data it works with is clean, well-structured, and not poisoned. That is a never-ending process, and one we have to be diligent and mindful of if we wish to be successful with it.

Make no mistake, AI will continue to evolve. To that end, we should approach it with both excitement and caution. AI's ability to communicate, create, and imitate like humans presents incredible opportunities, but it also brings significant challenges. Whether AI becomes an ally or a threat depends on how we manage its "humanization".

AI-Augmented Testing: How Generative AI and Prompt Engineering Turn Testers into Superheroes, Not Replace Them with Jonathon Wright (a PNSQC Live Blog)

It's sad that Jonathon couldn't be here this year, as I had a great time talking with him last year, but since he was presenting remotely, I could still hear him speak on what is honestly the most fun title of the entire event (well played, Jonathon, well played ;) ).

It would certainly be neat if AI were able to enhance our testing prowess, helping us find bugs in the most unexpected places and create comprehensive test cases that could cover every conceivable scenario (editor's note: you all know how I feel about test cases, but be that as it may, many places value and mandate them, so I don't begrudge this attitude at all).

Jonathon is calling for us to recognize and use "AI-augmented testing" where AI doesn't replace testers but instead amplifies their capabilities and creativity. Prompt engineering can elevate the role of testers from routine task-doers to strategic innovators. Rather than simply executing tests, testers become problem solvers, equipped with "AI companions" that help them work smarter, faster, and more creatively (I'm sorry but I'm getting a "Chobits" flashback with that pronouncement. If you don't get that, no worries. If you do get that, you're welcome/I'm sorry ;) (LOL!) ).

The whole goal of AI-augmented testing is to elevate the role of testers. Testers are often tasked with running manual or automated tests, getting bogged down in repetitive tasks that demand "attention to detail" but do not allow much creativity or strategic thinking. The goal of AI is to "automate the routine stuff" and "allow testers to focus on more complex challenges" ("Stop me! Oh! Oh! Oh! Stop me... Stop me if you think that you've heard this one before!") No disrespect to Jonathon whatsoever; it's just that this has been the promise for 30+ years (and no, I'm not going to start singing When In Rome to you, but if that earworm is in your head now.... mwa ha ha ha ha ;) ).

AI-augmented testing is supposed to enable testers to become strategic partners within development teams, contributing not merely bug detection but actual problem-solving and quality improvement. With AI handling repetitive tasks, testers can shift their attention to more creative aspects of testing, such as designing unique test scenarios, exploring edge cases, and ensuring comprehensive coverage across diverse environments. This shift is meant to enhance the value that testers bring to the table and make their roles more dynamic and fulfilling. Again, this has been a promise for many years; maybe there's some headway here.

The point is that testers who want to harness the power of AI will need a roadmap for mastering AI-driven technologies. There are many of them out there, in a plethora of implementations from LLMs to dedicated testing tools. No tester will ever master them all, but even if you only have access to an LLM system like ChatGPT, there is a lot that can be done with prompt engineering and harnessing the output of these LLM systems. They are of course not perfect, but they are getting better all the time. AI can process vast amounts of data, analyze patterns, and predict potential points of failure, but it still requires humans to interpret results, make informed decisions, and steer the testing process in the right direction. Testers who embrace AI-augmented testing will find themselves better equipped to tackle the challenges of modern software development. In short, AI will not take your job... but a tester who is well-versed in AI just might.

This brings us to prompt engineering: the craft of writing precise, well-designed prompts that guide generative AI to perform specific testing tasks. Mastering prompt engineering allows testers to customize AI outputs to their exact needs, unlocking new dimensions of creativity in testing.

So what can we do with prompt engineering? We can use it to... (a quick code sketch follows this list)

- instruct AI to generate test cases for edge conditions
- simulate rare user behaviors
- explore vulnerabilities in ways that would be difficult or time-consuming to code manually
- validate AI outputs to ensure that generated tests align with real-world needs and requirements
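
As a concrete (and very much hedged) illustration of the kind of prompt being described, here is a minimal Python sketch that asks an LLM to propose edge-case tests for a login form. It assumes the OpenAI Python client with an API key in the environment; the prompt wording and the model name are my own placeholders, not anything from the talk.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A testing-focused prompt: give the model a role, a target feature, and
# explicit instructions to focus on edge conditions and rare behaviors.
prompt = """You are an experienced software tester.
Feature under test: a login form with email and password fields and a 'remember me' checkbox.
List 10 test cases that focus on edge conditions, rare user behaviors, and potential
vulnerabilities. For each, give a short title, the steps, and the expected result."""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)

# The output is raw material, not finished work; a human still has to review it.
print(response.choices[0].message.content)
```

That last comment is the important part: the model's suggestions are input for a tester to review, prune, and adapt, not test cases to run blindly.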

Okay, so AI can act as a trusted companion—an ally helping testers do their jobs more effectively, without replacing the uniquely human elements of critical thinking and problem-solving. Wright’s presentation provides testers with actionable strategies to bring AI-augmented testing to life, from learning the nuances of prompt engineering to embracing the new role of testers as strategic thinkers within development teams. We can transform workflows so they are more productive, efficient, and engaging. 

I'll be frank, this sounds rosy and optimistic, but wow, wouldn't it be nice? The cynic in me is a tad skeptical, but anyone who knows me knows I'm an optimistic cynic. Even if this delivers an order of magnitude or two less than what is promised here... that's still pretty rad :).

Vulnerabilities in Deep Learning Language Models (DLLMs) with John Cvetko (A PNSQC Live Blog)


There's no question that AI has become a huge topic in the tech sphere in the past few years. It's prevalent in the talks being presented at PNSQC (it's even part of my talk tomorrow ;) ). The excitement is contagious, no doubt, but there's a bigger question we should be asking (and John Cvetko is addressing)... what vulnerabilities are we going to be dealing with, specifically in Deep Learning Language Model platforms like ChatGPT?

TL;DR version: are there security risks? Yep! Specifically, we are looking at Generative Pre-trained Transformer (GPT) models. As these models evolve and expand their capabilities, they also widen the attack surface, creating new avenues for hackers and bad actors. It's one thing to know there are vulnerabilities; it's another to understand them and learn how to mitigate them.

Let's consider the overall life cycle of a DLLM. We start with the initial training phase, then move to deployment, and then monitor its ongoing use in production environments. DLLMs require vast amounts of data for training. What do we do when this data includes sensitive or proprietary information? If that data is compromised, organizations can suffer significant privacy and security breaches.


John makes the point that federated training is growing when it comes to the development of deep learning models. Federated training means multiple entities contribute data to train a single model. While it can distribute learning and reduce the need for centralized data storage, it also introduces a new range of security challenges. Federated training increases the risk of data poisoning, where malicious actors intentionally introduce harmful data into the training set to manipulate the model's generated content.

Federated training decentralizes the training process so that organizations can develop sophisticated AI models without sharing raw data. However, according to Cvetko, a decentralized approach also expands the attack surface. Distributed systems are, almost by design, more vulnerable to tampering. Without proper controls, DLLMs can be compromised before they even reach production.

There is always a danger of adversarial attacks during training. Bad actors could introduce skewed or intentionally biased data to alter the behavior of the model. This can lead to unpredictable or dangerous outcomes when the model is deployed. These types of attacks can be difficult to detect because they occur early in the model's life cycle, often before serious testing begins.

OK, so that's great... and unnerving. Bad actors can cause real problems for these systems. So what can we do about it? John offers several countermeasures (I'll add a small code sketch of the first one after the list):

Data Validation: Implement strict data validation processes to ensure that training data is clean, accurate, and free from malicious intent. By scrutinizing the data that enters the model, organizations can reduce the risk of data poisoning.

Model Auditing: Continuous monitoring and auditing of models during both training and deployment phases. This helps detect oddities in the model behavior early on, allowing for quicker fixes and updates.

Federated Learning Controls: Establish security controls around federated learning processes, such as encrypted communication between participants, strict access controls, and verification of data provenance.

Adversarial Testing: Conduct adversarial tests to identify how DLLMs respond to unexpected inputs or malicious data. These tests can help organizations understand the model’s weaknesses and prepare for potential exploitation.
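
None of these countermeasures are tool-specific, but to make the first one a little less abstract, here is a minimal Python sketch of a pre-training hygiene pass over text samples. The specific checks (length bounds, duplicate detection, a crude blocklist) are my own simplified examples of the kind of screening being described, not anything from the paper.

```python
import hashlib

# Toy screening rules; a real pipeline would use far richer checks.
MAX_CHARS = 10_000
BLOCKLIST = {"ignore previous instructions", "system prompt:"}  # toy injection markers

def validate_training_samples(samples):
    """Return (accepted, rejected) lists after basic hygiene checks."""
    seen_hashes = set()
    accepted, rejected = [], []
    for text in samples:
        digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
        lowered = text.lower()
        if not text.strip():
            rejected.append((text, "empty"))
        elif len(text) > MAX_CHARS:
            rejected.append((text, "too long"))
        elif digest in seen_hashes:
            rejected.append((text, "duplicate"))
        elif any(marker in lowered for marker in BLOCKLIST):
            rejected.append((text, "suspicious content"))
        else:
            seen_hashes.add(digest)
            accepted.append(text)
    return accepted, rejected

good, bad = validate_training_samples(
    ["Hello world", "", "Hello world", "Please IGNORE PREVIOUS INSTRUCTIONS"])
print(len(good), "accepted;", [reason for _, reason in bad])
```

The same skeleton extends naturally to the other three items: audit what the model produces, verify where federated contributions come from, and throw deliberately hostile inputs at the result.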

There is a need today for "responsible AI development." DLLMs are immensely powerful and carry significant risk if not properly secured. While this "new frontier" is fun and exciting, we have a bunch of new security challenges to deal with. AI innovation does not have to come at the expense of security. By understanding the life cycle of DLLMs and implementing the right countermeasures, we can leverage the power of AI while safeguarding our systems from evolving threats.

Mistakes I Made So You Don’t Have To: Lessons in Mentorship with Rachel Kibler (A PNSQC Live Blog)

I have known Rachel for several years, so it was quite fun to sit in on this session and hear about struggles I recognized all too well. I have tried training testers over the years; some I've been successful with, others not so much. When a new tester comes along quickly, seems to get it, and digs testing, that's the ultimate feeling (well, *an* ultimate feeling).

However, as Rachel points out, it’s also full of potential missteps, and as she said clearly at the beginning, "Believe me, I’ve made plenty!" This was a candid and honest reflection of what it takes to be a mentor and help others who are interested in becoming testers, as well as those who may not really want to become testers, but we mentor them anyway.

We can sum this whole session up really quickly with "Learning from our mistakes is what makes us better mentors—and better humans"... but what's the fun in that ;)?


Mistake 1: One-Size-Fits-All Training Doesn’t Work

There is no single, ideal method to teach testing that would work for everyone. Rachel had clear plans and expected to get consistent results. However, "people are not vending machines". You can’t just input the same words and expect identical outcomes. Each person learns differently, has different experiences, and responds to unique challenges.

Mistake 2: Setting the Wrong Challenges

It's possible to give team members tasks that are either too difficult or too easy, failing to gauge their current abilities. The result? Either they become overwhelmed and lose confidence, or they feel under-challenged and disengaged. Tailoring challenges to a trainee's current skill level not only builds their confidence but also keeps them engaged and motivated. As mentors, our role is to provide enough support to help them succeed while still pushing them to grow.


Mistake 3: Forgetting the Human Element

At the end of the day, we’re working with humans. Rachel’s talk highlights the importance of remembering that training isn’t just about passing on technical knowledge—it’s about building relationships.  Everyone has unique needs, emotions, and motivations. By focusing on the human element, we can create an environment where people feel supported and valued, making them more likely to succeed.

Mistake 4: Not Embracing Mistakes as Learning Opportunities

Mistakes are opportunities to learn. Mistakes aren’t failures—they’re stepping stones. Whether it’s a trainee misunderstanding a concept or a mentor misjudging a situation, these moments are chances to grow. They teach us humility, patience, and resilience.

Rachel’s talk is a reminder that no one is a perfect mentor right out of the gate. The process of becoming a great mentor is filled with trial and error, reflection, and growth. Also, Imposter Syndrome is very real and it can be a doozy to overcome.  Ultimately, the key takeaway is this: mentorship is a journey, not a destination. We will make mistakes along the way, but those mistakes will help shape us into more effective, empathetic, and responsive mentors.

Scaling Tech Work While Maintaining Quality: Why Community is the Key with Katherine Payson (a PNSQC Live Blog)

If someone had told me ten years ago I'd be an active member of the "gig economy", I would have thought they were crazy (and maybe looked at them quizzically because I wouldn't entirely understand what that actually meant). In 2024? Oh, I understand it, way more than I may have ever wanted to (LOL!). Rather than looking at this as a bad thing, I'm going to "Shift Out" (as Jon Bach suggested in the last talk) and consider some aspects of the gig economy that are helping to build and scale work and, dare we say it, quality initiatives.

Katherine Payson offers some interesting perspectives:

- The gig economy generates $204 billion globally
- Many companies are taking advantage of this, including international companies hiring all over the world for specific needs (I know, I did exactly this during 2024)
- In 2023, gig work was expected to grow by 17%
- By 2027, the United States is expected to have more gig workers than traditional full-time employees

This brings up an interesting question... with more people involved in gig work, and not necessarily tied to or beholden to a company for any meaningful reasons, how do these initiatives scale, and how do quality and integrity apply?

Strong Community is the approach that Katherine is using and experiencing over at Cobalt, a company that specializes in "pentesting-as-a-service". Cobalt has grown its pool of freelance tech workers to over 400 in three years. That's a lot of people in non-traditional employment roles. So what does that mean? How is trust maintained? How is quality maintained? Ultimately, as Katherine says, it comes down to effective "Community Building".

Today, many businesses are looking for specialized skills, frequently beyond what traditional full-time employment can provide. Yes, AI is part of this shift, but there is still a significant need for human expertise. As Cobalt points out, cybersecurity, software development, and other technical fields definitely still require human employees with a very human element to them. What this means is that there is a large rise in freelance professionals actively offering niche talents on a flexible, on-demand basis (likely also on an as-needed basis, both for the companies and the gig workers themselves). Again, the bigger question is, "Why should a gig worker really care about what a company wants or needs?"

Community can be fostered directly when everyone is in the same town, working on the same street, going to the same office. When Cobalt first began scaling, they relied on a traditional trust model that worked well for a smaller, more centralized team. As the number of freelancers grew, however, this model began to show its limitations. Without a more robust system in place, it would be impossible to ensure consistent quality across a distributed workforce.


Tools can go a certain distance toward managing quality and production integrity, but more to the point, building actual communities within organizations is another way to develop quality initiatives that resonate with people at every level of involvement.

Cobalt prides itself on being able to maintain quality at scale. It claims to create a culture where freelancers feel connected, supported, and motivated to deliver their best work. So how does Cobalt do that?

Collaboration and Communication: Freelancers can work independently, but they don't work in isolation. Cobalt believes in open communication, where freelancers can collaborate with one another, share knowledge, and learn from each other’s experiences.

Mentorship and Professional Development: Cobalt invests in the professional growth of freelancers. Mentorship opportunities, training programs, and access to industry resources help their freelance community continuously hone their skills.

Recognition and Incentives: High-performing freelancers are recognized and rewarded for their contributions. This helps retain top talent and encourages others to aim for top-quality work.

Feedback Loop: Freelancers receive regular feedback on their work, helping them improve and keep quality high across the board.

As the gig economy continues to grow, maintaining quality at scale will become increasingly important everywhere. Cobalt aims to embrace the strengths of its freelance workforce, not just as individual contributors but as part of a larger community. Scaling with freelancers is not just about hiring more people—it's about building a culture of collaboration, growth, and trust. To ensure quality remains front and center, companies need to invest in their communities every bit as much as they do in their tools and processes.

Exploring Secure Software Development w/ Dr. Joye Purser and Walter Angerer (a live blog from PNSQC)

Okay, so let's get to why I am here. My goal is to focus on areas that I might know less about, and to look for actionable efforts in areas where I can be effective (and, again, for things I can use that don't require permission or money from my company to put into play).

Dr. Joye Purser is the Global Lead for Field Cybersecurity at Veritas Technologies. Walter Angerer is Senior Vice President for Engineering at Veritas and co-author of the paper. To be clear, Dr. Purser is the one delivering the talk.

Creating secure software involves a lot of moving parts. So says someone who labels herself as "at the forefront of global data protection."

High-profile security incidents are increasing, and secure software development is more critical than ever. Against the backdrop of cases that regularly end up in the news, Dr. Purser shared her experiences at Veritas, a well-established data protection company, and their journey of ensuring software security.

Veritas has a seven-step SecDevOps process, demonstrating how they aim to control and secure software at every stage.

1. Design and Planning: Building security in from the outset, not bolting it on as an afterthought.

2. Threat Modeling: Identifying potential threats and mitigating them before they can become problems.

3. Code Analysis: Veritas uses advanced code analysis tools to identify vulnerabilities early in the process.

4. Automated Testing: Leveraging automation to continuously test for weaknesses.

5. Chaos Engineering: Veritas has a system called REDLab, which simulates failures and tests the system’s robustness under stress.

6. Continuous Monitoring: Ensuring that the software remains secure throughout its lifecycle.

7. Incident Response: Being prepared to respond quickly and effectively when issues do arise.


A little more on chaos engineering. This technique actively injects failures and disruptions into the system to see how it responds, with the idea that systems are only as strong as their weakest points under pressure. Veritas' REDLab is central to this effort, putting systems under tremendous stress with controlled chaos experiments. The result is a more resilient product that can withstand real-world failures.
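
For anyone who hasn't seen a chaos experiment up close, here is a toy Python sketch of the core idea: wrap a call, randomly inject latency or a failure, and watch how the caller copes. This is a generic illustration of fault injection, not a description of how REDLab actually works.

```python
import random
import time

def chaotic(failure_rate=0.2, max_delay=2.0, seed=None):
    """Decorator that randomly injects latency or an exception into a call."""
    rng = random.Random(seed)
    def wrap(func):
        def wrapper(*args, **kwargs):
            time.sleep(rng.uniform(0, max_delay))   # injected latency
            if rng.random() < failure_rate:         # injected failure
                raise ConnectionError("chaos: simulated dependency outage")
            return func(*args, **kwargs)
        return wrapper
    return wrap

@chaotic(failure_rate=0.3, seed=7)
def fetch_status():
    return "ok"

# A resilient caller should retry or degrade gracefully instead of crashing.
for attempt in range(1, 4):
    try:
        print(fetch_status())
        break
    except ConnectionError as err:
        print(f"attempt {attempt} failed: {err}")
```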

Veritas also focuses on ways to validate and verify that code generation is done securely, along with a variety of ways to stress test software during multiple stages of the build process. The talk also touched on the importance of keeping technical teams motivated: including examples of role-playing scenarios, movie stars, and innovative content adds a touch of fun and can help keep development teams engaged.

As technologies evolve, so do the techniques required to keep software safe. Security is needed at every stage of the software development lifecycle. Using techniques like chaos engineering along with creative team engagement has helped Veritas stay at the front of secure software development.

Tuesday, October 10, 2023

Empathy is a Technical Skill With Andrea Goulet (PNSQC)

 


Today has been a whirlwind. I was up this morning before 5:00 a.m. to teach the penultimate class of my contract (sorry, I just love working that word into things ;) ), but suffice it to say the class is still happening while PNSQC is happening. That has made me a little tired, and thus there has been a little less blogging today. Add to that the fact I was called in to do a substitute talk in the afternoon (glad to do it, but that was not on my dance card today) and I'm really wondering how we are already at the last talk of the day and of the formal conference. Regardless, we are here and I'm excited to hear our last speaker.

I appreciate Andrea talking about this topic, especially because I feel that there has been a lot of impersonal and disinterested work from many over the past several years. I was curious as to what this talk would be about. How can we look at empathy as a technical skill? She walked us through an example with her husband, who was digging into a thorny technical problem when he was interrupted by Andrea asking him for a moment. His reaction was... not pleasant. As Andrea explained, she realized that he was deeply focused on something so all-consuming that it was going to be a big deal to get his attention for needful things. Instead of it becoming an ugly altercation, they worked out a phrase (in this case, "Inception") to help signal when a person is on a deep dive and needs to be in their focused state, at least for a little while longer. While I don't quite know that level of a dive, I have times in my own life when I get caught up in my own thoughts and I bristle when someone interrupts/intrudes. By realizing these things, we can not just recognize when we ourselves are focusing on deep dives, but we can also recognize when others are as well. This is a development of our own empathy to aid us in the process of understanding when people are dealing with things.


Okay, that's all cool, but why is this being called a technical thing? Because we are free and loose with the use of the word "technical". Technical comes from the Greek word "techne", and techne means "skill". That means any skill is technical when we get down to it. It also means it's a skill that can be learned. Yes, we can learn to be empathetic. It's not something we are born with; it's something we develop and practice. Motivation likewise drives empathy. In many ways, empathy can be a little mercenary. That's why we get it wrong a lot of the time. We often want to reach out and help in ways that we would want to be helped, and thus our empathy is highly subjective and highly variable. Additionally, empathy grades on a curve. Empathy is not a monoculture; it is expressed in numerous ways and under different circumstances and conditions. There are a variety of components, mechanisms, and processes that go into our understanding and expressions of empathy. It's how we collaborate and solve complex problems. In short, it's a core part of our desire and ability to work together.

Andrea showed us a diagram with a number of elements. We have a variety of inputs (compassion, communication) that drive the various mechanisms that end up with a set of outputs. Those outputs come down to:

  • Developing an understanding of others 
  • Creation of Trust
  • A Feeling of Mutual Support
  • An overall synergy of efforts   

 Empathy requires critical thinking. It's not all feelings. We have to have a clear understanding and rational vision of what people want, and not specifically what we want. 

On the whole, this is intriguing and not what I was expecting to hear. Regardless, I'm excited to see if I can approach this as a developed skill.


Automation, You're Doing It Wrong With Melissa Tondi (PNSQC)



This may feel a bit like déjà vu because Melissa has given a similar talk in other venues. The cool thing is that I know each time she delivers the talk, it has some new avenues and ideas. So what will today have in store? Let's find out :).



What I like about Melissa's take is that she emphasizes what automation is NOT over what it is.

I like her opening phrase, "Test automation makes humans more efficient, not less essential" and I really appreciate that. Granted, I know a lot of people feel that test automation and its implementation is a less than enjoyable experience. Too often I feel we end up having to play a game of metrics over making any meaningful testing progress. I've also been part of what I call the "script factory" role where you learn how to write one test and then 95 out of 100 tests you write are going to be small variations on the theme of that test (login, navigate, find the element, confirm it exists, print out the message, tick the pass number, repeat). Could there be lots more than that and lots more creativity? Sure. Do we see that? Not often.

Is that automation's fault? No. Is it an issue with management and their desire to post favorable numbers? Oh yeah, definitely. In short, we are setting up a perverse expectation and reward system. When you gauge success in numbers, people will figure out the ways to meet that. Does it add any real value? Sadly, much of the time it does not.   

Another killer that I had the opportunity to work on and see change was the serial, monolithic suite of tests that takes a long time to run. I saw this happen at Socialtext, and one of the first big initiatives when I arrived there was the implementation of a Docker-based suite that would break our tests into four groups. Every test was randomized and shuffled to run on the four server gateways, and we would bring up as many nodes as necessary to run the batches of tests. By doing this, we were able to cut our linear test runs down from 24 hours to just one. That was a huge win, but it also helped us determine where we had tests that were not truly self-contained. It was interesting to see how tests were set up and how many tests were made larger specifically to allow us to do examinations, but also to allow us to divvy up more tests than we would have been able to otherwise.
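
To make that shuffle-and-shard idea concrete, here is a minimal Python sketch (with made-up test file names) of splitting a randomized test list across four workers. The real Socialtext setup involved Docker nodes and Jenkins; the core idea, though, is just dealing the tests out evenly.

```python
import random

def shard_tests(test_files, num_shards=4, seed=None):
    """Shuffle the test list and deal it out round-robin, one batch per worker."""
    rng = random.Random(seed)
    shuffled = list(test_files)
    rng.shuffle(shuffled)
    return [shuffled[i::num_shards] for i in range(num_shards)]

if __name__ == "__main__":
    tests = [f"tests/test_{name}.py"
             for name in ("login", "search", "profile", "billing", "admin", "export")]
    for idx, batch in enumerate(shard_tests(tests, seed=42)):
        print(f"gateway-{idx}: {batch}")
```

The useful side effect mentioned above falls out of this for free: any test that breaks only when it lands in a particular batch is probably not as self-contained as you thought.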

Melissa brought up the great specter of "automate everything". While this is impossible, it is still seen forlornly as "The Impossible Dream". More often than not, it's the process of taking all of the manual tests and putting them to code. Many of those tests will make sense, sure, but many of them will not. The amount of energy and effort necessary to cover all of the variations of certain tests will just become mind-numbing and, often, not tell us anything interesting. Additionally, many of our tests that are created in this legacy manner are there to test legacy code. Often, that code doesn't have hooks that will help us with testing, so we have to do end runs to make things work. Often, the code is just resistant to testing or requires esoteric identification methods (the more esoteric, the more likely it will fail on you someday). Additionally, I've seen a lot of organizations that are looking for automated tests when they haven't done unit or integration tests at lower levels. This is something I've realized having recently taught a student group to learn C#. We went through the language basics and then later started talking about unit testing and frameworks. After I had gone through this, I determined that if I were to do this again, I would do my best to teach unit testing, even at a fundamental level, as soon as participants were creating classes that processed actions or returned a value beyond a print statement. Think about where we could be if every software developer was taught about and encouraged to use unit tests at the training-wheels level!

Another suggestion that I find interesting and helpful is that a test that always passes is probably useless. Not necessarily because the test is working correctly and the code is genuinely good, but because we got lucky and/or we don't have anything challenging enough in our test to actually run the risk of failing. If it's the latter, then yes, the test is relatively worthless. How to remedy that? I encourage creating two tests wherever possible, one positive and one negative. Both should pass if coded accurately, but they approach the problem from opposite directions. If you want to be more aggressive, make some more negative tests to really push and see if we are doing the right things. This is especially valuable if you have put time into error-handling code. The more error-handling code we have, the more negative tests we need to create to make sure our ducks are in a row.
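
As a toy illustration of that positive/negative pairing, here is a minimal pytest sketch against a hypothetical parse_quantity() helper; the function, its name, and its error behavior are assumptions made up for the example, not anything from Melissa's talk.

```python
import pytest

def parse_quantity(raw: str) -> int:
    """Hypothetical helper: turn user input into a positive integer quantity."""
    value = int(raw)  # raises ValueError on non-numeric input
    if value <= 0:
        raise ValueError("quantity must be positive")
    return value

def test_parse_quantity_happy_path():
    # Positive test: well-formed input comes back as the expected integer.
    assert parse_quantity("3") == 3

def test_parse_quantity_rejects_bad_input():
    # Negative tests: bogus data should be rejected, not silently accepted.
    with pytest.raises(ValueError):
        parse_quantity("banana")
    with pytest.raises(ValueError):
        parse_quantity("-2")
```

Both tests pass when the code is correct, but they push from opposite directions, which is exactly the point.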

A final item Melissa mentions is the fact that we often rely on the experts too much. We should be planning for the possibility that the expert may not be there (and at some point, if they genuinely leave, they WON'T be there to take care of it). Code gets stale rapidly if knowledgeable people are lost. Take the time to include as many people as possible in the chain (within reason) so that everyone who wants to and can is able to check out builds, run them, test them, and deploy them.

Continuous Testing Integration With CI/CD Pipeline (PNSQC)

Today, I'm taking a walk down memory lane. I'm listening to Junhe Liu describe integrating various automatic tests into the CI/CD pipeline.


It's interesting to think about where we are today compared to 30 years ago when I first came into the tech world. Waterfall development was all that I or anyone knew (we may not have wanted to call it that, or we'd dress it up). Realistically speaking, any given release was a linear process, and each phase flowed into the next. While I had heard of Agile by the early 2000s, I didn't work on such a team (or one that presented itself as such) until 2011.

Likewise, it was around the mid-2000s that I started hearing Development and Operations discussed as two great tastes that go better together ;). Again, it would be another decade before I saw it in practice, but over time I did indeed start to see this, and I was able to participate in it.

One of the interesting arrangements in the group I was working at (Socialtext) was that every member of the team had their turn at being the "Pump King". That's a piece of lore that I miss, and it is a long story involving an old USB drive that was kept in a toy jack-o'-lantern bucket; hence the person who took care of the protective pumpkin became known as the "Pump King", and after everything went online, the name stuck. The key point was that the Pump King was the person responsible for the Jenkins system: making sure that it was working, up to date, and patched when necessary, as well as running it to build and deploy our releases. Every few weeks, it would be my turn to do it as well.

Thus it was that I was brought into the world of Continuous Delivery and Continuous Deployment, at least in a limited sense (most of the time this was related to staging releases). We actually had a three-tiered release approach. Each developer would deploy to demo machines to test out their code and make sure it worked in a more localized and limited capacity. Merging to the staging branch would trigger a staging build (or the Pump King would call one up whenever they felt it was warranted, typically at the start of each day). We'd run that and push changes and version numbering to our staging server, and then we'd run our general tests, as well as all the underlying automated tests with the Jenkins process, of which there were a *lot*. Finally, due to our service agreements, we would update our production server and then push updates to customers who opted in to be updated at the same time. We never got to daily production pushes, but weekly pushes were common towards the end of my time on that product.

It was interesting to get into this mode, and I was happy that we were all taught how to do it, not just one person on an as-needed basis. All of us knew how to do it, and all of us were expected to do it every time it was our turn to be Pump King.


Monday, October 9, 2023

Learning, Upskilling, and Leading to Testing (Michael Larsen with Leandro Melendez at PNSQC)

You all may have noticed I have been quiet for a few hours. Part of it is that I was giving a talk on Accessibility (I will post a deeper dive into that later, but suffice it to say I shook things up a little, and I have a few fresh ideas to include in the future).

Also, I was busy chatting with our good friend Leandro Melendez (aka Señor Performo), and I figured it would be fun to share that here. I'm not 100% sure if this will appear for everyone or if you need a LinkedIn login. If you can't watch the video below, please let me know.

 

We had a wide-ranging conversation, much of it based on my recent experience being a testing trainer and how I got into that situation (the simple answer is a friend offered me an opportunity, and I jumped at it ;) ). That led to talking about ways we learn, how we interact with that learning, and where we use various analogs in our lives. This led us to talk about two learning dualities I picked up from Ronald Gross' "Peak Learning" book (Stringers vs. Groupers) and a little bit about how I got into testing in the first place.

It's a wide-ranging conversation, but it was fun participating, and I hope you will enjoy listening and watching it :).

Common Pitfalls/Cognitive Biases In Modern QA with Leandro Melendez (PNSQC)


Ah yes, another date with the legendary Señor Performo :). 

Leandro is always fun to hear present, and I particularly liked the premise of his talk, as I frequently find myself dealing with cognitive biases, both in spotting them when others fall into them and in admonishing myself when I do (and yes, I do fall prey to them from time to time).



I've been in the process of teaching a class for the past few months related to software test automation, specifically learning how to use a tool like Playwright with an automated testing framework. To that end, we have a capstone project that runs for three weeks. As anyone involved in software development knows, three weeks is both a lot of time and no time at all. This is by design: there is no way to do everything that's needed, and because of that, we are forced to focus and to make decisions that will not be optimal. This fits into the conversation that Leandro is having today. How do you improve and get better when you have so many pressures and so little time to do it all?

Note: I am not trying to throw shade at my students. I think they are doing a great job, especially in the limited time frame that they have (again, by design), and I am interested in seeing what choices they make. I'm literally a "disinterested shareholder" in this project, meaning I care about the end product, but I'm trying my level best not to get involved or direct them as to what to do. In part, it's not the instructor's role to do that, but also, I'm curious to see the what and the why concerning the choices that are made.

We often act irrationally under pressure and with time limitations. Often we are willing to settle for what works versus what is most important or helpful. I'm certainly guilty of that from time to time. An interesting aspect of this, and one I have seen, is the "man with a hammer" syndrome, where once we have something we feel works well, we start duplicating it and working with it because we know we can have great wins with it. That's all well and good, but at times we can go overboard. Imagine that you have an application with navigation components. You may find that many of those components use similar elements, and with that, you can create a solution that will cover most of your navigation challenges. The good thing? We have comprehensive navigation coverage. The disadvantage? All of that work on navigation, while important and necessary, has limited the work on other functionality of the unit under test. Thus, it may be a better use of time to cover some of the navigation aspects and get some coverage on other aspects of the application, rather than have a comprehensive testing solution that covers every navigation parameter and little else to show for it.

Another example that Leandro gives is "Living Among Wolves" or we can consider this an example of "conformance bias" meaning that when we do certain things or we are part of a particular environment, we take on the thinking of those people to fit in with the group. Sometimes this is explicit, sometimes it is implicit, and sometimes we are as surprised as anyone else that we are doing something we are not even aware of. 

The "sunk cost" fallacy appears in a lot of places. Often we will be so enamored with the fact that we have something working that we will keep running and working with that example as long as we can. We've already invested in it. We've put time into it, so it must be important. Is it? Or are we giving it an outsized importance merely because we've invested a lot of time into it?

One of the lessons I learned some time back is that any test that never fails is probably not very well designed, or it offers little value in the long run. It's often a good idea to approach tests from both a positive and a negative perspective. It's one thing to get lucky and get something that works in a positive/happy-path test (or not necessarily lucky, but limited in what's being done). Now, does your logic hold up when you invert the testing idea? Meaning, can you create a negative test, or multiple negative tests, that will "fail" based on changing or adding bogus data? Better yet, are you doing effective error handling with your bogus data? The point is that so many of our tests are weighted toward happy-path, limited-depth tests. If you have a lot of positive tests and you don't have many tests that handle negative aspects (so that the incorrect outcome is expected... and therefore makes a test "pass" instead of fail), can you really say you have tested the environment effectively?

Ending with a shameless plug for Leandro: he is now an author, having written "The Hitchhiker's Guide To Load Testing Projects", a fun walkthrough that will guide you through the phases or levels of an IT load testing project. https://amzn.to/37wqpyx

Amplifying Agile Quality Practices with Selena Delesie (PNSQC)

I had the opportunity and privilege to introduce Selena Delesie on the program today. It was fun to reminisce a bit because Selena and I were both in the same Foundations class for Black Box Software Testing all the way back in 2010. We also both served on the Board of Directors for AST, so we had a lot of memories and fun/challenging things to deal with over the years. Thus, it was a pleasure to introduce Selena as our first keynote speaker. Also, part of her talk was discussed on a recent episode of The Testing Show podcast, so if you want a sample, you can listen to that :).


The tool that Selena and Janet Gregory put together is called the Quality Practices Assessment Model (QPAM). The idea behind this is that there are ways to identify potential breakdowns in the quality of our work. Areas we should consider are:

  • Feedback Loops
  • Culture
  • Learning and Improvement
  • Development Approach
  • Quality and Test Ownership
  • Testing Breadth
  • Code Quality and Technical Debt
  • Test Automation and Tools
  • Deployment Pipeline
  • Defect Management

The fascinating thing is the order in which these are identified and examined. Selena makes the case that the order in which these are presented and examined is important, and that by examining them in this order, or with this weighting, the best chances for overall and lasting improvement are possible. Yes, defect management is important, but it will be less effective if more weight is not given to the previously mentioned items.

A key aspect of this is that quality is not just a technical issue; it's also a social issue, and it should not be dealt with in isolation. Selena introduces us to a group code-named "Team Venus" and identifies many of the issues they are facing and where those issues fall among the ten quality aspects. The key element is that each area is looked at holistically and in conjunction with the other areas, not in isolation. As anyone familiar with General Systems Thinking can tell you, there is no such thing as a standalone and isolated change; any modification made will have a ripple effect. It's also critical to realize that a process alone is meaningless if the overall values are not solid or agreed upon.

Across the ten quality aspects that Selena referenced, there are four quadrants/dimensions to consider:

  • Beginning
  • Unifying
  • Practicing
  • Innovating

What I like about considering these as quadrants is that the areas are not separate from each other; they depend on each other. Some of the ten practices will align more closely with a particular dimension, and it's common for teams to spend more time in one quadrant/dimension than another. I like the diagram Selena uses that looks like a spider web: the center of the web represents an informational or foundational level, and the farther out from the center, the greater the expertise and experience. Ideally, of course, all of the aspects would sit on the outer rim of the web, but in practice there will be color splotches in all four dimensions. That is normal and should not be discouraging, especially since each new team member will typically need to start from zero.
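To make the shape of the model a bit more concrete, here's a rough sketch of my own (not the official QPAM tooling; the ratings are invented) showing how the ten aspects and four dimensions could be captured and summarized as data:

```python
# A rough illustration (not the official QPAM model or tooling): record the
# dimension a team currently sits in for each of the ten quality aspects,
# then print a crude text version of the "spider web" rings.
from enum import IntEnum

class Dimension(IntEnum):
    BEGINNING = 1   # center of the web: foundational
    UNIFYING = 2
    PRACTICING = 3
    INNOVATING = 4  # outer rim: greatest expertise and experience

# Hypothetical ratings for a team (these values are invented for illustration).
assessment = {
    "Feedback Loops": Dimension.PRACTICING,
    "Culture": Dimension.UNIFYING,
    "Learning and Improvement": Dimension.UNIFYING,
    "Development Approach": Dimension.PRACTICING,
    "Quality and Test Ownership": Dimension.BEGINNING,
    "Testing Breadth": Dimension.BEGINNING,
    "Code Quality and Technical Debt": Dimension.UNIFYING,
    "Test Automation and Tools": Dimension.PRACTICING,
    "Deployment Pipeline": Dimension.INNOVATING,
    "Defect Management": Dimension.UNIFYING,
}

for aspect, dim in assessment.items():
    # One "ring" of the web per dimension level.
    print(f"{aspect:32s} {'*' * int(dim)} ({dim.name.title()})")
```

Even in this toy form, the point holds: the picture is a profile across all ten areas, not a single score, and the splotches across dimensions are expected.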

For those interested, the full model and the Team Venus example are available in the book "Assessing Agile Quality Practices with QPAM", so if you want to learn more, go there.

Amping It Up!: Back at the Pacific Northwest Software Quality Conference

 


It's that time of year once again. I'm excited to be at PNSQC and in a new location. We are at the University Place Hotel and Conference Center, which is part of Portland State University. This is the first time PNSQC has been here but not the first time I've been here. A few years back, in 2018, they had run out of rooms at the primary hotel, so I had to find someplace else to stay. 

I happened upon University Place, and while it was a walk from the previous venue at the Portland World Trade Center, it was a comfortable hotel, and I liked my stay. As we were looking for different places to hold the conference this year, I mentioned my experience and suggested it would make an excellent venue. And thus, here we are. It wasn't solely up to me, but I definitely put in a good word ;).

This year's conference feels strange for me; after many years, I am feeling unsure and unsteady. I am at the tail end of a contract I've been working on for several months. In a few weeks, barring any changes or new contracts, I will be out of work again. Thus, I am attending this conference in a different headspace than usual. Previously, I was looking for small tips I could bring back to my current job. This year, part of me feels the need for a literal reinvention. I have the uneasy feeling of having too many potential options and not enough time to consider them all, so my choice of talks this year will probably reflect my current mental state. If you see me attending talks that seem different or out of character, that's why.

This year also brought an extra challenge and excitement: I was the Marketing Chair for PNSQC. If you felt there was either too much or not enough of a marketing presence for the conference, I get both the praise and the blame. Either way, for those who are here and for those following along, I'm happy you are here.

   

Tuesday, October 11, 2022

Value Streams, Quality Engineering And You: a #PNSQC2022 Live Blog



Wow, it's been a busy couple of days, but we have reached the end of the formal talk phase of the conference. This is the last talk before the festivities that follow, when we get to go out and have fun in and around Portland. Between moderating talks and being called in to pinch-hit for a session, this has definitely been an eventful conference. Still, all good things must come to an end, and with that...

Kaushal Dalvi, UKG






Today we have a new "D-D" to add to our list: Value Stream Driven Development. So what does that actually mean? We have a vast proliferation of development methodologies, so what does VSDD add to the DD nomenclature? Better yet, what is our value stream to begin with? Basically, our software's availability, robustness, performance, resilience, and security all contribute to the value stream. Anything that affects any of those aspects can degrade that value stream. Thus, if we are looking at Value Stream Driven Development, what we are aiming to do is make sure that any change, update, or modification adds to the overall value of our offerings. Additionally, as Lean Engineering concepts point out, we also want to eliminate waste wherever we can.


When we take on a new approach, or a new library or framework, we can often be enticed by "the new shiny". I get this. Tools are awesome, they are fun, and they are nifty to learn. However, there are costs associated with these tools and changes. We have to ask ourselves what the actual gain is by using or implementing these tools, libraries, or changes. Can we vocalize or express what we are doing effectively? Does what we do benefit the entire organization? If not, can we explain why we are doing what we are doing and how those changes will benefit the rest of the organization?


Value is a subjective term. We could say anything that makes us money adds value. We could say anything that saves us time adds value. Additionally, anything that makes our product safer, more resilient, or perform better could be interpreted as adding value. Also, what may be seen as valuable to one part of the organization may be seen as less valuable to another part. What is valuable to the organization may be negligible to the customer or even detrimental. Thus value is context-dependent. 


The lean principles fall into these five areas:


- Specify value from the standpoint of the end customer by product family.

- Identify all the steps in the value stream for each product family, eliminating whenever possible those steps that do not create value.

- Make the value-creating steps occur in a tight sequence so the product will flow smoothly toward the customer.

- As flow is introduced, let customers pull value from the next upstream activity.

- As value is specified, value streams are identified, wasted steps are removed, and flow and pull are introduced, repeat this process again and continue it until a state of perfection is reached in which perfect value is created with no waste.


(Womack and Jones, 1996)


This is a great reminder to help us focus on ways to make sure that we make the main thing... "the main thing". By focusing on value-add and making sure our efforts specifically target value add, we are better able to implement the five Lean principles and make them meaningful and actionable. 


Software Quality As It Relates To Data: a #PNSQC2022 Live Blog

Well, sorry I've been quiet... I was asked to give an impromptu conference talk since the scheduled speaker couldn't attend. Fortunately, I had a number of talks downloaded to my laptop, so I was able to pick a talk from a few years back. Not new material, but hey, I had it :). So yeah, something any and all conference speakers should consider: keep an archive of your talks on your system or quickly retrievable from the cloud. You never know when you might be asked to give a talk on short notice.

Natasha Nicolai




Back to today's other festivities (woo!)... 

How much thought do we give to Data Management and Security? What happens to our data as we are trying to perform workflows? Where does our data go on its journey? At what point is our data standing in the line of fire, in a position to be compromised, stolen, or tainted?

Natasha Nicolai is discussing ways in which we can better manage and maintain our data and how that data is accessed, modified, deleted, and secured in the process of us doing our work. 

Odds are that most organizations at this point are not using a monolithic data model, where everything is in one place, creating a single point of failure where one exploited vector could bring the whole system down or compromise all of the data.

I'm somewhat familiar with this by virtue of frequently testing data transformations. Most of these transformations are performed on actual live customer data. That means I have to be exceptionally careful with this data and make sure that it cannot fall into the wrong hands. I also need to make sure that none of the interactions I perform will mess up or modify that data.

Natasha is sharing a variety of strategies for keeping data protected in production environments, and specifically in cloud environments like AWS. She makes the case that the data flowing through our apps, and what is visible of it, should be given permission to do exactly that and nothing more. She refers to these steps and gates as "data pillars": allowing visibility to just those who need to see the data and hiding/protecting it from everyone who does not. The idea of "data lakes" is, again, a way to maintain data integrity while also giving us the ability to store data and pack it away so it is not accessed when it isn't meant to be.
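As a rough illustration of the "visibility only for those who need it" idea (my own sketch, not from Natasha's talk; the bucket and role names are hypothetical), here is how a restrictive policy might be applied to an S3 bucket with boto3:

```python
# A minimal sketch (hypothetical bucket and role names, requires AWS credentials):
# deny object reads on a data-lake bucket to everyone except one approved role.
import json
import boto3

BUCKET = "example-data-lake"  # hypothetical bucket name
ALLOWED_ROLE = "arn:aws:iam::123456789012:role/analytics-reader"  # hypothetical role

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyReadExceptAnalytics",
            "Effect": "Deny",
            "NotPrincipal": {"AWS": ALLOWED_ROLE},
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
        }
    ],
}

s3 = boto3.client("s3")
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```

Real setups layer many more controls on top of this (encryption, logging, network boundaries), but it gives a flavor of encoding "who can see what" as an explicit, reviewable artifact.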

There's a lot in this space that I must confess I have limited exposure to, but I'd definitely be interested in learning more about these data security options.


Digitizing Testers: A #PNSQC2022 Live Blog with @jarbon


I must confess, I usually smile any time I see that Jason Arbon is speaking. I may not always agree with him but I appreciate his audacity ;). 

I mean, seriously, when you see this in a tweet:

I’m sharing perhaps the craziest idea in software testing this coming Tuesday. Join us virtually, and peek at something almost embarrassingly ambitious along with several other AI testing presentations.


You know you're going to be in for a good time.

Jason Arbon 





I'm going to borrow this initial pitch verbatim:

Not everyone can be an expert in everything. Some testers are experts in a specific aspect of testing, while other testers claim to be experts. Wouldn’t it be great if the testing expert who focuses on address fields at FedEx could test your application’s address fields?  So many people attend Tariq King’s microservices and API testing tutorials–wouldn’t it be great if a virtual Tariq could test your application’s API? Jason Arbon explores a future where great testing experts are ultimately digitized and unleashed will test the world’s apps–your apps.  

Feeling a little "what the...?!!" That's the point. Why do we come to conferences? Typically it's to come and learn things from people who know a thing or three more than we do. Of course, while we may be inspired to learn something or get inspired to dig deeper, odds are we are not going to develop the same level of expertise as, say, Tariq King when it comes to using AI and ML in testing. For that matter, maybe people look to me and see me as "The Accessibility and Inclusive Design Expert" (yikes!!! if that's the case but thank you for the compliment). Still, here's the point Jason is trying to make... what if instead of learning from me about Accessibility and Inclusive Design, *I* did your Accessibility and Inclusive Design Testing? Granted, if I were a consultant in that space, maybe I could do that. However, I couldn't do that for everyone... or could I?

What if... WHAT IF... all of my writings, my presentations, my methodologies and approaches were gathered, analyzed, and applied to some kind of business logic and data model construction? Then, by calling on all of that, you could effectively plug in all of my experience to actually test your site for Accessibility and Inclusive Design. In short, what if you could purchase "The Michael Larsen AID" testing bot and plug me into your testing scripts? Bonkers, right?! Well... here's the thing. Once upon a time, if someone had told me that I could effectively buy a Mesa Boogie Triple Rectifier tube amp and a pair of Mesa 4x12 cabinets loaded with Celestion Vintage 30s, select that as a virtual instrument with impulse responses, and get a sound indistinguishable from the real thing, I would have called it impossible. Ten years ago, it was. Today? Through Amplitube 5, I literally own that setup, and it works stunningly well.

Arguably, the idea of taking what I've written about Accessibility and Inclusive Design and compartmentalizing it as a "testing persona" is probably a lot easier than creating a virtual tube amp. I'm not saying the results would be an exact replica of what I do when I test... but I think the virtual version of me could reliably be called upon to do what I have said I do, or at least what I espouse when I speak. Do you like my overall philosophy? Then maybe its core could be written into logic so that it can be applied to your application.
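As a toy sketch of what "written into logic" might look like (entirely my own illustration, not anything Jason proposed; the checks are simplistic stand-ins for a real accessibility review), imagine a persona as a bundle of heuristic checks run against a page:

```python
# A toy illustration (not a real product): a "testing persona" as a bundle of
# simple heuristic checks applied to an HTML page. Real accessibility testing
# goes far beyond this; the point is only that heuristics can be encoded.
from bs4 import BeautifulSoup

def images_have_alt_text(soup):
    # Flags any <img> without an alt attribute (ignores the nuances of decorative images).
    return [f"<img> missing alt text: {img.get('src')}"
            for img in soup.find_all("img") if not img.get("alt")]

def inputs_have_labels(soup):
    # Flags any <input> with neither an associated <label> nor an aria-label.
    labeled_ids = {lbl.get("for") for lbl in soup.find_all("label")}
    return [f"<input> without a label: {inp.get('name')}"
            for inp in soup.find_all("input")
            if inp.get("id") not in labeled_ids and not inp.get("aria-label")]

ACCESSIBILITY_PERSONA = [images_have_alt_text, inputs_have_labels]

def run_persona(html: str):
    soup = BeautifulSoup(html, "html.parser")
    findings = []
    for check in ACCESSIBILITY_PERSONA:
        findings.extend(check(soup))
    return findings

sample = '<img src="logo.png"><input name="email">'
for finding in run_persona(sample):
    print(finding)
```

Of course, a handful of rules like this captures only the mechanical sliver of the work, which is exactly the tension the next paragraph gets at.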

I confess the idea of loading up the "Michael Larsen AID" widget cracks me up a bit. For it to be effective, sure, I could sit in the background, look at stuff, and give you a yes/no report. However, that skips over a lot of what I hope I'm actually bringing to the table. When I talk about Accessibility and Inclusive Design, only a small part of it is my raw testing effort. Sure, that's there, and I know stuff, but what I think makes me who and what I am is my advocacy: my frenetic energy for getting into people's faces about these issues. Me testing is a dime a dozen. Me advocating and explaining the pros and cons as to why your pass might actually be a fail is where I can really be of benefit. Sure, I could work in the background, but I'd rather be present, like the Doctor as we remember him on Star Trek: Voyager.

Thanks, Jason. This is a fun and out-there thought experiment. I must confess the thought of buying me as a "Virtual Instrument" both cracks me up and intrigues me. I'm really curious to see if something like this could come to be. Still, while I think you may be able to encapsulate and abstract my core knowledge base, I'd be surprised if you could capture my advocacy. If you want to try, I'm game to see if it can be done ;).