Monday, December 25, 2023

Software Testing Strategies is NOW AVAILABLE!!!

I realize that I am often terrible at self-promotion, but I have a very good reason to step it up for a change.

I WROTE A BOOK!

OK, now that that's out of my system, Matt Heusser and I co-wrote a book. That's the much more honest answer but hey, my name is on the cover, so I'm claiming that credit ;).

Matt and I spent more than a year working on this project that has become "Software Testing Strategies", subtitled "A Testing Guide for the 2020s". We have endeavored to create a book that is timely and, we hope, timeless. Additionally, we did our best to bring in some topics that may not appear in other testing books. Through several years of working on podcasts together, writing articles for various online journals, presenting talks at conferences, and developing training materials for online and in-person courses, we realized we had enough experience between us to inform and develop a full book.

From our Amazon listing:

Software Testing Strategies covers a wide range of topics in the field of software testing, providing practical insights and strategies for professionals at every level. With equal emphasis on theoretical knowledge and practical application, this book is a valuable resource for programmers, testers, and anyone involved in software development.


The first part delves into the fundamentals of software testing, teaching you about test design, tooling, and automation. The chapters help you get to grips with specialized testing areas, including security, internationalization, accessibility, and performance.

The second part focuses on the integration of testing into the broader software delivery process, exploring different delivery models and puzzle pieces contributing to effective testing. You’ll discover how to craft your own test strategies and learn about lean approaches to software testing for optimizing processes.

The final part goes beyond technicalities, addressing the broader context of testing. The chapters cover case studies, experience reports, and testing responsibilities, and discuss the philosophy and ethics of software testing.

By the end of this book, you’ll be equipped to elevate your testing game, ensure software quality, and have an indispensable guide to the ever-evolving landscape of software quality assurance.

Who this book is for

This book is for a broad spectrum of professionals engaged in software development, including programmers, testers, and DevOps specialists. Tailored to those who aspire to elevate their testing practices beyond the basics, the book caters to anyone seeking practical insights and strategies to master the nuanced interplay between human intuition and automation. Whether you are a seasoned developer, meticulous tester, or DevOps professional, this comprehensive guide offers a transformative roadmap to become an adept strategist in the dynamic realm of software quality assurance.


Table of Contents

  1. Testing and Designing Tests
  2. Fundamental Issues in Tooling and Automation
  3. Programmer-Facing Testing
  4. Customer-Facing Tests
  5. Specialized Testing
  6. Testing Related Skills
  7. Test Data Management
  8. Delivery Models and Testing
  9. The Puzzle Pieces of Good Testing
  10. Putting Your Test Strategy Together
  11. Lean Software Testing
  12. Case Studies and Experience Reports
  13. Testing Activities or a Testing Role?
  14. Philosophy and Ethics in Software Testing
  15. Words and Language About Work
  16. Testing Strategy Applied

Sound interesting? If so, please go and visit the link and buy a copy. Have questions? Want us to delve into some of the ideas in future articles here? Leave a comment and let's chat.

Friday, October 27, 2023

Feeding on Frustration: The Rise of the "Recruiter Scam"

This is truly not an article I wanted to write, but my hope is that my experience may help some people out there.

To put it simply, I have been applying for a variety of jobs because, well, that's what you do when you are between jobs. I have, for the past several months, been working with an organization performing training for a cohort of learners. That started at the beginning of June, and it has recently been completed. With the classes finished, I am now the same "free agent" I was in May.




Thus, it should come as no surprise that I am applying for the jobs that are being posted and that I feel might make for a good fit. Additionally, this is part of my certifying for unemployment benefits. You have to show a paper trail of the companies you are applying to and demonstrate your active job search and the results of that search. Thus, I am making several inquiries each week. It's not surprising that the deluge of messages one gets when they are actively involved in this process makes it difficult to determine what is legitimate and what might be a scam.

Last weekend, as I was working through some things while waiting in my car for an oil change, I received a message saying that they had reviewed my application and wanted to "short-list" me for interviews and potential hiring. To help with that, they sent me a questionnaire to fill out. I've done many of these, so I didn't at first think anything of it, though as I worked my way through the questions, I started to think, "Wow, this is pretty cool. So many of these questions feel almost tailor-made for me." Part of me was getting suspicious, but I thought, "Ah, you're being that paranoid tester again. It's not like anything they're asking here is weird or harmful." So I decided to submit it.

A few days go by, and I receive an email message saying, "Congratulations! We are pleased to offer you the job of Remote Quality Assurance Engineer at (Company). To facilitate a formal job offer, please provide us with the following (full name, address, phone number, and email)." Again, at first, it seemed logical, but then... hang on... if they have my resume, it already contains all of that information. Why would they need me to send it again? Now my tester spidey sense is tingling. This is starting to feel like a scam. Do I disengage at this point, or do I see if I can catch them red-handed?

I figured, "What the heck? Let's roll with it". My name, address, phone number, and email are readily available. We can discuss if that is an intelligent practice another time. In this case, I figured, "Let's go with it."

I received an offer letter. The company looks legitimate. It's a company I applied to. The job description looks beautiful. It matches all of the items I would be looking for... all of them. Now, for anyone who has applied for a job, have you ever seen a job description that was a perfect 10/10, or in this case, a perfect 13/13? Everything felt tailor-made for me. The pay rate also felt right in the pocket. However, here's where things started to go sideways.

"We will send you a check so that you can procure the needed equipment from our preferred vendors. Once you are set up and have everything in place, we can start the necessary training and get you up to speed. We can set up the payment for this procurement by direct deposit, or we can send you a check."   

Ohhhhh, yeahhhhhh!!! Now they are feeling confident (LOL!). 

They have someone willing to give them sensitive information. Did I mention that with the signed offer letter, I was also to send them a copy of my driver's license, front and back? I understand the idea of verifying identity and the ability to work legally, but that's what I-9 verification services are for. They are also secure entities. I am not sending my license details over email. With this, I was pretty certain that I had a scammer.

Thus I went and did the next things that felt obvious to me. I went back to look up the company and determine if the information they were sending me was accurate. Company name? Checks out. Address? Yes, accurate. Let's do a little search on the name of the person recruiting... oh, would you look at that? There is no LinkedIn profile for this person associated with this company. Hmmm, let's see their job listings... okay, there's the Quality Assurance Engineer job listing. A quick review... now that's interesting. These are not the same requirements they sent to me. Not only that, but that perfect 13/13 job match was now reduced to an 8/13, with a few of the requirements I was qualified for not even in the listing, and a few additional items that did not align with my experience. Yeah, that's a lot more typical. Also, the pay rate was lower than what the scammer was advertising.

With that, I checked to see who the company listed as their official recruiters, reached out to them via LinkedIn, and simply asked if they were familiar with the individual who contacted me and if they were aware of the odd request to send me a check to buy equipment. The net result was that, less than an hour later, I saw a post from the company warning people to steer clear of any email communications from one "Maxwell Keen", who was posing as a recruiter for the company but did not work with them and never had.

All's well that ends well, right? We caught the scammer, I reported them, and now that's all done, right? Maybe, but I have a feeling that this person is still out there and probably looking for their next target, so with that, consider these some quick safeguards you can take.

- If you need to keep track of your job search, create a tracking table in Excel or elsewhere that stores the information about the job and, if possible, who you are communicating with. At the very least, review the job descriptions on LinkedIn and on the company's site and verify that they match.

- If there is a contact information space, note it down, especially if there is a contact person with a phone number. You don't need to contact them immediately, but you will want this information should you receive a reply back.

- Getting a questionnaire is fairly standard, but it also makes it easy to "cheat" and write down answers you look up. Again, it's not the most red of flags, but I'd argue it's also not very helpful, so be leery of anyone who sends one of these without also asking for a phone call/screening.

- If you get an offer for a job where there has been no interview or phone screen or a direct conversation with a human being (either over Zoom or in person), expect that this is probably a scam of some sort. Otherwise, how are they vetting these people?

- If you receive an offer letter, make sure there are no misspellings in the document. It's a simple thing, and perhaps petty, but offer letters have a fair amount of boilerplate text for legal purposes. Any legal document will be gone over with a fine-toothed comb for grammatical errors or misspellings. There may be some grammar variation, but misspelled words should automatically give you pause.

- Any reputable company will either work with you to set up a VPN or other security details so you can use your equipment as is, or they will ship you a system set up with the software they expect you to use. Being asked to receive a check to procure equipment is an indication that something illegal or shady is happening.

- References are worth having and including upon request. As my friend Timothy Western pointed out, though, if they are asking for them too quickly or at the very beginning of the process, hold off on providing those. They may be harvesting that information to target your references next.

Some additional items you can do that should help determine if you are dealing with a reputable recruiter or a scammer:

- Look up recent news about the company to understand its current market and technical position and future outlook. Discussing the latest product launches, partnerships, or corporate changes can help flush out what they know or don't know about the company.

- Read up on employee testimonials on sites like GlassDoor and see if they match what the recruiter is telling you. While this may not necessarily tip you off if they're a scammer, it will help give you some inside perspectives on working conditions and employees' perspectives on their work culture.

- If possible, try to connect with current or past employees who can offer firsthand insights into the company. Definitely see if there is a secondary recruiter there who can at least confirm the interactions you are having.

- If publicly available, review financial reports to assess the company's stability. Ask them some questions to determine what they might know and if their answers corroborate or refute your findings. 

Finally, make sure that everything you see in any communications can be traced back to interactions you initiated and make sense/match the experience you started with. 

Do not trust. Absolutely verify. 

Many of us are struggling with the reality of needing to find work. Let's do what we can to stop these parasites from making this already challenging search even more so.

Tuesday, October 10, 2023

Empathy is a Technical Skill With Andrea Goulet (PNSQC)

 


Today has been a whirlwind. I was up this morning before 5:00 a.m. to teach the penultimate class of my contract (sorry, I just love working that word into things ;) ) but suffice it to say the class is still happening while PNSQC is happening. That has made me a little tired, and thus there's a little less blogging from me today. Add to that the fact that I was called in to give a substitute talk in the afternoon (glad to do it, but that was not on my dance card today), and I'm really wondering how we are already at the last talk of the day and of the formal conference. Regardless, we are here, and I'm excited to hear our last speaker.

I appreciate Andrea taking on this topic, especially because I feel there has been a lot of impersonal and disinterested work from many people over the past several years. I was curious as to what this talk would be about. How can we look at empathy as a technical skill? She walked us through an example with her husband, where he was digging into a thorny technical problem and was interrupted by Andrea asking him for a moment. His reaction was... not pleasant. As Andrea explained, she realized that he was deeply focused on something so all-consuming that it was going to be a big deal to get his attention for needful things. Instead of it becoming an ugly altercation, they worked out a phrase (in this case, "Inception") to signal when a person is on a deep dive and needs to stay in their focused state, at least for a little while longer. While I don't quite know that level of a dive, there are times in my own life when I get caught up in my own thoughts and bristle when someone interrupts/intrudes. By realizing these things, we can not only recognize when we ourselves are on deep dives, but also when others are. This is a development of our own empathy to aid us in understanding when people are dealing with things.


Okay, that's all cool, but why is this being called a technical thing? Because we are free and loose with the use of the word "technical". Technical comes from the Greek word "techne", and techne means "skill". That means any skill is technical when we get down to it. It also means empathy is a skill that can be learned. Yes, we can learn to be empathetic. It's not something we are born with; it's something we develop and practice. Motivation likewise drives empathy. In many ways, empathy can be a little mercenary. That's why we get it wrong a lot of the time. We often want to reach out and help in ways that we would want to be helped, and thus our empathy is highly subjective and highly variable. Additionally, empathy grades on a curve. There are numerous ways in which we express and experience empathy. It's not a monoculture; it is expressed in numerous ways and under different circumstances and conditions. There are a variety of components, mechanisms, and processes that go into our understanding and expressions of empathy. It's how we collaborate and solve complex problems. In short, it's a core part of our desire and ability to work together.

Andrea showed us a diagram with a number of elements. We have a variety of inputs (compassion, communication) that drive the various mechanisms that end up with a set of outputs. Those outputs come down to:

  • Developing an understanding of others 
  • Creation of Trust
  • A Feeling of Mutual Support
  • An overall synergy of efforts   

Empathy requires critical thinking. It's not all feelings. We have to have a clear understanding and a rational vision of what people want, and not specifically what we want.

On the whole, this is intriguing and not what I was expecting to hear. Regardless, I'm excited to see if I can approach this as a developed skill.


Automation, You're Doing It Wrong With Melissa Tondi (PNSQC)



This may feel a bit like déjà vu because Melissa has given a similar talk in other venues. The cool thing is that I know each time she delivers the talk, it has some new avenues and ideas. So what will today have in store? Let's find out :).



What I like about Melissa's take is that she emphasizes what automation is NOT over what it is.

I like her opening phrase, "Test automation makes humans more efficient, not less essential" and I really appreciate that. Granted, I know a lot of people feel that test automation and its implementation is a less than enjoyable experience. Too often I feel we end up having to play a game of metrics over making any meaningful testing progress. I've also been part of what I call the "script factory" role where you learn how to write one test and then 95 out of 100 tests you write are going to be small variations on the theme of that test (login, navigate, find the element, confirm it exists, print out the message, tick the pass number, repeat). Could there be lots more than that and lots more creativity? Sure. Do we see that? Not often.

Is that automation's fault? No. Is it an issue with management and their desire to post favorable numbers? Oh yeah, definitely. In short, we are setting up a perverse expectation and reward system. When you gauge success in numbers, people will figure out the ways to meet that. Does it add any real value? Sadly, much of the time it does not.   

Another killer that I had the opportunity to work on and see change was the serial, monolithic suite of tests that takes a long time to run. I saw this happen at Socialtext, and one of the first big initiatives when I arrived there was the implementation of a Docker-based suite that broke our tests out into four groupings. Every test run was randomized and shuffled across the four server gateways, and we would bring up as many nodes as necessary to run the batches of tests. By doing this, we were able to cut our linear test runs down from 24 hours to just one. That was a huge win, but it also helped us determine where we had tests that were not truly self-contained. It was interesting to see how the tests were set up, and how many tests were made larger specifically to allow us to do those examinations, but also to allow us to divvy up more tests than we would have been able to otherwise.
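To make the pattern concrete, here is a minimal TypeScript sketch of the general idea (illustrative only; the actual Socialtext setup was Jenkins- and Docker-based, and the file names here are hypothetical):

```typescript
// Split a list of test files into N buckets so each batch can run on its own node.
// Shuffling first keeps slow tests from clustering in one bucket, and reshuffling
// on every run helps flush out tests that secretly depend on ordering.
function shardTests(testFiles: string[], shardCount: number): string[][] {
  const shuffled = [...testFiles];
  // Fisher-Yates shuffle
  for (let i = shuffled.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [shuffled[i], shuffled[j]] = [shuffled[j], shuffled[i]];
  }
  // Deal the shuffled tests round-robin into the buckets.
  const buckets: string[][] = Array.from({ length: shardCount }, () => []);
  shuffled.forEach((file, index) => buckets[index % shardCount].push(file));
  return buckets;
}

// Example: four buckets, one per server gateway.
const batches = shardTests(
  ["login.t", "search.t", "admin.t", "billing.t", "profile.t"],
  4
);
batches.forEach((batch, i) => console.log(`gateway-${i + 1}:`, batch));
```

A nice side effect of the random shuffle is exactly what we saw: any test that only passed because of the test that happened to run before it gets exposed very quickly.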

Melissa brought up the great specter of "automate everything". While this is impossible, it is still forlornly pursued as "The Impossible Dream". More often than not, it becomes the process of taking all of the manual tests and putting them into code. Many of those tests will make sense, sure, but many of them will not. The amount of energy and effort necessary to cover all of the variations of certain tests will just become mind-numbing and, often, not tell us anything interesting. Additionally, many of the tests created in this legacy manner are there to test legacy code. Often, that code doesn't have hooks that will help us with testing, so we have to do end runs to make things work. Often, the code is just resistant to testing or requires esoteric identification methods (and the more esoteric, the more likely it will fail on you someday).

Additionally, I've seen a lot of organizations looking for automated tests when they haven't done unit or integration tests at lower levels. This is something I realized while recently teaching a student group C#. We went through the language basics and only later started talking about unit testing and frameworks. Having gone through this, I determined that if I were to do it again, I would do my best to teach unit testing, even at a fundamental level, as soon as participants were creating classes that processed actions or returned a value beyond a print statement. Think about where we could be if every software developer was taught about and encouraged to use unit tests at the training wheels level!
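That class was taught in C#, but the idea translates to any language. Here is a minimal TypeScript sketch of what a "training wheels" unit test might look like, using Node's built-in test runner (the Cart class and its values are hypothetical):

```typescript
import test from "node:test";
import assert from "node:assert/strict";

// A first class that does more than print: as soon as students write
// something like this, they can also write a test for it.
class Cart {
  private prices: number[] = [];

  add(price: number): void {
    if (price < 0) throw new Error("price cannot be negative");
    this.prices.push(price);
  }

  total(): number {
    return this.prices.reduce((sum, price) => sum + price, 0);
  }
}

test("total sums the added prices", () => {
  const cart = new Cart();
  cart.add(5);
  cart.add(10);
  assert.equal(cart.total(), 15);
});
```

Nothing fancy, and that's the point: at this stage the habit matters far more than the framework.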

Another suggestion that I find interesting and helpful: a test that always passes is probably useless. Not necessarily because the test is working correctly and the code is genuinely good, but because we got lucky and/or there is nothing challenging enough in the test to actually run the risk of failing. If it's the latter, then yes, the test is relatively worthless. How to remedy that? I encourage creating two tests wherever possible, one positive and one negative. Both should pass if coded accurately, but they approach the problem from opposite directions. If you want to be more aggressive, make some more negative tests to really push and see if we are doing the right things. This is especially valuable if you have put time into error-handling code. The more error-handling code we have, the more negative tests we need to create to make sure our ducks are in a row.
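As a sketch of that pairing (a hypothetical function, again using Node's built-in test runner):

```typescript
import test from "node:test";
import assert from "node:assert/strict";

// Hypothetical function under test, with explicit error handling.
function parseAge(input: string): number {
  const age = Number(input);
  if (!Number.isInteger(age) || age < 0 || age > 130) {
    throw new RangeError(`invalid age: ${input}`);
  }
  return age;
}

// Positive test: well-formed input comes through as expected.
test("parseAge accepts a valid age", () => {
  assert.equal(parseAge("42"), 42);
});

// Negative test: bogus input must be rejected. This test passes precisely
// because the error handling fires, so it genuinely risks failing.
test("parseAge rejects a negative age", () => {
  assert.throws(() => parseAge("-7"), RangeError);
});
```

If that error handling ever quietly breaks, the negative test is the one that will catch it.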

A final item Melissa mentions is the fact that we often rely on the experts too much. We should plan for the option that the expert may not be there (and at some point, if they genuinely leave, they WON'T be there to take care of it). Code gets stale rapidly if knowledgeable people are lost. Take the time to include as many people as possible in the chain (within reason) so that everyone who wants to and can is able to check out builds, run them, test them, and deploy them.

Continuous Testing Integration With CI/CD Pipeline (PNSQC)

Today, I'm taking a walk down memory lane. I'm listening to Junhe Liu describe integrating various automatic tests into the CI/CD pipeline.


It's interesting to think about where we are today compared to 30 years ago when I first came into the tech world. Waterfall development was all that I or anyone knew (we may not have wanted to call it that, or we'd dress it up, but realistically speaking, any given release was a linear process, and each phase flowed into the next). While I had heard of Agile by the early 2000s, I didn't work on such a team (or one that presented itself as such) until 2011.

Likewise, it was around the mid-2000s that I started hearing Development and Operations discussed as two great tastes that go better together ;). Again, it would be nearly another decade before I saw it in practice, but over time, I did indeed start to see this, and I was able to participate in it.

One of the interesting arrangements in the group I worked with (Socialtext) was that every member of the team had their turn at being the "Pump King". That's a piece of lore that I miss, and it's a long story involving an old USB drive that was kept in a toy jack-o-lantern bucket; the person who took care of the protective pumpkin became known as the "Pump King", and after everything went online, the name stuck. The key point was that the Pump King was the person responsible for the Jenkins system: making sure it was working, up to date, and patched when necessary, as well as running it to build and deploy our releases. Every few weeks, it would be my turn to do it as well.

Thus it was that I was brought into the world of Continuous Delivery and Continuous Deployment, at least in a limited sense (most of the time this was related to staging releases). We actually had a three-tiered release approach. Each developer would deploy to demo machines to test out their code and make sure it worked in a more localized and limited capacity. Merging to the staging branch would trigger a staging build (or the Pump King would call one up whenever they felt it warranted, typically at the start of each day). We'd run that and push changes and version numbering to our staging server, and then we'd run our general tests, as well as all the underlying automated tests within the Jenkins process, of which there were a *lot*. Finally, due to our service agreements, we would update our production server and then push updates to customers who opted in to be updated at the same time. We never got to daily production pushes, but weekly pushes were common towards the end of my time on that product.

It was interesting to get into this mode, and I was happy that we were all taught how to do it: not just one person called on when needed, but all of us expected to be able to do it at any time, and to actually do it every time it was our turn to be Pump King.


Monday, October 9, 2023

Learning, Upskilling, and Leading to Testing (Michael Larsen with Leandro Melendez at PNSQC)

You all may have noticed I have been quiet for a few hours. Part of it is that I was giving a talk on Accessibility (I will post a deeper dive into that later, but suffice it to say I shook things up a little, and I have a few fresh ideas to include in the future).

Also, I was busy chatting with our good friend Leandro Melendez (aka Señor Performo), and I figured it would be fun to share that here. I'm not 100% sure if this will appear for everyone or if you need a LinkedIn login. If you can't watch the video below, please let me know.

 

We had a wide-ranging conversation, much of it based on my recent experience being a testing trainer and how I got into that situation (the simple answer is a friend offered me an opportunity, and I jumped at it ;) ). That led to talking about ways we learn, how we interact with that learning, and where we use various analogs in our lives. This led us to talk about two learning dualities I picked up from Ronald Gross' "Peak Learning" book (Stringers vs. Groupers) and a little bit about how I got into testing in the first place.

It's a wide-ranging conversation, but it was fun participating, and I hope you will enjoy listening and watching it :).

Common Pitfalls/Cognitive Biases In Modern QA with Leandro Melendez (PNSQC)


Ah yes, another date with the legendary Señor Performo :). 

Leandro is always fun to hear present, and I particularly liked the premise of his talk, as I frequently find myself dealing with cognitive biases, both in spotting them when others fall into them and in admonishing myself when I do (and yes, I do fall prey to them from time to time).



I've been in the process of teaching a class for the past few months related to software test automation, specifically learning about how to use a tool like Playwright with an automated testing framework. To that end, we have a capstone project that runs for three weeks. As anyone involved in software development knows, three weeks is both a lot of time and no time at all. This is by design, as there is no way to do everything that's needed, and because of that, there are things that we need to focus on that will force us to make decisions that will not be optimal. This fits into the conversation that Leandro is having today. How do you improve and get better when you have so many pressures and so little time to do it all? 
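For a sense of the kind of thing the students are building, here is roughly the smallest possible Playwright test (the URL and heading are placeholders, not the actual capstone application):

```typescript
import { test, expect } from "@playwright/test";

// Minimal Playwright test: navigate, assert, done.
test("home page shows the main heading", async ({ page }) => {
  await page.goto("https://example.com/");
  // Web-first assertion: retries until the heading is visible or times out.
  await expect(page.getByRole("heading", { level: 1 })).toBeVisible();
});
```

The capstone, of course, is about scaling well past this: choosing what to cover, structuring the framework, and living with the trade-offs.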

Note: I am not trying to throw shade at my students. I think they are doing a great job, especially in the limited time frame that they have (again, by design), and I enjoy seeing what choices they make. I'm effectively a "disinterested stakeholder" in this project, meaning I care about the end product, but I'm trying my level best not to get involved or direct them as to what to do. In part, it's not the instructor's role to do that, but also, I'm curious to see the what and the why concerning the choices that are made.

We often act irrationally under pressure and with time limitations. Often we are willing to settle for what works versus what is most important or helpful. I'm certainly guilty of that from time to time. An interesting aspect of this, and one I have seen, is the "man with a hammer" syndrome, where once we have something we feel works well, we start duplicating it and working with it because we know we can have great wins with it. That's all well and good, but at times we can go overboard. Imagine that you have an application with navigation components. You may find that many of those components use similar elements, and with that, you can create a solution that will cover most of your navigation challenges. The good thing? We have comprehensive navigation coverage. The disadvantage? All of that work on navigation, while important and necessary, has limited the work on other functionality in the unit under test. Thus, it may be a better use of time to do some of the navigation aspects and get some coverage on other aspects of the application, rather than have a comprehensive testing solution that covers every navigation parameter and little else to show for it.

Another example that Leandro gives is "Living Among Wolves", which we can consider an example of "conformance bias", meaning that when we do certain things or are part of a particular environment, we take on the thinking of those people to fit in with the group. Sometimes this is explicit, sometimes it is implicit, and sometimes we are as surprised as anyone else to find ourselves doing something we were not even aware of.

The "sunk cost" appears in a lot of places. Often we will be so enamored with the fact that we have something working that we will keep running and working with that example as long as we can. We've already invested in it. We've put time into it, so it must be important. Is it? Or are we giving it an outsized importance merely because we've invested a lot of time into it? 

One of the lessons I learned some time back is that any test that never fails is probably not very well designed, or it offers little value in the long run. It's often a good idea to approach tests from both a positive and a negative perspective. It's one thing to get something that works in a positive/happy path test (whether through luck or simply because what's being done is limited). Now, does your logic hold up when you invert the testing idea? Meaning, can you create a negative test, or multiple negative tests, that will "fail" based on changing or adding bogus data? Better yet, are you doing effective error handling with your bogus data? The point is that so many of our tests are weighted toward happy-path, limited-depth tests. If you have a lot of positive tests and not many tests that handle negative aspects (where the incorrect outcome is expected... and therefore makes a test "pass" instead of fail), can you really say you have tested the environment effectively?
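One way to stack up those negative tests cheaply is to table-drive them. A small sketch (the validator and its rules are hypothetical):

```typescript
import test from "node:test";
import assert from "node:assert/strict";

// Hypothetical validator under test: a lowercase letter first, then 2-15
// letters, digits, or underscores.
function validateUsername(name: string): void {
  if (!/^[a-z][a-z0-9_]{2,15}$/.test(name)) {
    throw new Error(`invalid username: ${name}`);
  }
}

// Several bogus-data cases, each expected to be rejected. Each of these
// tests "passes" precisely because the validation fails the input.
const bogusNames = ["", "ab", "9lives", "way_too_long_for_the_field", "drop;table"];

for (const name of bogusNames) {
  test(`rejects bogus username "${name}"`, () => {
    assert.throws(() => validateUsername(name), /invalid username/);
  });
}
```

Five negative tests for the cost of one loop, and each one exercises the error handling from a different angle.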

Ending with a shameless plug for Leandro: he is now an author, having written "The Hitchhikers Guide To Load Testing Projects", a fun walkthrough that will guide you through the phases, or levels, of an IT load testing project. https://amzn.to/37wqpyx

Amplifying Agile Quality Practices with Selena Delesie (PNSQC)

I had the opportunity and privilege to introduce Selena Delesie on the program today. It was fun to reminisce a bit because Selena and I were both in the same Foundations class for Black Box Software Testing all the way back in 2010. We also both served on the Board of Directors for AST, so we had a lot of memories and fun/challenging things we dealt with over the years. Thus, it was a pleasure to introduce Selena as our first keynote speaker. Also, part of her talk was discussed on a recent episode of The Testing Show podcast, so if you want a sample, you can listen to that :).

Selena Delesie

The tool that Selena and Janet Gregory put together is called the Quality Practices Assessment Model (QPAM). The idea behind this is that there are ways to identify potential breakdowns in the quality of our work. Areas we should consider are:

  • Feedback Loops
  • Culture
  • Learning and Improvement
  • Development Approach
  • Quality and Test Ownership
  • Testing Breadth
  • Code Quality and Technical Debt
  • Test Automation and Tools
  • Deployment Pipeline
  • Defect Management

The fascinating thing is the order in which these are identified and examined. Selena makes the case that the order in which these are presented and examined is important, and that by examining them in this order, or with this weighting, you have the best chance at overall and lasting improvement. Yes, defect management is important, but it will be less effective if more weight is not given to the previously mentioned items.

A key aspect of this is that quality is not just a technical issue; it's also a social issue, and it should not be dealt with in isolation. Selena introduces us to a group code-named "Team Venus" and identifies many of the issues they are facing and where those issues fall on the ten quality aspects. The key element is that each area is looked at holistically and in conjunction with the other areas, not in isolation. As anyone familiar with General Systems Thinking can tell you, there is no such thing as a standalone and isolated change; any modification made will have a ripple effect. It's also critical to realize that a process alone is meaningless if the overall values are not solid or agreed upon.

In the ten quality aspects that Selena referenced, there are four quadrants/dimensions to consider:

  • Beginning
  • Unifying
  • Practicing
  • Innovating

What I like about considering these as quadrants is that these areas are not separate from each other but dependent on each other. Some of the ten practice areas will align more closely with a particular dimension. Additionally, it's common for teams to spend more time in one quadrant/dimension than in others across the ten areas. I like the diagram Selena uses that looks like a spider web. The center of the web represents an informational or foundational level, and the farther out from the center, the greater the expertise and experience. Ideally, of course, all of the aspects would be on the outer rim of the spider web, but in practice, there will be color splotches in all four of the dimensions. That is normal and should not be discouraging, especially since each new team member will typically need to start from zero.

For those interested, the book and full model example for Team Venus is available in "Assessing Agile Quality Practices with QPAM" so if you want to learn more, go there.

Amping It Up!: Back at the Pacific Northwest Software Quality Conference

 


It's that time of year once again. I'm excited to be at PNSQC and in a new location. We are at the University Place Hotel and Conference Center, which is part of Portland State University. This is the first time PNSQC has been here but not the first time I've been here. A few years back, in 2018, they had run out of rooms at the primary hotel, so I had to find someplace else to stay. 

I happened upon University Place, and while it was a walk from the previous venue at the Portland World Trade Center, it was a comfortable hotel, and I liked my stay. As we were looking for different places to hold the conference this year, I mentioned my experience and thought it would make an excellent possible venue. And thus, here we are. It wasn't solely up to me, but I definitely put in a good word ;).

This year's conference is a strange feeling for me, as it is one in which I am feeling unsure and unsteady after many years. I am at the tail end of a contract I've been working on for several months. In a few weeks, barring any changes or new contracts, I will be out of work again. Thus I am attending this conference from a different headspace than usual. Previously, I was looking for small tips I could bring back to my current job. This year, part of me feels the need for a literal reinvention. I'm having that uneasy feeling that I have too many potential options and not enough time to consider them all, so this year's talk choices are probably going to be driven by my current mental state. If you see me attending talks that seem different or out of character, that's why.

Additionally, this year brought a new challenge and excitement in that I was the Marketing Chair for PNSQC. If you felt that there was either too much or not enough of a marketing presence for the conference, I get both the praise and/or the blame. Either way, for those who are here and for those following along, I'm happy you are here.

   

Wednesday, May 24, 2023

The Different Dimensions of Accessibility: Cognitive: Training for Accessibility (Part 6)

While often overlooked, cognitive disabilities are perhaps among the most common yet least visible of the disability families we have discussed. Cognitive disabilities are varied and present challenges that can affect how people navigate and interact with online content.

word cloud for cognitive disabilities: words include cognitive, disability, area, attention, experience, fatigue, Hemingway, memory, situation, stress, user


Cognitive disabilities cover a broad range of conditions. Memory, attention, comprehension, and problem-solving can all be affected, and for some people, all of the boxes are checked.

Primary Disabilities

Down Syndrome: a genetic condition where people have an extra copy of chromosome 21. Cognitive impairments, delayed development, and distinctive physical features are often seen in this condition. Levels of cognitive impairment can vary from mild to severe.

Dyslexia: a learning disability that can affect reading, spelling, and language comprehension. People with dyslexia may swap letters or read certain characters out of order, or may need to step back and slowly reread text to process what they are seeing.

Dyspraxia: Also referred to as Developmental Coordination Disorder. While often considered a mobility disability, dyspraxia can also have an effect on the actions of writing and typing and cause stress to cognitive functions as well.

Traumatic Brain Injury: A sudden impact to the head, such as a concussion or a skull fracture, can cause long-term issues with memory, attention, and problem-solving.

Fetal Alcohol Spectrum Disorders: people exposed to high amounts of alcohol prenatally, during their development in pregnancy, can develop a range of disabilities. These can affect memory, attention, impulse control, and social skills.

Secondary/Situational Disabilities

Cognitive disabilities are perhaps the area where situational disabilities are most prevalent. There are numerous situations that can put a strain on our mental faculties and cause issues that are not necessarily long-standing. Many of these share similarities, but they are all situations any of us could find ourselves dealing with.

Cognitive Overload: stress, fatigue, or just having a million things coming at us all at once. These situations can make it more difficult for us to process information and make decisions.

Reduced Attention Span: again, stress and fatigue can contribute to this, as well as side effects of medication or recreational alcohol or drug consumption. 

Memory Impairment: there can be a lot of situations that lead to this. Again, stress and fatigue but also just being in an unfamiliar or foreign environment, especially one where the language that is spoken/written is foreign to you. 

Design Considerations for Cognitive Disabilities

When we want to address Accessibility and accessible design for cognitive issues, it's important to realize that each area is unique, and individuals within these categories can have varying strengths, challenges, and needs. This is definitely the area where one size will not fit all and where a lot more judgment calls are required. Still, here are several suggestions that should help considerably and make the experience better for all users.

Avoid Complex Navigation: Having multiple layers of nested content or menus of menus is not ideal. It's easy to lose track of where a user is, and trying to get back to that location could be challenging if not impossible. Try to limit menus to one layer at most if possible.

Avoid Overwhelming With Information: A wall of text is not welcoming to anyone and for people with cognitive disabilities it is even more daunting. Try to use space, break up large paragraphs, and aim for a simplicity of message where it makes sense. 

Allow for Longer Time Limits: Aim to minimize timers and time pressures. Some systems require time limits, but make it so that the value can be adjusted within reason.

Provide Alternative Means for Content Display: Have clear labels, and do not assume that users will infer what is meant by a color used in isolation, or by a metaphor that may be well known to some but whose meaning others may not be aware of. Provide clear labels and alternatives that give more context where necessary.

Avoid High Contrast or Flashing Content: This is an example of where a suggestion that works well for one group could be a distraction or a problem for another. High-contrast screens that help those with vision issues could be too stressful to read or look at for people with cognitive disabilities. Having the ability to easily adjust the contrast can be a big help. Overly aggressive flashing and strobing is just a bad approach overall, IMO (there's a small code sketch at the end of this list showing one way to tone motion down for users who ask for it).

Use Fonts That Are Not Overly Busy or Decorative: Font choice can have a profound effect on the readability of online text, and for people with cognitive issues, overly fancy fonts can be a struggle to read. Aim to make sans-serif fonts and typical typefaces the standard, or make it easy for these typefaces to be selected.

Write For Everyone (and Learn to Love Hemingway): This is perhaps one of my favorite cognitive tools to use, the Hemingway Editor. I get occasional raised eyebrows when I mention that I think of Hemingway as an Accessibility tool, but I really do see it as such. Hemingway is designed to help you improve your writing from a clarity standpoint and to help you fix/avoid overly complex prose or impenetrable walls of text. You can also set a reading comprehension level and see how well your writing falls into that level (or doesn't).
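To make one of these recommendations concrete, here is a small TypeScript sketch that honors a user's OS-level "reduce motion" preference via the standard matchMedia API (the carousel element and CSS class names are hypothetical):

```typescript
// Check the user's OS/browser-level "reduce motion" setting before animating.
const prefersReducedMotion = window.matchMedia("(prefers-reduced-motion: reduce)");

function startCarousel(element: HTMLElement): void {
  if (prefersReducedMotion.matches) {
    // Hypothetical fallback: static slides the user pages through manually.
    element.classList.add("carousel-static");
    element.classList.remove("carousel-animated");
    return;
  }
  element.classList.add("carousel-animated");
  element.classList.remove("carousel-static");
}

// Re-apply if the user changes the setting while the page is open.
prefersReducedMotion.addEventListener("change", () => {
  document.querySelectorAll<HTMLElement>(".carousel").forEach(startCarousel);
});
```

The same check-first pattern applies to auto-playing video, parallax effects, and anything else that moves without being asked.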

As I stated at the beginning, cognitive disabilities are perhaps the most common and also the most neglected, because we don't necessarily "see" them. By understanding how common and how varied they are, we can spot a lot of areas where we can do better and things we can look out for to help make the experience of being online and using digital products better and more usable. Again, it doesn't take much in this stressful and fast-moving world to feel overwhelmed. These additional Accessibility features might be the ticket to making better interfaces and experiences for all of us.