Sunday, December 31, 2017

Where Does that Highway Go To?

It's that time again, the end of another year. With it comes a chance to reflect on some of what I've learned, where I've been, what I could do better, and what I hope to do going forward.

This year has been a good one for the Testing Show podcast. In the software testing world, outside of the work I do each day, I would say that this has been my most consistent endeavor. I've been pleased with the episodes we have done this year, and I am excited about upcoming episodes that branch away from the typical topics we normally cover and strike out into other areas that are important to software testing but aren't focused solely on the testing aspects alone. Also, since the next episode will go live in the middle of this week, I can say there will be one noticeable change: new theme music will be part of the 2018 series of shows.

In my workaday world, I had an odd situation that required an adjustment and a change in work habits for me. I started the year with just a handful of people coming to our office to work each day and quickly that changed into my being the only person coming into the office most days. I felt a bit like the lone lighthouse keeper much of the time, though of course, the team communicated regularly through digital means. This year, as I was the one person coming into the office, our parent company decided it didn't make much sense to pay for a large office space that only one person was using. The office was closed and I was transitioned to being a 100% remote worker. This is a first for me. I've had the option to work from home much of my career, but it was a couple of days a week. I've never worked in an arrangement where I was 100% at home, all the time. That's been my reality since October of this year. I'm adjusting, but I do have to say it still feels a bit strange.

On the family front, my son moved away from home to embrace his dream of being part of the music industry in Los Angeles. I'm proud of him for chasing after his dream, but part of me is way too familiar with the entertainment industry and its many promises made but so few kept. He sends me texts that remind me of myself at his age when I was sure I was going to conquer the world. I have to fight the desire to tell him I know that tone so well, but in truth, I don't know his reality. I remember mine. Therefore, I do my best to hold my tongue and just let him tell me what he is doing and keep the peanut gallery comments to a minimum. It's hard, to say the least. On the bright side, he's more interested in the production and marketing end of entertainment, not being the actual performer, so he has better odds than I had ;).

On the physical front this year, as I tested out a variety of fitness approaches, trying to maintain a target weight and chasing metrics to define quantitatively how physically fit I was, I came to some stark conclusions. Numbers are artificial, they can be gamed, they can be detrimental to long-term success, and most importantly, it's entirely possible to "be fit" and feel completely miserable. That was a conclusion I reached this summer. Yes, I was 194 pounds, but I was also anemic and unable to donate blood, I felt tired much of the time, and I got sick more frequently. I allowed myself a buffer above that lower point and voluntarily gained back 20 pounds. Since around November, I have hovered between 215 and 225 pounds. It doesn't sound as dramatic, but it sure feels a lot better. In 2018, my goal is to focus more on body composition and worry a lot less about my actual weight.

There were some longer-term initiatives I had to admit I didn't have the energy or ability to pursue as often. Weekend Testing Americas is still a thing, but I found myself repeating where I'd been before and decided it was better to step away for a bit. I haven't shut it down, but I determined I needed to let some new blood take a shot or let it lie dormant for a bit. I'm feeling like I have some fresher ideas now, so do not be surprised if you see more Weekend Testing Americas sessions in the new year.

On the speaking front, I have kept my pledge to focus on Accessibility and Inclusive Design as my topics of choice, and those will carry over into the new year as well. As I get older and discover that my resiliency and "springiness" aren't what they used to be, many Accessibility and Inclusive Design aspects that were at one time talking points have become "living points" for me. As such, the lack of sites that follow guidelines and the limited testing being done to help sites be more inclusive matters more to me now than ever. Additionally, I've decided that I want to do some more writing on the topic in 2018, with concrete and specific actions people can take.

As my company is making moves to modernize many of their testing frameworks, I'm taking advantage of the changing landscape to try to learn more about setting up a new framework from the ground up. At the moment, I'm championing an approach that uses Angular, Protractor, and Jasmine over our older, successful, but decidedly proprietary setup. Our testing team has decided to try a number of experiments and see where they lead us, rather than going all in on a particular strategy. To that end, I aim to learn a bunch of things with this approach and I also aim to talk about it as much as I can. This blog has been lean on my "learning in public" experiments as of late. It's time I got back to that.
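To give a flavor of what that looks like in practice, here is a minimal sketch of a Protractor/Jasmine spec. The page, selectors, and messages below are hypothetical placeholders rather than anything from our actual product; the point is simply the shape of a spec in this stack.

```typescript
// spec/login.spec.ts -- a minimal Protractor + Jasmine sketch.
// The page URL, selectors, and expected text are hypothetical examples.
import { browser, element, by } from 'protractor';

describe('login page', () => {
  it('rejects an invalid password with a visible error message', async () => {
    await browser.get('/login');

    await element(by.css('input[name="username"]')).sendKeys('test.user');
    await element(by.css('input[name="password"]')).sendKeys('not-the-password');
    await element(by.buttonText('Sign In')).click();

    // The assertion is what makes this a test rather than a script.
    const errorText = await element(by.css('.error-banner')).getText();
    expect(errorText).toContain('Invalid username or password');
  });
});
```

A spec like this would run through a protractor.conf.js that sets framework: 'jasmine' and points the specs entry at files like this one.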

My thanks to everyone who has worked with me, interacted with me, been part of The Testing Show podcast as a regular contributor or a guest, shared a meal with me at a conference, come out to hear me speak, participated in a Weekend Testing session, shown support to the Bay Area Software Testers meetup, and otherwise given me a place to bounce ideas, think things through, and been a shoulder to cry on or just heard me out when I feel like I'm talking crazy. Regardless of whether you have done that just a little bit or a whole lot, I thank you all.

Here's wishing everyone a happy, healthy and sane 2018. I look forward to talking with you all in the new year and beyond. Also, thank you, Talking Heads, for the song "Once In A Lifetime" as it has provided me so many clever titles for my year-end posts these past seven years (well, I think they are clever; your mileage may vary ;) ).

Wednesday, December 20, 2017

The Testing Show: Hiring and Getting Hired

It's been a big year for The Testing Show and this is the last episode of the year that is 2017. We were happy to have Gwen Dobson join Jessica Ingrassellino, Matt Heusser and me to talk about the changes that have taken place in the testing market over the past few years.

We riffed on a number of topics including the laws that prohibit asking about salary histories, having that discussion about money and making the best case for your worth, marketing your skill set and leveraging the variety of platforms at our disposal to help sell ourselves and our personal brands.

It's been a great deal of fun to produce and participate in this podcast, and I'm looking forward to the new topics and guests we will have in 2018. I am actively working on a two-parter for January with a special guest; you are just going to have to wait and see/hear who it is, but I can say I've wanted to interview this person for a long time, and I'm excited about presenting these episodes, along with some other changes for the show in 2018.

With that, please jump in and have a listen to The Testing Show: Episode 50: Hiring and Getting Hired.

Thursday, November 16, 2017

Good Ideas are Spawned From Bad Or Failed Ideas

My elder daughter and I have been working on a number of small projects together over the past few months, and every once in a while, I will get a comment from her where she says, "Wow, how did you know what to do in that situation? We hadn't faced it before, and you quickly figured out a way to deal with it. How is it so easy for you? Why can't I do that?"

I chuckled a little and then told her a maxim that I've used a bunch (and I'm sure I've said it here at some point, too)... "Good ideas/solutions come from good judgment. Good judgment comes from experience. Experience comes from bad ideas/solutions/judgment." In short, she sees that I can come in and consider an idea and implement it (in some things, I do have my limits, and they are legion). What she hasn't seen is the countless times I've tried similar things, failed at them, regrouped, reconsidered, remeasured, tried again, and repeated this process until I happened upon something that worked.

As a Scout leader, I've had the benefit of decades of teaching a few generations of kids how to do some basic stuff (note, I said basic, not easy or simple). We teach how to tie a handful of knots. We teach some basic cooking techniques. We teach how to handle items like an ax, a knife, and a saw. We teach how to safely use fire. We teach some basic wilderness survival tips. Each time through this process there is always a similar "wave" that I witness. At first, there's an excitement level, but that quickly gives way to a mild boredom. Seriously? This is such a big deal? Snooze! Still, I push on and demonstrate what I can and encourage them to practice what I am showing them. A hallmark of our Scouting year takes place about three months in for a typical scout. That's the "silent campout". Not silent in the sense that there's no talking or interaction, but silent in that the leaders (i.e. me and the other adults) make it a point to not discuss any of the campout particulars with the troop. They have their campsite, we have ours. The two are within eyesight of each other, and we reserve the right/authority to intervene if a situation is deemed unsafe. Outside of that, we let them pick the camp area, bring in all needed items, and then we leave them to it. They construct the camp, they cook their meals, they clean, they tend fires, and do all of the other things that we have taught them over a few months.

Each time, the outcome has been similar. The bored expressions often give way to genuine concern or, in some cases, panic. Wait, what was I supposed to do at this point? Did I pack what we needed? Did I cook that long enough? Am I going to be able to properly contain the fire? You get the idea. They make mistakes, they get frustrated, and then they approach the problem(s) from different angles. They confer. They discuss options. They experiment. Some of those experiments fail, but some succeed. They note the ones that were successful. The next morning, fewer mistakes, less frustration, and almost no panic. The process, while ragged, gets smoother and more refined. Almost to a person, this experience makes for a change of attitude, and then when we talk about "the basics", they are not so jaded and bored. They realize that basic stuff often is harder to physically do in a regular and smooth manner. Like everything, it takes actual practice and it takes some working through frustration. Do it enough, and you start to actually get good at those basics and then you forget that there was a learning curve at all.

My point with this today is that, too often, I think we approach aspects of what we do (testing, coding, administration, learning new stuff, getting out of our comfort zone) with the same mindset. We start out enthusiastic, we get bored and jaded, and then we panic when what was supposed to be so simple doesn't turn out to be. It's OK to feel these things. In fact, it's necessary. Over time, as we stumble, learn, practice and perfect, we too might forget exactly what it takes to do basic things and make them look easy. May I encourage you not to? You never know who may be watching and feeling discouraged because they can't seem to "get it". We've been there, we know how that feels. Let's make sure we remind others that basic doesn't necessarily mean easy, and that good ideas/solutions often come from bad ideas/attempts.


Tuesday, November 14, 2017

The Frustration of "Too Much Choice"

Hello, Internet world. My name is Michael. I'm a serial collector of informational tidbits.

"Hi, Michael!"

Seriously, I recently went through and realized something both frustrating and enlightening. I am a huge fan of Bookmarking and Favoriting (Liking on Twitter now, but I still think of it as Favoriting). In my world, "Favoriting" serves a specific purpose. It's not so much to say "hey, I want to show you I like what you've posted" (though I do that from time to time) but to say "this is something I don't have the time to look at right now, but I really want to look at it later". I subscribe to lots of services that send me emails with cool tips and tricks to test, code, and administer stuff. I have a digital library that has hundreds of titles on all sorts of topics. I have categorized listings of websites, forums and other services that are there to help me learn and do things better and easier.

The thing is, when I get up in the morning and I scan my Inbox, most of the time I just delete the notifications, unless there's something that really piques my interest.

Those links? Rarely visited.
That list of Favorites (Likes) on Twitter? Rarely reviewed.
That massive list of books? It's so big that most titles hide in plain sight.

I remember Cem Kaner saying at one point that having the information doesn't necessarily mean that it will be useful to you at that moment, but being able to reference it and know about it or where to find it is of value. Thus, for many of us, resources are just that, they are raw lumps that are there when and if we need them, but we have to understand what we have access to and when that access is relevant.

For me, I struggle with too much choice. If there are too many options, I simply get overwhelmed and never make a decision. It's all clutter. It's a challenge to organize it. I have a couple hundred CDs and whenever I go on a road trip, I spend a ridiculous amount of time trying to pick something to listen to. Often, I give up and listen to the podcast I downloaded on my phone. Oh, that's another thing, which podcast to listen to and when? So many choices, so many options, but do I really have time for a deep dive? Have I truly let that one podcast build up to ten unlistened episodes? Yikes! When am I going to find the time to listen to that? Since my phone has a limited amount of storage, I tend to be a little more deliberate with what goes on it and I cycle what I download, so I have fewer choices. The net result is that I actually listen to what I put on the phone.

As I've stated in this blog before, I don't write about these things because I'm particularly good at them. I write about them because I'm particularly terrible at many things but want to do better. Thus, I'm trying my best to constrain those things that overwhelm me. Yes, I belong to a service that lets me download a free ebook every day. Many (perhaps most) of those books are "someday maybe" propositions that tend to crowd out the books that are actually immediately relevant. Therefore, I'm trying something different. Each week, I'm going through a particular category of expertise and/or criteria I need to understand or become more proficient with. I'm looking at this from a Marie Kondo approach. I'm looking at the resources I've collected, taking some time to categorize them into "immediately relevant", "relevant later", and "someday maybe". My goal is to locate the items that are immediately relevant and then focus on those for a short period of time.

In other words, I'm putting a physical constraint on the information I have and use, not to block out all of the resources I have, but to meaningfully work on the ones that can be most effective here and now. It's great that I have books that will help me master a particular technology, but if I'm just learning about it or trying to get beyond the Advanced Beginner stage, do I really need to deal with topics that relate to mastery at this stage? No. Yet just by their being there in my line of sight, I lose focus and my attention wanders. I also do something similar regarding other endeavors in my office. I have a lot of interests and it's tempting to have a variety of things out and ready to use. The net result, though, is that I dabble in lots of things and don't put any appreciable time into the areas that are most important. Frequently I end up dealing with what's urgent or pressing, and that's great for the moment, but it can leave me lacking in areas that are indeed important but aren't urgent.

I'm not sure if this is going to be helpful to anyone else, but it's currently helping me. Take some time to block out items you want to work on, that you need to work on and then think of the things that will directly help you meet those goals in the very near-term future. If they don't, don't delete them but perhaps put them in a place where you know they will come in handy later, and try to set a hard time for when "later" might be. If you can't do that, put them in the "someday maybe" container. The ability to pick and choose is wonderful, but sometimes, it helps a lot to limit what can be picked so that you actually make a choice and move forward with it :).

Wednesday, October 11, 2017

Machine Learning Part 2 With Peter Varhol: The Testing Show

As has become abundantly clear to me over the last several weeks, I could be a lot more prolific with my blog posts if I were just a little bit better and more consistent with self-promotion. Truth be told, a lot of time goes into editing The Testing Show. I volunteered a long time ago to do the heavy lifting for the show editing because of my background in audio editing and audio production from a couple decades back. Hey, why let those chops go to waste ;)? Well, it means I don’t publish as often since, by the time I’ve finished editing a podcast, I have precious little time or energy to blog. That is unless I blog about the podcast itself… hey, why not?


So this most recent episode of The Testing Show is “Machine Learning, Part 2” and features Peter Varhol. Peter has had an extensive career and has also done a prodigious amount of writing. In addition, he has a strong mathematical background which makes him an ideal person to talk about the proliferation of AI and Machine Learning. Peter has a broad and generous take on the current challenges and opportunities that both AI and Machine Learning provide. He gives an upbeat but realistic view of what the technologies can and cannot do, as well as ways in which the tester can both leverage and thrive in this environment.

Anyway, I’d love for you to listen to the show, so please either go to the Qualitest Group podcast page or subscribe via Apple Podcasts. While you’re at it, we’d love it if you could leave us a review, as reviews help bubble our group higher in the search listings and help people find the show. Regardless, I’d love to know what you think and comments via this page are also fine.

Tuesday, October 10, 2017

Seek JOY: #PNSQC Live Blog

Wow, we're at the end of the day already? How did that happen? Part of it was the fact that I started a few conversations with people that cut into talks being delivered, but as is often the case, those discussions can take priority and can often be the most important conversations you have at a conference.

Long story short, we are at the closing keynote for the main two-day conference. Rich Sheridan of Menlo Innovations believes that we can do work that we care about and that we can have joy in the work that we do and in the workplaces we actively move in. Rich shared his story of how he came up in the world of computers and computing starting in the early seventies, how the profession he loved was starting to sap the life out of him, and how he was contemplating leaving the industry entirely. He was experiencing the chaos of the industry: issues, bugs, failed projects, blown deadlines, lack of sales, and all of the fun stuff any of us who have worked in tech recognize all too well. Chaos often ends up leading to bureaucracy, where we go from not being able to get anything done to not being able to get anything started.

The point Rich wants to impart is that joy is what all of us hope for in most of the things that we do. We often sense it in some form, but it's often nebulous to us. Additionally, jobs and companies cannot guarantee our success or our happiness. We have to take an active role in it and be willing to make it happen for ourselves as we endeavor to make it work for others.

Why joy? Joy is service to others and being able to see the reaction to that service. It's why we do the work that we do. We want to see our work out in the world. We want to see it get a response. We want to see people react to it, and we want to have that moment that swells up inside of us, that cheers us and makes us jump for (wait for it!) joy.

It's one thing to say that you want to build a joyful career, but it requires human energy. In most of the work environments that I have enjoyed the most, the work has almost always been secondary. What made the work enjoyable? The people and the interactions with those people are what makes for memorable experiences.

One of the most important things to foster joy is the idea of trust. We have to trust one another. Trust allows us to be open and frank. We can get into hard discussions and deal with conflict in a positive manner. When we can debate issues with trust and consideration, while still being committed to trying to get our issues resolved, we can deal with the hard issue and still be positive and remain friends.

Rich describes his office as a hodgepodge of machines, but the most astounding aspect is the fact that no one has their own computer. People change pairs and move on to other machines every five days. That means there is no such thing as "it works on my machine" because there is no dedicated machine for anybody.

Simplicity goes a long way toward helping develop joy. Complexity for its own sake sucks the life out of us, and Rich showed the way that they manage their work. It's all on paper. It's all based on the size of commitment, and it's all based on creating meaningful conversations. Plans are then put on the wall for all to see, so that it's completely transparent. Honesty matters, and with transparency comes honesty. With honesty and a willingness to express it, empathy comes into play. With empathy, we can see what others see and feel what others feel. To borrow from Deming, as Rich did: "All anyone wants is to be able to work with pride." With that pride comes joy.

Rich Sheridan is the author of "Joy, Inc." and it is scheduled to be released very soon. Yep, I want a copy :).

And with that, the two-day technical program for PNSQC is over. Tomorrow will be another work day for me, but I think it's safe to say that I'll be buzzing with the kinetic energy that these past two days have provided. Thank you, PNSQC team, for putting on a great event. Thank you, speakers, for sharing your insights and experiences. Thank you, participants, for coming out and taking part, and especially, thank you to everyone who came to my talk and offered positive feedback (and constructive criticism, too). I wish everyone a safe trip back to their destinations, and if you are here for the extended workshop day, I hope you all enjoy it and bring back great insights to your teams.

TESTHEAD out!!!

Off the Cuff Accessibility and Inclusive Design: #PNSQC Live Blog

And here I was wondering what I was going to say to give a highlight and wrap-up for my own talk that I gave yesterday.

The PNSQC Crew beat me to it. They asked me to give a video summary of my talk and some of my impressions of Accessibility and Inclusive Design.

This video chat was unrehearsed, so if you are used to my nice and polished podcasts, where I seem to speak super smoothly and without flubs, today you get to see the real me: no edits, no do-overs, and a much more realistic representation of how I talk most of the time, hand flails and all.


To quote Ms. Yang Xiao Long... "So... that was a thing!"

Product Ecology and Context Analysis: #PNSQC Live Blog

Ruud Cox is an interesting dude who has done some interesting stuff. To lead off his talk, he described a product that he was involved with testing. A deep brain stimulator is effectively a pacemaker for the brain. Ruud described the purpose of the device, the requirements, and issues that have come to light during the procedures. I've worked on what I consider to be some interesting projects, but nothing remotely like this.


Ruud was responsible for organizing testing for this project, and he said he immediately realized that the context for this project was difficult to pin down. In his words, the context is a blur. However, by stepping back and addressing who the users of the product are (both the doctors and medical staff performing the procedures and the individuals having the procedures performed), he was able to start bringing that blur into focus.

One tool that Ruud found to be helpful was to create a context diagram, and in the process, he was able to sketch out all of the people, issues, and cases where the context was applicable, and that it was somewhat fluid. This is important because as you build out and learn about what people value, you start to see that context itself shifts and that the needs of one user or stakeholder may differ from, or even conflict with, another user.



Patterns and giving those patterns meaning are individual and unique. Ruud points out that, as he was making his context diagram, he was starting to see patterns and areas where certain domains were in control and other domains had different rules and needs. Our brains are "belief engines", meaning we often believe what we see, and we interpret what we see based on our mental model of the world. Therefore, the more actively we work with diagramming context, the more we understand the interactions.

Ruud refers to these interactions and the way that we perceive them as "Product Ecologies". As an example, he showed how a person might be asked if they can make an apple pie. Once the person says yes, they then look at the factors they need to consider to make the pie. The Product Ecology considers where the apples come from, what other ingredients are needed and where they come from, and the tools and methods necessary for preparing and combining everything to create an apple pie suitable for eating. In short, there's a lot that goes into making an apple pie.

Areas that appear in a context analysis need to be gathered and looked at in regard to their respective domains. Ruud has the following approach to gathering factors:


  • Study artifacts which are available
  • Interview Domain experts
  • Explore existing products
  • Hang out at the coffee machine (communicate with people, basically)
  • Make site visits
Another way to get insights on your context is to examine the value chain. Who benefits from the arrangements, and in what way? Who supplies the elements necessary to create the value? Who are the players? What do they need? How do they affect the way that a product is shaped and developed over its life?

User scenarios try to diagram the various ways that a user might interact with a product. The more users we have, the more unique scenarios we will accumulate, and the more likely we are to discover areas that are complementary and contradictory. Ruud showed some examples of a car park with lighting that was meant to come on when the car park was being used and go dark when it was not. As he was diagramming out the possible options, he realized that the plane and the angle of the pavement had an effect on the way that the lights were aligned, how they turned on or off, or even whether they turned on or off at all.

Currently, Ruud is working with ASML, whose machines create tiny circuit elements on chips. One of the factors he deals with is that it can take months to produce a single wafer in a scanner and fabricator. Testing a machine like this must be a beast! Gathering factors and requirements is likewise a beast, but it can be done once the key customers have been identified.

Owning Quality in Agile: #PNSQC Live Blog Day 2

Good morning everyone! I actually had a full night's sleep and no need to rush around this morning, so that was fantastic. I should also mention that I was really happy with the choice to go to Dar Salam restaurant last night as part of the Dine Around Town event. Great conversation and genuinely amazing food.

This morning, the theme for 2018 was announced. It will be "On The Road to Quality" and I am already thinking about what I'd like to pitch for a paper and talk next year. No guarantee it will be accepted, but I'm looking forward to the process anyway.


The first talk this morning is "Who Owns Quality in Agile?" by Katy Sherman and she started out with a number of questions for us. Actually, she started the morning with a dance session/audience participation event. No, I'm not kidding :).

Katy shared the example of her own company and the fact that her organization had interests pulling in two directions. In some cases, testers felt they didn't have enough time to test. On the other hand, developers felt like they had too much time with nothing to do. One of the challenges is that there is still a holdover from older times as to what quality means. Is it conformance to requirements? Or is it something else? How about thinking of quality as meeting both the explicit and implied needs of the customers? We want conformance, we want user satisfaction, and we want our applications to perform well, even if customers don't know what performing well even means. Ultimately, what this means is that Quality cannot be achieved through testing (or at least not through testing alone).

In some ways, there's still an us-vs-them attitude when it comes to being a developer or being a tester in an Agile team. In my view, silos are unhealthy and perpetuate problems. There's an attitude that testing is extra, that having a dedicated tester is a frustrating overhead, and wouldn't it be great if we could do away with it? Some developers don't want to test. Some testers struggle with the idea of becoming programmers. Many of us are in between. As a tester who has reported to both test managers and development managers, I've found there's a need to be direct, to show what you can bring to the table, and to express issues and concerns to both in a similar way (perhaps not exactly the same, but with confidence that both can be taught to understand the issues). In my own work, I have determined that "being a tester" alone isn't a realistic option at my company. There is too much work and there are too few people to do it, so everyone needs to develop their skills and branch into avenues that may not necessarily be their specific job title. Consider the idea that we are all software engineers. Yes, I realize that engineer is a term that has baggage, but it works for the current discussion.



There is a lot of possible cross-training and learning that can happen. Developers being taught how to test is as important as testers being taught how to code. Perhaps it is even more important. Testers can learn a lot about the infrastructure. Testers can learn how to deploy environments, how to perform builds, how to set up a repository for code checkout, and yes, the old perennial favorite, how to automate repetitive tasks. Please do not believe that more automation will eliminate active human testing. Yes, it is true that it may well end a lot of low-level testing and that it will eliminate a number of jobs focused on that very low level, but testers who are curious and engaged in learning and understanding the product are probably the least likely to be turned out. I'm mimicking Alan Page and Brent Jensen of A/B Testing at this point, but if testers are concerned about their future at their current company, perhaps a better question to ask would be "what value am I able to bring to my company and in what areas am I capable of bringing it?"

One of the key areas where we can make improvements is in the data sets and the data that we use both to set up tests and to gather and analyze the output. Data drives everything, so it matters that we can drive the testing with data and analyze the resulting data produced (yes, this is getting meta, sorry about that ;) ). By getting an understanding of the underlying data and learning how to manipulate it (whether directly or through programmatic means), we can help shape the approaches to testing as well as the predictions of application behavior.
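As a sketch of what I mean by letting the data drive the testing, here is a small table-driven Jasmine example. The discount function and the rows in the table are made up purely for illustration; the pattern of feeding one spec many data rows is what matters.

```typescript
// A hypothetical function under test, included so the sketch is self-contained.
function calculateDiscount(cartTotal: number, memberLevel: 'basic' | 'gold'): number {
  if (memberLevel === 'gold' && cartTotal >= 100) return 25;
  if (cartTotal >= 100) return 10;
  return 0;
}

// The data table drives the specs; adding a row adds a check.
const discountCases = [
  { cartTotal: 50,  memberLevel: 'basic' as const, expected: 0 },
  { cartTotal: 150, memberLevel: 'basic' as const, expected: 10 },
  { cartTotal: 150, memberLevel: 'gold'  as const, expected: 25 },
];

describe('discount calculation', () => {
  discountCases.forEach(({ cartTotal, memberLevel, expected }) => {
    it(`gives ${expected}% off when a ${memberLevel} member spends ${cartTotal}`, () => {
      expect(calculateDiscount(cartTotal, memberLevel)).toBe(expected);
    });
  });
});
```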

Is QA at a crossroads? Perhaps yes. There is a very real chance that the job structure of QA will change in the coming years. Let's be clear, testing itself isn't going anywhere. It will still happen. For things that really matter, there will still be testing and probably lots of it. Who will be doing it is another story. It is possible that programmers will do a majority of the testing at some point. It's possible that automation will become so sophisticated and all-encompassing that machines will do all of the testing efforts. I'm highly skeptical of that latter projection, but I'm much less skeptical of the former. It's very likely that there will be a real merging of Development and QA. In short, everyone is a developer. Everyone is a tester. Everyone is a Quality Engineer. Even in that worldview, we will not all be the same and do exactly the same things. Some people will do more coding, some people will do more testing, some people will do more operations. The key point will be less of a focus on the job title and a rigorously defined role. Work will be fluid, and the needs of the point in time will determine what we are doing on any given day. Personally, that's a job I'd be excited to do.

This is a 90-minute talk, so I'll be adding to it and updating it as I go. More to come...

Monday, October 9, 2017

Lean Startup Lessons: #PNSQC Live Blog



Wow, a full and active day! We have made it to our last formal track talk for today, and what's cool is that I think this is the first time I've been in a talk given by Lee Copeland when he hasn't been physically running a conference.

Lee discussed the ideas behind Lean Startup, a book written back in 2011 by Eric Ries. In lean startups, classical management strategies often don't work.

The principles of Lean Startup are:

Customer Development: In short, cultivating a market for their products so that they can continue to see sales and revenue, and to help develop that relationship

Build-Measure-Learn Loop: Start with ideas about a product, build something, put in front of customers to measure their interest, then learn about what they Like or Don't Like and improve on the ratio of Like to Don't Like. Part of this is also the concept of "pivoting", as in "should we do something else"?

Minimum Viable Product: Zappos didn't start with a giant warehouse, they reached out to people and sought out what they might want to order, and then physically went to get the shoes that people ordered and sent them to them.

Validated Learning: "It isn't what we don't know that gives us trouble, it's what we know that ain't so" - Will Rogers
"If I had asked people what they wanted, they would have said faster horses." - Henry Ford (allegedly)

One Metric That Matters: This measure may change over time, but it's critical to know whatever this metric is because if we don't know it, we're not going to succeed.

So what Quality lessons can we learn here?

Our Customer Development is based on who consumes our services: managers, developers, users, stakeholders. Who is NOT a customer? The testing process. We do this wrong a lot of the time. We focus on the process, but the process is not the customer and it doesn't serve the customer(s). With this in mind, we need to ask "What do they need? What do they want? What do they value? What contributes to their success? What are they willing to pay for?"

The Build-Measure-Learn Loop basically matches up with Exploratory Testing.

Minimum Viable Product: We tend to try to build testing from the bottom up, but maybe that's the wrong approach. Maybe we can write minimum viable tests, too. Cover what we have to as much as we have to, and add to it as we go.

Validated learning equals hypotheses and experiments to confirm/refute hypotheses. In short, let's get scientific!

How about the One Metric That Matters? What it definitely does not include are vanity metrics. What does the number of planned test cases mean? How about test cases written? Test cases executed? Test cases passed? Can we actually say what any of those things mean? Really? How about a metric that measures the success of your core business? How do these relate to the quality of the software? Is there an actual cause and effect relationship? Do the metrics listed lead to or inform next actions? Notice I haven't actually identified a metric that meets those criteria, and that's because it changes. There are a lot of good metrics, but they are only good in their proper context, and that means we have to consistently look at where we are, what that one metric that matters is, and what really matters when.

Gettin' Covey With It: #PNSQC Live Blog

This talk is called "7 Habits of Highly Effective Agile", which is a play on the Stephen Covey book title, hence my silly title. Yes, I'm getting a tad bit punchy right now, but I am lucky in that I get to have an energy boost. Nope, no food or drink or anything like that. I get to have Zeger Van Hese next to me doing his sketch notes.

I've waited years to actually see him do this. How am I going to concentrate on doing a live blog with that level of competition (LOL!)?

Seriously though, "Seven Habits" is a book from the 80s that is based on the idea of moving from dependence to interdependence with a midpoint of independence.  For those wondering about the original Covey Seven Habits, they are:


  1. Be Proactive
  2. Begin With the End in Mind
  3. Put First Things First
  4. Think Win/Win
  5. Seek First to Understand, Then to Be Understood
  6. Synergize
  7. Sharpen the Saw


The Agile Seven Habits break down to:

  1. Focus on Efficiency and Effectiveness (make sure that your processes actually help you achieve these, such as cutting down technical debt, providing necessary documentation, etc.)
  2. Treat the User As Royalty (understand your customer's needs, and what you are providing for them)
  3. Maintain an Improvement Frame of Mind (adapt, develop, experiment, learn, be open, repeat)
  4. Be Agile, Then Do Agile (we have to have internalized and actively live the purpose before we can do the things to get the results. Communicate, collaborate, learn, initiate, respond)
  5. Promote a Shared Understanding (develop a real team culture, break away from roles, stop thinking it's not my job, learn from each other, wear different hats and take on different tasks, pairing as a natural extension, determine what is important)
  6. Think Long Term (develop a sustainable process, pacing that's doable and manageable, making habits that stick and persisting with them)

Well, there are my thoughts, and here are Zeger's. His are much cooler looking ;):

Customer Quality Dashboard: #PNSQC Live Blog

OK, the late night flight arrival into Portland and the early morning registration is catching up to me. I will confess that I'm hunting down Diet Coke and alertness aids of the non-coffee or tea types. If I seem a little hazy or less coherent, well, now you will know why ;).

In any event, it's time for John Ruberto's "How to Create a Customer Quality Dashboard", and he started with a story about losing weight and a low-fat diet that he was on for a while. The goal was to get less than 20% of his calories from fat. He soon realized that having a couple of beers and eating pretzels helped make sure that the fat calories consumed were less than 20%. Effective? Well, yes. Accurate? In a way, yes. Helpful? Not really, since the total calories consumed went well above those needed for weight loss, but the fat calories were under 20% all the time ;).

This story helps illustrate that we can measure stuff all day long, but if we aren't measuring the right things in context, we can be 100% successful in reaching our goals and still fail in our overall objective.

To create a usable and effective dashboard, we need to set goals that are actually in alignment with the wider organization, focus on what's most important to stakeholders, provide a line of sight from metrics to goals, and build a more comprehensive view of our goals.

Let's put this into the perspective of what might be reported to us from our customers. What metrics might we want to look at? What does the metric tell us about our product? What does the metric not tell us about our product?

Some things we might want to consider:

  • Process Metrics vs. Outcomes: Think lines of code per review hour vs. defects found per review.
  • Leading Indicators vs. Lagging Indicators: Think code coverage vs. delivered quality.
  • Median vs. Average: Median page load vs. average page load. Averages can be skewed by outliers (see the sketch after this list).
  • Direct Measures vs. Derived Measures: Total crashes vs. reported crash codes.
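To make the median-versus-average point concrete, here is a tiny sketch with made-up page-load numbers; a single slow outlier drags the average far above what most users actually experienced, while the median barely moves.

```typescript
// Hypothetical page-load samples in milliseconds; the last one is an outlier.
const loadTimesMs = [120, 125, 130, 135, 140, 4000];

const average = loadTimesMs.reduce((sum, t) => sum + t, 0) / loadTimesMs.length;

function median(values: number[]): number {
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 === 0 ? (sorted[mid - 1] + sorted[mid]) / 2 : sorted[mid];
}

console.log(average);             // 775 -- skewed upward by the single 4000 ms load
console.log(median(loadTimesMs)); // 132.5 -- much closer to the typical experience
```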

There are a lot of potential issues that can cause us problems over time. One is gaming the system, where we set up metrics that we can easily achieve or otherwise configure in a way that is not necessarily supported in reality. See the example of the fat percentages: we could adjust our total intake so our fat calories were below 20%.

Confirmation Bias: a preconceived notion of what things should be, leading us to see or support results that confirm that reality.

Survivor Bias: the act of seeing the surviving instances of an aspect or issue as though they are the whole of the group.

Vanity Metrics: Measuring things that are easily manipulated.



Keep Your Project off the Autopsy Slab: #PNSQC Live Blog

All right, my talk is over, and from what I can tell from the comments afterward, it seems to have gone over well :). Lunch was good and we got to cut into the 35th Anniversary cake. The committee also put out cupcakes, and I'm pretty sure this had to have been put out there as a joke:

I am happy to hear that no Vegans were harmed in the making of these cupcakes ;).


The first talk of the afternoon is about Pre-Mortems, with the subtitle of "Keep Your Project off the Autopsy Slab". To make it simple, there are three basic things to consider in a pre-mortem:

1. Something has failed.
2. You and a group have determined and found the issue.
3. Steps are put in place so that the issue doesn't happen again.

Put simply, projects frequently fail, and they fail for predictable reasons. Sometimes they are event-based, such as a person quitting, the schedule changing, or an integration being needed. Sometimes it's people related, such as a skill set being missing or someone not being dedicated to the project. At times the culture works negatively, such as fostering overt competitiveness or being overly skeptical. Additionally, the goal to get it out there can override the willingness and ability to do the job right. Technology can be a problem, as when external third-party software elements seem great on paper but we end up struggling to use them.

One additional issue that can sabotage a project is a lack of knowledge, specifically with the idea of "unknown unknowns" as in "we don't know it and we don't even know we don't know it".

If we are looking at a pre-mortem as a way to get a feel for where things are likely to go, we first have to identify as many risks as possible, but just as important is determining if, and how, it will be possible to mitigate those risks. Mitigation is an oddball term, in that it's not entirely clear as to what we are doing. Are we minimizing risk? Reducing risk? In general, it's lessening the impact of risk as much as possible.

There are eleven ideas to consider with pre-mortems:


  1. What is the scope?
  2. How do we determine the logistics?
  3. What instructions do you send?
  4. How can you imagine the project failing?
  5. What are the reasons for potential failure (brainstorm)?
  6. Now, let's create a master list (consolidate, de-duplicate and clarify).
  7. What items can we remove from the list due to low impact (filtering)?
  8. What reasons of potential failure could be real risks to the real project?
  9. What does the group consider important? Reorder based on importance (VOTE).
  10. Action Items to mitigate the top risks.
  11. Action items need to actually be done. They are the whole point of the pre-mortem.

The Good, The Bad and the Ho-Hum: #PNSQC Live Blog Continues

One of my favorite parts of coming to PNSQC is the fact that it has a high rate of recidivism. A lot of the attendees have been several times and it's great fun getting back in touch and catching up between sessions. I met my friend Bill here at PNSQC in 2010, and what we find hilarious is that, even though we both live in the Bay Area, it seems we only see each other here at PNSQC ;).

Heather Wilcox leads off the breakout track talks with a talk called "Four Years of Scrum: The Good, The Bad and the Ho-Hum". For the record, I work in a Kanban shop currently but worked in a Scrum shop from 2011-2012. Prior to that, none of the companies I worked for could be considered Agile (not for lack of trying for the last company I worked for up through 2010). What I've experienced is that each company does Agile just a little bit differently, and in many cases, they think they are doing Agile, but not really.

Heather described how in the early part of the process of getting to Agile, they were often performing the rituals of Agile, but they were not really sharing those rituals with other teams. As I have experienced that in a small way in the past, it's not often a recipe for success, especially since you have to explain Agile principles in a non-Agile space. Sounds like it wouldn't be much of a big deal, but you would be surprised.

What Heather described was the way that they implemented and refined their practices, and in some cases, they determined that they were going to do things that were non-standard and that they were OK with it. Her team does things that may be considered unorthodox. Their teams are somewhat siloed, they use various Agile methodologies, and some teams do formal Scrum while others wing it on a theme of Scrum.

Her direct teams use standups, planning meetings, demos, retros, chartering and a focus on "The Goal" which is to respect and preserve Scrum ceremonies while being as mindful of time as possible. I like that Heather uses the terms "Scrum Ceremonies" so freely because, in many ways, we don't want to admit that we are somewhat beholden to "rituals". If they do a demo, they only demo when they have something interesting to show. Retros are done with the same rules and techniques each time so as to let the team know what they need to do and be prepared for. As long as it works, that will be what they do. If it stops working, they will change the process, so long as it gets them closer to the overall goal.

Does Scrum Always Work?

To put it simply, no. When being predictable but not precise is what matters, or there are small changes being made that can be accomplished in a sprint or two, then it works very well (and that tends to correlate with my own experience). When dealing with a hard stop date, or having to focus on a large project where there just doesn't seem to be a way to break the work down, Scrum itself starts to break down. Regardless, they have determined that points matter and that those points are vital to being consistent. Their planning improves considerably when points are adhered to. They also help avoid massive stories. In short, Scrum is a technique, it's a planning approach, it's a methodology, but it is not gospel and it need not be ironclad. There is a discipline requirement and it can be uncomfortable, but it allows for flexibility if done correctly.


-----

Just to let you all know, it will be a while until my next update because I'm delivering my talk next :). I will do a "somewhat summary" of that after the conference.

Scaling Quality: Live Blogging from #PNSQC

Good morning from chilly Portland. I'm saying that just by virtue of the fact that San Francisco has been warm and downright balmy. I got off the plane in Portland and immediately regretted not packing a hat.

A Tri-Met ride and a couple of blocks' walk got me to the Portland WTC, home of the Pacific Northwest Software Quality Conference. This year is a special year for PNSQC, as it has been 35 years since their first conference, emphasized by a cake currently sitting downstairs (I'm impressed that no one had cut into it yet, but they were smart not to put a knife out ;) ).

The introductory talk for today is "Quality Engineering 2017: Trends, Tricks, and Traps" by Penny Allen. Penny deals with the return process at Recreational Equipment, Inc. You might be familiar with their acronym, REI. One of the key points of Penny's presentation is that "Quality Still Matters". It seems that disruption is the new normal, the rate of change is exploding, and it's not like we are putting simple things together. We are now creating complex and game-changing interactions. Who remembers the days of a client, a server, and a database, when the LAMP stack contained it all? I actually do remember that.

We are dealing with Internet-connected refrigerators... that DDoS us. We have information overlays... with people walking into traffic because they are so immersed they aren't paying attention. The cloud is magnificent... until the cloud, or our slice of it, goes down. Disruption is accelerating, and there are things we haven't even considered: DevOps, Microservices, CI/CD/C*, you name it. The only sure way to survive chaos is adaptation, and Penny says "radical adaptation" is essential.

Quality Engineering goes beyond design and construction. We also need to think about deployment and usage. More to the point, we need to "safeguard" these approaches and methods.

Why Quality Engineering?

Quality Engineering is tearing apart the problems we see and breaking them down until we understand them. Want to see the best and most productive Quality Engineers out there? Look at young children. Does anyone ask more questions than they do ;)? More to the point, they understand the creative process and there is a low fear of failure (that is, if there is any real fear at all). Penny suggests we build our engineering muscles again. Books suggested are "Think Wrong", "Applied Minds", and "The Way Things Work Now".

Are You Saying I Have to Become a Developer?

Short answer, no. Longer answer, you definitely need to be a technologist. It means you need to be an inventive problem solver, be a self-driven learner, become a coherent communicator,  be open-minded but practical, and become adept at finding the signal in the noise. Think of it as though you want to get into better shape. Do you have to become a marathon runner? Certainly not, but you may need to do some running. Or lifting. Or backpacking. Whatever it is, you have to decide to do it and then work towards achieving the goals. Likewise, you don't have to become a developer, but you would be well suited to learn something about programming and find ways to regularly use those skills.

Shift Left?

Show of hands: is "shift left" signal, noise, or meaningless? Penny says that "shift left" is noise; it really doesn't mean anything. Asking "why" earlier is useful, but that's not shifting left, it's called "doing our job", and no amount of shifting left will matter as much as continually asking why. The earlier we do that, the better off we will be.

Focus on the End Results

Software testers are often responsible for a lot of busywork and supporting efforts that focus on deliverables other than the end result. In short, process does not impart quality. No one goes to a company to buy a test plan; they go to buy an experience. Quality is not beaten into a product, it's designed in from the start and safeguarded along the way.

Laws and Guidelines: Accessibility for Everyone: Long Form Book Review

It's been a fun ride, but it's time to close out my long form review of Laura Kalbag's "Accessibility for Everyone".

I have discussed with Laura my intention to do this, and she has given me her approval. This is going to be a review of the book and my own personal take on the concepts and ideas. I do not in any way intend for this to be a replacement for the book or otherwise want people to consider that reading my long form review basically means it's a Cliff Notes version of the book. I'm excited that books like this get written and I want to support Laura and other authors out there. If you think this book sounds intriguing, buy the book!!!

ACCESSIBILITY FOR EVERYONE

LAWS AND GUIDELINES

Front cover of Laura Kalbag's book "Accessibility for Everyone"
If you have ever sat yourself down to read the actual guidelines for WCAG, Section 508 in the U.S., or other guidelines for complying with requirements, you have probably felt a bit of frustration while reading them. That's because they are written in a legalese that only attorneys would love. Additionally, one of the biggest reasons that Accessibility projects are undertaken in the first place is that there is a threat of legal action, if not currently then at some point in the future. The retail chain Target and the UK airline bmibaby are two high-profile examples where legal action was taken due to sites not being Accessible to all users.

As mentioned above, in the USA, Section 508 is the part of the Rehabilitation Act that deals with making electronic communications available to everyone. It specifically has teeth when it comes to government agencies purchasing and using products. As I mentioned in other posts in this book review series, my current company went through a large-scale Accessibility project because a large entity wanted to purchase our product, but that product had to meet Section 508 guidelines. You may be asking, "was it a government agency?" The answer is, "Yes."

Europe has its own standard similar to Section 508. It's called the European Accessibility Act, developed to work along with Article 9 of the United Nation’s Convention on the Rights of Persons with Disabilities (CRPD).

Many of these legal guidelines point to WCAG and consider compliance with it to be sufficient to match the legal codes. As Laura points out, there are four fundamental principles that underlie the WCAG guidelines (the following is right out of the book):


  • Principle 1: Perceivable—information and user interface components must be presentable to users in ways they can perceive.
  • Principle 2: Operable—user interface components and navigation must be operable.
  • Principle 3: Understandable—information and the operation of user interface must be understandable.
  • Principle 4: Robust—content must be robust enough that it can be interpreted reliably by a wide variety of user agents, including assistive technologies.
WCAG has three levels of conformance: 'A', 'AA' and 'AAA', with more letters indicating more complete conformance. Different organizations will aim for varying levels, with 'AA' being the most common. Achieving an 'AAA' rating is the ideal of Accessibility and Inclusive Design, but it may not be feasible for many organizations, as it requires a lot of time, energy, and cash outlay to achieve.
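As one small, concrete illustration of the "Perceivable" principle (and only a narrow slice of it), here is a sketch of a browser-side check that flags images with no alt attribute at all. The function name is my own, and a real WCAG audit covers far more than this.

```typescript
// Flag <img> elements that lack an alt attribute entirely.
// Note: an empty alt="" is valid for purely decorative images, so only a
// missing attribute is reported here. This is one narrow check, not an audit.
function findImagesMissingAlt(root: Document | Element): HTMLImageElement[] {
  return Array.from(root.querySelectorAll<HTMLImageElement>('img:not([alt])'));
}

// Example usage in a browser console:
// findImagesMissingAlt(document).forEach(img =>
//   console.warn('Image missing alt text:', img.src));
```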

The key point of this section is that laws and guidelines are all well and good if we want to ensure that we are not running afoul of legal commitments, and they have a tendency to "keep us honest", but that's a poor place to keep our focus. Much better would be to consider that we are doing these things because they are the right things to do, because we want to include people in our products, and because at some point every one of us will face Accessibility issues of some kind.

RESOURCES

Laura provides a lot of additional reading options for Accessibility, Coding Patterns, Animation, ARIA tags, Assistive Technologies, Color and Design, CSS, HTML, Guidelines, Internationalization, Planning and Research, Performance, Typography, Subtitles and Captions, SVG Graphics, Usability, Validators and Inspectors, Video, and Writing/Readability. There's a lot to dig into here, and it's safe to say that using these as a jumping-off point, even if you only explore half of the ones listed, you will walk away with a lot of new skills and an understanding of Accessibility that will stand you at least a head above many of your peers.

WRAPPING IT UP

If you have stuck with me this far, you have probably surmised that I like this book a great deal. It takes a broad and sometimes sticky topic and makes it "Accessible to Everyone". This is not a coding book per se, though it does have some code examples. It's not a design book, though it does cover a number of design ideas and techniques. It's not a "how" book in a strict sense at all. What it is is a "why" book, and at that level, it succeeds admirably. It's written in a plain and understandable style without being patronizing. The examples are clear and easy to understand. The sections of the book build upon each other and allow the reader to "level up" with each chapter. The ordering makes sense in that it helps shape why we should care before we get into the nuts and bolts of what we should do. Laura has put together a book that is focused, distilled and lacking "the boring bits", which of course falls in line with the ethos of "A Book Apart" and its publishing style. Would I consider this a worthwhile addition to my book library? The fact that I have done a long-form review like this should be ample evidence that I do, but even without that commitment, I consider "Accessibility for Everyone" to be a book that I will turn back to as I need to. The Resources section alone is worth the price of the book, but what you get in the preceding chapters will go a long way toward helping anyone get a better feel for Accessibility and a desire to make it a primary focus of their programming and testing efforts.

Sunday, October 8, 2017

Evaluation and Testing: Accessibility for Everyone: Long Form Book Review

As seen in my last few entries, for the next undetermined number of posts, I will be reviewing Laura Kalbag's "Accessibility for Everyone". I have discussed with Laura my intention to do this, and she has given me her approval. This is going to be a review of the book and my own personal take on the concepts and ideas. I do not in any way intend for this to be a replacement for the book or otherwise want people to consider that reading my long form review basically means it's a Cliff Notes version of the book. I'm excited that books like this get written and I want to support Laura and other authors out there. If you think this book sounds intriguing, buy the book!!!

ACCESSIBILITY FOR EVERYONE

EVALUATION AND TESTING


Front cover of Laura Kalbag's book "Accessibility for Everyone"
Laura's main point in this chapter is that we can do all of the work, follow all of the guidelines, dot all of the 'i's and cross all of the 't's, and still fall well short of the intended goal if our implementation doesn't actually help our end users achieve their aims. Having been part of several Accessibility initiatives over the last five years, I can attest that unanticipated things show up in testing or require the original hypothesis to be reconsidered.

That's great, so what does she suggest? For starters, make a plan. Whether that's a formal test plan or a series of conversations on specific topics, the point is that there are a lot of criteria to consider when looking at an Accessibility issue and creating a remedy for it. What approach will you use? How will you gauge progress? How will you document the testing progress? How will the testing results and recommendations be fed back into the development process? These are standard questions for any software tester, and how they are handled will vary between organizations, but they all go through this process in some manner if they wish to be successful.

One way to get a head start on looking at Accessibility issues is to become familiar with the WCAG standard. Different countries have their own standards, but WCAG is intended to be international, and as such covers a broad range of areas. At times a local standard has additional details relevant to that region, but meeting the WCAG requirements will usually satisfy local requirements as well. The few that don't can be handled on a case-by-case basis.

The best test strategy is to literally use the product in the way the intended customer will use it. If you need to rely on a screen reader, turn your screen off, or even better, work in a dark room so that you have limited ability to see the target machine. This is persona testing taken to the next level, but I can attest to the fact that it is effective.

If you have the ability to participate in code reviews, you have a chance to help shape the product at a fundamental level. Even if you don't, getting together with the programmers during the story workshop to discuss requirements can go a long way toward setting up the processes necessary for success. Also, a lot of the structural elements for Accessibility exist in the HTML and CSS, as well as other web technologies. That means they can be checked and confirmed with automated tests. Still, don't be lulled into thinking you can automate all of the testing, as a lot of Accessibility comes down to the user experience. So far, that's proven to be a very tricky area to automate. For now, it still requires an empathetic human.

There are lots of tools available to the programmer and tester looking to validate their Accessibility work. Laura includes several that I would also consider excellent, such as:

  • W3C Markup Validation Tool
  • WebAIM WAVE
  • Color Oracle

Laura has a much larger listing in the Resources section of the book.

KEY TAKEAWAY

The best way to test for Accessibility issues is to actively encourage and set up opportunities to test with people who actively use Accessibility software and devices. If that's not possible, then persona-based testing will have to stand in as the next best thing. There are lots of ways to test and lots of available tools, but ultimately it all comes down to one thing: can the intended audience of your product use it effectively? If there is a need for assistive technology, can that technology help provide a comparable experience for all of your users? If that sounds like a tall order, that's because it is. As I stated earlier, and as Laura makes clear throughout the book, we can work on Accessibility techniques all we want, but they don't amount to much if our users are frustrated in their goals. The best way to help ensure that doesn't happen is to test our implementations often.


Friday, October 6, 2017

Accessibility and HTML: Accessibility for Everyone: Long Form Book Review

As seen in my last few entries, for the next undetermined number of posts, I will be reviewing Laura Kalbag's "Accessibility for Everyone". I have discussed with Laura my intention to do this, and she has given me her approval. This is going to be a review of the book and my own personal take on the concepts and ideas. I do not in any way intend for this to be a replacement for the book or otherwise want people to consider that reading my long form review basically means it's a Cliff Notes version of the book. I'm excited that books like this get written and I want to support Laura and other authors out there. If you think this book sounds intriguing, buy the book!!!

ACCESSIBILITY FOR EVERYONE

ACCESSIBILITY AND HTML

Front cover of Laura Kalbag's book "Accessibility for Everyone"
This is the payoff point. With solid, well-structured HTML, most of the work of making sites accessible becomes much simpler. That's it! Oh, wait, you want to know what that actually means, don't you ;)? Well, of course, and that's the meat of this chapter.

Many of us learn the basic HTML tags and think that's all we have to do. There's not much to it, except for the fact that nearly every tag has attributes, and understanding those attributes can help immensely in making sites more accessible. Turning off CSS in your browser can be very informative, as well as jarring. Pretty sites get reduced to a listing of text and spaces only, giving you a much clearer idea of how your pages are actually structured and showing how a screen reader would walk through them. If you find yourself scrolling through a bunch of stuff to get to the main content, imagine how it will feel to a user with a screen reader waiting for it to "get to the point already".

Being effective with headings can do a lot to help structure your page in a meaningful way. Think of an outline for a paper or presentation. You put certain elements at those levels for a reason, right? Make sure to use your heading tags the same way (h1 through h6). Lists are also helpful to break up the content and make it more readable, so use the proper list elements (ol, ul).
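
To make that concrete, here's a minimal sketch of the kind of structure being described. The page content is made up, but the heading and list elements are standard HTML:

    <h1>Gardening Basics</h1>
    <h2>Planning Your Garden</h2>
    <p>Start by mapping out the space you have available.</p>
    <h2>Choosing Plants</h2>
    <ul>
      <li>Tomatoes</li>
      <li>Peppers</li>
      <li>Basil</li>
    </ul>
    <!-- A screen reader can jump from heading to heading, so the levels
         should reflect the outline of the content, not just font sizes. -->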

Forms are a prime location for people to add extra code to help make sure that the user is putting in the right information. The downside is that, for many people using accessibility tools, these enhancements will be confusing at best and completely bypassed at worst. Also, while it's a common convention to use a red asterisk to indicate that information is required, having the markup call out that the field is required, or literally spelling it out, is better.
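
As a small illustration (the field name here is hypothetical), a labeled field that states its required status in both the visible text and the markup, rather than relying on a red asterisk, might look something like this:

    <label for="email">Email address (required)</label>
    <input type="email" id="email" name="email" required aria-required="true">
    <!-- The label is tied to the input via for/id, and "required" appears
         both in the visible text and in the markup itself. -->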

For many users, the keyboard is not just the primary input tool, it is the only input tool. Therefore, it is wise to use conventions that are well known and understood. Be careful about creating your own conventions or overriding long-standing actions.

Skip links are found at the start of many documents. Since users may be familiar with the content they are after and know how to find it, presenting them with a navigation bar they have to step through each time can be both tiring and time-consuming. Putting in a skip link means the user jumps over the navigation elements and gets right to the main content on the page.
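
A common pattern for this, sketched here with made-up id and class names, is a link at the very top of the page that targets the main content region:

    <body>
      <a href="#main-content" class="skip-link">Skip to main content</a>
      <nav> ... site navigation ... </nav>
      <main id="main-content">
        <h1>Page Title</h1>
        ...
      </main>
    </body>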

Tab order is critical, so make sure that users can get to the content they need in as few steps as necessary, using the tabindex attribute where it genuinely helps.
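
One cautious rule of thumb here: tabindex="0" places an element into the natural tab order, while positive values impose a custom order that is easy to get wrong. A small sketch:

    <!-- A custom widget made keyboard-focusable in the natural tab order
         (a native <button> would be even better where possible) -->
    <div tabindex="0" role="button">Show details</div>

    <!-- Positive values like tabindex="3" override the document order and
         are generally best avoided; let the source order do the work. -->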

It should be no surprise that HTML generated by WYSIWYG editors is often literal, overly verbose, and overloads each tag with meta details that are better put elsewhere, so paying attention to how your code is formatted is important. Additionally, wherever possible, try to keep the structural aspects in your HTML and move the style elements to CSS.

Another important aspect to consider when writing HTML for accessibility is the pair of notions called "Progressive Enhancement" and "Graceful Degradation". In a nutshell, Progressive Enhancement means starting from a basic framework and adding the aspects and elements that allow for specialized tools while keeping the base experience similar for all users. Graceful Degradation is the same idea in the opposite direction: if something fails or doesn't apply due to a user's method of access, don't just shut them out. Instead, make it possible to view as much of the content as possible, even if some of it is unoptimized.
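
One way to picture the idea (the content here is invented): the native details/summary disclosure widget adds collapse-and-expand behavior where it's supported, and where it isn't, the text still renders as ordinary content instead of being hidden behind a script that might not run:

    <details>
      <summary>Shipping and returns</summary>
      <p>Orders ship within two business days. Returns are accepted
         for thirty days after delivery.</p>
    </details>
    <!-- Browsers that don't recognize these elements simply render the text,
         so no one is locked out of the content. -->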

If you've been around Accessibility for any significant length of time, it's likely you have come in contact with WAI-ARIA attributes. WAI-ARIA stands for Web Accessibility Initiative—Accessible Rich Internet Applications, and it is helpful in allowing certain tasks to be described in a more effective way, beyond what HTML and CSS were ever intended to do. If you have ever heard a screen reader counting down and declaring the percentage progress of a file being downloaded, odds are that behavior is being defined with ARIA attributes.
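
A sketch of what that might look like (the values are illustrative) uses the progressbar role plus ARIA state attributes that assistive technologies can announce:

    <div role="progressbar"
         aria-valuenow="45" aria-valuemin="0" aria-valuemax="100"
         aria-label="File download progress">
      45%
    </div>
    <!-- Updating aria-valuenow as the download proceeds is what lets a
         screen reader report the changing percentage. -->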

Each of these areas is described in more depth in the book, though not as a comprehensive breakdown. As I will mention again when I do my summary review, Laura is not writing a "what" or a "how" book so much as a "why" book. As the title states, this is "Accessibility for Everyone", and that goes beyond the role of the programmer trying to build the site. The nuts and bolts are important, but the philosophy is even more so.

KEY TAKEAWAY

Meaningful HTML is the key, and to make HTML meaningful, we need to understand what it's best suited for and when it's better to let something else (CSS, ARIA) step in and take care of things like styling and more advanced interaction. Structuring documents effectively and understanding how to place the most important information up front helps everyone, not just those in need of assistive technologies. Also, it's good for the soul to see what your site tells you without the eye candy present. If it's a mess and hard to distinguish what is what, that's exactly the experience users who need assistive technology will have.

Thursday, October 5, 2017

Content and Design: Accessibility for Everyone: Long Form Book Review

As seen in my last few entries, for the next undetermined number of posts, I will be reviewing Laura Kalbag's "Accessibility for Everyone". I have discussed with Laura my intention to do this, and she has given me her approval. This is going to be a review of the book and my own personal take on the concepts and ideas. I do not in any way intend for this to be a replacement for the book or otherwise want people to consider that reading my long form review basically means it's a Cliff Notes version of the book. I'm excited that books like this get written and I want to support Laura and other authors out there. If you think this book sounds intriguing, buy the book!!!

ACCESSIBILITY FOR EVERYONE

CONTENT AND DESIGN

Front cover of Laura Kalbag's book "Accessibility for Everyone"
Laura makes an interesting point: all technology is assistive in some capacity. If we didn't have a keyboard, a mouse, a microphone, or speakers, how would we interact with the device? To that end, it helps to think of each aspect of how we interact and consider ways that we may be hindering the use of these basic tools. To borrow from the book directly, here's a nice and succinct way to consider the four primary Accessibility domains:

1. Visual: make it easy to see.
2. Auditory: make it easy to hear.
3. Motor: make it easy to interact with.
4. Cognitive: make it easy to understand.

By contrast, if there is a limitation in one of these areas, we need to be aware of it and be prepared to make it possible to get the same (or similar) information across through the other domains.

Sometimes people want to "shake things up" when it comes to design and approach. We might think it's silly to hold to "outmoded conventions" when it comes to icons and other things (I'm sure that there are many people who look at a Save icon who have never interacted with a floppy disk before). Still, the problem with that is that changing it to something new will potentially alienate and confuse those people who vividly remember what a floppy disk is and that it is nearly universally associated with "Save".

Methods of good accessible design are not that arcane, and many of them are already being implemented without you even being aware of it. A navigation bar with descriptive listings can serve as a quick summary of a site's contents. It's tempting to pack in sub-navigation bars and sub-sub-navigation bars as a way to really economize and make information quickly available. For the mouse user, that may be true, but for a keyboard-only user, sub-navs can be frustrating. Additionally, using mouse hovers to make areas pop out and become selectable is non-intuitive and leaves those not using a mouse unable to access the content unless an alternative method is provided. Using breadcrumbs or links to show where you have been can be helpful for those with cognitive disabilities, as well as being just a nice thing to have when you must move through several views in a hierarchy. Knowing where you are in that hierarchy saves time and reduces stress.
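
For the breadcrumb idea specifically, a minimal sketch (the page names are invented) is an ordered list inside a labeled nav element, with the current page marked:

    <nav aria-label="Breadcrumb">
      <ol>
        <li><a href="/">Home</a></li>
        <li><a href="/garden/">Garden</a></li>
        <li><a href="/garden/tomatoes/" aria-current="page">Tomatoes</a></li>
      </ol>
    </nav>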

Links have a long history on the web, and their standard look and feel have made it easy for people to determine what text is a link, whether they have visited it before, and what the link represents. "Click Here" is a terrible linking convention. "Learn more about our services" works a lot better.
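
In markup terms, the difference is simply where the descriptive words live (the URL here is a placeholder):

    <!-- Unhelpful: a screen reader scanning links hears only "click here" -->
    <a href="/services/">Click here</a> to learn about our services.

    <!-- Better: the link text itself describes the destination -->
    <a href="/services/">Learn more about our services</a>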

Writing for the web and for the greatest possible audience requires a balance. In my personal writing and anytime I try to write for my blog, I make a standard assumption that a sixth grader can read what I type. I don't always succeed, but that is the goal. To that end, breaking up paragraphs, using lists, bullet points, and other conventions to break up the flow is very helpful. Also, when possible, I encourage people to "speak Dude". If it's not possible to get a point across without technical jargon, that is understandable, but most of the time, we don't need to get overly flowery or "express overabundant facility via grandiose verbiage". That example is grammatically correct but bothersome to read.

Fonts can be rendered in a number of ways, and it's important to realize that many font styles might look "cool" but could be painful to read. Using standard typefaces and making the text easy for the user to read (and to resize) goes a long way toward making the experience usable by as many people as possible.

Picture carousels, button increments and other cool uses of web "eye candy" can indeed show off your site and its features, but they can also be a chore to navigate for users who are not able to use a mouse or view the pictures. Forms should follow conventions that make filling them out as easy as possible, with prompts that, where possible, alert the user to an error before they try to submit the form. Also, pages and content can change without the user doing anything on the page themselves. Dynamically updating copy is a reality, and finding ways to alert users to those changes is important.
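
One widely used way to announce dynamically updated copy, sketched here with made-up field and id names, is an ARIA live region that assistive technologies watch for changes:

    <label for="username">Username</label>
    <input id="username" name="username" aria-describedby="username-error">
    <!-- Error text inserted into this region (by script or a server response)
         is announced by a screen reader without the user moving focus -->
    <p id="username-error" aria-live="polite"></p>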

Color and color contrast can be make or break for many people. Not enough contrast can make it very difficult for some people to see what is being displayed. Higher contrast is usually better, but that may "clash" with your overall design aesthetic. Perhaps, but one can argue that your design is flawed if a sizeable portion of your users can't even read the site. Also, color swatches by themselves, without other identifying details like a text description, can lock users out of making relevant choices.
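
As a rough sketch (the colors are arbitrary examples, not recommendations), WCAG AA asks for a contrast ratio of at least 4.5:1 for normal body text, which is easy to miss with light grey on white:

    <style>
      /* Light grey on white: roughly 2.8:1, below the AA threshold */
      .low-contrast  { color: #999999; background-color: #ffffff; }

      /* Dark grey on white: roughly 7:1, comfortably above AA */
      .good-contrast { color: #595959; background-color: #ffffff; }
    </style>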

One of the longest-standing and simplest-to-spot Accessibility issues is missing alt text for a picture or image that has significance to the page and the user's reason for interacting with it. Icons such as Open, Save, Copy, and Paste, if rendered as pictures, should absolutely have some other indicator of what that picture is. An alt attribute helps with that for those who cannot, or choose not to, display the pictures. A readable text description is also important, and the more descriptive, the better. "Picture of a girl next to a car" is OK. "Photograph of a woman standing next to a 1985 Alfa Romeo across from the Huntington Beach boardwalk" is even better.
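
In the markup, that's the difference between a missing or empty alt attribute and one that actually describes the image (the file names are placeholders; the description mirrors the example above):

    <!-- Meaningful image: describe what matters about it -->
    <img src="alfa-romeo.jpg"
         alt="Woman standing next to a 1985 Alfa Romeo across from the
              Huntington Beach boardwalk">

    <!-- Purely decorative image: an empty alt tells screen readers to skip it -->
    <img src="divider-flourish.png" alt="">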

An additional area to consider is providing alternatives to content. Using the four domains above, think about anything you are presenting and how that content may be locking people out, then consider alternative ways to provide that information. A video presentation? Provide closed captioning for those with hearing issues and a transcript for those with vision issues.
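
A hedged sketch of that (the file names are placeholders): the track element attaches captions to the video, and a plain link offers the transcript:

    <video controls>
      <source src="keynote.mp4" type="video/mp4">
      <track src="keynote-captions.vtt" kind="captions" srclang="en"
             label="English captions">
    </video>
    <p><a href="keynote-transcript.html">Read the full transcript</a></p>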

KEY TAKEAWAY

We often err on the side of what's easiest for us and we make decisions based on our normative experiences. Therefore, it's not surprising that a lot of design issues happen not because people are being deliberately callous but because they are trying to be expedient. To borrow from the old Larry Wall Perl maxim, we need to make sure "there's more than one way to do it" whenever possible. That's going to require some extra work, certainly, but the increased usability for more people will likely make up for that. Take some time to look at your site or app and think "what could be stopping people from interacting with this? How can I make it easier for them? How can I make it possible for the most people to benefit from my site or app?"