Wednesday, October 16, 2019

Two Down, Eight More to Go - Tackling More "30 Days of Testing" Challenges

As a re-enactor, a performer, and a musician, I appreciate the fact that there is a need for regular practice in any endeavor.

- While I dress up like a pirate and participate in occasional stage shows, I need to actually practice swordwork so that I can be prepared and ready, as well as SAFE, during stage performances. In short, I need to keep my body in practice with rudiments and fencing drills.

- As a musician, I can't just show up and improvise (well, I can and I have, and the results have been predictably embarrassing). To be able to play with any decent proficiency and dexterity, I must practice, even if the practice I do is playing things by ear so that I can do better live improvisation. The same goes for writing songs. If I want to write better songs, I have to (gasp!) write songs in the first place. It's a little silly to think I'm going to get inspiration and write the perfect thing every time. Likewise, if I use the music theory I do know and write songs with it, I may not make something brilliant every time, but my odds of writing something good go way up. Much better than if I just wait for inspiration to strike.

- When I make clothes for historical garb or cosplay, I can't just expect to come in and knock everything out perfectly the first time. I'm just not that skilled a tailor. I can, however, make mock-ups, practice, and try out ideas so that I can get things solid enough to make the items well.

Why should I think that as a blogger and as a tester I am just going to have intelligent things fall into my lap? The answer is "they probably won't, but they definitely won't if I don't practice or prepare for them."

This brings me back to the "30 Days" Challenges. For various reasons I looked at a number of them and said "oh, that would be cool, I will check that out later" or "hmmm, not quite in my wheelhouse, I may check that out further down the road." Any guesses how many of them I've come back to? Yep, I've not come back to any of them except for the two that I chose to hit immediately. Note that I completed both of those and learned a lot from each of them. Let's have a look at a little graphic:


There are ten challenges there. Two are done, eight I've never started. Well, that's going to change. Next up is "30 Days of Automation in Testing". Why? I'm in the middle of learning how to set up C# and .NET Core for automation needs.
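
Since I'm at the very beginning of that journey, here's roughly the level I'm starting at: a minimal NUnit test in a fresh .NET Core project (a sketch of my own; the names are placeholders, not from any real project):

    using NUnit.Framework;

    namespace AutomationPractice
    {
        [TestFixture]
        public class FirstTests
        {
            [Test]
            public void Addition_ReturnsExpectedSum()
            {
                // A trivial check, just to prove the test runner is wired up.
                Assert.AreEqual(4, 2 + 2);
            }
        }
    }

If even that much runs cleanly under "dotnet test", I'll consider day one a win.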

The problem is, we're already up to the 16th of October. Not a really convenient start time, right? Old me would say "OK, I'll start this beginning of November" and then I'd forget about doing it. I'd still feel good because I told the world I'd do it. I mean, who is going to check up on me, right? Well, that's a lame attitude and the answer is I'M GOING TO CHECK UP ON ME!!! 

By the way, expect me to talk about "Writing While ADHD" but I'm not going to promise a timeline for it just yet ;).

So what's my plan for the "30 Days of Automation in Testing"? Simple: I'm starting it today. It seems two posts a day should be enough to get me back on track and cover all 30 days (that may be aggressive and ambitious, but hey, fools rush in where consultants fear to tread ;) ).

Am I Really So Ordinary? - a #PNSQC2019 Blog Retro

This has been an amazing few days. I've received a presentation award from my peers here. You all voted with your evaluations, and your scores made mine the second most highly rated presentation of the conference. WOW!

I'm humbled by this but I'm also a little embarrassed. Why would I be embarrassed? Because for the past five months, my blog has been quiet. Why is that? Because I've felt that I don't have anything important left to say. TESTHEAD has been on the air for almost ten years. There are over 1200 blog posts I have written. What more could I possibly say without repeating myself? What can I possibly add that would be even remotely interesting?


I don't know if anyone else has these thoughts from time to time... or often... or every single day... but yeah, I do. I had a great conversation last night with a fellow presenter (they may or may not be cool with me sharing this so I'm cloaking in a little anonymity... but I'm pretty sure anyone who knows us can guess ;) ). As I was talking about how I struggled to come up with an idea this year and that I wondered if my experience would even be all that interesting, we recapped a few things and thoughts:

- experiences are all we can really share, and they're what people actually relate to. Me setting myself on high and offering pronouncements is boring. Me telling how I got completely lost or frustrated with a situation and what I learned from it is much more valuable.

- I joked that so much of my talk was "blinding flashes of the obvious" and the response back was "was it really? If it was so obvious, why was it a revelation when you addressed it?" Point being, what may seem patently obvious in hindsight may be hidden or not understood by everyone else. In short, if you are confused, it's a good bet a lot of other people are, too.

- it takes a lot for people to get up on a stage or in front of a group and be willing to speak. What we may see as banal and everyday is a major step out of the comfort zone for 95% of people. The act of presenting is courageous in and of itself, much less being willing to do it again and again, year after year.

- what's more, think about what people give up to come to a conference in general. They give up their time, their families, their work commitments, their home commitments. Many of them pack themselves into a plane for several hours and are not at all thrilled about the experience, yet they go because they want to hear what might give them an edge, a new idea, a new angle to help them do better work every day. They want to hear what you have to say, and really, the only worthwhile thing you can share is your own experiences.

I should also mention that the thoughts for my talk didn't come together fully formed. The paper I submitted went through three revisions and extensive feedback from two other individuals. They helped me take ideas that were half-baked and make them make more sense, helped me step back and emphasize the areas that needed it, and pushed back on areas that didn't add as much as the parts they suggested I emphasize. For those who voted for my presentation, I must be absolutely clear that "I HAD HELP!"

Others have asked if I will be back next year and what I might talk about. The answers are "likely, yes!" and "I really do not know at this stage," but I have a few ideas. One thing I want to do is go back and review the other "30 Days" challenges that the Ministry of Testing has put together. I have several areas in my own work environment that are requiring me to step out of my comfort zone (have I mentioned I'm trading in my MacBook Pro for a Windows 10 machine? Have I mentioned that I'm looking into what it takes to program in C# and run on .NET Core? Yeah, those are new realities for me). If you had asked me last week, I might have said "yeah, no one is really interested in that." Today? I have a totally different opinion on that front. I'm still learning things and there's a lot to learn, so it only seems reasonable I keep learning in public the way I've said I would :).

PS, I've been on a voyage of musical discovery with my younger daughter recently and part of that has been to introduce singer/songwriter Paula Cole to her. Today's title borrows from her first single "I Am So Ordinary" so credit where credit is due ;).


A Show of Hands - a #PNSQC2019 Live Blog

Today is the workshop day at PNSQC. I'm the moderator for Melissa Tondi's workshop on "Efficient Testing". As the workshops are an add-on paid for by the attendees, out of respect for that I do not live blog workshop goings-on. If you want to take part, come out and sign up to be a part of it ;).

Instead, I'd like to talk a little bit about what I think really makes PNSQC unique, and that is its emphasis on working with volunteers.


  • If you submitted a paper and you received feedback, that person providing feedback is a volunteer.
  • If you interact with the web site, those updates are done by volunteers.
  • The registration, room monitoring, moderating of tracks, etc. are done by volunteers.

In short, this conference has so many opportunities to volunteer and participate. Many of the opportunities available will get a person a free ticket to the conference. Volunteering for workshops also gives a person the opportunity to participate in that workshop for free. While there is no guarantee that a person will be able to moderate or facilitate the specific workshop they want to participate in, odds are still pretty good that if you show interest early, you can moderate your first choice.

The bottom line here is that the conference is an excellent one, IMO, and the volunteers go a long way in helping foster that experience. 



Tuesday, October 15, 2019

Testing AI and Bias - a #PNSQC2019 Live Blog

Wow, have we really gotten to this point already? We're down to the last formal talk, the last Keynote. These conferences seem to go faster and faster every time I come out to play.

I've had the chance to hear Jason Arbon talk a number of times at a variety of conferences over the past several years, and Jason likes to tackle a variety of topics with ML and AI. Thus, I was definitely interested to see where Jason would go with the topic of AI and Bias. This is a wild new area of testing, and as many of you know, I am fond of Carina Zona's talk "Consequences of an Insightful Algorithm".

OK, we understand bias is there, but what can we actually do about it? Well, here's the spoiler: you can't remove bias from AI. That is by design. AI takes training data, and based on the training system, it learns. Bias enters in literally at the start of the process. The point is not to eliminate bias; it is to make sure that undesirable bias is not present or is minimized.

Think of a search engine. How can a machine look at a number of source articles and then, based on a query, decide what might be the most important information? We start with an initialized system. Think of it as a "fresh brain", and not in the zombie sense ;). From there, we go to a training system: information already graded and scored by a human or group of humans, so that the system can train on that data and those values. Can you guess where the bias has crept in? Yep, it's there in the training set. If the system guesses wrong, it gets negative reinforcement. If it gets it right, it gets positive reinforcement (from a machine's sense of reward, I guess).
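
To make that concrete, here's a toy sketch of my own (deliberately oversimplified, and in C# since that's my current learning stack): a single-weight "relevance" model trained on human-graded examples. The model can only ever learn what the graders rewarded, and that is exactly where the bias rides in.

    using System;
    using System.Collections.Generic;

    // A toy "relevance" trainer: the system learns whatever the human
    // graders rewarded, so any bias in the grading flows into the weight.
    class ToyTrainer
    {
        static void Main()
        {
            // Hypothetical training set: (feature value, human-assigned grade).
            // The grades themselves are where the human bias lives.
            var trainingSet = new List<(double Feature, double Grade)>
            {
                (0.9, 1.0), (0.8, 1.0), (0.2, 0.0), (0.1, 0.0)
            };

            double weight = 0.0;          // the "fresh brain"
            const double learningRate = 0.1;

            for (int epoch = 0; epoch < 100; epoch++)
            {
                foreach (var (feature, grade) in trainingSet)
                {
                    double prediction = weight * feature;
                    double error = grade - prediction;        // wrong guess => correction
                    weight += learningRate * error * feature; // the "reinforcement" step
                }
            }

            Console.WriteLine($"Learned weight: {weight:F3}");
        }
    }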

There are other factors at play, such as commerce (ads will get preference, or at least money will). Crawlers start on "popular" sites and then look at less-linked sites. The results will also be biased by language, education, and very likely dominant gender and race. There is also the fact that Microsoft Bing is the default search engine on Windows. Because of the people who don't know how to change their search engine, you end up with a "lowest common denominator" of users that matches up with some weird demographics. According to Jason, Microsoft Bing's core user group is single women in the Midwest (he said it, not me :p ), and much of Bing's largest search volume comes from that demographic. What might that say about what is "feeding the neural network"?

There is also Temporal Bias, or "Drift". Over time, searches can be affected by many things such as weather, politics, astrology, etc. Sample size can also affect the way that data is represented. One way to check this is to keep changing the sample size until the scores stop changing. Not a guarantee, but at least it gives a better feeling that more people are being represented.
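
A rough sketch of that stability check (my own illustration with made-up random data; the real thing would obviously be sampling actual search scores):

    using System;
    using System.Linq;

    // Keep doubling the sample until the average score stops moving.
    class SampleSizeCheck
    {
        static void Main()
        {
            var rng = new Random(42);
            double previousMean = double.NaN;

            for (int sampleSize = 100; sampleSize <= 102400; sampleSize *= 2)
            {
                double mean = Enumerable.Range(0, sampleSize)
                                        .Select(_ => rng.NextDouble())
                                        .Average();

                Console.WriteLine($"n={sampleSize,7}: mean={mean:F4}");

                // If doubling the sample barely changes the score, call it stable.
                if (!double.IsNaN(previousMean) && Math.Abs(mean - previousMean) < 0.001)
                {
                    Console.WriteLine("Scores have stabilized; the sample is probably big enough.");
                    break;
                }
                previousMean = mean;
            }
        }
    }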

There is also bias in the literal training data. In most cases, the people who grade search training data sets are paid $20/hr or less. In short, the people feeding our neural networks are not those of us who are designing them. We can debate whether that is a good or bad thing, or whether software engineers would be any better.

There's even a Cleaning Bias. Bing cleaned out misspellings, random numbers and letters, etc., and the irony is that Google didn't do that, and thus Google can even help people find what they are looking for even if they misspell it.

What happens when there is no "right" answer? By that I mean, what happens when there isn't a single answer for a particular word, but multiple possible options for the same word?

As Jason has said a few times, "are you all scared yet?!" Truthfully, I'm not scared, but I am a lot more cynical. Frankly, I don't consider that a bad outcome. I think it helps to have a healthy skepticism of these types of systems. We as testers should remain relevant a bit longer if we continue to be just that ;).

Testing in the Golden Age of Quality - a #PNSQC2019 Live Blog

Jennie Bramble and I have joked a couple of times today that with her giving two talks (one totally impromptu because a speaker went MIA) as well as running the Panel Discussion for "Testing in the Golden Age of Quality: Where are we and where are we going?", we should just change the working name of PNSQC 2019 to "The Jennie Bramble Show" ;).

Anyway, we have a wonderful panel including Carol Oliver, Nermina Avdic, and Mallory Brame. We are chatting about "the world we live in and life in general" (and bless you if you actually get that reference, though it may indicate you are at least my age ;) ). Quality is at the forefront of software decisions now more than it ever has been. There have been huge strides in tools, methodologies, and areas that software testers and testing can get involved with. Not bad for a career that was considered "dead" ten years ago (and yes, that "dead" is in huge air quotes and I get the original context, don't come at me (LOL!) ).

Some areas that have been discussed:

- How has the proliferation of tools and paradigms helped the software testing landscape (answer: it has, a lot!)

- What sort of skill sets might we look for in someone interested in security testing? It starts with code review, and then we need to be able to consider attack vectors based on the code used. Security certifications like CISSP are also helpful but not easy to get.


- How do we show the best value to our clients, and how has that changed? By helping move the process along, by helping test early, identifying paths that would be good automation candidates, showing that early testing makes for easier-to-fix bugs and that pairing can possibly prevent bugs in the first place, and thinking of areas developers wouldn't have considered testing.

- Advice for how to communicate with devs who take bugs personally? Have a conversation around the proof of the issue, try to find common ground (it's possible both groups have different understandings), realize that pushback is natural but being able to support with evidence goes a long way, and don't think in terms of "I'm right, you're wrong, shut up!". Unless you're Eugene Lee Yang; then, by all means, go ahead ;).

- How does your company treat bugs such as bugs in production vs. bugs in regression? If the bug is not easily discoverable, we may defer but generally, we will do what we can to fix them. Customer anger may tip the scales ;). Ask the question "so, what would happen if a customer actually did find this?"

- In this Golden Age, are we still numbers-driven or is that changing? Jennie steps out of the moderator role to say that numbers are a terrible rubric for validating quality because numbers can be gamed so easily as to be effectively meaningless. Fewer numbers, more morale-driven and skills-driven. The numbers will still be present, but expect to see us moving away from them.


Questions that didn't get covered. Any chance readers might want to follow on in the comments :)? If so, sound off!!!

----------

What are the practices for high-value test automation?

What makes automation valuable in the right now and what will it help us achieve in the future?

What methodologies are most prominent right now?

Are we doing more exploratory or ad hoc testing as opposed to test cases and scripts?

Where does QA fit into the life cycle?

How can we move around the SDLC and create value in new places?

What are ways that companies are thinking quality first and what can we do to influence that?

Where do you see QA/automation moving in the future and do you like what you see?

How can we make an impact on the future, share ideas and come together as a discipline?

Should we bleed more into development, for example, testers fixing defects, and is this something you do at your job or would embrace?

How can we make the golden age of quality last?

A11Y Testing Using an Intelligent Agent - a #PNSQC2019 Live Blog


All right, we are in my wheelhouse now :).


As an #a11y advocate, I spend a lot of time talking about and hoping to get people excited about and see the value in focusing on the benefits of thinking about Accessibility.

First and foremost, let's talk about a sobering number: 75,000,000 people need wheelchairs but cannot afford them. Why is this number important? It underscores the moral obligation we as a society have to help people who otherwise would be left out of any realistic participation in society. The World Bank estimates that 1.125 billion people deal with some significant difficulty in daily life due to a disability. That is 15% of the world's population as of now.

These numbers, I hope, emphasize how many people are affected by Accessibility issues. There are also news stories, time and time again, that show businesses are slowly waking up to the fact that, if they don't act out of moral or financial obligation, they may well end up paying for it in legal fees and lawsuits.

When I test for Accessibility issues, I tend to use the Web Content Accessibility Guidelines (WCAG) produced by the World Wide Web Consortium (W3C). Sounds like a mouthful but ultimately it comes down to:

Is a site Perceivable?
Is a site Operable?
Is a site Understandable?
Is a site Robust?

In other words, does your site "POUR" ;)? No, that's not really a thing but I laugh about it anyway.

All right, enough merriment. How does Keith recommend we actually test, and what can the automated tools actually help with? On the whole, automated testing has a LONG way to go when it comes to addressing critical and cognitive issues with sites. Most of the issues found have been found with human discernment (those who have followed my comments about Accessibility over the past few years know that this meshes with my general opinion; it's nice to see actual data points that support it, too ;) ).

Of course, with the title of this talk, I'm expecting that Keith has some sort of a software answer to this dilemma. To that effect, let's have a look at Agent A11Y!



Agent A11Y is capable of semi-autonomously exploring a website and evaluating its compliance with WCAG guidelines. As far as automated tools are concerned, that's a big step.

There is also additional tooling around manual testing of WCAG requirements, as a lot of WCAG is difficult to fully automate.
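
To give a flavor of what a single automated WCAG check looks like, here's a minimal sketch of my own using Selenium in C# (emphatically not the Agent A11Y code; the URL is a placeholder). This is the classic alt-text rule, WCAG 1.1.1:

    using System;
    using OpenQA.Selenium;
    using OpenQA.Selenium.Chrome;

    // One rule-based check: flag <img> elements with missing alt text.
    // Real a11y tools run hundreds of rules like this one.
    class AltTextCheck
    {
        static void Main()
        {
            using var driver = new ChromeDriver();
            driver.Navigate().GoToUrl("https://example.com"); // placeholder URL

            foreach (var img in driver.FindElements(By.TagName("img")))
            {
                string alt = img.GetAttribute("alt");
                if (string.IsNullOrWhiteSpace(alt))
                {
                    Console.WriteLine($"Missing alt text: {img.GetAttribute("src")}");
                }
            }
        }
    }

The catch, as noted above, is that checks like this are the easy part; the perceivable and understandable judgments still need a human.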

I had a hand in helping review this paper so I have had some experience with the end results of today's presentation. Having said that, I'm very excited to get a chance to play with this in the wild.

Agile Where Agile Fears To Tread! - a #PNSQC2019 Live Blog


Thomas Cagley is a fun guy to listen to in just about any situation. He's been a guest on The Testing Show, and I've listened to his own SPAMcast podcast (SPAM in this case means "Software Process and Measurement" ;) ), but this is the first time I've actually heard him speak in person.

Tom opens his talk with a word I will not even try to write out, much less pronounce, but it has to do with Germany's strict beer purity law that has existed since the Middle Ages in Bavaria. He then mentions that his wife is gluten intolerant and thus cannot drink beer. She can, however, drink something referred to as a "tea beer". Is it "beer" in the classic sense related to the German brewing law? No. Does that render the beverage irrelevant or not enjoyable? Not in the slightest :). I can't speak for the beverage in question, but the idea of a "tea beer" absolutely intrigues me (I don't drink, but I do find the concept fascinating).


The key point to this is that we are back to lowercase versus uppercase "[aA]gile". Just as Dawn Haynes suggested that we should tear down the Agile Temple (granted, those are more my words than hers), Tom is discussing the idea that we can take the elements of Agile that are helpful and use them in the places we want to, without necessarily committing to a full-blown "Religion of Agile".

The goal of "agility" is to move more nimbly, to be quicker, to not have to do such heavy lifts, and be able to get the product out to customers faster and more frequently, with less risk. News flash, you do not have to adopt every element of the Holy Eucharist of the Agile Temple (wow, sorry if that sounds a little harsh but I'm picking the words for the vividness, not for the snark... well, not maybe a little bit for the snark). Basically, we're back to the Beer Law. If it's not [ABCDEFGHIJK] then it's not Agile. Well, OK, maybe we can't do [ABCDEFGHIJK] but what if we were able to do [AEFGJK]. Do we just throw up our hands and say "nope, can't do just those, that wouldn't be Agile". Sadly, there are people who do say exactly that. I'm a fairly polite fellow and as such I will not call these people what they richly deserve to be called (hint: it rhymes with "boron").

As smaller components are implemented and benefits are seen, what is likely to happen? The other elements will either fall into place as a matter of course, or they can be introduced as needed and improved upon as the team gets better at what they do.

Creating Quality with Mob Programming - a #PNSQC2019 Live Blog


Thomas Desmond has helped me get my head around an example of something I've been interested in but haven't actually been able to actively participate in... what does Mob Programming actually look like?

I understand it as a concept, but truth be told, programming in my organization beyond a pair arrangement is... challenging. The biggest challenge is the fact that we are all distributed. We've tried programming as a group via Hangouts or using tmux, but again, it's a challenge to get that done with more than two people. Thomas is showing how his organization sets up these massive systems with multiple big screens and multiple keyboards on rolling desks that can go anywhere in the office. The key idea here is that all of the people (optimally four) are in the same place, at the same time, talking together simultaneously, and all interacting with the same computer.

Thomas is describing a situation where they mob program on all production code. As in, they don't have their own desks. They work as a group on a single task; designing, coding, testing, and releasing all together. The thought of being able to do this all day, every day, on all projects both seems cool and a little weird. In the neat way, not the unnerving way.


A tool that they use is called "Mobster" (neat, I need to look this up) that sets limits on who can be the driver (IOW, hands on the keyboard) and who can be the navigator (guiding direction but not necessarily behind the wheel) at any given time. The goal is that the roles switch and everyone gets their turn. For an idea to be implemented, one of the navigators must be able to explain the idea clearly so that the driver can implement it. Ideally, everything is explained out loud, everyone else in the group can hear the idea(s), and they can comment on the idea before it actually gets implemented.
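
For the curious, the rotation mechanics are simple enough to sketch (this is my own illustration of the idea, not Mobster's actual code; the names are made up):

    using System;
    using System.Threading;

    // A bare-bones mob rotation timer: everyone cycles through
    // driver and navigator roles on a fixed interval.
    class MobRotation
    {
        static void Main()
        {
            string[] mob = { "Alice", "Bob", "Carol", "Dave" }; // hypothetical mob
            var turnLength = TimeSpan.FromMinutes(10);

            for (int turn = 0; ; turn++)
            {
                string driver = mob[turn % mob.Length];
                string navigator = mob[(turn + 1) % mob.Length];

                Console.WriteLine($"Driver: {driver} | Navigator: {navigator}");
                Thread.Sleep(turnLength); // swap roles when the timer fires
            }
        }
    }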

I have struggled with where I would be effective as a tester in a Mob Programming environment, and now I have seen it explained as implemented by Hunter Industries. They actually throw new people right into the mix. Counter-intuitively, those people come up to speed faster this way than they would if they were trying to get up to speed in a traditional development environment.

Thomas emphasizes that the benefits of mob programming are:

- live code reviews
- sharing knowledge
- greater idea sharing
- fewer meetings
- more engagement
- increased code experimentation

I must confess that this is a lot more tangible an idea now and it makes me excited to see if/how we might be able to implement it. Any thoughts on how to mob while fully distributed, let me know, please :).

Cutting Release Cadence - a #PNSQC2019 Live Blog

OK, time to put on my Release Manager and Build Manager hat. For the past few years, outside of being a software tester, this has been my most visible function within the team. There are a lot of moving parts in this process that I had to come to grips with and get a feel for, to understand what exactly I was doing to make releases and deploy them. We do well with Continuous Integration. Continuous Delivery and Deployment are areas where we can certainly do better, hence why I am here :).

We have some rules in place at my company and its parental units that mean that true push-button Continuous Delivery and Deployment will likely never happen in actual production. Well, saying "never" may be a bit of hyperbole but much would need to change before our organization would be OK with doing it that way. Still, just because there are limitations to CI/CD, that doesn't mean that in other cases we couldn't or shouldn't be able to do it. We have development environments, staging environments, and integration environments. They need to be provisioned and set up just like any customer site. Those steps are not exactly changing day to day if you get my drift :). Thus, it makes perfect sense to think that we should be able to do CI/CD on a more frequent basis, even if we are the only ones (the engineering team) who reap the everyday benefits.

I can totally feel how Andy Peterson's organization went through the processes he described to try to wrangle this monster and get a system in place that required less hand-holding and allowed for more time to work on genuinely interesting challenges.

Also, just because you have a process that is push-button does not mean that you always have to do it that way. All it means is that the parameters necessary are well understood and repeatable. If you can repeat them, you can standardize them. If you can standardize them, you can package them. If you can package them, you can set them into containers or other structures that allow us to maximize the amount of information that replicates and doesn't change, speeding up our deployments and limiting the time we have to wait between a release build finishing and the time when an environment is up and running with our application in a usable state.
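
As a sketch of what "standardize, then package" might look like in code (my own illustration, with hypothetical step names; a real pipeline would live in CI tooling rather than a console app):

    using System;
    using System.Collections.Generic;

    // Once deployment steps are well understood and repeatable, they can
    // be expressed as a fixed, ordered pipeline instead of hand-held rituals.
    class DeployPipeline
    {
        static void Main()
        {
            var steps = new List<(string Name, Action Run)>
            {
                ("provision environment", () => Console.WriteLine("provisioning...")),
                ("install build",         () => Console.WriteLine("installing...")),
                ("apply configuration",   () => Console.WriteLine("configuring...")),
                ("run smoke tests",       () => Console.WriteLine("smoke testing..."))
            };

            foreach (var (name, run) in steps)
            {
                Console.WriteLine($"== {name} ==");
                run(); // same steps, same order, every single time
            }
        }
    }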

Even with this approach, we are still limited by other teams in our company and what they can and will be able to release. Again, just because your product may not go out every day, there is no reason not to create a staging environment that benefits from these changes. While we may have a quarterly release cadence, there is nothing stopping us from getting into a daily cadence of pushing features in fully qualified builds to our staging server. Granted, this does mean that we have to go back and do a little repeat testing to see if pushing a lot of the changes into a numbered build introduces anything unusual. Still, we will have had a chance to see everything working in the staging environment, so this shouldn't be a barrier in practice. I say that now, but let's see how it works in practice ;).

Being More Agile Without Doing Agile - a #PNSQC2019 Live Blog


Can I share a possibly unpopular opinion? I am not a fan of "Agile".

Now wait, let me clarify. I love BEING agile. Heck, who doesn't? I don't have a problem with the adjective. I have a problem with the noun.

Also a confession. I'm here mainly because Dawn Haynes is talking. I've known Dawn for years and the irony is that I have had precious few times that I have actually been able to hear Dawn speak. Thus I consider this a perfect blend of opportunity and attitude :).


I like "little a" agility. Again, the actions and abilities. Those are all good things. They are helpful and necessary.  I like being nimble and quick where I can be.

What I have found less appealing is "Big A" Agile, mainly because I find that when organizations try to implement "Big A" Agile, they become anything but "little a" agile.

As a software tester, I have often found that testing is an afterthought in Agile implementations. More often than not, what results are teams that kind of, sort of, maybe do some Agile stuff and then retrofit everything that doesn't actually feel right into a safer space.

Dawn emphasizes that the best way to achieve the goals of "Agile" is to actually "be agile". In other words, forget the process (for a moment) and focus on yourself and what you’re trying to accomplish.

A Comfort Zone is a Beautiful Place but Nothing Ever Grows There

For teams to get better, they have to be willing to go to places they don't really want to go. There is a fear that going into the unknown will slow us down, will send us down paths we are not thrilled about, and may not even get us to our end goal quickly. So we put a lot of emphasis on what I call "the priestly caste" and "the temple incantations". I'm not trying to be flip here; I'm saying that there are a lot of rituals we reach for when we are not 100% sure about what we are or should be doing. As long as the rituals are met, we find comfort there, even if the rituals are adding little to no actual benefit. Are retrospectives helpful? They can be if they are acted upon. If they aren't, they are an empty ritual. Granted, it may take time and commitment to see the results of the retrospective findings, and real results may not be manifest for weeks or months. Still, if we do not see actual improvements coming from those retros, what is the point of doing them?

One of the interesting developments on my team related to agility and moving more quickly and effectively was to allow myself to wear whatever hat was needed at the moment. I'm not just a tester. Some days I'm a part-time ops person. Some days I'm a build manager. Some days I'm a release manager. Some days I've been a Scrum Master (and, in fact, I was a dedicated Scrum Master for three months). I was still a tester but I did what was needed for the moment and often that meant not being a "Tester" but always being a "tester"... see what I did there ;)?

Are test cases necessary? It depends on what you define as a test case. In my world, I go through a few waves of test case development. Almost never do I start with some super detailed test case. Typically I start with a 5000-foot view and then I look to get an idea of what is in that space. I may or may not even have a clear idea of how to do what I need to do, but I will figure it out. It's the process of that learning that helps me flesh out the ideas needed to test. Do I need to automate steps? Sure, but generally speaking, once I automate them, if I've done it right, that's the last time I need to really care about that level of granularity. Do I really care if I know exactly every step necessary to complete a workflow, down to the property IDs needed to reference the elements? No, not really. Do I need to know that a unique ID name exists and can be used? Yes, I care a lot about that. In fact, that's about the most important finding we can make (see my talk "Is This Testable?" for more of my feelings on this :) ).
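
To show what I mean about unique IDs being the finding worth caring about, here's a minimal sketch (Selenium in C#; the URL and the ID are hypothetical):

    using OpenQA.Selenium;
    using OpenQA.Selenium.Chrome;

    // One stable, unique ID beats a dozen brittle locators.
    class UniqueIdExample
    {
        static void Main()
        {
            using var driver = new ChromeDriver();
            driver.Navigate().GoToUrl("https://example.com/login"); // placeholder

            // With a unique ID, the locator is one line and survives layout changes.
            driver.FindElement(By.Id("login-submit")).Click();

            // Without one, you end up with fragile locators like:
            // driver.FindElement(By.XPath("//div[3]/form/button[2]")).Click();
        }
    }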

The key takeaway: care more about the work and about being nimble than bowing to the altar of AGILE. I find much to value in that :).