Monday, October 11, 2021

Simplify Your Life With Static Testing (#PNSQC2021 Live Blog)

 

Photo of Chris Cowell

I confess, the title alone is what caught my attention here and I decided I wanted to jump in and see what this is all about. 

"You're Working Too Hard! Can You Simplify Your Life with Static Testing?"

Okay, what exactly is this? That's what Chris Cowell is describing and how he uses it.

So let's get down to it... what specifically is Static Testing and how do you do it?

Well, the easiest way to think about this is to ask what static testing stands in contrast to. Is this like how we used "Lint" as a tool for finding issues in C code (and yes, I am that old ;) )? Static testing looks for problems in the source code without actually running it. Thus, yes, Lint was/is a static testing tool. But static testing today goes beyond that. We can use static testing to look for problems in configuration files, XML or markup files, and so on. In some cases, these issues can be fixed automatically, or the tool can at least suggest fixes and let you, as the user, apply them. 
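To make the idea concrete, here's a toy static check in Python (my own sketch, not something from the talk): it parses source with the standard `ast` module and flags mutable default arguments, a classic issue that linters catch, all without ever executing the code being checked.

```python
import ast

# Sample source to inspect. It is parsed as text, never executed --
# that is exactly what makes this "static" testing.
SOURCE = '''
def add_item(item, bucket=[]):
    bucket.append(item)
    return bucket
'''

def find_mutable_defaults(source):
    """Flag parameters whose default value is a list, dict, or set literal."""
    findings = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            for default in node.args.defaults:
                if isinstance(default, (ast.List, ast.Dict, ast.Set)):
                    findings.append((node.name, node.lineno))
    return findings

print(find_mutable_defaults(SOURCE))  # -> [('add_item', 2)]
```

Real tools like pylint or flake8 do the same kind of tree-walking, just with hundreds of checks instead of one.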

The real question here is, of course, "can static testing replace functional testing, performance testing, security testing, or other kinds of testing?" No, not really. However, it can complement dynamic testing and, as Chris puts it, static testing can help identify risks rather than bugs. 


So where can we use this? Many IDEs have static testing built in. Any time you see underlining or alert icons pop up in your code, and you address them as they come up, that's static testing in action. Cool, I never considered it that way! Build tools like Maven, Gradle, and Rake have static testing plugged in. GitLab and GitHub also have this built into their tools (Chris works with and instructs for GitLab, hence why he's mentioning this :) ). There are even Chrome extensions that can do this (Code Climate; I'd never heard of it, but I'm intrigued now).

When static test tools look at code, the tools identify ways that the code might "work" but maybe not the way we intend it to. We could call this a "code smell". Perhaps we might have a coding approach that "works" but may end up causing us problems as things get more complex. 


Static testing can't tell you if you will meet performance goals, but it can/could tell you if there are processes and methods that might add extra time or be wasteful if run as-is.


Static testing may also be able to identify potential security issues such as file injection or Cross-Site Scripting attacks. Code complexity can also be examined to see if there are overly complicated methods or areas that could be refactored.
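As a rough illustration (my example, not Chris's): real static security scanners like Bandit walk the syntax tree, but even a naive line scan gives the flavor of how risky constructs get flagged without running anything. The patterns below are simplified stand-ins, not a real rule set.

```python
import re

# Toy patterns a static security scanner might flag. Illustrative only --
# production tools use proper AST analysis rather than regexes.
RISKY_PATTERNS = {
    "use of eval": re.compile(r"\beval\s*\("),
    "shell command built from a format string": re.compile(r"os\.system\s*\(\s*f['\"]"),
}

def scan(source):
    """Return (line number, issue label) pairs for risky-looking lines."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for label, pattern in RISKY_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, label))
    return findings

SAMPLE = 'user_input = input()\nresult = eval(user_input)\n'
print(scan(SAMPLE))  # -> [(2, 'use of eval')]
```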

Static testing can even help with your documentation. Heck, there are even tools that can help with grammatical issues in the text we have put into the code. As a Grammarly user, I definitely think that's a neat thing to have.

So what does static testing not help with? It can't tell you if your algorithm is effective. It can't really correct iffy implementations, and it can't actually help you meet performance goals or resolve integration issues, so those are areas where dynamic testing will be needed. Also, static testing might be a little too helpful: many of the issues it flags may not be relevant for our implementation. 

Again, this is meant to be a complementary approach, not a complete replacement for dynamic testing. I can see, however, that there are some definite benefits to looking at additional static testing options. I have some homework to do :).

We are Underway: Building on Success, Beyond the Obvious (#PNSQC2021 Live Blog)

 


It's exciting to be back at the Pacific Northwest Software Quality Conference (PNSQC). Granted, we are remote and virtual for a second year, but I have enjoyed being a participant at this conference for the past ten-plus years. At this point, I think I have attended and presented at PNSQC more than at any other conference. I've gone the whole route: writing papers, presenting posters, giving full talks, being an invited speaker, and being a finalist for best presentation. I've learned a lot here and I look forward to continued participation. Yes, I have a talk I will be presenting here as well, but for obvious reasons, I can't live blog my own talk ;).


Monday is starting off with a keynote from Erik van Veenendaal, discussing "Building on Success, Beyond the Obvious". It's important to start with a simple question: "What is success? What happens and what makes a difference?" In other words, what can we do to actually be successful without necessarily being "THE BEST"? 

Erik pivots here and takes on the "No More Testers" rhetoric. Will we indeed see a day when there are no more testers? I'd say the real answer is "it depends". I think testing will always be relevant and will need to be performed by skilled people. I also do not believe that testing has to be performed by a dedicated tester. Fortunately, many of us who have come up through the trenches as software testers are not only doing testing. We may do a lot of it, but odds are we are involved in many other initiatives that help the success of the company. 

Having been in or around testing for thirty years now, I can say that the testing landscape and the focus of my efforts have changed many times over those three decades. The basic methods I learned in many ways are still relevant but the overall approach and HOW I test, much less WHAT I test, has changed dramatically. 

An interesting comparison is made with the idea that Agile adoption has grown but the achievement of the benefits from it looks to be elusive. Doing Agile has benefits but it does not guarantee better quality software. 



As a teacher of the BBST courses, I realize that there is a large gap between what we say and espouse testers should focus their efforts on and what actually gets focused on in the real world. Even when I train and work with people to use these skills and techniques, they often fall apart in real-world application. I'm a Senior Automation Engineer by title, but I'll say that my day-to-day testing efforts are still largely manual and exploration based. More to the point, I get to do a number of other things that go beyond just handling testing initiatives. This may sound bleak, and yet we release, we fulfill obligations, we get software out to our customers, and they make it work for their purposes. Ideally? No, certainly not. Effectively and usably? Yes. Thus, while a lot of what we do and focus on answers to the various Agile rituals, the truth is we make do and make it work to be effective. It's not a perfect process at all, but it does work, for some definition of "work" ;).

It's one thing to "do the things that make the difference" and that's really the key here. Getting to that place can be difficult because there is a lot of turf that is often defended vigorously. In many cases, what we do is not nearly as important as how and why we do it.  

Set Clear Priorities and Understand The Risks

In many cases, testing comes down to communicating the risks. What would happen if the worst-case scenario occurred? How would we effectively approach our testing if we placed a serious focus on the risk assessment and taking a risk-based approach to testing? This doesn't necessarily mean "more time testing" but "testing the areas that would have the greatest risk were they to go wrong". This means we need to be focused on setting priorities related to risk identification, analysis, and mitigation. Additionally, how many areas do we have specific control over, and what areas are peripheral to us yet still important? Will we be dealing with things that would have a high impact if they went sideways? How likely is that sideways issue to happen? The better we can get a handle on these two values, the better we will be at developing our risk-based test approach.
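To sketch that impact-times-likelihood idea in the simplest possible terms (the areas and scores below are invented for illustration), a risk-based test approach is ultimately just an ordering problem:

```python
# Toy risk-based test prioritization: score each area by impact x likelihood
# and test the highest-scoring areas first. Names and numbers are made up.
risks = [
    {"area": "checkout/payments", "impact": 5, "likelihood": 4},
    {"area": "report export",     "impact": 2, "likelihood": 3},
    {"area": "login/session",     "impact": 5, "likelihood": 2},
    {"area": "profile avatars",   "impact": 1, "likelihood": 2},
]

for risk in risks:
    risk["score"] = risk["impact"] * risk["likelihood"]

# Highest risk first: this ordering is the core of the test approach.
for risk in sorted(risks, key=lambda r: r["score"], reverse=True):
    print(f'{risk["score"]:>2}  {risk["area"]}')
```

The point isn't the arithmetic; it's that writing the two values down forces the conversation about where testing time actually matters.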

I like what Erik says here to the effect that a test plan should reside on a single page where possible. We should have a test approach that deals with where to focus and determine what matters the most and what may not matter anywhere near as much.

Review Processes and Methods to Determine what is Actually Working/Important

Shift-left is a popular term these days and a part of that is looking at ways to be involved and focused on issues that are discovered and how they may be mitigated in the future. Part of the shift-left approach is increasing visibility on testing efforts as soon as possible in the process. Reviewing past efforts and learning what we can do to be effective EARLIER  in the process is a helpful approach. To be clear, this doesn't mean review everything all the time. That's all we would be doing. The key to effective reviews is to look at specific areas, scope them so as to not be overwhelming, and work on them a little bit at a time.   

Use More Effective Unit Testing to Help Find Issues Earlier

This may seem obvious, but this is not just to say "do unit testing". Hopefully, most organizations are already doing unit testing as an active practice. This is taking a development methodology and thinking about how and where to use it to help with the development process. The potential issue here is that many developers have not been trained in testing techniques or skills. Thus, while there may not be a need for a dedicated tester in many circumstances, there need to be developers who are well versed and experienced in testing techniques and can bring that back to their software development. To say that developers don't know how to test is ludicrous; many developers are very good at testing. The challenge is that many other developers are not. How do we bring them up to speed?
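For anyone coming up to speed, a unit test doesn't need to be exotic. Here's a minimal sketch (my example, with a made-up function and values) that folds a tester's boundary-value thinking into ordinary `unittest` checks:

```python
import unittest

def apply_discount(price, percent):
    """Return price after a percentage discount; reject nonsense inputs."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTests(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(50.00, 10), 45.00)

    def test_boundary_values(self):
        # Boundary-value analysis: a classic tester's technique that
        # developers can fold straight into their unit tests.
        self.assertEqual(apply_discount(50.00, 0), 50.00)
        self.assertEqual(apply_discount(50.00, 100), 0.00)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(50.00, 101)

# Running this module through `python -m unittest` executes the tests.
```

The boundary cases (0, 100, 101) are exactly the kind of thing a trained tester reaches for instinctively and an untrained developer often skips.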

Introduce Test Design to the Development Process

One area where testers can be helpful is identifying test conditions and test situations early on. In many cases, testers will not be able to address every area upfront or early in a development cycle. Over time, though, we will develop familiarity, and we will have an understanding of the areas we have already tested, worked with, reviewed, and evaluated with the team. This will allow us to consider testing, testability, and risk areas early in the development process and ultimately make it possible for us to address testing in an effective manner. Exploratory testing is important in the early learning stages. It's important at many other times, but it can be make-or-break in these early phases of development. This is the time when we can address areas like testability, accessibility, and inclusive and responsive design.

Build Experienced and Skilled Testers, Whoever Those "Testers" Happen to Be

When we talk about training the testers, realize that there may be many different people testing at any given time: dedicated testers, sure, but also developers, support engineers, operations people, stakeholders, and customers (indirectly, but they will test whether we like it or not). No one is going to be excellent at everything, but everyone can and will develop broad skills and knowledge that will be helpful to the process and projects. This also includes a variety of soft skills. Testing and bug hunting are important, but just as important is advocacy: developing the ability to present cases on behalf of others and speak to the risks and challenges they may face. Additionally, think about the members of teams and what they are doing. If the attitude of "no more dedicated testers" is a potential reality, ask what the plan is for those people above and beyond testing. Do we plan on having them add value in development? In operations? Assisting support? Each organization will have different goals and needs, but again, investing in people will net great results down the road.  


Wednesday, July 28, 2021

QA Open Season w/ panel people (@Xpanxion #QASummit 2021) : Live Blog

All right, here comes the last formal activity of the day. We are all gathered for a Q&A shootout with a panel of six participants:

  • Rachel Kibler
  • Carlos Kidman
  • Greg Paskal
  • Jason Bryant
  • Marcus Merrell
  • Matthew Heusser
The questions for this session have been generated via Sli.do and we've covered a number of questions, such as:

"What is the path for software testers going forward if you may not specifically be aiming towards being a technical tester?"

The general consensus is that there are so many possible avenues to explore and get involved in that, even if you are not suited for automation, you need not worry that your career is over or that you will be replaced. If you apply your brain effectively in an organization and bring value with your efforts, you will run circles around any computer and script. Maybe not fast circles, but circles nonetheless. 

What is the difference between the hype and reality behind AI and ML?

In general, the hype around replacing the human with a computer seems to inspire investors much more than it inspires organizations. AI and ML should be focusing on the data science so that we can actually learn from the data we already have accumulated. Now that could be valuable (and I very much agree :) ).

How does your organization demonstrate the ROI on the testing investment?

The consensus is that if you lead with risk, the odds are that the C-suite will start paying attention. Many ideas may take precedence at any given time, but talk about the actual risks: lead with risk and the C-level folks will hear you. 

What are some ways to get testers to think more about quality?

Rachel voiced that she has a quality coach on every team but not necessarily a tester on every team. IOW, the role of testing may or may not be as critical as the role of quality but the role of quality itself certainly is.

A question that I will in no way be able to repeat because it was too verbose...?

Learn to ask better questions and learn when to avoid useless/needless buzzwords.

We interrupt this program to have a company jingle breakdown (you had to be there ;) ).

Can Unit Testing Be used as Integration Testing?

Seems the consensus is they are two different things. It's what you do with them that matters. Add two and two in your head. That's a Unit Test. Check the time... let me grab my watch... that's an integration test. Works for me :).
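The watch analogy maps nicely to code. In this sketch (mine, with invented names), the same function is unit-testable when you inject a fixed time, and becomes an integration concern the moment it grabs the real clock:

```python
import datetime

def minutes_until(deadline_hour, now=None):
    """How many minutes until a given deadline hour today?"""
    # Pure arithmetic on supplied values is unit-testable in isolation;
    # reaching for the real clock makes it an integration concern.
    now = now or datetime.datetime.now()  # the "grab my watch" step
    deadline = now.replace(hour=deadline_hour, minute=0, second=0, microsecond=0)
    return int((deadline - now).total_seconds() // 60)

# Unit test: inject a fixed "now" -- add two and two in your head.
fixed_now = datetime.datetime(2021, 7, 28, 14, 30)
assert minutes_until(16, now=fixed_now) == 90

# Integration-style check: uses the actual system clock, so the result
# depends on when and where you run it.
print(minutes_until(23))
```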

What is the #1 issue facing the QA world currently? What is the hottest trend?

The biggest issue is not testing the right thing. This extends to testing on devices people actually use (including mobile devices and Internet of Things devices). The biggest trend is ignoring failures and moving on as though there are no issues, which is problematic, to say the least. Observability is a hot property and we are just at the beginning of what might be possible. 

How do we get our management teams to focus on iOS testing (or mobile testing)?

It seems that in several organizations, iOS is not being actively tested, or is a distant priority compared to other infrastructures. The answers tend to range around risk and the fact that things break. Quantify how bad things could be if iOS interactions were compromised or made unusable. My guess is a lot of users would be locked out, which would effectively stop a revenue stream, and that should light a fire under some people.

And that's a wrap.... oh, and Marvel (LOL!)

How Holistic Testing Affects Product Quality with @janetgregoryca (@Xpanxion #QASummit 2021) : Live Blog

We're down to our final keynote and it's a pleasure to see Janet Gregory, if only virtually, this year. Since the border situation between the USA and Canada is still in question (and considering the outbreaks we are seeing, I don't blame anyone in the slightest), we're still getting to hear Janet talk about the value of DevOps and the fact that it genuinely works when the teams in question put in the time and energy to make it work. 

Quality is always a vague and odd thing to get one's head around. What makes something good to one person may not be so excellent to someone else. In some areas it is objective but much of the time it is subjective and not even related to the end product itself. Janet uses the example of a cup of coffee. For some, the best coffee is experienced black, so that every sense of the flavor of the beans can be examined. For others, the best crafted iced frappuccino with all of the extra flavors makes the experience a quality one. Does one approach replace the validity of the other? It really doesn't but it matters a lot to the person in question at that point in time. Quality is what matters to a person experiencing the item in question and in the way that they want to experience it.

So, how do you build quality into your product? In many cases, quality is not just one figure but many that come together. Some may argue that Lamborghini sports cars are of high quality. I may or may not agree, but the cost of a Lamborghini puts it well out of the range where I will ever find out. Is the level of quality a consideration if you can't consider paying for it? If a product is super affordable, does that automatically mean it is of low quality? Not necessarily. I'm reminded of Splice, a video editing app that I use on my phone. Granted, I pay for it (about $3 a week), but the regularity of updates and the continual improvement of the product make it worth that expense for me. It's not so much that the cost discourages me; the app provides a value that makes me willing to keep paying for it.

Holistic Testing focuses on the idea that testing happens all the time. To that end, Janet is not a fan of the terms shift-left or shift-right testing. The real question is, "what do you mean you are not doing active testing at every stage of the process?" It does help to know all areas where testing makes sense to perform and why/when we would do it. It may honestly have never occurred to people that monitoring and analytics after a product is released fits into testing and that testing can actually learn from these areas to help improve the product. 

One of the best phrases a tester can use/encourage is "can you show me?" I find that when working with developers and testers, many misconceptions and miscommunications can be avoided just by asking this question. Using A/B testing, feature flags, or toggles to turn features on or off allows us to do testing in production without it being a scary proposition. We also get to observe what our customers actually do and use, and from that we can learn which features are actually used, or for that matter even wanted in the first place. We may also discover that features we develop to serve one purpose are actually used in a different manner or for a different purpose than we intended. With that kind of discovery, we can learn how to better hit the mark or provide features that we may not even be totally aware are needed.
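As a rough sketch of the feature-flag idea (my own toy example; real flag systems such as GitLab's add targeting rules and kill switches), the core is just a guarded branch plus deterministic bucketing so the same user always sees the same path:

```python
import hashlib

FLAGS = {"new_checkout": 25}  # percent of users who get the new path (made up)

def is_enabled(flag, user_id):
    """Deterministically bucket a user into the flag's rollout percentage."""
    if flag not in FLAGS:
        return False
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < FLAGS[flag]

def checkout(user_id):
    if is_enabled("new_checkout", user_id):
        return "new checkout flow"   # observed by a subset of real users
    return "old checkout flow"       # everyone else keeps the proven path

# Hash-based bucketing means the same user always lands on the same side,
# so what we observe in production is stable and comparable.
assert checkout("user-42") == checkout("user-42")
```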

The key realization is that there are testing initiatives happening at every level of software development. It's important for us as organizations, not just us as testers, to learn how to leverage that testing focus at all levels and be able to learn, experiment, confirm or refute, and then experiment again. It will take time, involvement, investment, and commitment. Still, the more we are able to leverage these testing areas, the better our overall quality approach has the potential to be.

Managing The Test Data Nightmare with @AutomationPanda (@Xpanxion #QASummit 2021) : Live Blog

Wooo hoooo! The second talk is done, and I am officially free of obligations today :). However, the conference moves on, and this session covers an area that I personally struggle with. My job involves a lot of data transformation, so I have a possibly endless range of test data that can be generated and transformed. 

Test data shows up everywhere. Not just the data needed to make the test work but your browser choice, your necessary created artifacts, before and after dependencies, etc.


Static data is often created before testing. It's good for slow or complicated data setup. It may make tests run faster, but it can also make tests brittle as data changes, and it may turn stale over time. Dynamic data gets created at run time by the tests themselves: it avoids becoming brittle, it's exclusive to the test that creates it, it can slow down individual tests due to creation dependencies, and it needs to be cleaned up after the test is finished.
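Here's what dynamic data creation with cleanup can look like in miniature (my sketch, using an in-memory SQLite table with invented names): the test creates exactly the row it needs and removes it afterward, so nothing is left behind to go stale.

```python
import contextlib
import sqlite3

@contextlib.contextmanager
def dynamic_customer(conn, name):
    """Create a customer row for one test, then clean it up afterwards.

    This is the dynamic-data trade-off: creation and teardown cost time
    on every run, but the data can never drift out of date.
    """
    cur = conn.execute("INSERT INTO customers (name) VALUES (?)", (name,))
    customer_id = cur.lastrowid
    try:
        yield customer_id
    finally:
        conn.execute("DELETE FROM customers WHERE id = ?", (customer_id,))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")

with dynamic_customer(conn, "Test User") as cid:
    row = conn.execute("SELECT name FROM customers WHERE id = ?", (cid,)).fetchone()
    assert row == ("Test User",)

# After the test, the data is gone -- nothing left to turn stale.
assert conn.execute("SELECT COUNT(*) FROM customers").fetchone() == (0,)
```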

The truth is, I probably initially do about 70% manually configured data and about 30% dynamic/automated data creation. The intermediate data that I create is always created dynamically (that's the nature of data transformation tests). Ultimately, my goal is to be able to take data from our database and dynamically generate flat files.

Additionally, there is a variety of test control inputs that we need to keep track of: our browser, our destination URL, basically all the information that can be entered and routed. There are also output references that we may or may not know anything about. 

The Quality Mind with Gwen Iarussi (@Xpanxion #QASummit 2021) : Live Blog

Well, that was fun. I just delivered a talk about Self-Healing Automation (spoiler: it's not really self-healing; it's much more opportunistic, using agents to help build a dynamic locator switch statement, but that's nowhere near as cool sounding ;) ). Good group, great interactions, and thanks to those who attended.

The next session is with Gwen Iarussi, and we're talking about how to build a quality mindset. Gwen looked back at the past forty years: how we approached quality and software delivery, and how the tools and processes we used developed and grew over those decades. The challenges we face today are similar but definitely "faster". We need to consider scalability, as organizations often grow quickly and what worked one year will be wholly inadequate the next. More data, more interactions, more people, more, more, more! Always more!!! With this increase in infrastructure, the knowledge of tools and tooling we had last year is already out of date (not completely, but a lot happens in any given year).

"The quality of your thinking determines the quality of your life" -- A.R. Bernard

When we talk about replacing QA with automation, we are talking about replacing the human brain with machines. It's important to realize that machines are repetitive and exact but remarkably stupid without being told exactly what to do. Humans are slower and less suited to repetitive tasks, but we can come up with so many interesting avenues. Our brains are pattern-recognition machines; we are very quick to catch on to patterns and then anticipate what comes next.  

We can learn and believe a lot of things, and how we approach the way we learn and focus on tasks either gives us the impulse to succeed or holds us back. Neuroscience has shown that we all bring unique perspectives to our interactions. Our experiences inform both how we look at quality and how we approach testing.

Some first questions to ask:

  • Who uses our product? What is most important to them?
  • What is critical for my company's survival?
  • What tech do we need to invest in, and do we understand what we already have?
  • What tools and heuristics can help me make sense of what I am seeing?

Continuous Learning is critical to success. When we stop learning, we stop progressing. Areas like Systems Thinking, Design Theory, Human Psychology, Learning about the world, Cognition Games, and Testing Disciplines/Methods are all important areas to study and understand.

Knowing these things is not enough. We have to actually use what we know. That means we need to be at the table, in "the room where it happens". In short, we need to be engaged and involved. If we are not, it's our loss.



On The Road Again: Speaking Today at the @Xpanxion #QASummit (Live Blog)

 Hi all!

I confess I have been struggling to participate with this blog. I just haven't felt mentally in it. Additionally, it took me a little while to get things sorted out with my Twitter handle (in a neat twist of fate, the person who took the account decided to give it back to me, so I will be putting mkltesthead back into my bio again. It took me a while to make sure that the gift of my account back didn't come with some "extra stuff" that would have made my reality unpleasant but thankfully that was not the case).

A couple of weeks back, I was asked if I'd like to speak at the Xpanxion QA Summit being held in South Jordan, Utah, USA. Seeing as I had a number of friends participating in the program and I hadn't spoken in a live setting in nearly two years, I decided it was time to say "yes" and get back to live speaking. That is part of what I will be doing today. I will be giving two talks (actually, I'll be giving the same talk twice) about "Self-Healing Automation", or more to the point, "what self-healing automation actually is (in most cases) and how it's basically a switch statement that rebuilds itself".

The first talk is being given by Andrew Brown and the topic is "Why Do People Break Software Projects". Andrew predicts that in 2031, about 20% of software projects will fail. Many will be late or over budget. Some will take crazy risks. Many will work in silos. They will develop too much technical debt, add processes that have no effect on quality, and fill their regression suites with junk. Sounds like today, huh? Well, that's the point. We've had these same problems for fifty-plus years. What are we missing? First, there's a technical part, and that changes all the time, but there is also a people/human part, and those problems don't really change. What's worse, we don't change them because we don't really understand them. The key realization is that the human brain was never designed to develop software. The fact that we can do it at all is kind of remarkable. The human mind is amazingly adaptable, but the technology we create quickly outstrips our effective understanding of it. Our thought processes have deep evolutionary roots, and many of our thoughts are much more primitive, tribal, and segmented. We are focused on survival and reproduction, and those aspects we do quite well. Those are far removed from the thought processes that help us develop software. 

There are a lot of historical fears and issues; some might call this the lizard brain. Those fears and issues get to the heart of being human and why we struggle to get things done effectively. Often, we are overconfident. We see things the way we are, not the way they actually are. 

Overall, this has been a neat discussion and some interesting ideas shared. I see, and agree, that the areas we need to spend more time on are not the technological issues but the human issues.

Tuesday, April 13, 2021

Starting Over On Twitter with TheTestHead

 It's a strange feeling realizing that ten years of communication and connection can be taken over and made irrelevant.

For those who didn't see yesterday's post, my Twitter account was hacked, and the email address on the account was changed to one belonging to the person who hacked it. My attempts to contact Twitter about this have so far been answered with:

"please respond with the email address associated with this account." 

Well, I would if that address was still mine but alas, it is not, and really, I only have myself to blame.

So let's have a little chat about this tale of woe, what I should have done and how I'm going to move on with this.

For starters, my plan to have "mkltesthead" be a ubiquitous tag, set up once and done, has now run into a bit of a problem. Granted, "mkltesthead" is a bit arbitrary to begin with. I first came upon the idea when I wanted to name the TESTHEAD blog. I really wanted "testhead" as a Twitter handle and name to use, but I couldn't get it as it was already taken. Thus the convention that started here spread out as a username in many places. During the pandemic, I admit that my Twitter participation was sporadic at best. I just wasn't in the mood to tweet, so I wasn't really paying attention to that account. Well, I paid attention yesterday, that's for sure, when I discovered I couldn't use it any longer!

To be clear, there are a couple things anyone who interacts with me should know:

First, I will not ask you for money, EVER! Granted, I may ask you to go over to Ensign Red's Bandcamp page and buy some music, but that's about it ;).

Second, why would I want to sell my account? For what purpose? Who else would benefit from being TESTHEAD?

Anyway, it's looking less and less likely that the account will be recoverable, so I have mentally prepared to move on. I have created a new account, and it looks like this:


Please note the new name. It's @TheTestHead. Seeing how easy it was to get that, I am a bit chagrined I never tried to change it to that before (LOL!). 

In any event, I will tell you what I suggest everyone do and what I should have done:

- update your password regularly. Even if you think you have a wildly creative password no one else will figure out, you may be surprised how easily passwords can be cracked nowadays.

- do an audit and see what devices and apps have access to your account(s). The more avenues for data flow, the more likely you will be a victim of a breach.

- if you have been delaying setting up multi-factor authentication, do so now. Make sure that you create barriers to people taking over your stuff. It may feel like an annoyance but trust me, having your account lifted and having to explain to people "no, that's not me asking for money" is much more annoying.

One might think that a seasoned software tester would be well aware of stuff like this. Just because we should be aware doesn't necessarily mean we always follow our own advice in everyday practice. We can get lazy as well. This is just an example of how getting lazy can come back and bite us.

 In short, learn from me ;).


Monday, April 12, 2021

mkltesthead on Twitter has been Hacked


I am sorry that this is what breaks my radio silence on the blog for 2021, but I guess if it motivates me to write, it's a good thing, in that sense.

If you interact with me on Twitter with my account "mkltesthead", please be advised that the person you are communicating with as of sometime today is not me. It is someone who hacked and has taken over my account. I'm in the process of trying to see what I can do to get it back, but there is the distinct possibility I may not be able to.

Interestingly enough, the person who currently has it is willing to sell it back to me or anyone else. The only problem is that I have zero interest in doing that. It would be sad to lose an account that has been running for ten years and is so thoroughly associated with my name, but if I must start again, then I shall start again.

Hopefully, it won't come to that but please, if you see @mkltesthead for the next bit, and I haven't updated this page or made a new post, please assume the hacked account is still hacked and in the hands of the hacker. Also, if you'd like to do me a solid, call the hacker out and let them know it isn't me and you know it isn't me.

Thank you all very much!!!

Thursday, December 31, 2020

Summing Up "The Ludicrous Year" that was 2020

I had a feeling that my running joke of using the lyrics of "Once In A Lifetime" might come to an end in 2020 after ten years, but frankly, nothing in those lines can sum up this year that absolutely redefined reality for so many. The only line that fits would be a repeat anyway... "well, how did I get here?"

It started out with my looking forward to a lot of new opportunities. My band, Ensign Red, had just recruited a new guitar player to replace our longtime outgoing guitarist. We were looking forward to a year of a new infusion of ideas, of songs, of shows, and other avenues to perform. I was just coming off of a string of dates with the Sea Dogs/Paddy West School of Seamanship during the Dickens Faire and I was looking forward to a year of performances where I'd be able to show versatility and get to perform in more places. 

Then "The Ludicrous Year" took over.

For four whole months, we didn't do anything together as we were under "shelter in place" rules, and we did our best to follow them. We invested in Zoom setups. We invested in a shared file store to trade song ideas. We invested in new interface technology to capture what we wanted to record and make it as close to seamless as we could so everyone could work on ideas. We put together a short series of interviews with the band members, and we would leak out some rehearsal footage so people could hear what we were doing. I'd Zoom with the members of the Sea Dogs and Paddy West to stay in the loop and hope there might be some performances of some kind (spoiler: nope, there weren't). Still, I feel like I got the chance to get to know the other performers in a way I might not have otherwise, and I had the chance to get a better feel for where I might be able to make contributions in the future, whenever and whatever that future will be.

I started out with a team that I had worked with for seven years, a product I knew well, and a pretty clear path as to what I had to do to keep it going as we expected. 

Then "The Ludicrous Year" took over.

With the need to reposition and reapportion who was working on what, including the tough decision to let go of most contractors, I was asked if I'd be willing to join a group that needed a tester and hadn't had one before, in an avenue of our company that was utterly foreign to everything I had been working on up to that time. There were no familiar faces, no familiar rituals, no familiar workflows to fall back on. I said yes, and thus started my learning about a whole new way of looking at testing and also at data. I took on an automation project unlike any I had previously worked on. Far less UI focus, much more API manipulation. Less browser interaction, more flat file manipulation. Less "does this look right?" and more "is the data accurate?" Challenging, but interesting. I had to say "goodbye" in a way to a long-time family, but I got to meet a whole new team. The challenge, of course, was that I struggled to keep up with the people who were my long-time team, as we were now focused on different things and my old comfort zone had disappeared.

I had looked at the opportunities for conferences that I was going to take part in for 2020. 

Then "The Ludicrous Year" took over.

I had to adjust to giving conference talks on Zoom. I had to manage rooms of people I couldn't see, record conference talks to a camera with no one watching or listening, and then interact with people via chat to gauge how well I did, or if I did well at all. Frankly, this was challenging, as I tend to work off of a room of people and their reactions to see what is needed and adjust my message, even if I have practiced it numerous times. I also got to work with video tools and presentation tools in ways I hadn't before, so there was a lot of learning and many new approaches I could look to.

My family reality had looked like it would be productive and fun. Two of my daughters were working at San Francisco International Airport, with carriers working out of countries they were looking forward to visiting and perhaps even working in for a bit. My son was working on setting up a studio with some other friends and investors in Los Angeles. Christina had several opportunities with the law office she works with and on the whole, 2020 looked like it might be an adventurous year.

Then "The Ludicrous Year" took over.

Both of my daughters were furloughed from the airport (which makes sense when nobody is flying, or flying in such a limited capacity, especially with international carriers). My elder daughter decided to take the opportunity to pursue a cosmetology school program. We set up the upstairs living room area to be her online classroom and her own salon area for learning and practice. My younger daughter found work at a couple of places outside of the airport and has a small recurring role in the airport concierge service that she can do a few hours each week, but it's definitely not the career she had been working towards. My son had to hunker down and get creative to find ways to work in an industry where performances, recordings, and touring were the main bread and butter. He's said it's been a challenge, added to the fact that he was the one out of all of us thus far to actually contract COVID-19. One of his housemates brought it home to all of them, and thankfully his case was mild, though he said he wouldn't want to see anyone go through the six weeks or so that he did dealing with it.

For me as a whole, this year felt like treading water. Much of what I enjoyed doing had to be bottled up and thus I tended to likewise treat myself that way. Still, I could say that there were several neat developments:

- The Testing Show returned to twice-a-month publication, so I was able to keep busy with that.

- I had some opportunities towards the end of the year to write for some publications I hadn't worked with before.

- The fact that I wasn't performing or traveling to festivals meant that the money I would have spent over the course of the year didn't get spent. That allowed me to revamp my studio/rehearsal space and buy new equipment for a guitar rig I had dreamed of owning for years, and the oddness of "The Ludicrous Year" made that both more possible and more affordable, since used gear shops were filling up dramatically this year.

- One interesting development, for those curious, is that I joined TikTok and started making content about "Heavy Metal Vocal Technique", among other things. That has also caused me to revamp my home studio space (about thirty times) to make it more interesting for filming and presentation purposes. If you'd like to see me doing my thing there, it's @mkltesthead :).

Needless to say, this was not the year I had hoped for, and I do not want to make light of the fact that people lost loved ones or literally died this year. I have seen both the best and worst of human behavior. I have celebrated highs and dealt with crashing lows. Still, I am here, all of my family is still with me, I'm gainfully employed while many others aren't, and I can look forward to 2021, which I am again hoping will prove better than the year now closing down.

Be excellent to each other :)!!!