Monday, December 31, 2018

And You May Find Yourself

I just realized that this is the ninth installment of this little year-end feature for my blog. I started writing it early in 2010, which means it is nearly a decade old. Much has changed, and I've done and learned a lot over those almost nine years. However, I still manage to find a way to come back to this joke and see if the lyrics to Talking Heads' "Once in a Lifetime" will line up with my life's experience. Hence the title this go around.

It's that time again, the end of another year. With it a chance to reflect on some of what I've learned, where I've been, what I could do better and what I hope to do going forward.

Some may notice that blog entries have been fewer here this year. There are a variety of reasons for that, but the biggest one is that I've been writing guest blog posts over at the TestRail blog. Thus, I've "found myself" stepping into broader topics, many of them related to software testing and software delivery, accessibility, inclusive design, and automation techniques. One of my most recent entries is here:

Let the Shell Be Your Pal

The Testing Show had an interesting year. While we have scaled back to monthly shows, we have had a variety of interesting topics and broad discussions on software testing, software delivery and quite a bit of coverage of Artificial Intelligence and Machine Learning. In fact, that was the topic of our latest show, so I'd encourage anyone interested to drop in and have a listen:

The Testing Show: Testing with AI and Machine Learning


Last year, I talked about my transition to being a 100% remote worker. This is the first full year that I have worked remotely. My verdict? Mixed, to be truthful. On the plus side, I never have to leave my house. On the negative side, that is often a self-fulfilling prophecy. When I was working in Palo Alto, the daily ritual of traveling to the train station and walking and exploring around Palo Alto was a great way to break up my day. Now that I'm at home full time, I really have to remind myself to make those diversions and get up and get out.

This year, in our family, my daughter accepted a full-time mission call for our church. She has been in Sao Paulo, Brazil since early May, and she will be returning home in early November of 2019. Our weekly emails have been a very bright spot and something I look forward to. My son is still in Los Angeles, working with a recording studio and handling a number of site management duties so he can live there (seriously, just think about that: how cool is it that my son literally lives in a multi-track recording studio ;) ). He's doing a lot of photographic and graphic artwork for a variety of performers, so he's "living the dream". How lucratively? That's always up to interpretation, and as you might guess, he's not telling me much (LOL!). Not that I blame him. I remember full well how it felt to be a performer almost thirty years ago. Creatively, I was on cloud nine. Financially, I struggled. I still wouldn't have changed any of those years, and I'm pretty sure he feels the same.

On the speaking front, I'm still focusing on Accessibility and Inclusive Design. I've also expanded into developing a workshop devoted to building a testing framework from scratch. Well, building a framework from available parts and connecting them to each other is a more appropriate description. This is an initiative I've worked on with my friend Bill Opsal. The response to this workshop has been great, and I've appreciated the feedback I've received to make it better. I presented it at the Pacific Northwest Software Quality Conference back in October, and I will present it again in April at the Software Test Professionals Conference (STPCon). I should also mention that the materials I used to put this workshop together have been rolled out as a new framework approach at my company. It's neat when my speaking engagements and workshop presentations can filter back into my day-to-day work :).

If the following sounds a lot like last year's, it's because the sentiment is the same. My thanks to everyone who has worked with me, interacted with me, been part of The Testing Show podcast as a regular contributor or as a guest, shared a meal with me at a conference, come out to hear me speak, shown support to the Bay Area Software Testers meetup, and otherwise given me a place to bounce ideas, think things through, and be a shoulder to cry on or to just hear me out when I feel like I'm talking crazy. Regardless of whether you have done that just a little bit or a whole lot, I thank you all.

Here's wishing everyone a wonderful 2019.

Thursday, November 8, 2018

Live Blogging at #testbash: Same Approach, Different Location



Hello everyone and welcome to #testbash San Francisco.


As many of you know, one of the things I actively do when I attend conferences is live blog the sessions I attend. I am doing/have done this at #testbash, but instead of those posts appearing on TESTHEAD, they are posted at Ministry of Testing's forum, "The Club", specifically in the TestBash San Francisco section.

Please stop by and have a read or several, as there's plenty of posts there :).

Thursday, October 11, 2018

Results of the Install Party - a #pnsqc workshop followup

Yesterday I said I was interested in seeing how far we could go toward solving the install party dilemma. That's where a bunch of people sitting in a room try to get the code or application installed so that it can be useful. Often this turns into a long process of trying to determine the state of people's machines, struggling to see why some machines work and some don't, and overcoming other obstacles. It's not uncommon for an hour or so to go by before everyone is in a working state, or at least everyone who can be.

Bill Opsal and I thought that making a sandbox on a Virtual Machine would be a good way to go. By supplying two installers for VirtualBox, we would be able to have the attendees install VirtualBox, set up the virtual machine, boot it and be ready to go. Simple, right? Well...

First of all, while Macs tend to be pretty consistent (we had no issues installing to Macs yesterday), PC hardware is all over the map. I had a true Arthur Carlson moment yesterday (he's the station manager from "WKRP in Cincinnati" who famously said, "As God is my witness, I thought turkeys could fly").



Well, in that classic fashion "as God is my witness, I thought all Operating Systems supported 64-bit configurations in 2018".

Oh silly, silly Testhead!!!

To spare some suspense: for a number of participants with older PC hardware, the option to select a 64-bit Linux guest operating system wasn't even available. Selecting a 32-bit system presented those users with a blank screen. Not the impression I wanted to make at all. Fortunately, most attendees were able to load the 64-bit OS without issue. Some other details I hadn't considered, but we were able to overcome:

- Hyper-V configured systems don't like running alongside VirtualBox, but we were able to convert the .vdi file to a .vhd file and import the guest OS into Hyper-V (a sketch of that conversion appears after these notes)

- One of the participants had a micro notebook with 2 GB of RAM for the whole system. That made it difficult to set up the guest with enough memory to run in a realistic way.

Plus one that I hadn't considered and couldn't have... one attendee had a Chromebook. That was an immediate "OK, you need to buddy up with someone else".
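
For anyone curious, here's a rough sketch of that .vdi to .vhd conversion, wrapped in Python so it could be scripted for a batch of images. It assumes VBoxManage is installed and on the PATH, and the file names are just placeholders:

    import subprocess

    def convert_vdi_to_vhd(source_vdi, target_vhd):
        # "clonemedium" copies the source disk and writes the copy in VHD format,
        # which Hyper-V can attach directly.
        subprocess.run(
            ["VBoxManage", "clonemedium", "disk", source_vdi, target_vhd,
             "--format", "VHD"],
            check=True,
        )

    convert_vdi_to_vhd("workshop.vdi", "workshop.vhd")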

In all, about eight of the 28 participants were unable to get the system working for them. By the time we got everyone sorted and settled and felt sure we could continue, 30 minutes had elapsed. That's better than the hour I'd routinely experienced, but it still left what is, to me, an unacceptable number of people who couldn't get their systems to work.

Through talking with other workshop facilitators, I learned we had all tried a variety of options, and the one I will likely use going forward is the "participant install prerequisite" that one of the instructors instituted. He encouraged all of the participants to contact him before the course started to make sure they could install the environment. If they couldn't, they would work out together what was needed to get there. While this might take more time for everyone involved prior to the workshop, it would be balanced by the fact that all attendees were confirmed ready to go at the start of the workshop. My goal was to speed up that process by using a sandbox environment that was already set up. It was partially successful, but now I know there are other variables that I need to pay closer attention to. Good things to keep in mind for next time.

Wednesday, October 10, 2018

Lifting Radio Silence - Building a Testing Framework from Scratch(*) at #PNSQC

Last year, my friend Bill Opsal and I proposed something we thought would be interesting. A lot of people talk about testing frameworks, but if you probe deeper, you realize that what they are actually after is an end-to-end solution to run their tests. More often than not, a "testing framework" is a much larger thing than people realize, or at least what they are envisioning is a larger thing.

Bill and I started out with the idea that we would have a discussion about all of the other elements that go into deciding how to set up automated testing, as well as to focus on what a framework is and isn't.

The net result is the workshop that we will be delivering today (in about three hours as I write this).



We will be presenting "Building a Testing Framework from Scratch (*)". The subtitle is "A Choose Your Own Adventure Game". In this workshop, we will be describing all of the parts that people tend to think are part of a testing framework, how essential they are (or are not), and what you can choose to do with them (or choose to do without them). Additionally, we are giving all participants a flash drive that has a fully working, albeit small, testing framework with plenty of room to grow and be enhanced.

OK, so some of you may be looking at the title and seeing the asterisk. What does that mean? It means we need to be careful with what we mean by "From Scratch". When Bill and I proposed the idea, it was from our impression of "starting with nothing and going from there", and that is what we have put together. Not being full-time programmers, we didn't realize until later that it could also be interpreted as "coding from the ground up". To be clear, that is not what this is about. Neither Bill nor I have the background for that. Fortunately, after we queried the attendees, we realized that most were coming to it from the perspective of our intended example. We did have a couple who thought it was the latter, and we gave them the option of finding a workshop that would be more appropriate for their expectations ;).

In the process, we also agreed we would do our best to try to overcome another challenge that we had experienced in workshops for years; the dreaded "install party". That's the inevitable process of trying to get everyone to have the software running on their systems in as little time as possible. This has been a long-running challenge and workshop coordinators have tried a variety of ways to overcome it. Bill and I decided we would approach it in the following manner:


  1. Create a virtual machine with all code and examples, with a reference to a GitHub repository as a backup.
  2. Give that Virtual machine to each participant on a flash drive with installers for VirtualBox.
  3. Encourage each participant to create a virtual machine and attach to the virtual disk image on the flash drive.
  4. Start up the machine and be up and running.
Today we are going to see how well this goes with a room of twenty-eight people. We will test and see if we are successful (for science!).

Tomorrow and in the coming days, I will share the results of the workshop, the good, bad, and ugly that we witnessed (hopefully much of the first but if we get some of the second or third I want to see how we can do better), as well as some of the decisions we made in making the materials that we did. We hope you will join us :).

Taking My Own Advice - A New Look for TESTHEAD

One of the comments that I made during my talk on Monday was that you could go to great lengths, make your site Accessible, pass all of the WCAG recommendations, and still have an experience that is less than optimal. That point was driven home to me this morning by a message from a reader who really enjoyed the material but found the white on black text hard to read and too small (even though it was set up to be in compliance).

Therefore, for the first time in many years, I stepped back, reconsidered the blog aesthetics vs the blog's usefulness and I redid everything.


  • The white on black look is gone.
  • The contrast level has been pumped up (I may do some more tweaking on this).
  • The default font is larger.
  • I will have to go back and check the images to make sure the alt tags are still there, but the goal is that every image has an alternate description (the sketch after this list shows the kind of check I have in mind).
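
For the image check, here's the kind of quick audit I have in mind, sketched in Python with requests and BeautifulSoup (both assumed to be installed); the URL is just a placeholder:

    import requests
    from bs4 import BeautifulSoup

    def images_missing_alt(url):
        """Return the src of every <img> on the page that has no alt text."""
        soup = BeautifulSoup(requests.get(url).text, "html.parser")
        return [img.get("src") for img in soup.find_all("img")
                if not img.get("alt", "").strip()]

    # Placeholder URL; point this at a real post to audit it.
    for src in images_missing_alt("https://example.com/some-blog-post.html"):
        print(f"Missing alt text: {src}")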


My goal in the next few weeks is to re-evaluate this change and then ratchet up the WCAG 2 coverage.

In other words, I ask you all to "pardon the dust" as I reset the look and feel of my home away from home. As always, I appreciate feedback and suggestions for making my words and message as available to all as possible :).

Tuesday, October 9, 2018

The Lost Art of Live Communication - a #pnsqc Live Blog




Wow, have we really already reached the end of the program? That was a fast two days!

Jennifer Bonine is our closing keynote and her talk is centered on the fact that we seem to be losing the ability to actually communicate with people. We are becoming "distanced". I've found that there are a handful of people that I actually talk to on the phone. It's typically limited to my family or people who I have a close connection to.

I work from home so I really have to make an effort to get up and out of my house. Outside of meetings and video calls I don't directly interact with my co-workers. There are no water cooler chats. We do use our chat program for intra-work communication, but otherwise, there really isn't any random communication. Is this good or bad? Jennifer is arguing, and I'd say successfully, that it's a bit of both, with a solid lean towards bad.

What is not debatable is that we are definitely communicating less in person and in real time. Is this causing a disconnect with families? I think it's highly likely. Jennifer does as well.

How much of our communication is non-verbal? What lengths do we have to go to to make sure that a text message carries the full nuance that an in-person conversation does? When we explain something to someone, how do we know they have actually received and processed the message effectively? Outside of an in-person discussion, we really don't. Often, even in an in-person discussion, there may well be a lot lost. Communication styles are unique to individuals, and different people communicate and receive information differently.

I read a great deal, so I have a vocabulary that may go over the heads of many of the people I communicate with. I pride myself on trying to "Speak Dude" as much as possible, but my sense of speaking "Dude" may still have a lot of words that people may not understand. Having a big vocabulary can be cool, but it's not necessarily a plus if the people I am communicating with don't get the words that I am using.

Jennifer suggests that, perhaps, one of the biggest positives of AI-based test automation making inroads has less to do with the fact that it can automate a bunch of testing and more to do with the fact that it can free up our minds for doing lots of other things, things that are potentially a lot more interesting than the repetitive tasks.

We had a conversation break that amounted to "what would we want to do/be if we had one year to live and had a million dollars in the bank?" It was interesting to me to see that, after a very short time to think, I knew what I wanted to do. With those parameters, I would want to gather my wife and children and just tour the world. Go to places I've never been or visit places my kids have seen and I haven't. I'd love to have my daughters show me their experiences and memories of their time in Japan. I'd love my older daughter to be able to show me the areas she has been living in while she has been in Brazil (she'll be there for another thirteen months so I hope this experiment can be paused until she returns ;) ). The neatest part of this is how quickly that clarity comes.

Communication takes time, it takes energy, and it takes commitment. I'm on board with making a better effort at communicating. Not necessarily communicating more, but certainly upping the quality of the communication I participate in.

Testing all the World’s Apps - a #pnsqc Live Blog


In my world, I basically test one app. I test its appearance on the web and I test it as it appears on a variety of mobile devices. That's about it. By contrast, Jason Arbon runs a company that tests lots of apps. Like, seriously, LOTS of apps!

Jason asks us what we would do if we had to test a huge number of apps. Would we take an approach to try to test all of them as unique projects? It's logical to think that we would but Jason points out that, no that's not needed. A lot of apps reuse components from SDKs and even when elements are unique, they are often used in a similar manner. What if you had to test all of the world's apps? How would you approach your testing differently?

Three major problems needed to be solved to test at this scale:

Reuse of test artifacts and test logic

By developing test approaches at as high of a level as possible, we have the ability to create test templates and methods for creating test artifacts in a reliable or at least close to uniform manner. Over time, there is a way to look for common components and make a series of methods to examine which places an element might be and how it might be interacted with. Chances are that many of the steps will be very close in implementation.


Reliable test execution

Once the patterns have been determined and the set of possible locations has been mapped, it is possible to create a test harness that will go through and load a variety of apps (or a suite of similar apps) to make sure that the apps can be tested. It may seem like magic, but it's really leveraging the benefit of reused and reusable patterns.

One challenge is that a lot of services introduce latency to testing over the Internet. By setting up queuing and routing of test cases, the cases that need to be run get the priority that they need.

Unique ways to correlate and report on the test results

The reporting structure that Jason shows includes the type of app, the page type, and the average load time for each of the pages. This allows for an interesting view of how their own app relates to or competes with other apps. Wild stuff, I must say :).





The Do Nots of Software Testing - a #pnsqc Live Blog


Melissa Tondi makes a point that, at this and many other conferences, we learn a lot of things we should do. She's taking a different tack and suggesting things we should not be doing.

Do NOT Be the Enabler

I can relate to this in the sense that, in a number of sprints or times of challenge, we jump in and become the hero (sometimes). However, there is a danger here: it can communicate to others on the team that things can wait, because the test team will be the ones to make sure it gets in under the wire. Possibly, but that also means there may be things we miss because testing is left to the very end.

Risk-based and context-driven testing approaches can help here, and we can do quality testing without necessarily driving ourselves to distraction. Ultimately, if we can see that testing is about to enter an enabling phase, we as a team need to figure out how to keep that enabling from happening. As Scrum Master on my team, I am actually in a good position to make sure this doesn't happen (or at least I can give my best effort to see that it doesn't, if possible).

Do NOT Automate Everything

I agree with this. I think there are important aspects that can and should be automated, especially if it helps us avoid busywork. However, when we focus on the "Everything", we lose focus and perhaps miss developing better solutions, not to mention having to charge through a lot of busywork. We should emphasize that automation should help make us more efficient. If we are not achieving that, we need to push back and ask why, or determine what we could/should be doing instead. In my world view, what I want to determine is "what am I doing that takes a lot of time and is repetitious?" Additionally, there are methods of automation that are better than others. Simon Stewart gave a great talk on this at STPCon in Newport Beach this year, so I suggest looking up that talk and reviewing it.

Do NOT Have QA-ONLY" Sprints of Cycles

If you hear the term "hardening sprint", then that's what this is. The challenge of this is that there is a lot of regression testing that needs to be processed and has the potential of losing both momentum and time as we find areas that don't mesh well together. Melissa describes "The ABC Rule":

- Always Be Coupled (with dev)
  Try to keep development and testing work coupled as closely as possible

Do NOT Own All Testing

Software testing needs to happen at all levels. If a team's developers leave all of the testing to the testing group, that's both a danger and a missed opportunity. By encouraging testing at all levels of development, the odds of delivering a real quality product go up considerably.

Do NOT Hide Information

This is trickier than it sounds. We are not talking about lying or hiding important things that everyone should know. This is more the danger of implicit information that we might know and act on without even being aware of it. We need to commit to making information as explicit as we possibly can, and certainly do so when we become aware of it. If we determine that information needs to be known and then we don't act on it, we are just as guilty as if we had deliberately hidden important information.


Talking About Quality - a #PNSQC Live Blog


Kathleen Iberle is covering a topic dear to my heart at the moment. As I'm going through an acquisition digestion (the second one; my little company was acquired, and that acquiring company has recently been acquired), I am discovering that the words we are used to and the way we defined quality are not necessarily in line with the new acquiring company. Please note, that's not a criticism; it's a reality, and I'm sure lots of organizations face the same thing.

In many ways, there are lots of conversations we could be having, and at times there are implicit requirements that we are not even aware we have. I consider that an outside dependency: if I don't know I need to do something, I can't do it until I get the knowledge I need to act on it. There's a flip side to this as well. That's the implicit requirement where there's something I do all the time, so often that I don't even think about it, but someone else has to replicate what I am doing. If I can't communicate that there is a step they need to do, because I literally jump through it so fast that I don't notate it, can I really be mad when someone doesn't know how to do that step?

Many of us are familiar with the idea of SMART Goals, i.e. Specific, Measurable, Achievable, Relevant and Time-based.  This philosophy also helps us communicate requirements and needs for products. Taking the time to make sure that our goals and our stories take into account whether or not they add up to the SMART goal model is a good investment in time.

An interesting distinction that Kathleen is making is the differentiation between a defect and technical debt. A defect is a failure of quality outside of the organization (i.e. the customer sees the issues). Technical debt is a failure of quality internal to the team (which may become a defect if it gets out in the wild).

An approach that hearkens back to older waterfall testing models (think the classic V model) is the idea of each phase of development and testing having a specific gate at each point in the process. Those gates can either be ignored (or only used at the very end of the process) or given too much attention out of scope or context for the phase in question. Breaking up stories into smaller atomic elements can help improve this process, because the time from initial code to delivery might (can) be very short. Using terms like "Acceptance Test", "Definition of Done", "Spike", "Standard Practice", etc. can help us nail down what we are looking at and when. I have often used the term "Consistency" when looking at possible issues or trouble areas.

Spikes are valuable opportunities to gain information and to determine whether there is an objective way to assess the quality of an element or a process. They are also great ways to figure out whether we might need to tool up or gain skills that we don't have, or need more practice with, to be effective.


Risk Based Testing - a #PNSQC Live Blog


It's a fact of life: we can't test everything. We can't even test a subset of everything. What we can do is provide feedback and give our opinion on the areas that may be the most important. In short, we can communicate risk, and that's the key takeaway of Jenny Bramble's talk. By the way, if you are not here, you are missing out on Dante, the deuteragonist of this presentation (Dante is Jenny's cat ;) ).

Jenny points out off the bat that, often, words are inadequate when it comes to communicating. That may sound like unintentional irony but I totally get what Jenny is saying. We can use the same words but have totally different meanings. One of the most dangerous words (dangerous as in its fluidity) is "risk". We have to appreciate that people have different risk tolerances, often in the same team. I can point to my own team of three testers and I can feel in our discussions that risk is often a moving target. We often have to negotiate as to what the level of risk actually is. We get the idea that risk exists, but how much and for whom is always up for discussion.

Jenny points out that risk has a variety of vectors. There's a technical impact, a business impact, and a less tangible morale impact. When we evaluate risk, we have to determine how that risk will impact us. What is the likelihood that we will experience failure in these scenarios? I often have these discussions when it comes to issues that I find. Rather than just come out and say "this is a bug!", I try to build a consensus on how bad the issue might be. This is often done through discussions with our product owner, asking questions like "if our customers were to see this, what would your impression be?" I likewise have similar discussions with our developers, and just asking questions often prompts people to look at things or to say "hey, you know what, give me a couple of hours to harden this given area".

Risk isn't always limited to the feature you are developing at the given moment. A timetable changing is a risk. Third-party interactions can increase risk, sometimes considerably. If your infrastructure is online, consider where it is located (Jenny is from North Carolina and, as many are probably aware, a hurricane recently swept through and made a mess of eastern North Carolina; imagine if your co-lo was located there).

Ultimately, what it comes down to is being able to perform an effective risk assessment and have a discussion with our teams about what those risks are, how likely they are to happen, and ultimately how we might be able to mitigate those risks.

Jenny has a way of breaking down a risk matrix into a numerical value. Take the level of likelihood and the level of impact, multiply the two numbers, and that gives you the risk factor. A higher number means higher risk and more effort to mitigate. Lower values mean lower risk and therefore lower cost to mitigate.
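
To make that arithmetic concrete, here's a tiny sketch of the matrix in Python; the features and the 1-to-5 scores are invented for illustration:

    # Score likelihood and impact from 1 (low) to 5 (high), multiply them for a
    # risk factor, and review the riskiest items first. The features and scores
    # are invented for this example.
    features = {
        "payment gateway timeout": {"likelihood": 4, "impact": 5},
        "third-party API change": {"likelihood": 2, "impact": 4},
        "typo on the about page": {"likelihood": 3, "impact": 1},
    }

    def risk_factor(scores):
        return scores["likelihood"] * scores["impact"]

    for name, scores in sorted(features.items(),
                               key=lambda item: risk_factor(item[1]),
                               reverse=True):
        print(f"{name}: risk factor {risk_factor(scores)}")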

"This feature has been drinking heavily and needs to go to rehab!" Best. Risk. Based. Metaphor. Ever (LOL!).

This is my first time seeing Jenny present, though I see her comments on Twitter frequently. If you haven't been to one of her presentations, may I strongly suggest that, should she be speaking at a conference near you, you make it a priority to see her speak? Excellent, my work here is done :)!

Adventures in Modern Testing - a #PNSQC Live Blog From #OneOfTheThree


As a long time listener to Alan Page and Brent Jensen's "A/B Testing" podcast, I consider myself part of the early contingent of "One of the Three". For those wondering what this means: when the show first started, they joked that there were really only three or so people at Microsoft who even bothered to listen to it. As the show grew in popularity, the joke grew to mean that there were still only three people listening, just at any one time. Longtime listeners have taken some pride in regularly referring to themselves as "One of the Three", and I'm one of those people. Anyway, that's a long buildup to me saying that I've listened to and read Alan for a number of years (plug: check out his book "The A Word" for some great thoughts on the plusses and minuses of software test automation).

Alan and Brent have been focusing on an initiative dedicated to "Modern Testing Principles", and many episodes of A/B Testing have been dedicated to these principles. Happily, Alan isn't just throwing out the ideas; he's putting them in context and sharing his recent adventures and real-world experiences getting to these principles. Alan was part of the Xbox One team in 2011-2013, and he describes those two years as "the best five years of his life" ;). It was in this role that he began thinking about what would become the core of the modern testing ideas, specifically about the test org as a community.

Testers are often not all part of the same group in an organization. Often, they are working on a number of different teams in different places and often, those people are working on similar problems in isolation. I can appreciate this as I work for a small team (used to be its own company) that was acquired by a larger company (with a number of functional teams with testing groups) which in turn has been acquired by another company (with already existing testing groups). How often do all of these people communicate? Simple answer. Outside of my immediate team (three testers) and a handful of conversations with a few other team leads (I think about six total people), that is all of the "testers" I have actually spoken to in my entire organization. I know that we have something in the neighborhood of fifty or so testers throughout the entire organization. Who is willing to take the bet that we have similar challenges and issues that we could probably solve or make movement on if we were all able to communicate about them? I'd be willing to take that bet.

Something else to consider is that we need to overcome the hubris that developers can't test, or at least not as well as a dedicated tester. It's wrong, and it's selling developers short. That's not to say there aren't a lot of developers who don't want to test. I've worked with a lot of those people over the years, but I've also worked with several developers who were/are excellent testers. By leveraging our interactions with developers and encouraging teaching and shared learning, we can help foster their growth in testing skills and abilities, and that can help us focus on other areas that we haven't had time to work on or consider.

Like Alan, I tend to get bored easily, and I look for new areas to tickle my brain. Often, we need to be willing to dive into areas we may not be qualified for, or somehow find a way to get involved in ways we may not have been able to be involved before. Alan's suggestion is.... well, don't outright lie... but if you express at least an interest in an area and show some ability or desire to develop said ability, we may be surprised how many organizations will let us run with new initiatives. That's how I became a Build Manager and the Jenkins Product Owner. Am I a Jenkins expert? Not even close, but I'm working with it all the time and learning more about it every day. I wouldn't have been given that chance if I had only asked for it without showing that interest.

We should get used to the idea of Testing without Testers? Wait, what?!! Hold on, it's not what you think... or at least it's not as much of what you might think. For a variety of teams, it is possible to do testing and do good testing, without having a dedicated testing team. I get it, why would a tester be excited about that proposition? Overall, if we are not the sole person responsible for testing, we can actually look at areas that are not being addressed or might be less covered. While I still do a lot of testing, I actually spend a significant time as a Scrum Master and as a Release Manager and Build Manager. Neither of those roles is specific to testing but they are enhanced by my being a tester. Both roles allow me to actually exercise some dominion and allow me to address quality issues in a more direct way.

Traditional testing harms business goals by creating unnecessary delays, over-focusing on specification correctness rather than actual quality, trying to be a "safety net" for the organization, and treating testing as a dedicated specialty with little interaction, or an isolated approach. Wouldn't we rather have everyone focused on creating solutions that are better quality from the start, where everyone understands how to test and makes that a part of the process?

In Alan's model of the Modern Tester, we are more of a Quality Coach. We focus on speeding up the team. We do some testing as well, but on the whole, we use our expertise to help others test. In that process, Alan and Brent and the rest of "The Three" have gone through and vetted what the Modern Testing Principles are:

The seven principles of Modern Testing are:


  1. Our priority is improving the business.

    We need to understand where the pain points are so that we can help change the culture and way things are done. We should place priority on delivering quality and customer satisfaction over code or functional correctness.
  2. We accelerate the team, and use models like Lean Thinking and the Theory of Constraints to help identify, prioritize and mitigate bottlenecks from the system.

    If testing is the bottleneck, then it needs to be addressed. If delivery is the issue, that needs to be addressed. Find the bottleneck and figure out how to mitigate those issues if they can't be eradicated.
  3. We are a force for continuous improvement, helping the team adapt and optimize in order to succeed, rather than providing a safety net to catch failures.

    Pair testing with developers, tooling and building good infrastructure can be a big help, as well as serially working on items to completion rather than having a million things in-process that move so slowly that they don't really progress.
  4. We care deeply about the quality culture of our team, and we coach, lead and nurture the team towards a more mature quality culture.

    Test experts will be able to leverage their skills and help others develop those skills. Don't think of this as losing influence or the team losing the need for you. The opposite is true; the more effective coaching testers can provide, the more helpful they can be and as such, more indispensable.

  5. We believe that the customer is the only one capable to judge and evaluate the quality of our product.

    At the end of the day, the decision of good enough isn't ours. We need to be willing and able to let go and encourage the product owner and the customers to let us know if they are happy with what we have delivered.
  6. We use data extensively to deeply understand customer usage and then close the gaps between product hypotheses and business impact.

    We need to get a better understanding of what the data actually says. What features are really being used and are important? How can we know? How can we break down what the customer is actually doing and how positive their experience is?
  7. We expand testing abilities and know-how across the team; understanding that this may reduce (or eliminate) the need for a dedicated testing specialist.

    This is the most daring part. If we are really willing to embrace this, we might work ourselves out of a job. If we are truly good enough to do this, it shouldn't be that big of a concern, because there are a LOT of organizations out there who are not even close to this. A "Modern Tester" will always be needed, but perhaps not in the same place all of the time.
As I've said, I've listened to these ideas come together over the past couple of years and it's really cool to hear this in this condensed format after all this time. Alan has a bold and interesting take on the transitioning of testing into the brave new world. It's a compelling view and, frankly, a view I'd like to embrace in a greater way.


Rise of the Machines - a #pnsqc live blog


All right, it's day 2, the last day of the technical program and we are starting off with Tariq King's talk "Rise of the Machines". The subtext of this talk is "Can Artificial Intelligence Terminate Manual Testing?" In many ways, the answer is "well, kind of..."

In a lot of ways, we are looking at Machine Learning and AI through a lens that Hollywood has conditioned us to use. Our fears and apprehensions about robotic technology outstripping humanity have been a part of our common lore for the past 100 years or so. Counter to that is the idea that computers are extremely patient rocks that will only do exactly what we tell them to. My personal opinion, for whatever it's worth, is somewhere in between. It's not a technological problem, it's an economic one. We are already watching a world develop where machines have taken the place of people. Yes, there are still people maintaining and taking care of the grooming and feeding of these machines, but it's a much smaller percentage of people than were doing that work as little as ten years ago.

Recently, we have seen articles about software developers who have automated themselves out of their jobs. What does that tell us? Does it mean that we are ultimately reaching a point where our software is outstripping our ability to contribute? I don't think so. I think in many cases we might have reached a point where a machine can replace a person who has ceased to look for broader and greater questions. Likewise, is it possible for machines to replace all manual testing? The answer is yes, if we are just looking at the grunt work of repetition. The answer is more nuanced if we ask "will computers be able to replicate a tester's sense of exploration and think of new ways to look for more interesting problems?" Personally, I would say "not yet, but don't count the technology out". It may be a few decades, maybe more, but ultimately we will be replaced if we stop looking for wider and more interesting problems to solve.

We focus on Deep Blue beating the grandmaster of Chess, AlphaGo beating the grandmaster of Go, and Watson beating Ken Jennings at Jeopardy (not just beating him, but being so much faster to the buzzer that Ken never got the chance to answer). Still, is that learning, or is that brute force and speed? I'd argue that, at this point, it's the latter, but make no mistake, that's still an amazing accomplishment. If machines can truly learn from their experience and become even more autonomous in their solutions, then yes, this can get to be very interesting.

Machine Learning is in the process of re-inventing how we view the way that cars are driven and how effective they can be. Yes, we still hear about the accidents and they are capitalized on, but in that process, we forget about the 99% of the time that these cars are driving adequately or exceptionally, and in many cases, better than the humans they are being compared to. In short, this is an example of a complex problem that machines are tackling, and they are making significant strides.

So how does this relate to those of us who are software testers? What does this have to do with us? It means that, in a literal brute force manner, it is possible for machines to do what we do. Machines could, theoretically, do exhaustive testing in ways that we as human beings can't. Actually, let me rephrase that... in ways that human beings won't.

The Dora Project is an example of bots doing testing using very similar methods to humans. Granted, Dora is still quite a ways away from being a replacement for human-present testing. Make no mistake, though, she is catching up and learning as she goes. Dora goes through the processes of planning, exploring, learning, modeling, inferring, experimenting, and applying what is learned to future actions. If that sounds like what we do, that's no accident. Again, I don't want to be an alarmist here, and I don't think Tariq is trying to be an alarmist either. He's not saying that testers will be made obsolete. He's saying that people who are not willing to, or aren't interested in, jumping forward and trying to find those next bigger problems are the people who probably should be concerned. If we find that we are those people, then yes, we very probably should be concerned.

Monday, October 8, 2018

How Do We Fix Test Automation? - a #pnsqc Live Blog


Well, this should be interesting :).

OK, I'm poking a little fun here because I know the presenter quite well. I met Matt Griscom at a Lean Coffee event in Seattle when I was speaking at ALM Forum in 2014. At the time he was talking about this idea of "MetaAutomation" and asked if I would be willing to help review the book he was working on. I said sure, and now four years later he's on his Third Edition of the book and I've been interested in seeing it develop over the years.

Matt's talk is "How to Fix “Test Automation” for Faster Software Development at Higher Quality"

So first, let's get to the first premise. Test Automation is broken? I don't necessarily disagree, I definitely think that there's a lot of truly inefficient automation, but Matt's going farther and saying that it's genuinely broken. We write tests that don't tell us anything, we see a green light and we move forward and we don't even really know if the automation we are doing is even doing what we intend it to do. We definitely notice when it's broken, or the test fails, but how many of our tests that are passing are actually doing anything worth talking about?

In Matt's estimation, this is where we can do better, and he has the answer... well, he has an answer; nobody has THE answer just yet, unless that answer is "42" (just seeing who is paying attention ;) ). This is where Matt's method of MetaAutomation comes in. His approach obsoletes the need to automate manual tests, in the sense that those issues would already be caught in the code if it were implemented. Matt's emphasis is "if it doesn't find bugs, don't bother".

The idea is that the code includes self-documenting checks that show where issues are happening and communicate at each level. We shouldn't look at these as unit tests; error checking at that atomic level is where MetaAutomation might be overkill. Outside of atomic unit tests, though, it's meant to be useful to everyone from Business Analysts and Product Owners to software testers and developers. The real promise of MetaAutomation, according to Matt, is that QA should be the backbone of communication, and MetaAutomation helps make that all happen. Bold statements, but to borrow from Foodstuff/Savor's Annie Reese.... "MetaAutomation.... what is it?!"

Basically, MetaAutomation comes down to a Pattern Language that can be summed up in the following table (too hard to write and I don't want to mess this up):

[Image: the MetaAutomation pattern language table]

Here's a question to consider... "In Quality Automation, what would you like your software team to do better?" You have one minute. No, seriously, he asked that question, set a timer and asked us to talk amongst ourselves ;). A tells B, then B tells A.

What did we come up with?

- It would be nice to improve visibility into tests that don't run due to gating conditions.
- It would be nice to improve the understanding of what tests are actually doing and how they do it.
- It would be good to get a handle on the external dependencies that may mess up our tests.
- It would be great if the testers and developers could talk to each other sooner.

Granted, this is a tough topic to get to the bottom of in a blog post of a single session, but hey, if you are interested in hearing more, check out http://metaautomation.net/ for more details.



Effective CI - a #pnsqc live blog

Hi everyone! Miss me ;)?

My talk is finished and on the whole, I think it went well.

As a paper reviewer, I get the chance each year to help proofread and make suggestions for papers so that they can be published in the proceedings. This talk was one of the ones I reviewed, so I definitely had to see the presentation.


Ruchir Garg is with Red Hat, and his talk focused on methods to maximize the chance of success for incoming merge requests. His focus is on creating automated checks that verify, before CI ever runs, that the code meets the guidelines decided by the team (there are enough tests, it references the feature it adds or what specifically it fixes, it maintains compatibility, etc.). If the criteria are met, the tool allows the job to progress. If they aren't, it stops the job before the merge request takes place.

In reality, this isn't test automation. It is more accurately described as process automation. It helps to enforce the company's criteria for what needs to be in place. If it doesn't pass, it never gets sent on to CI. I have to admit, it's an interesting idea :).
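
To give a flavor of what that kind of process automation looks like, here's a toy sketch in Python. The criteria and field names are invented for the example; this is not Red Hat's actual tooling:

    import re
    import sys

    def gate(merge_request):
        """Return a list of reasons this merge request should not go to CI yet."""
        problems = []
        if not re.search(r"(Fixes|Implements) #\d+", merge_request["description"]):
            problems.append("description does not reference an issue or feature")
        if not any(path.startswith("tests/") for path in merge_request["changed_files"]):
            problems.append("no test files were added or updated")
        return problems

    # A hypothetical merge request, just for the example.
    mr = {
        "description": "Fixes #1234: handle empty payloads",
        "changed_files": ["app/handler.py", "tests/test_handler.py"],
    }

    problems = gate(mr)
    if problems:
        print("Stopping before CI:", "; ".join(problems))
        sys.exit(1)
    print("Criteria met, handing off to CI.")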


Automating Next-Generation Interfaces - a #pnsqc Live Blog



Normally, I'm not all that interested in attending what I call "vendor talks", but I made an exception this time because the topic interested me and I've been curious about it for a while.

In "How to Automate Testing for Next-Generation Interfaces" Andrew Morgan of Infostretch covers a variety of devices, both common and emerging. Those of us who are most comfortable with working with web and mobile apps need to consider that there are a variety of devices we are not even interacting with (think bots, watches, voice devices like Siri and Alexa. These require a fundamentally different approach to testing. Seriously, how does one automate testing of Siri short of recording my voice and playing it back to see how well it does?

Additionally, we have a broader range of communications (WiFi, Bluetooth, biometric sensing, etc.). Seriously, how would someone automate testing of my FitBit Surge? How do we test face detection? How about Virtual Reality?

To accomplish this, your project team must be able to successfully pair device hardware capabilities with intelligent software technologies such as location intelligence, biometric sensing, Bluetooth, etc. Testing these systems and interfaces is becoming an increasingly complex task. Traditional testing and automation processes simply don’t apply to next-generation interfaces.

OK, so that's a good list of questions, but what are the specifics? What does a bug look like in these devices and interfaces? Many of these issues are around user experience and the usefulness of the information. If you are chatting with a bot, how long does it take for the bot to figure out what you are talking about? Does it actually figure it out in a way that is useful to you? Amazon Alexa has a drop-in feature where you can log into an Alexa device and interact with it. Can features like these be abused? Absolutely! What level of security do we need to be testing?

Other things to consider:

- How are we connecting?
- How are we processing images?
- How are we testing location-specific applications?
- Is the feature handling date and time correctly?
- How do we handle biometric information (am I testing a fingerprint or the interaction with that fingerprint?)

At this point, we are into an explanation of what Infostretch provides and some examples of how they are able to interact with these devices (they have dedicated libraries that can be accessed via REST). The key takeaway is that there are a lot of factors that are going to need to be tested, and I'm intrigued at how to start addressing these new systems.



Mobile Testing Beyond Physical Reach - a #PNSQC Live Blog


One of the additions to my reality as of late has been working with BrowserStack, a company that provides a pool of physical devices as well as emulated devices for mobile and desktop testing. It's an interesting service in that it allows for a lot of variation in device interactions. Still, while it has a good selection, it certainly doesn't pretend to have every possible option out there. Plus, those services come at a cost. That's not to say that having a variety of devices doesn't have its own costs, but over time it may make sense to keep that testing local. Still, how can we leverage our devices in an efficient way, and also leverage our time and attention?

In "Mobile Testing Beyond Physical Reach", Juan de Dios Delgado Bernal is describing a physical setup that he uses as m “Mobile Testing Farms”. By chaining a variety of devices together via an extended powered USB hub, his team is able to work with and automate a variety of devices remotely from a desktop or laptop. In short, the goal is to be able to test smartphones beyond physical reach.

The pitch: you just sit down, open your browser, and access the Smartphone Test Farm (STF), then drive it with automated Python scripts, which is exactly what Juan demonstrated in this session.

It's easier to interact with emulators but emulators certainly don't take the place of an actual device. Additionally, there are a number of states that a mobile device can be in and emulators are not able to replicate these (or at least, I'm not familiar with how they do this). By having a device farm, the various conditions and states for the device can be readily manipulated, as well as tested in that variety of conditions. Juan is using a variety of Python scripts so that the tests he is running can cycle through the devices in the farm.
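
As an illustration of that cycling idea (not Juan's actual scripts), here's a rough Python sketch that walks every Android device attached over USB and launches an app on each, using adb from the Android platform tools; the package name is a placeholder:

    import subprocess

    def attached_devices():
        output = subprocess.run(["adb", "devices"], capture_output=True, text=True).stdout
        # Skip the "List of devices attached" header and keep each serial number.
        return [line.split()[0] for line in output.splitlines()[1:] if line.strip()]

    for serial in attached_devices():
        print(f"Running a check on {serial}")
        # Launch the app's main activity on this specific device.
        subprocess.run(["adb", "-s", serial, "shell", "monkey",
                        "-p", "com.example.app",
                        "-c", "android.intent.category.LAUNCHER", "1"])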

Juan demonstrated these scripts in real time (hence why we see him at the computer :) ), and it's intriguing to see the potential and options that are available with a relatively simple and (arguably) reasonably priced apparatus. From my perspective, much of what I look at is focused on web and mobile applications. The examples that Juan is showing actually examine literal phone calls and the quality of the signal and the call, which, frankly, I think is really cool. It's outside of my typical wheelhouse, but it gives me some ideas as to how I might leverage a farm like this. I'm not sure it's going to cause me to stop using BrowserStack, but it's definitely worth considering.

Follow the Road to Quality - a #PNSQC Live Blog


Greetings friends! It's been way too long since I've done a live blog series, so you are either lucky or unfortunate but nevertheless, I'm here for you these next couple of days ;).

Let's start with a few personal logistics. I'm excited to be presenting twice at PNSQC this year. First, I will be presenting today around lunchtime about "Future Proofing Your Software". The title is a bit of a misnomer, as I'm not talking about technologically future-proofing your code, but anthropologically so. I'm talking specifically about software and aging: if we are lucky enough, all of us will grow older and get to deal with all that entails. What makes this talk special this year is that PNSQC invited me to speak on this topic. As I've been focused on it for a number of years, I am really happy to be presenting it.

My second contribution is a workshop I will be giving on Wednesday called "Building a Testing Framework from Scratch". This is a co-presentation with my friend Bill Opsal (cute side note: Bill and I both live in the Bay Area but to date, with the exception of one day, we have only met up with each other at this conference). Hence we decided we should do something together and this workshop is it. I won't be live blogging about it specifically but I will be talking about it in the days following the conference.

Let's get started with the first talk. Michael Mah is the first speaker, and true to form, the program starts out with a technical difficulty (really, what testing conference would be complete without at least one of these happening? Maybe with it happening at the beginning of the program, it's a good omen ;) ). One of the factors that we often miss when we are talking about software delivery is how the business itself actually delivers. We spend a lot of time and attention (deservedly so) making sure that the software we develop works and that we can deliver it on time with the best quality that we can. Do our businesses do the same? Often the answer is "no". Michael points out that when organizations scale up, they expect that there will be an increase in bugs. Double the people, double the bugs. Seems logical, right? Truth is, what really happens is that the bug count goes up far faster than that (if you double your team, you often increase your bug count by four times).
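
To put rough numbers on that claim, here's a tiny illustration of the "double the team, quadruple the bugs" relationship Michael quoted, treating defect count as growing with the square of team size; the baseline numbers are invented:

    # Treat defect count as growing with the square of team size, which is what
    # "double the team, quadruple the bugs" implies. The baseline is invented.
    baseline_team, baseline_defects = 10, 50

    for team_size in (10, 20, 40):
        scaled = baseline_defects * (team_size / baseline_team) ** 2
        print(f"team of {team_size}: roughly {scaled:.0f} defects")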

Additionally, what often happens is we believe that senior developers are smart, efficient and are "the better developers" as compared to junior programmers. Data shows otherwise. Senior developers often have a higher bug count than their junior collaborators. To bastardize a line from The Eagles, "Did they get tired or did they just get lazy?" In truth, it's a bit of both. There's a corollary with pilots. The incidence of plane crashes goes up with more experienced pilots. Again, it seems we tend to get complacent at a certain point and that is when we are at our most dangerous.

Michael is describing a number of companies that have tried to bring Agile up to a scale that is larger than many smaller Agile teams ever experience. To succeed in those larger environments, planning and execution is critical. You need a roadmap and you need to be aggressive with your backlog. Additionally, you have to know your limitations as an organization and be realistic about those limitations. I've gone through this process with a small company becoming part of a larger company and that larger company itself becoming part of an even bigger company. That adjustment has not always been comfortable and it has taken a different way of thinking and executing to get to the point where we can reliably deliver on sprint goals. At the same time, the more often we do meet those goals, the better we get at determining what is real and sustainable. The key to realize is that it's definitely not a one size fits all option but with time and practice, you get better at achieving your goals and objectives.

Speed is great and if you can do it with high quality, that's also great. The bigger question is "do the desired outcomes of your business also match this speed and quality?" In short, it's great we are making stuff the right way but are we making the right thing? Are the features we are developing providing the maximum value for our organization?



Thursday, September 13, 2018

What's That Smell? Angie Jones Talks at #BAST (LiveBlog)

Tonight is proving to be an exciting evening for the Bay Area Software Testers Meetup.

Angie Jones has come out to talk to us about "Code Smells", where we have code that "works", but may have some unusual issues that don't necessarily jump out at us.

More than just talking about the issue, Angie is actually walking us through a real project that has some definite issues. The first example she uses is classes that are too long, where there is a lot of stuff put into a single class and it doesn't have a specific purpose. Oftentimes classes grow over time because there's no other logical place to put something. Think of writing a test where you put a lot of browser interaction into a class so that it's readily available. On one hand, it makes sense, but on the other hand, if the class is overloaded or the items added to it can be grouped somewhere else, take the time to put them there.

Another common smell is "Duplicate Code". If something is repeated in multiple places, any change means doing what Angie calls "Shotgun Surgery" (I just love that term :) ). Anything duplicated more than twice is a really good indication that a separate class or method for that behavior makes a lot of sense.
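
As a hedged illustration (again, the names are mine, not from the talk), the fix is to give the repeated lines a single home, so a change to the flow is one edit instead of surgery across every test:

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;

    // Smell: the same lines pasted at the top of many tests:
    //   driver.findElement(By.id("accept-cookies")).click();
    //   driver.findElement(By.linkText("Dashboard")).click();
    // When that flow changes, every copy has to be hunted down and edited.

    // Better: one home for the repeated steps, called from every test that needs it.
    class Navigation {
        private final WebDriver driver;
        Navigation(WebDriver driver) { this.driver = driver; }

        void openDashboard() {
            driver.findElement(By.id("accept-cookies")).click();
            driver.findElement(By.linkText("Dashboard")).click();
        }
    }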

Next up is the "Flaky Locator Strategy". Oh, do I know this one! Angie is exhorting us, loudly, not to trust the Copy XPath option that comes with dev tools, and I wholeheartedly agree. This is also something I wrote about in my "Less Brittle Automation" posts, part one and part two. Happy to see that something I found frustrating is also something Angie pointed out :).
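
For anyone who hasn't been bitten by this yet, a small sketch of the difference (the selectors and attributes here are illustrative, not from any particular app):

    import org.openqa.selenium.By;

    class SubmitLocators {
        // Brittle: the kind of absolute path "Copy XPath" hands you.
        // Any change to the surrounding layout breaks it.
        static final By COPIED_XPATH =
                By.xpath("/html/body/div[2]/div[1]/form/div[4]/button");

        // Steadier: describe the element itself by a stable attribute.
        static final By BY_FORM_AND_TYPE =
                By.cssSelector("form#signup button[type='submit']");

        // Steadiest: a hook the team adds specifically for testing.
        static final By BY_TEST_HOOK =
                By.cssSelector("[data-test='signup-submit']");
    }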

Angie calls out the "Indecent Exposure" smell. If you are exposing too much of your data, you may allow people to get to your data and information in ways you don't want. The fix is narrowing the scope of your classes: make methods private when they should be private, and keep public only the methods that actually need to be public.
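
A minimal, hypothetical example of what that narrowing looks like in a page object (my names, not the project's):

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;

    class AccountPage {
        // Keep the plumbing private so callers can't poke at it directly...
        private final WebDriver driver;
        private final By balance = By.id("balance");

        AccountPage(WebDriver driver) { this.driver = driver; }

        // ...and expose only the behavior other code genuinely needs.
        public String displayedBalance() {
            return driver.findElement(balance).getText();
        }
    }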

Something that I'm all too familiar with is the issue of hard-coded waits. Yes, I do this. No, I'm not proud of it. Yes, I can do better. Angie encourages us to Wait Intelligently, either by polling for an element or by making sure the test doesn't move forward until there's confirmation that the item is present.
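
In Selenium terms, the difference looks roughly like this (a sketch assuming a Selenium 3-era Java stack; newer versions take a Duration instead of seconds):

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebElement;
    import org.openqa.selenium.support.ui.ExpectedConditions;
    import org.openqa.selenium.support.ui.WebDriverWait;

    class Waits {
        // Smell: a fixed pause. Either too long (slow suite) or too short (flaky test).
        static void fixedPause() throws InterruptedException {
            Thread.sleep(5000);
        }

        // Better: poll until the element is actually there, then move on immediately.
        static WebElement waitForResults(WebDriver driver) {
            return new WebDriverWait(driver, 10)
                    .until(ExpectedConditions.visibilityOfElementLocated(By.id("results")));
        }
    }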

Frameworks are supposed to drive the logic of the browser and create a state condition. Frameworks don't actually test anything, so putting assertion code into the framework is a problem. Instead, we want to create state conditions that we can evaluate elsewhere. Otherwise, we may find ourselves painted into a corner unintentionally.
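
A quick illustration of that separation (hypothetical names): the page/framework layer reports state, and the test is where the judgment happens.

    import static org.junit.Assert.assertEquals;

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;

    // Framework/page layer: drives the browser and reports state. No assertions here.
    class OrderPage {
        private final WebDriver driver;
        OrderPage(WebDriver driver) { this.driver = driver; }

        String confirmationMessage() {
            return driver.findElement(By.id("confirmation")).getText();
        }
    }

    // Test layer: this is where we decide whether that state is correct.
    class OrderTest {
        void verifiesConfirmation(WebDriver driver) {  // driver would come from test setup
            OrderPage page = new OrderPage(driver);
            assertEquals("Your order has been placed.", page.confirmationMessage());
        }
    }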

This was a really great discussion, made all the more effective by seeing the actual code being modified. I'm super happy to have had Angie come out to speak with us at BAST tonight, and she has an open invitation to come out and talk any time she wants to :). For everyone else, thanks for indulging me this evening as I got to geek out a bit. 'Til next time!

Friday, August 17, 2018

The Testing Show - Conferences and Conferring with Anna Royzman, Claire Moss and Mike Lyles

I have been terrible with my shameless self-promotion as of late, but I think it's time I switch that up a bit. Remember my comment about that "grind" earlier in the week? Here's the end result and yes, I always feel better about it after it's all said and done because I enjoy seeing the end product and most importantly sharing it with everyone out there.



I have a little favor to ask as well if you are so inclined. Do you enjoy listening to The Testing Show? If you do, could I ask you to go to Apple Podcasts and write us a review? Reviews help people find the podcast and make it more likely to appear in the feed listings. Seriously, we all would really appreciate it. We work hard to bring these out and I'd love nothing better than to have more people be able to find it.



In any event, we hope you enjoy this month's episode. To set things up, here's the lead in for the show:

August is a busy time of year for software testing conferences (not to mention conferences in other industries). This month, we decided that, with everyone heading off to conferences hither and yon, we would dedicate a show to the topic, and we have done exactly that. Anna Royzman (Test Masters Academy), Claire Moss (DevOpsDays) and Mike Lyles (Software Test Professionals) join us as guests in their capacity as conference organizers, speakers and attendees (not necessarily in that order) to riff on Conferences and Conferring with Matthew Heusser, Michael Larsen, and Perze Ababa. Want to know where to go, what format to take part in, or whether to try your hand at speaking/presenting? We've got all those bases covered!

The Testing Show - Conferences and Conferring with Anna Royzman, Claire Moss and Mike Lyles: The Testing Show team discusses the QA conference season with Anna Royzman, Claire Moss and Mike Lyles. Tune in to learn more!


Sunday, August 12, 2018

When There's Nothing But The Grind

First off, I want to make sure that this doesn't come off sounding wrong. The title is accurate, but perhaps not for the reason folks might suspect.

As the Producer and Editor of The Testing Show, I do the heavy lifting of getting a show finished and up on Apple Podcasts. Yes, when we schedule the recording, the panel and guests are all in on it together. When I deliver the finished episode to QualiTest, they take care of posting it so it's available. At times I am able to outsource the transcription process, but not always. This episode, it all falls to me, due to vacation and the unavailability of my regular transcriptionist (who does a fantastic job, I might add, and is well worth the price).

There's no easy way to say it: to produce the show to the level I want, with audio as clean as possible, grammatical edits, and a full transcription so those who are hearing impaired can follow along, there are no time-saving shortcuts. Every second of the audio needs to be examined, edits made, transitions cleaned up, and waveforms spliced so that there is a minimum of intrusion. If I've done a good enough job, the end listener should just think it's a seamless conversation. For anyone who has actually edited podcast dialog, you know what I mean when I say that as soon as you've done these kinds of edits, you forever hear them in other podcasts, too. Perhaps I'm a little too aggressive in this process, but since I do it this way, it takes quite a while to edit each show. The tricky part is that, since I'm dealing with edited audio and transcribing that edited audio, this is the literal definition of an extended monotask. When I write code, test, or write these blog entries, I can have other things going on in the background, take little detours, let my mind idle for a bit and then pick up again. When I'm editing the podcast, I don't get that break. I have to be focused, and nothing else can be vying for my attention while I do it. For veteran podcasters, this probably comes as no surprise, but I can (and frequently do) spend about six hours per episode from the completion of recording until handoff for posting. It can really chew you up some days, but there is no exhilaration quite like when it's all finished and the email goes out saying "It's all ready to post, have at it!"

To relate this to testing and the oft-heard claim that monotasking is the best way to accomplish a goal: I both agree and disagree. Having to switch contexts from one project to another frequently does make for inefficient work, and I have plenty of experience with that in my everyday testing life. However, too much focus on one thing, even if I am "in the flow", will physically chew me up and fatigue me fast. Still, there's no way around it. The show needs to be edited, it needs my undivided attention, and that undivided attention can easily spread to six hours per show. What to do?

This is where I have to exercise supreme diligence if I don't want to drive myself crazy. Were I a smart individual, I would block out the track immediately, determine where I could perform rough cuts, do them within the first hour, and then approach the fine tuning on subsequent days, spending no more than an hour at a time on any given day. Of course, that doesn't often happen. What typically happens is I find myself coming up on a deadline and then the sense of "compression" takes over. With that sense of compression, the "procrastination adrenaline" kicks in. From there, I move Heaven and Earth to get it all done in one Herculean shot and, once I do, I feel both joyous that I have finished it and absolutely mentally drained at the sheer volume of the process. This, of course, lingers in my memory, but for some strange reason, it doesn't make me any more likely to be more diligent the next time. So many times, it really does come down to "Dude, this is the last day before the deadline; now you have to do 'The Grind' all in one day!" For a long time I thought I was just insane, but as I've talked to many creative types in various capacities, this approach is way more common than I thought. In a way, the combination of "time compression" and "procrastination adrenaline" all comes flooding in at once, and that sense of terror is both motivating and, again, exhilarating. It's terrible on the nervous system, though. The hangover that comes after completion is rough and I literally feel like I need days to recover. Yet time after time I find myself doing it again. It's as though there is a form of "project amnesia" that creeps in. Either that or my amygdala just craves that terror state and tricks me into doing it again and again.

Question for those out there reading this... how do you deal with a project that demands sincere monotasking? Are you measured and deliberate? Are you a procrastination adrenaline junkie? Somewhere in between? How do you deal with it? What are your approaches to making it more manageable? Don't get me wrong, I love producing the show and will continue to do so. I'm just looking for better ways of keeping what little of my overall sanity I have intact ;).

Friday, August 10, 2018

Coming off a Long Break

Hello, everybody!

With the exception of yesterday's post about Jerry Weinberg, this blog has been quiet for a few months. That was partly by design, but it stretched way longer than I intended it to. There was a specific issue I was trying to deal with (an elbow procedure that required me to spend less time on the keyboard), but in reality, for a time, I felt like I had come to the end of what I wanted to say. After eight years and more than a thousand posts, I started to feel like I was going through the motions, repeating myself, or just posting content to promote other content I was producing elsewhere.

I intended to take a break for a few weeks, but that break stretched into almost four months. In that time, my work life changed considerably. My company was acquired for the second time: Socialtext is still a part of PeopleFluent, and PeopleFluent is now a part of LTG, based in London, UK. Dealing with the changes from that took a lot of getting used to. Among those changes is the fact that much of our old approach to work went out the window. I've had to adapt to new roles, new needs, and a new overall view of what we should be doing. As part of that change, I've been asked to:

- Be the Scrum Master for our Engineering team (for those who say "wait, isn't Socialtext a Lean/Kanban shop?", the answer is "we were, but now we are aligning with Scrum principles so as to be in line with other teams").

- Investigate a new approach to automation (we will be embarking on using Cucumber with the two lower layers in a yet-to-be-determined configuration, but it will be totally different from what we have used up to now; there's a rough sketch of the general shape after this list).

- Upgrade and update our overall infrastructure for how we build and deliver our software (mostly dealing with a Jenkins modernization and moving to pipelining but also looking at our container strategy and what we could do differently or better).

- Somehow still find time to test software in an environment where we are now adjusting to two-week sprints.
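
Since that Cucumber configuration is still up in the air, here is only a rough, hypothetical sketch of the general shape I have in mind (Gherkin on top, step definitions in the middle, a framework layer underneath; every name below is illustrative, not our actual code):

    // Top layer (Gherkin, lives in a .feature file):
    //   Scenario: A signed-in user sees their dashboard
    //     Given I am signed in as "pat"
    //     Then I should see the dashboard

    // Middle layer: step definitions translate Gherkin into framework calls.
    import io.cucumber.java.en.Given;
    import io.cucumber.java.en.Then;
    import static org.junit.Assert.assertTrue;

    public class DashboardSteps {
        private final AppSession session = new AppSession();

        @Given("I am signed in as {string}")
        public void iAmSignedInAs(String user) {
            session.signIn(user);
        }

        @Then("I should see the dashboard")
        public void iShouldSeeTheDashboard() {
            assertTrue(session.dashboardIsVisible());
        }
    }

    // Bottom layer: the code that actually drives the product (browser, API, or
    // both). This is the part we haven't settled yet.
    class AppSession {
        void signIn(String user) { /* drive the UI or API here */ }
        boolean dashboardIsVisible() { return true; /* placeholder until wired up */ }
    }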

While "life has been happening while I have been making other plans" I came to the realization I actually have been unlearning and relearning a bunch of new stuff. Why haven't I been talking about it here?

- Partly out of fear.
- Partly out of feeling inadequate.
- Partly out of not really understanding what I'm doing.
- Partly out of feeling like I'm making this all up as I go along.

I then realized,  "Wait! That's the whole point of this blog." Had I waited until I was an expert on anything, zero posts would have been written and this blog would not exist.

I was asked a couple of days ago on Twitter about what to do when someone wants to start blogging but doesn't know what to talk about or feels they don't really have any worthwhile experience to share. I said to "just start" and "while you may not be an expert in X, you are an expert in your own experience with X. Start there." That may have been helpful to them, but I realized that it was most helpful to me. It reminded me that there's a lot I'm learning and relearning and that process is rarely pretty, often incoherent and usually frustrating. That never stopped me before. Why am I letting it stop me now?

If you're still with me, thank you for indulging my "navel-gazing" for today. I have a lot of stuff I'm working on, and things I've wanted to talk about but not felt it would be worth saying. That attitude is over. It's time I started talking about my own worldview on "fill in the blank" again. Looking forward to re-engaging. I hope you will be, too :).