Thursday, May 16, 2019

Breaking in a New Meetup - Browserstack San Francisco (Live Blog)

Hello and welcome back to the TESTHEAD live blog. It's been a few weeks, primarily because I went into overdrive in late March and April around 30 Days of Testability and STP Con. Still, I was invited to come out to see Browserstack's first meetup in San Francisco. I was intrigued, specifically because we use Browserstack at my company (well, we did, then we didn't, and now we do again; a post on that is coming later).

We've kicked off the night listening to Browserstack CEO Ritesh Arora, who has been sharing the timeline of the growth and evolution of Browserstack. Again, I'm not necessarily here to talk about Browserstack as a product (I certainly can in future posts if there's interest), but I do appreciate the fact that they have developed one of the most responsive device farms anywhere. The mobile offering they have is pretty extensive but, most importantly, it is quick as far as spinning up multiple devices in the cloud goes. Their overall vision is to develop the de facto testing infrastructure of the Internet. That's a pretty sweet goal if I do say so myself. As a remote engineer, the challenge of gathering enough devices together to do meaningful testing is large. Using a tool like Browserstack helps me extend out and test on a variety of devices I don't actually have.

Dang it, I said I wasn't going to make this all about Browserstack. Oh well. For those curious, yeah, it's a tool I enjoy using. It's not perfect, but having to do without it for several months (a change in focus and perspective shifted us to different products) made clear how much I liked it. Suffice it to say, we are back and I'm happy about that.

OK, enough marketing spiel. Lanette Creamer flew down from Seattle to talk at this event, which, hey, that's pretty cool. Lanette and I have traded war stories for a few years now (the better part of a decade, really, but who's counting ;) ). Lanette is here to talk about "Small Projects" and how they can actually be more difficult than larger projects. The common belief is that smaller projects will go faster. However, there are times when smaller projects don't really end up being faster. In many cases, there are challenges that keep users from doing what they hope to do. That small project can blow up very quickly and become a much bigger challenge than expected.

When people talk about projects that are small, what they are actually aiming for is "something simple that we can do quickly". However, the problem often arises that scope grows within the allotted time frame. Thus, the trick is to make sure systems are in place to keep everything on track. Small projects are risky projects, so it's vital that we trust our team members. We also have to realize that there is no way we will be able to do everything we would like to do. Perhaps swap real-time pairing with your developers for having to write bug reports and wait for a response. Get away from the time killers and project killers: avoid politics, get away from micromanaging, and try not to be so beholden to perpetual status updates. Be aware that there is a freedom that can be had with the right attitude. You may not be able to do everything, but it's better than doing nothing. Start there :).

---

The next talk was presented by Priyanka Halder. She currently heads the Quality Engineering team at GoodRx. Priyanka's talk is "Taming the dragon: Going from no QA to a fully integrated QA". I can relate to this from a variety of situations in my testing career, where I've either had no QA or we have had to retool the testing that was done previously. I've struggled in a few of these environments and have at times had to launch new initiatives just to make headway. Sometimes I've had lots of automation, but it had limited focus, or it worked for part of the product while other areas needed coverage that didn't exist yet. Very often, what worked in one part of the product didn't work in another. As for the premise of Priyanka's talk, I also know what it feels like to be the first tester on a team that didn't have one previously. That's a special challenge but a real one, and it requires a certain kind of finesse. It's not just that we have to test; we also have to make a case as to why we are worthwhile to have on the team.

Very often, the problem that we have to deal with up front is testability. If we are coming into a new QA environment with a lot of manual testing, placing an emphasis on testability early can reap large dividends if the goal is to automate (and even if it isn't).

---

The last talk for the night is being delivered by Brian Lucas of Optimizely. He is responsible for Build, Test, and Release engineering processes. His talk is about "Avoiding Continuous Disintegration at Optimizely". One of the keys to being successful in developing software is a set of common traits: speed, engagement, and iterative development all help keep quality high. However, as the product becomes more complex, if the time and effort are not put in to keep that process rolling, things can fall apart quickly. One of the approaches Brian suggests for releasing faster and shipping quicker comes down to working in smaller increments, shipping more frequently, and working through experiments. Components usually work well in isolation, but they struggle when they have to work with each other. If your team is playing Dev and QA ping pong, stop it! By working more openly with QA and doing testing earlier, it is possible to get more coverage even in smaller increments of time. In short, your testers are not going to be able to do all of it, and groups that don't prioritize for this will lose that opportunity.

Optimizely crowdsources their QA to their entire engineering team so that all of the testing efforts don't just fall on the heads of the testers. More to the point, rather than to have one person testing a hundred commits, having the developers test their own commits speeds up the process and lessens the load on the testers so that they can focus on more pressing areas.

Continuous Integration, Continuous Delivery, and Continuous Deployment are areas that take time and care to set up, but taking that time makes everything easier down the line. Ultimately the goal is to improve the feedback cycles and build out the infrastructure in a way that allows for repeatability, testability, and implementability in many places.

---

On the whole, a nice first outing, Browserstack. Thanks for having us. Also, thanks for the shirt and water bottle. I should get a few laughs at tomorrow's standup when I wear it (hey wait, isn't that... yes, yes, it is ;) ).


Thursday, April 4, 2019

Data: What, Why, How??: an #STPCon Live Blog Entry


All good things must come to an end, and due to a need to pick up family members from the airport today, this is going to have to be my last session for this conference. I'm not entirely sure why, but Smita always seems to be in the closing sessions at STP events. Thus Smita Mishra will be the last speaker I'll be live blogging for this conference. My thanks to everyone involved in putting on a great event and for inviting me to participate. With that, let's close this out.

Data is both ubiquitous and mysterious. There are a lot of details we can look at and keep track of. There is literally a sea of data surrounding our every action and expectation. What is important is not so much the aggregation of data but the synthesis of that data, making sense of the collection, separating the signal from the noise.

Data Scientists and Software testers share a lot of traits; in many cases, the skill sets are nearly interchangeable. Data, and the analysis of data, is a critical part of the classic scientific method. Without data, we don't have sufficient information to make an informed decision.

When we talk about "Big Data" what we are really looking at are the opportunities to parse and distill down all of the data that surrounds us and utilize it for special purposes. Data can have four influences:


  • Processes
  • Tools
  • Knowledge
  • People

Data flows through a number of levels, from chaos to order, from uncertainty to certainty. Smita uses the following order: Uncertainty, Awakening, Enlightenment, Wisdom, Certainty. We can also look at the levels of quality for data: Embryonic, Infancy, Adolescence, Young Adult, Maturity. To give these levels human attributes, we can use the following: Clueless, Emerging, Frenzy, Stabilizing, Controlled. In short, we move from immaturity to maturity, from not knowing to knowing for certain, etc.

Data can likewise be associated with its source: Availability, Usability, Reliability, Relevance, and the ability to present the data. Additionally, it needs to have an urgency to be seen and to be synthesized. Just having a volume of data doesn't mean that it's "Big Data". The total collection of tweets is a mass of information. It's a lot of information to be sure, but it's just that; we can consider it "matter unorganized". The process of going from unorganized matter to organized matter is the effective implementation of Big Data tools.

Data can sound daunting, but it doesn't need to be scary. It all comes down to the way that we think about it and the way that we use it.

Testing Monogatari: a #STPCon Live Blog Entry



Ah yes, the after-lunch stretch. I will confess I'm a tad drowsy, but what better way to cure that than a rapid-fire set of five-minute test talks.

We've had several speakers play along with a variety of topics:

Brian Kitchener talked about getting hired into a QTP shop with a background in Selenium, and about how to make organizational changes when the organization is a big ship and you are just one person. The answer is lots of little changes: making small-scale changes and proving their benefits builds the case for scaling up to larger ones.

Brian Saylor talked about the idea that "CAN'T" is a four letter word. To Brian, can't is the most offensive word.

"I can't do that" really means "I don't want to do that".
"It can't be done" often means "I'm too lazy to do this".
"You can't do that" often means "I can't do that, so therefore you shouldn't be able to"

Thus Brian asks us to be sensitive and to strike the word "can't" from our vocabulary and see what it is we are really saying.

Tricia Swift talked about changes she has seen over the last twenty years. The major change she noticed back then was that only about ten percent of her MIS class were women, and compared to her CS friends, she was much better represented. She is happy to see that that has changed. ISO compliance and Waterfall were everything twenty years ago (and oh do I remember *that*). Thankfully, much of that has changed, or at least the approach has. Most importantly, she is seeing women in development where twenty years ago there were very few.

Raj Subramanian wants to make it clear that "It Is OK If You Mess Up". The issue with a culture where messing up is punished is that nothing creative or innovative will be tried. Cultures that penalize people for making mistakes ensure that they will remain stagnant. Raj shared an example where a test had an effect on production (an airline app), and the learning experience was that his rogue test exposed a series of policy problems with their production server. Raj still has that job and the product was improved, all because a mistake was acknowledged and accepted.

Anan Parakasan shared his experiences with the conference and its development of and focus on "trust". It's a word that has a lot of different meanings and isn't exactly easy to nail down. Anan shared ways he felt trust could be better fostered and developed in our teams. Additionally, developing Emotional Intelligence helps considerably by making a team more effective. Think less adversarially and more collaboratively.

The final talk is from Heath (?), who talked about "feature flag insanity" and the fact that his organization has gone a bit trigger happy with their feature flags. Having them means running with them on, off, or not there at all. His point was that a flag can have contexts that are understood by some people but not others. Needless to say, one feature flag had an option to fail every credit card transaction. No one could find it, it got rolled out to production, and every credit card transaction in production failed, which cost a whole lot of money. In short: know your system, document it, and make sure everyone knows what the feature flags do.
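To illustrate the failure mode Heath described, here's a hypothetical sketch (flag names and behavior invented, not his actual system) of how a kill-switch flag that some people don't know about can quietly fail every transaction:

```python
# Hypothetical feature-flag store. The dangerous flag was meant for
# internal failure-injection testing, but nothing stops it reaching prod.
FLAGS = {"fail_all_card_transactions": True}  # left on by accident

def charge_card(amount_cents: int) -> str:
    # .get() hides the "flag not present" case: absent and False behave
    # identically, so nobody notices until a stray True ships to production
    # and every single charge comes back declined.
    if FLAGS.get("fail_all_card_transactions", False):
        return "declined"
    return "charged"

print(charge_card(1999))  # while the flag is set, every charge is declined
```

The point of the sketch is Heath's closing advice: the code is trivial, so the only real defense is documenting each flag and making sure everyone knows what it does before it can reach production.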

The prize came down to Raj and Brian Kitchener, and they both went home with $50 Amazon gift cards. And with that, I'm much more awake ;).












Making the Move to Continuous Testing: a #STPCon Live Blog Entry


Sorry for the gap in the 10:00 hour, but I took advantage of the opportunity to talk with a vendor we actually work with about an issue we've been dealing with. Kudos to them for taking the time, letting me vent and demonstrate, and figuring out next steps.

With that, I'm definitely interested in seeing what is happening with Continuous Testing. I first started reading about this topic almost ten years ago and it is interesting that it is still a hot button topic.

We do a lot with Continuous Integration (CI) where I am and have made strides with Continuous Delivery (CD). Now I'm curious as to the possibility of implementing Continuous Testing (CT).

Alissa Lydon works with Sauce Labs, and she starts by getting a better understanding of what CT actually means. It doesn't mean that automated tests are running constantly (though automation is a part of the discussion). It does mean testing at every stage of development.

The first step to getting CT to have a fighting chance is that everyone has to be committed to quality, which means every level of the development process needs to think about testing and staying ahead of the curve. Think of having programmers and testers sitting together, pair navigating and pair programming/testing to help get testing principles and approaches into the process of writing software before line one of code even gets written. The key is that everyone needs to be on the same page and as early as possible.

Step two is testing at every stage of the development cycle. No code should pass through any stage of development without some level of testing being performed. This can range everywhere from requirement provocations to unit tests to integration tests to end-to-end tests, whatever makes sense at the given point in time.

Let's segue here and mention that continuous testing does not necessarily mean automated testing. Automation can make sense at a variety of points but you can have a mix of automated and live/manual tests and still be considered CT. There is also a balance of the levels of testing that will be done at any given time.

Next is being able to leverage the skills of the team so that automated testing can advance and get put in place. While automation is not the sole criterion for CT, it is definitely important to making CT work. It will take time and attention, and it will certainly take some development chops. Automation needs to be treated as every bit as important as the production code, and I'll fight anyone who says otherwise ;).

Additionally, a breadth of devices and platforms is critical to getting a realistic view of how your environment will look across a broad cross-section of resolutions and sizes, as well as user agents.

The ability to scale the systems being tested is important. Yes, we may start with just one machine, but ideally, we want to be able to simulate our production environment, even if we go with a smaller number of machines and try to extrapolate what a larger number would provide.

The final step is implementing and actually using/reviewing analytics. In short, know what your product is doing and what is being used so you can focus your efforts.




Testing all the World's Apps with AI: an #STPCon Live Blog Entry



We are here on the last day of STP-Con Spring 2019. Today is going to be nice for me in that I won't have any responsibilities short of listening and typing these blog posts. It feels so good to be "done" when it comes to speaking. I enjoy it, don't get me wrong, but there's a level of focus and energy that consumes you until you're finished. I'm happy to be on the other side for the rest of the conference.


This morning we are listening to Jason Arbon talking about testing with AI. Part of the puzzle to use AI is to teach machines the following concepts:

  • See
  • Intent
  • Judge
  • Reuse
  • Scale
  • Feedback

These are concepts our machines have to come to grips with to be able to leverage AI and have us gain the benefits of using it. Our machines need to be able to see what they are working with, they need to be able to determine the intent of tests, to judge their fitness and purpose (machines making judgment calls? This might make a number of my long-held assertions obsolete ;) ). Machines need to be able to reuse the code and the learnings that they have been able to apply. The systems need to be able to scale up to run a lot of tests to gather a representative set of data. Finally, the scripts and apps need to be able to respond to the feedback loop.

Jason is showing examples of "Automated Exploratory Testing". Wait, what? Well, yes, there are some things we can do that would certainly correspond with what we would call classic exploratory testing. At least we can look at a variety of examples of what we would normally do. Sure, a human could be super random and do completely whacked-out things, but most of us will use a handful of paths and approaches, even if we are in exploratory mode. Thus, it is possible to look at a lot of "what if" scenarios. It requires a lot of programming and advance work, but at some point, we can collect enough data to do certain things. There's a convergence force at play, so yeah, I guess bots can do exploratory testing too :).
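As a toy illustration of what a bot-driven exploratory pass might look like, here's a sketch (the app screens and transitions are entirely invented): a bot randomly walks a model of an application, which is a crude stand-in for the "try lots of what-ifs" core of the idea. Real tools are vastly more sophisticated, but the shape is the same:

```python
import random

# Invented model of an app: each screen maps to the screens reachable from it.
TRANSITIONS = {
    "home": ["search", "cart"],
    "search": ["home", "product"],
    "product": ["cart", "search"],
    "cart": ["checkout", "home"],
    "checkout": ["home"],
}

def explore(steps: int, seed: int = 42) -> list:
    """Random-walk the app model, returning the path of screens visited."""
    rng = random.Random(seed)  # seeded so a surprising path can be replayed
    screen = "home"
    path = [screen]
    for _ in range(steps):
        screen = rng.choice(TRANSITIONS[screen])
        path.append(screen)
    return path

print(explore(10))
```

A real system would attach oracles along the way (crashes, layout breakage, dead ends) rather than just recording the path, but even this much shows how "what if" coverage can be generated mechanically.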


Wednesday, April 3, 2019

Using Agile to Adapt to Changing Goals: an #STPCon Live Blog Entry



Using Agile to Adapt to Changing Goals

There's an old joke that says, basically, "If you don't like the way things are today, come back tomorrow." Meaning the only true constant is change, and change happens a lot. We all have a choice: be a driver of change, or have change driven onto us. By the same token, any goals your team has made are just as likely to change.


Sue Jagodzinski described her own situation of change within her company and the chaos that resulted from it. Part of her story described the challenges of managing two teams that were focused on different things. Both teams needed to use and support the adoption of a test automation tool and to be able to enhance and support that tool. One team focused on build, test, and automation. The other team focused on training, support, and documentation. While the tool they were working on was the same, they had very different missions and purposes and that would show up in the backlogs they built up and the priorities they placed on their respective backlogs.

Here's a bit of nerdiness for you. The training and support team decided to call themselves Team1. In retaliation, the other team called themselves Team0 because they understood how arrays are indexed (and yes, this was a real dynamic between these teams; I thought she was kidding, she assured me she was not).

To this end, Sue identified some key danger signs. By segregating responsibility between two teams, an unhealthy competition developed. Trust issues developed along the way, and when there were issues, there was plenty of finger-pointing to go along with them. Most visible was the fact that teams would decide what to work on based on whatever the other team was not working on.

By moving to a shared backlog, the teams came into immediate conflict and had to negotiate how to change that dynamic. Some of the areas that Sue addressed were:

- How could we determine the skills needed on each team, and make moves if necessary?
- What were the soft skill levels of the team members? Who were the leaders? Who could become a new leader if needed?
- Who would be best served by going elsewhere?
- How could we restructure the teams to be less contentious?

The last one was easy-ish to solve by changing names. The two teams, for purposes of this presentation, were "Awesomation" and "Thorium". Both teams agreed to go to a single backlog. Teams were set up so that both had technical expertise in the test framework. More to the point, an emphasis was placed on rewarding and encouraging those who would share their knowledge and skill sets. By doing this, Sue was able to come close to equalizing the teams. By sharing a backlog, the grooming sessions were done together, with the expected challenges. Sue took over as product owner and had to learn what that entailed; she described the process as "harder than many people might realize", in addition to requiring a better knowledge of the product.

The net results of this process, though not easy, were that team members gained the ability to learn how to do any task on either team. In other words, two teams of Generalizing Specialists (service mark Alan Page ;) ). In the process, each team member's engagement increased, different perspectives were heard, learning and reflection happened in the retrospectives, and the teams learned to progress together. Perhaps the most valuable skill they both discovered was the ability to adapt to priorities and to pivot and change if/when necessary.

Don't Make Yourself Obsolete: an #STPCon Quasi Live Blog Entry


Design Inclusively: Future Proof Your Software

Since it will be impossible to have me post on my own talk, I'm going to give a pre-recorded recollection and thoughts about my talk and what I'm hoping to impart with it.

Accessibility deals with designing software so that it works with assistive technologies, helping people with various disabilities use applications they otherwise would not be able to use.

Inclusive Design allows programmers to create and design websites and applications that are available to the largest population possible, without having to rely on the external technology necessary for sites to be Accessible.

Inclusive Design and Accessibility go hand in hand and are complementary endeavors but Inclusive Design, done early, can help make the last mile of Accessibility that much easier. That's the key takeaway I want to convince people to consider and advocate for. 

Inclusive Design is not magic. In many cases, it’s taking work that has already been done and making some small but significant changes. New web technologies help to make Inclusive Design more effective by utilizing semantic enhancements. More important, making this shift can also help you make better design choices in the future, without having to bolt on or re-architect your existing code, possibly at great cost in time, energy and finances. Sadly, we are still in a model where Accessibility/Inclusive Design is driven by two specific parameters:

- How much money do we stand to gain from doing this (because a big deal is pending and a paying customer is demanding it)?
- How much money do we stand to lose from not doing this (because we're actually being sued for violating various disabilities acts)?

Fact is, we can't really say what technology will be like in five, ten, or twenty years. We can, however, with great certainty, understand what we are likely to be like in those same time frames. When I talk about future-proofing software, I don't mean in a technological sense; I mean in a usage sense. We're not future-proofing for machines. We are future-proofing for US! At some point, every one of us will leave the happy sphere of what is commonly called "normative". For some, it's never been a reality. For many, the cracks in that sphere start to appear around age 45. Seriously, I didn't care much about Accessibility or think much about it before I turned 45 and received the gift that keeps on giving (i.e. the need for reading glasses). That was my first step into the non-normative world of adaptive needs and being a target audience for Accessibility as an everyday part of life. I can assure you it will not be my last.

There are a variety of things that can be done, and truth be told, they do not have to be radical changes. Very often people will look at Accessibility and Inclusive Design changes and say, "Hey, wait a minute, we applied all of these changes and I don't see any difference." Right! That's the whole point. Accessibility and Inclusive Design don't have to be ugly or inelegant. I'd argue that Accessible and Inclusive software is actually more beautiful because its form is enhanced by its function.

Oh, and for those who have never seen my presentation, without spoiling the surprise, I'll share a phrase out of context that speaks volumes:

"IKEA GETS IT!!!"

Testers as Their Own Worst Enemies: an #STPcon Live Blog

Testers as Their Own Worst Enemies

Michael Bolton and I share a number of interests. We're both musicians, and we have often talked about music and riffed with each other at conferences over the years. Michael starts out his talk with a clearly personal example: there's a (rightly so, I believe) critical eye being placed on a variety of streaming services that are making a lot of money on technology to share music.

Those companies are leaving the key players of that product (i.e. the musicians) out of the equation, or at least are not compensating them in any way commensurate with their contribution. What happens when musicians are not ultimately compensated for their efforts and creativity? You either get less music or you get less quality music (a term that is hugely problematic and not something I will address; everyone has opinions on what good music is).

In many ways, software testers are feeling a bit like the musicians in the above example. Think about making music without musicians. Once that was considered unthinkable. Today, anyone with a laptop and some scripting skills *can* write a hit song (not that they necessarily will, but they absolutely can). What happens when we take musicians out of the equation of creating music? What do we lose? Likewise, what happens when we take testers out of the equation of making software? Some may argue that I'm stretching this analogy since programmers are closer to musicians than testers are but hey, my rules, I'm making them up as I go ;).



To torture this metaphor a little more, I want to make a plug for a person I think every tester should know. Truth be told, I do not know his real name. I only know him as "Blue".


Seriously, though, "Blue" is what I refer to as a "stand up philosopher" and in many ways, he's a great resource that I think any software tester will find value in both hearing and reading. Blue is young, like, I could be his Dad. Seriously. Still, Blue has a way of looking at the world and exploring how societies work that can be hugely helpful to a tester. He is also great at breaking down challenging topics and making them sound simple and I think this is the skill that every tester would benefit from. Seriously, check him out (end weird side tangent ;) ).

Testers need to be something other than just people who run tests. If that's all we bring to the table, then we are ultimately expendable. We need to look at what we actually do for a company. We test, sure, but if that's all we do, we are of limited value. If, however, we are capable of articulating what the issues are and why they would be positive or negative for the organization, using our brains and our persuasion, then we have some leverage. To that end, I will say that testers need to be stand-up philosophers in our own right (see, I had a reason for pulling Blue into this conversation ;) ). When the dialogue about testers being social scientists comes up, this is what is meant by that. When we talk about risk, we need to humanize it. We need to make it relatable. We need to feel as though the issues affect us and others because, ultimately, they do.

Ultimately those of us that want to play in the testing game (for whatever reason) are going to have to make a case for the humanity that we provide. If we cannot or do not make the case for it, then we are effectively saying we are totally expendable. Testers need to stop looking at the areas where they can be farmed out or mechanized and draw attention to the areas that they really do provide value. Our mechanical output is not really that special. If it can be repeated, it can be automated, at least at the mechanical and procedural level. What can't be automated? Thoughtful synthesis and informed advocacy for doing the right thing and why our course of action would be right.

To borrow from my own advocacy area, I can talk about the fact that code can be checked for compliance when it comes to accessibility. Can I do that? Absolutely. Can code do that? It sure can, and probably a lot faster than I can. Can a system make an informed decision that the experience a disabled person has is comparable to the one a normative user has? At this point in time, no machine can do that. You need people for that. People are good at advocacy. People are good at learning. People are good at making judgment calls. Testers would be well advised to place their efforts and emphasis on those humanities more so than the "Techne" of what they do.
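To make that concrete, here's a minimal sketch (Python is my choice for illustration) of the kind of compliance check machines do well: computing a WCAG 2.x color contrast ratio from two RGB colors. The formula is the standard one from the WCAG spec; the judgment about whether the overall experience is genuinely comparable is the part that still needs a human.

```python
# WCAG 2.x relative luminance and contrast ratio.
def _luminance(rgb):
    """Relative luminance of an sRGB color given as (r, g, b) in 0-255."""
    def chan(c):
        c /= 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (chan(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio between two colors: (L_lighter + 0.05) / (L_darker + 0.05)."""
    lighter, darker = sorted((_luminance(fg), _luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Black text on a white background is the maximum possible ratio, 21:1.
# WCAG AA requires at least 4.5:1 for normal-size text.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))
```

A script can grind through every color pair on a page in milliseconds; deciding whether the page is actually usable for someone with low vision is the advocacy and judgment work the talk is about.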


QA/QE Supporting DevOps: an #STPCon Live Blog Entry

The QA/QE Role: Supporting DevOps the Smart Way

First off, Melissa Tondi is doing something I fully intend to steal. There are varying thoughts and approaches to having an introductory slide that introduces the speaker. Some don't use one at all. Some are required to do so at certain conferences. Melissa does something that I think is brilliant, funny and useful. Her first slide after the title simply starts with "Why Me?"

In short, Melissa is spelling out not who she is, or what her credentials are, but rather "you are here because you want to learn something. I want to give you the reasons why I think I'm the right person for that job here and now for you." Seriously, if you see me doing this at a future conference, props to Melissa and you saw it here first ;).



One of the avenues that Melissa encourages is the idea of re-tuning the methodologies that already exist. One aspect that I appreciate is Melissa's emphasis on not just QA (Quality Assurance) but also QE (Quality Engineering). They are often seen as being interchangeable, but the fact is they are not. They have distinctive roles and software testers frequently traverse both disciplines. The title is not as important as what is being done. Additionally, a key part of this is the ability to balance both technical acumen and user advocacy. In short, push yourself closer to Quality Engineering so that you can be an influence on the building of the software, even before the software gets built.

Introducing DevOps to an organization can be a wild ride since many of us don't even know what DevOps is. Melissa is using Anne Hungate's definition: "the collapse and automation of the software delivery supply chain". For many, that starts and ends with building the code, testing the code, and deploying the code. The dream is a push button: we press the button, everything is magic, and the software rolls out without any human interference. Sounds great, and believe me, the closer we get to that, the better. We will step away from the fact that certain organizations won't be able to do that for practical business reasons, but still, having the capability in all of the key areas is of value.

There are some unique requirements in some countries and companies to have a title of "Engineer". That's a term that has a certain level of rigor associated with it and it's understandable that some would shy away from using an Engineering extension where it's not formally warranted. For this talk, let's set that aside and not consider QE as an official title but more as a mindset and a touch point for organizing principles. In short, you can be a QE in practice while still holding a QA title. Engineering presupposes that we are developing processes and implementing approaches to improve and refine work and systems.

One area that is definitely in the spotlight is test automation. A key point is that test automation does not make humans dispensable or expendable. It makes humans more efficient and able to focus on the important things. Automation helps remove busywork and that's a great place to apply it. Additionally, it's possible to automate stuff that nets little other than making screens flash and look pretty. Automating everything doesn't necessarily mean that we are automating important or intelligent items. Automation should get rid of the busywork so that testers can use their most important attribute (their brain) on the most important problems. Additionally, it's wise to get away from the "automate everything" mindset so that we are not making a monolithic monster whose sheer weight and mass make it unwieldy. By parallelizing or parameterizing tests, we can organize test scripts and test cases to be run when it's actually important to run them. In short, maybe it makes more sense to have "multiple runs" come to a place of "multiple dones" rather than "run everything just because".
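As a toy illustration of that "multiple runs, multiple dones" idea (my own sketch, not something from the talk — the test names and tags are invented), here's one way to tag test cases so a suite can be sliced into smaller runs instead of one monolithic "run everything":

```python
# Toy example: tag test cases so suites can be sliced into multiple
# smaller runs instead of one monolithic "run everything" pass.

def run_tagged(tests, wanted_tag):
    """Run only the test callables whose tags include wanted_tag."""
    results = {}
    for name, (tags, func) in tests.items():
        if wanted_tag in tags:
            results[name] = func()
    return results

# Hypothetical test cases: names, tags, and pass/fail callables.
TESTS = {
    "login_smoke":   ({"smoke", "auth"}, lambda: True),
    "checkout_full": ({"regression"},    lambda: True),
    "profile_edit":  ({"regression"},    lambda: True),
}

smoke_results = run_tagged(TESTS, "smoke")            # only 1 test runs
regression_results = run_tagged(TESTS, "regression")  # 2 tests run
```

The same idea maps directly onto the tag or marker features of real runners (JUnit categories, pytest markers, and so on): each slice becomes its own "run" with its own "done".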

Use automation to help define what is shippable. There shouldn't be an after-the-fact focus on automating tests if they are actually important. By focusing on automation earlier in the process, you get some additional valuable add-ons, too. You limit the accrual of technical debt. You shake out issues with unscripted testing first. More to the point, you can address testability issues sooner (yes, I've mentioned this multiple times during this conference; I completed "30 Days of Testability" and now I have it clearly on the brain). Testability should be addressed early and it should be addressed often. The more testable your application, the more "Automizeable" the application will be (oh, Alan Richardson, I have so fallen in love with that word ;) (LOL!) ).


What not to Test: an #STPCon Live Blog Entry


“Help, I’m Drowning in 2 Week Sprints, Please Tell Me What not to Test”

This talk title speaks to me at such a profound level. Be warned, I may veer into tangent territory here. There are what I call "swells" that come with sprint madness and sprint fatigue. It's never constant; it's like a set of waves that you would look to time. For those familiar with surfing, this likely makes sense. For those not familiar, waves tend to group together and swells grow and shrink. These series of increasing and decreasing waves are referred to as "sets" and the goal is to time the set that feels good to you. Too early and you don't catch a wave. Too late and the wave wipes you out. In between are the rideable waves to drop in on.

Sprints are the software mental metaphor that goes with "timing the waves" but the problem is that the timing of sprints is constant, but wave sets are not. Likewise, even figuring out what matters for a sprint may take more or less time any given sprint. Tasks like backlog grooming, story workshops, sprint planning, etc. all come down to making sure that we have an understanding of what matters and what's actually available to us.

Risk-based testing is the idea that we focus our attention on the areas that present the most potential danger and we work to mitigate that. We all know (or should know) that we can't get to everything. Thus we need to focus on the areas that really matter.

Mary recommends that we place emphasis on testing ideas. Testing ideas should go beyond the acceptance criteria. We can easily be swayed to think that focusing on the acceptance criteria is the best use of our time, but often we discover that with a little additional looking, we can find a variety of problems that simply looking at acceptance criteria won't cover. We also need to be aware that we can range far afield, perhaps too far afield, if we are not mindful. Test ideas are helpful, but don't just play "what if" without asking the most basic question: "which would be the riskiest area if we didn't address it?"

An area where this happens for me (tangent time) is that I will be testing something and find that we have to deal with an issue related to our product but that has nothing to do with the stories in play. I am the owner of our CI/CD pipeline (note: that doesn't mean I'm the expert, just that I own it and I am the one responsible for it working properly). If something happens to our CI/CD pipeline, who do you think is the first person to spring into firefight mode? Are you guessing me? Congratulations! In a sprint, I don't have the luxury of saying "oh, sorry, I can't deal with pipeline issues, I have to finish testing these stories". Therefore, any time I have issues such as a pipeline problem that needs to be addressed, I immediately put a spike into the sprint. I do my best to consider how much time it will take and whether I can handle it myself (often the case) or need to pull in development or ops resources (also often the case). What happens over time is that we get a clearer picture of not just actual testing focus but also the legitimate interruptions that are real and necessary to deal with. In a sprint, there is a finite amount of time and attention any of us can spend. Time and attention spent in one area necessitates that it is not spent elsewhere, and no, saying you'll stay up later to cover it is robbing your future self of effectiveness. If you are doing that, STOP IT!!!

Performing a test gap analysis is also helpful. In a perfect world, we have test cases, they've been defined, and we have enough information to create automated tests around them as the functionality is coming together. Reality often proves to scuttle that ideal condition, or at least it means that we come up short a bunch. What we often discover is a range of technical debt. Some areas may be well covered and easily documented with test cases and automated tests. Other areas may prove to be stubborn to this goal (it may be as simple as "this is an area where we need to spend some time to determine overall testability").

The Pareto Principle is a rule of thumb, it's not absolute. Still, the old adage that twenty percent of something is going to give you eighty percent of outcomes is remarkably resilient. That's why it's a rule of thumb in the first place.

Twenty percent of test ideas can help you find eighty percent of the issues.
Twenty percent of the application features will be used by eighty percent of the customers.

What does this mean? It means you need to get a read on what's actually being used. Analytics and an understanding of them are essential. More importantly, using analytics on your test systems matters, not just the prod numbers. One thing that was driven home to me some time back is that analytics need to be examined and their configurations experimented with. Otherwise, yes, you can have analytics in place, but do you actually know if you have them turned on in the right places? How would you know?

One more interesting avenue to consider is that you cannot test everything, but you can come up with some interesting combinations. This is where the idea of all-pairs or pairwise testing comes into play. Testers may be familiar with the all-pairs terminology. It's basically an orthogonal array approach where you take a full matrix and, from that matrix, look at the unique pairs that can be created (some feature paired with some platform, as an example). By looking for unique pairs, you can trim down a lot of the tests necessary. It's not perfect, and don't use it blindly. Some tests will require that they be run for every supported platform, and not doing so would be irresponsible. Still, prudent use of pairwise testing can be a huge help.
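To make the pairwise idea concrete, here's a hedged sketch (my own, not from the talk; the parameter names and values are invented) of a greedy all-pairs reduction: start from the full matrix and keep picking the combination that covers the most still-uncovered value pairs until every pair is covered:

```python
from itertools import combinations, product

def pairwise_suite(params):
    """Greedy all-pairs reduction: pick full combinations until every
    unique value pair across any two parameters is covered."""
    names = list(params)
    # Every (param_a, value_a, param_b, value_b) pair we must cover.
    uncovered = set()
    for a, b in combinations(names, 2):
        for va, vb in product(params[a], params[b]):
            uncovered.add((a, va, b, vb))

    suite = []
    all_combos = list(product(*(params[n] for n in names)))
    while uncovered:
        # Choose the combination covering the most still-uncovered pairs.
        def gain(combo):
            vals = dict(zip(names, combo))
            return sum(1 for (a, va, b, vb) in uncovered
                       if vals[a] == va and vals[b] == vb)
        best = dict(zip(names, max(all_combos, key=gain)))
        uncovered = {p for p in uncovered
                     if not (best[p[0]] == p[1] and best[p[2]] == p[3])}
        suite.append(best)
    return suite

params = {
    "browser": ["chrome", "firefox", "safari"],
    "os":      ["windows", "macos"],
    "locale":  ["en", "de"],
}
suite = pairwise_suite(params)
full = 3 * 2 * 2  # 12 combinations in the full matrix

# Sanity check: every cross-parameter value pair appears in some combo.
leftover = set()
for a, b in combinations(params, 2):
    for va, vb in product(params[a], params[b]):
        if not any(c[a] == va and c[b] == vb for c in suite):
            leftover.add((a, va, b, vb))
```

Even on this tiny matrix the suite shrinks below the full 12 combinations while still exercising every browser/OS, browser/locale, and OS/locale pairing; dedicated tools (PICT, allpairspy, and the like) do the same thing at scale.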




One Thing Testers Can Learn from Machine Learning: an #STPCon Live Blog Entry


Today's keynote is coming to us courtesy of Mary Thorn. Mary's been in the game about twenty years including a stint as a COBOL programmer relative to the Y2K changes that were happening at the time. Mary is also going to be doing a second talk a little later, so she's going to be in this blog twice today.

All right, let's get a show of hands. How many people have seen a marked increase in hearing the term Machine Learning? How many people feel that they understand what that is? It's OK if you are not completely sure. I feel less sure each time I see a talk on this topic. Let's start with a definition. Arthur Samuel defined it as: “Machine Learning is the field of study that gives computers the ability to learn without being explicitly programmed.” The key here is that the machine can learn and can then execute actions based on that learning. This brings in a couple of terms, supervised and unsupervised learning. Supervised learning is the process of learning a function that maps an input to an output based on example input-output pairs; each example is a pair consisting of an input and the desired output. Unsupervised learning groups unlabeled and unclassified data; by using cluster analysis, it identifies commonalities in the data and reacts based on the presence or absence of those commonalities.
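For a feel of the difference between the two terms, here's my own toy Python sketch (nothing from the keynote; the data and labels are invented): supervised learning fits a mapping from labeled examples, while unsupervised learning groups unlabeled data by similarity.

```python
# Supervised: learn a mapping from labeled (input, output) pairs.
# Here, a one-dimensional 1-nearest-neighbor classifier.
def predict_1nn(training_pairs, x):
    """Return the label of the training input closest to x."""
    best = min(training_pairs, key=lambda pair: abs(pair[0] - x))
    return best[1]

labeled = [(1.0, "slow"), (1.2, "slow"), (9.0, "fast"), (9.5, "fast")]
prediction = predict_1nn(labeled, 8.7)  # nearest neighbor is 9.0 -> "fast"

# Unsupervised: no labels at all; group unlabeled data by similarity.
# Here, split 1-D points into two clusters around the midpoint of the extremes.
def two_clusters(points):
    midpoint = (min(points) + max(points)) / 2
    low = [p for p in points if p <= midpoint]
    high = [p for p in points if p > midpoint]
    return low, high

low, high = two_clusters([1.0, 1.2, 9.0, 9.5, 8.7])
```

The supervised half needed someone to supply the "slow"/"fast" labels up front; the unsupervised half discovered the two groupings from the raw numbers alone, which is the essential distinction.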



OK, that's fascinating, but what does this actually mean? What it means is that our systems are not just the dumb input/output systems we tend to look at them as. There's a change in the skills we need and in the ways that we focus our testing efforts. More to the point, there's a variety of newer skills that testers need to develop. It may not be necessary to learn all of them, but testers (or at least test teams) would be well suited to develop levels of understanding around automation, performance, DevOps, exploratory testing, and CI/CD pipeline skills. Those skills may or may not reside in the same team member but they definitely need to reside in the team.

There are a couple of things that are helpful to make sure that this process leads to the best results. The first is that there is a specific goal or set of goals to point towards. In the process, we need to look at the outputs of our processes, examine what the data tells us, and follow it where it leads. To be sure, we may learn things we don't really want to know. Do we have defined tests for key areas? How are they being used? Do they matter in the first place? What are the interesting things that jump out at us? How can we help to determine if there is a cluster of issues? This is where exploratory testing can be a big help. Automation can help us consolidate the busywork and gather things together in areas. From there, we as individual testers can look at the data output and look for patterns. Additionally, and this is apropos to the 30 Days of Testability focus I jumped through in March, we can use the data we have received and analyzed to help us determine the testability of an application. Once we determine areas where testability might be lacking, we should do what we can to emphasize and increase the overall testability of our applications.

Analytics are hugely helpful here. By examining the analytics we can determine what platforms are used, what features actually matter, and what interactions are the most important. In short, let's make sure we learn what our app is actually doing, not just what we think or want it to do.




Getting underway at #STPCon Spring 2019


Welcome everyone.

We are just a little bit away from getting the conference started. The workshop days are over and with them my “quieter than normal” streak. Today starts the multiple blog entries that you either love or dread ;).

A couple of reminders. First, when I live blog it’s stream of consciousness. I may or may not have clean posts. Grammar may be dodgy. I may spin off into tangents. Some of my thoughts may come off half baked. I assure you this is normal :). 

I may go back and clean up entries for gross grammatical issues but other than that what you see is what you get. My impressions, mostly unfiltered. Hope you enjoy the series :).

Tuesday, April 2, 2019

Saab 99 GLE vs Mazda Miata MK1: Adventures in Car Restoration and Test Framework Building

Now that I have presented it a couple of times (and had some dress rehearsals leading up to them), I feel pretty good about the material that I've been presenting in my workshop "How to Build a Testing Framework From Scratch". Actually, I need to take a small step back and say that the "From Scratch" part isn't really the truth. This workshop doesn't really "build" anything from the initial code level.

Instead, it deals with finding various components and piecing them together and with additional glue code in various places making everything work together. As a metaphor for this, I like to make a comparison to restoring cars. I'm not super mechanically inclined but like many people who were young and had more imagination than common sense, I harbored a desire to take an older car and "restore" it. My daughter has had a similar desire recently. Both of us have undertaken this process in our late teen/early 20s but both of us had dramatically different experiences.

When I was younger, I had the opportunity to pick up relatively cheaply a 1978 Saab 99 GLE. It looked a lot like this:

1978 Saab 99 GLE hatchback automobile, burgundy paint


For those not familiar with Saab, it's a Swedish car brand that produced cars under that name from 1945 until 2012. It's a boutique brand, with a dedicated fan base. It has a few distinctive features, one of the entertaining ones being the fact that the ignition (at least for many of the vehicles) was on the floor between the front seats. The key point is that it was not a vehicle that was made in large numbers. It's a rare bird, and finding parts for rare birds can be a challenge. In some cases, I was not able to find original parts, so I had to pay for specialized aftermarket products, and those were expensive. It also had a unique style of transmission that was really expensive to fix. Any guesses on one of the major projects I had to undertake with this car? The price tag for that was $3,000, and that was in 1987 dollars :(. When it ran, it was awesome. When it broke, it was a pricey thing to fix. Sadly, over the few years I had it, the number of days when it didn't work or needed work outweighed the days when it was working in a way that made me happy. I ultimately abandoned the project in 1990. There were just too many open-ended issues that were too hard or too expensive to fix.

By contrast, my daughter has embarked on her own adventure in car restoration. Her choice? A 1997 Mazda MX-5 Miata MK1. Her car looks a lot like this:

1997 Mazda Miata convertible, red paint, black convertible top

Her experience with "restoring" her vehicle and getting it to the condition she wants it to be in, while not entirely cheap, has been a much less expensive proposition compared to my "Saab story" (hey, I had to put up with that pun for years, so you get to share it with me ;) ). The reason? The Mazda Miata was and is a very popular car; what's more, a large number of them were made and they have a very devoted fan base. Because of that, Mazda Miata parts are relatively easy to find and there are a large number of companies that make aftermarket parts for them. With popularity and interest come availability and access. Additionally, with its small size and relatively simple construction, there is a lot of work she can do on the car herself that doesn't require specialized parts or tools. In short, her experiences are night and day different as compared to mine.

Have you stuck with me through my analogy? Excellent! Then the takeaway of this should be easy to appreciate. When we develop a testing framework, it may be tempting to go with something that is super new or has some specialized features that we fall in love with. There is a danger in loving something new or esoteric. There may or may not be expertise or support for the tools you may like or want to use. There may be a need to make something that doesn't currently exist. The more often that needs to be done, the more tied into your solution you are and will have to be. That may or may not be a plus. By contrast, something that is more ubiquitous, something that has a lot of community support, will be easier to implement and will also be easier to maintain and modify over time. It also allows for greater flexibility to work with other applications, where an esoteric or dedicated framework with exotic elements may not.

Stay tuned in future installments as I tell you why I chose to use Java, Maven, JUnit, and Cucumber-JVM to serve as the chassis for my testing framework example. Consider it my deciding I'd rather restore a Mazda Miata over a Saab 99 GLE.

The Second Arrow: an #STPCon Live-ish Blog Entry

Yesterday was the start of the workshops day at STP Con and I was happy to present for the second time "How to Build a Testing Framework From Scratch". It's done. I've had a chance to sleep on it after being emotionally spent from giving it. Now I can chat a bit about the experience and some lessons learned.

First, I was able to deliver the entire presentation in three hours, which blows my mind.

Second, I think the time spent talking about the reasoning behind why we might do certain things is every bit as important as the actual technical details.

Third, I've come to realize that there is an odd excitement/dread mix when presenting. Many people say that they are most nervous the first time they present a talk or presentation. I've decided I'm more nervous the second time I present something. The first time I may get through on beginner's luck or if I do take an arrow in the process (meaning I realize areas I messed up or could do better) that's in the moment and it's experienced, processed and put away for future reflection.

I use the term "arrow" specifically due to an old podcast where Merlin Mann described this idea. Someone in battle feels the first arrow that hits them. It hurts, but it doesn't hurt nearly as much as the second arrow. The reason? The first arrow hits us by surprise; the second arrow we know is coming. It's the same impact, but because I've been there and done that, I am often frustrated when the issues I dealt with the first time aren't mitigated or when something else I hadn't considered happens.

Much of this came down to making materials available to people in a way that was useful and timely. As I talked to a number of participants, we realized we had several similar problems:

- the materials were made available in advance but some people waited until the night before at the hotel to download them and discovered the hotel bandwidth couldn't handle it.

- the flash drive I handed off (though I did my best to make sure it was read/write on as many machines as possible) ended up as read-only on some machines. Thus it meant copying everything over to bring up the environment, which took close to a half hour for many people.

- even with all of this, I still managed to have to hear (more times than I wanted to), "sorry, my Hyper-V manager is set up by my company. I can't mount the flash drive or open the files". Ugh! On the "bright side" that was a situation that I couldn't control for or do anything about even if everything else worked flawlessly. Still, it was frustrating to have to tell so many people to buddy up with someone who could install everything.

So what did I learn taking my second arrow with this presentation?

1. The immediate install party will only ever happen if everyone in advance confirms that they are up and running well before the event. While the flash drives certainly help, they don't provide that large a time savings as compared to just having everyone set up when they walk in.

2. The "set up" and "rationale" part of my talk... since it's a workshop, what I should be doing (I think), is getting into the nuts and bolts immediately, and sharing rationale around each part of the process as we are getting into it. As it was, my introductory material took about 40 minutes to get through before we fired up the IDE and explored the framework itself. That's too long. Granted, it's there so that people can get everything installed but I think I can pace it better going forward.

3. Though the framework I offer is bare bones, I think I can comment better in the examples and should have some before and after examples that use different aspects and let people see them as a natural progression. Perhaps have three maven projects, each being a further progression from the last one.

Don't get me wrong, I had a blast giving this workshop and I hope the participants likewise enjoyed it. Still, I hope I can make it better going forward and here's hoping I'll get another chance to present it at another conference and hopefully not end up taking the third arrow ;).

Sunday, March 31, 2019

Book Review: Team Guide to Software Testability: a #30DaysOfTesting Testability #T9y Entry

Woohoo!!! Day 30 completed on March 31! Truth be told, this was a bit much and I have no one to blame but myself for how this turned out. Still, I feel like I learned a lot and covered a lot of ground. Some of it felt familiar, but there was quite a bit of new information and there were perspectives I had a chance to look at and think about how to implement with my team. It's time to bring the formal "30 Days of Testability" to a close, and to do that, I'm going to review a book whose sudden availability, I realized as I came to the Ask Me Anything post, wasn't an accident: one of the authors of the challenge, Ash Winter, is also one of the writers of this book. How convenient :)!

What book did you choose on Day 30 and what did you learn from it?


The book I chose was "Team Guide to Software Testability" by Ash Winter and Rob Meaney.

This version was published with Leanpub on 2019-03-08. Again, mighty convenient :)! While the book is listed as being 30% completed, there is plenty to consider and chew on with regards to testability for apps and navigating how to approach testability for you, your team and your applications. To borrow a line from the Introduction:

"We want to show that testability is not only about testers and, by extension, not solely about testing. It is almost indistinguishable how testable your product is, from how operable and maintainable your product is. And that’s what matters to your (organisation’s) bottom line, how well your product fits your customer’s needs."

Testability goes hand in hand with predictability. When an application responds in a predictable manner, we are able to interact with better certainty that we will see the results we expect. That, in turn, helps inform the testability or lack thereof of our applications.

Testability Mapping allows users and teams to get a better feel for the areas of their application that are working well and those that still need to be worked on. When we have a low understanding of the underlying testability architecture, we often struggle with anything approaching effective testing. To help address that, it's important to take stock of the testability of our applications and see how far afield that might take us (applications are rarely monolithic today; they have dependencies and other components that may or may not be obvious).

Our testing environments should be set up and configured in a way that allows us the maximum understanding of its underpinnings. Doing so lets us get started with testing and receiving meaningful feedback from the application. Paying attention to these environments and periodically addressing the testability helps make sure that complacency doesn't come into play. Environments are not static, they grow and develop and the technologies that are good for one period of time may be inadequate later.



Bottom Line:

Even at 30% complete, there is a lot of good information in this book. Is it worth purchasing as it currently is, with the idea that more will arrive over time? I say "yes". If the idea of helping your team develop a testability protocol sounds exciting and necessary, there is a lot to like in this book. Check it out!!!


Grinding Halt: a #30DaysOfTesting Testability #T9y Entry

So close to the end! Just a couple more to go and I'll be able to call this challenge "surveyed". I won't really be able to call it "done" or fully "completed" until all of these aspects are put into place and we've moved the needle in regard to their being examined. Still, this has been an active month, with a bit of a ridiculously active final week (alas, nobody to blame for that but me). Nevertheless, for those who want to play along, check out the "30 Days of Testability" checklist and have some fun with me.

Do you know which components of your application respond the slowest? How could you find out?

This is a tricky question, as there are different levels of what could be causing slowness. In the first instance, there are native components within our application itself, as opposed to those we consume from other groups through microservices. Honestly, the ability to render full wiki pages as widgets with all of their formatting makes for some interesting interactions, and some of them take time. As we can render some seriously complex HTML and CSS in these pages, to then have that displayed as a widget and then displayed in a Responsive interface, it just takes time, and in certain instances, yes, it can be felt.

Other areas are a little harder to define but fortunately, we do have a way to determine how long it takes. Our application has a scheduler that runs in the background and every major interaction gets logged there. Want to know how long it takes to process ten thousand users and add them to the system? I can look that up and see (hint: it's not super quick ;) ).

The other areas that are challenging are where we consume another product's data via microservices to display that information. This isn't so much an issue of fast vs. slow as it is an issue of latency and availability. Sometimes there are things beyond our control that make certain interactions feel "laggy". On the plus side, we have similar tools we can use to monitor those interactions and see if there are areas where we can improve the system and network performance.
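As a sketch of how a scheduler log like the one described above can surface the slow spots (this is my own illustration; the component names, field names, and durations are all invented, and a real log format would differ), something like this works:

```python
# Hedged sketch: given per-job durations pulled out of a background
# scheduler's log, rank components by average duration, slowest first.

def slowest_components(log_entries, top_n=2):
    """Return (component, average_seconds) pairs, slowest first."""
    totals, counts = {}, {}
    for entry in log_entries:
        name = entry["component"]
        totals[name] = totals.get(name, 0.0) + entry["seconds"]
        counts[name] = counts.get(name, 0) + 1
    averages = {n: totals[n] / counts[n] for n in totals}
    return sorted(averages.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

# Invented sample data standing in for parsed scheduler log entries.
log = [
    {"component": "bulk_user_import", "seconds": 540.0},
    {"component": "bulk_user_import", "seconds": 660.0},
    {"component": "widget_render",    "seconds": 2.5},
    {"component": "widget_render",    "seconds": 3.5},
    {"component": "search_index",     "seconds": 45.0},
]
worst = slowest_components(log)
# worst[0] is ("bulk_user_import", 600.0)
```

The point isn't the arithmetic; it's that once every major interaction is logged with a duration, "which components respond the slowest?" becomes a question you can answer from data rather than gut feel.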

Watching the Detectives: a #30DaysOfTesting Testability #T9y Entry

I realize I probably should feel a little bit shameless right now at the silly puns I use for many of these titles. I should... but I don't ;). Besides, we are almost done and next week you can watch me shift gears and Live Blog about STP-Con, so just bear with me for a couple more "30 Days of Testability" posts :).


Pair with an internal user or customer support person and explore your application together. Share your findings on The Club.


This is actually a pretty recent experience for me in that through our LTG merger I was acquainted with a new manager, a new overall test director, a new VP of Engineering, and a CTO all at the same time. By interacting with each of these people, I've had the chance to show what Socialtext is, what it isn't, and what we need to do so that we can get all of our moving parts to work in context together. Each interaction helped me see both the areas that people understood about our product and areas where there were some gaps in understanding or context.

Because of the nature of how we configure our product suite (this is going outside of Socialtext now), we had several people discussing our product (the broader PeopleFluent/LTG product offerings in this case), and when it clicks with people exactly what our product does and how it does it, there's this smile and a reckoning of what goes where and why. Don't get me wrong, sometimes there are emotional reactions other than smiles, but those are thankfully the more common ones. Additionally, it's also neat to see what other arms of our broader product do and how they interact with and integrate with our platform. Consider this a strong "two thumbs up" to demoing and looking at as much of your product with as many people from as many business interests as you can. My guess is you will learn a lot from those interactions. I certainly have and will most likely continue to :).

Stop Me If You Think That You've Heard This One Before: a #30DaysOfTesting Testability #T9y Entry

Hah! I've finally found a way to work that title into a TESTHEAD post (LOL!). Seriously, though, I'm hoping that after this flurry of posts, you all still like me only slightly less than you used to... OK, enough of that, let's get back to the "30 Days of Testability", shall we?

Use source control history to find out which parts of your system change most often. Compare with your regression test coverage.

I already know the answer to this since it's been a large process. Our most changed code is literally our front end. The reason is straightforward: we are redesigning it so that it will work as a Responsive UI. That means everything around the front end is getting tweaked. Our regression testing, therefore, is in the spin cycle; it's getting majorly overhauled. Our legacy interface, on the other hand, is doing well and will still be there for those who choose to use it, so that is adding an exciting challenge as well.

The biggest challenge I am personally facing is that the tests we have for our legacy interface are solid and they work well, but they are almost totally irrelevant when it comes to our Responsive interface. The IDs are different, the rendering code is different, and the libraries that are used are different. The workflows are similar and in many ways close to the same, but they don't quite lend themselves to a simple port with new IDs. Thus I'm looking at the changes we are making and figuring out how we can best automate where it makes sense to. Needless to say, it's never dull.
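For the source-control-history part of the exercise, a hedged sketch of the mechanics: capture the output of `git log --format= --name-only` (which emits one changed file path per line across the whole history) and count occurrences. The file paths below are made up for illustration:

```python
from collections import Counter

# Invented sample standing in for `git log --format= --name-only` output.
sample_log = """\
web/responsive/header.js
web/responsive/header.css
web/responsive/header.js
legacy/wiki/render.pl
web/responsive/nav.js
web/responsive/header.js
"""

# Count how often each file appears, ignoring the blank separator lines.
churn = Counter(line for line in sample_log.splitlines() if line.strip())
most_changed = churn.most_common(2)
# most_changed[0] == ("web/responsive/header.js", 3)
```

Cross-referencing a churn list like this against your regression coverage is the whole exercise: high-churn files with thin coverage are where the risk is hiding.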


Told You So: a #30DaysOfTesting Testability #T9y Entry

Well, here we are, home stretch. Almost all the way through the "30 Days of Testability" Challenge. There are several of these I haven't done yet and I'm sorely tempted to go back and do them retroactively. Yeah, let's get this one finished first before I demonstrate I've completely lost my mind ;).

Relationships with other teams affect testability. Share your experiences on The Club.

When we discuss just Socialtext, we are actually very small. When discussed within PeopleFluent, there are many more groups and departments with products that interact with ours. Extend out to LTG and that number grows even more. Put simply, we have seven or so different business units and about twice that many distinct products that interact with Socialtext. Thus, yes, we are very aware that our product may work flawlessly (haha, but work with me here), and if we cannot show another business unit's product in ours, we're just as broken as they are.

Recently, we have focused on greater communication and interaction with a variety of team leads so that we can discuss how our product interacts with theirs and how we can simplify/streamline approaches to help make the interaction smoother. Primarily this is done through microservices, so that has been a recent uptick in our focus and attention for testing on all fronts.

One of the ways that I try to help enable better testability is with my install and config project. To that end, I try to see how many of the components of other business units I can get to install and run on a given appliance and be useful. As our platform is the chassis that everything else rides along in, we are the ultimate consumer and presenter. With that in mind, every option I can configure and verify as working at the same time helps with that aim.

Saturday, March 30, 2019

Fear of the Unknown: a #30DaysOfTesting Testability #T9y Entry

Last one for today. My plan is to finish up the final entries tomorrow and thus finish this "30 Days of Testability" challenge on time :).

Ask your team if there are any areas of the system they fear to change. How could you mitigate that fear?

I've talked a bit about this before and, again, one has to be careful not to tattle too much on one's company. Still, I don't think this would be much of a surprise to anyone, so I think I'm on safe ground here. As I've stated before, our company developed the original version of our product in 2004. At the time, it was written primarily in Perl. That was a skill that was prevalent at the time, and we had some of the best Perl development capability on staff. Over the past fifteen years, that has changed: the prevalence of Perl has diminished, the number of people proficient in Perl on our engineering team has shrunk, and newer technologies are more in demand, so we are staffing for those demands.

What this means is that there are some areas of legacy code that, while they work well, carry a genuine concern that changes will be difficult to make and maintain, along with real uncertainty as to what the effects of those changes might be. To address that, we have taken the approach of modernizing components with newer languages and shifting over to those newer components wherever possible. This allows us to slowly shrink the dependencies on those older modules and lessen their footprint. At some point, we will reach a minimum where we will have to say, "OK, we have cut this down as far as we can and now we need to go this last mile." That is an ongoing process and one that will probably take years to fully complete.

Time in the Trenches: a #30DaysOfTesting Testability #T9y Entry

Ahhh, today's topic (well, this numbered topic, I'm not actually doing it on the designated day) is one that is near and dear to my heart, too. I think that many are the software testers who have also had some tenure doing tech support in either an official or secondary capacity. As this is another entry in the "30 Days of Testability" challenge, feel free to follow along and try out the day's exercises for yourself :).

What could you learn about your application’s testability from being on call for support? This eBook could help you get the most out of taking support calls.

The answer is "a great deal" and this comes from several years of personal experience. Customer Support engineers have a special kind of testing skill if they have been at it any length of time. It's what I refer to as "forensic testing" and many support engineers treat each call like an active crime scene. The best of them tend to be really quick at getting necessary information and if at all possible, getting to the heart of the matter fast and being able to retrace steps necessary to recreate a problem.

That was a skill I found very helpful, not just for finding and confirming customer-reported bugs but also for understanding the various pain points that customers deal with. Getting into the customer's frame of reference and appreciating the challenges they face can quickly help orient our everyday testing efforts. Over time, we get a much clearer view of what matters to them.

If your support engineer isn't involved in an active firefight, ask them if they'd mind you shadowing them for a bit, listening in on their calls or working through an active issue. As one who has been both observer and active support personnel, I can assure you that you will learn a great deal regardless of your testing acumen and experience.

Digging in the Dirt: a #30DaysOfTesting Testability #T9y Entry

Coming up on the home stretch. Thanks to all who are reading these whenever you might be. For your own scorecard, go to "30 Days of Testability" and you can follow along :).

Share an article about application log levels and how they can be applied.

Ah, log files. I love them. I hate them. I really can't live without them. They are often a mess of information, and not a super glamorous topic but definitely worth talking about.


I found Erik Dietrich's article "Logging Levels: What They Are and How They Can Help You" to be interesting. I like his comment that logging can range everywhere from:

"Hey, someone might find this interesting: we just got our fourth user named Bill."

to

"OH NO SOMEONE GET A FIRE EXTINGUISHER SERIOUSLY RIGHT NOW."

As someone with an application that is already logging-heavy, I appreciate the ability not just to have various logging levels, but to have a method to set them dynamically. At the moment (again, I don't want to tattle, but I doubt we're the only ones dealing with this), our logging levels can be changed, but it typically requires a restart of the application for the changes to be picked up by all of the subsystems.

I think it would be cool to make a little interface with a dashboard specific to each log, where I could view a log and then, by selecting a radio button and sending a POST command, set that individual log file to the level I'm interested in. Hmmm, I may have just found a project to dig into ;).
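As a sketch of what the back half of that dashboard might do, here's a tiny shell function that builds the request I'd send. To be clear, the `/admin/logging` endpoint, the payload shape, and the log name are all hypothetical — nothing like this exists in the product today:

```shell
#!/bin/sh
# Hypothetical sketch of a "set this log's level" call. The endpoint,
# payload shape, and log file name are all made up for illustration.

set_log_level() {
  # $1 = log file to adjust, $2 = desired level (error|warn|info|debug)
  payload="{\"log\": \"$1\", \"level\": \"$2\"}"
  echo "POST /admin/logging $payload"
  # In real use this would be something like:
  # curl -s -X POST "https://appliance.example.com/admin/logging" \
  #      -H "Content-Type: application/json" -d "$payload"
}

set_log_level nlw.log debug
```

The nice part of keeping it a single function is that the dashboard's radio buttons just become different arguments to the same call.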

Built for Speed: a #30DaysOfTesting Testability #T9y Entry

Is anyone else excited about the fact that the Stray Cats are touring this year? Just me? Feels really strange to see how old some of my heroes and contemporaries in the music world are. That means that I must be... (shakes head vigorously)... oh no, we're not going down that road right now. NOPE!!!

So, more "30 Days of Testability"? Fantastic!

How long does it take to set up a new test environment and start testing? Could this be faster?

This is a passion project for me, as I have been actively working on ways to speed this up for years. We deploy our software to cloud-based servers (depending on the configuration, that can be one machine or several), and for that purpose I prefer testing with devices that match that environment. In our world, we call these installations and configurations "appliances," so when I talk about an install, an appliance could mean a single machine or multiple machines connected together to form a single instance. The most common is an all-in-one appliance, meaning a single machine with everything on it.

I have a project that I have worked on for a couple of years now that captures all of the things I normally do to set up a system for testing. For any software tester who wants to get some "code credit" in the source repo, this is actually a really good project to start with. Much depends on the system you are working with, but if you are developing and deploying a Linux-based application, there is a huge amount that can be done with native shell scripts. Tasks such as:

- setting up host details for environments
- provisioning them for DNS
- setting up security policies
- configuring networking and ports
- downloading necessary libraries and dependencies
- running standard installation and configuration commands for an application
- setting up primary databases and populating them with starter data

All of these can be readily set up and controlled with standard shell scripts. With a typical appliance setup, there are three stages:

- machine setup and provisioning so it can respond on the network
- basic installation of the application
- post-installation steps that are heavy on configuring the appliance and importing starter and test data
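To make the shape of that concrete, here is a stripped-down sketch of a top-level driver for those three stages. In the real repo each stage is its own script; the function names and bodies here are just illustrative placeholders:

```shell
#!/bin/sh
# Hypothetical appliance-setup driver. Each stage is a function here;
# in practice each would call the actual scripts in the setup repo.

provision() {
  # Stage 1: make the machine respond on the network.
  echo "provision: hosts file, DNS, security policies, networking/ports"
}

install_app() {
  # Stage 2: basic installation of the application.
  echo "install: download libraries and dependencies, run installer"
}

post_install() {
  # Stage 3: configuration-heavy follow-up work.
  echo "post-install: configure appliance, create databases, import starter and test data"
}

provision && install_app && post_install
```

Keeping each stage separate means any one of them can be re-run on its own when only part of an appliance needs rebuilding.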

One of the nicer things at standup or retrospectives is when someone mentions an item that should be part of installation or setup, and I can just say, "Oh, grab the appliance setup repo, I have that taken care of," or "Hey, that's a good idea, let me drop that into the appliance setup repo."

The speed of setup is something I'm a little bit obsessed with. I frequently run timing commands so I can see how long a given area takes to set up and configure, and my evergreen question is "OK, am I doing this in an efficient enough way? How can I shave some time off here or there?" It's also fun when someone asks "hey, how long will it take to get a test environment set up?" and I can give them an answer to within forty-five seconds, give or take ;).
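The timing itself doesn't need anything fancy. A wrapper along these lines (a sketch using plain POSIX `date`; the step name is made up) is the kind of thing I mean by "running timing commands":

```shell
#!/bin/sh
# Wrap any setup step and report how long it took, in whole seconds.
# Good enough for spotting "this stage just got slower" trends.

time_step() {
  label="$1"; shift
  start=$(date +%s)
  "$@"
  rc=$?
  end=$(date +%s)
  echo "[$label] $((end - start))s (exit $rc)"
  return $rc
}

# Example: time a (hypothetical) database-population step.
time_step "db-populate" sleep 1
```

Because it logs a consistent one-line summary per step, it's easy to diff timings between runs and answer "how can I shave some time off here?" with actual numbers.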

Could I make setup faster with containers or other tools? I certainly can, but since we are not at this point deploying containers for customers, that's a question for another time :).