
Monday, August 25, 2025

Building Your Tester Survival Guide with Dawn Haynes: A CAST Live Blog

For the past couple of days, as we have been getting CAST ready to go, I've done a number of errand runs, food stops, and bits of logistical troubleshooting with Dawn Haynes, which has been a common occurrence over my years with CAST. Dawn and I have frequently been elbows deep in dealing with the realities of these conferences. One funny thing we quipped about is that any time we appear at conferences together as speakers, we somehow end up scheduled at the same time (or at least a lot of the time). I thought that was going to be the case this time as well, but NO, the schedule has allowed us to not overlap... for ONCE :)!!!

I first learned about Dawn through her training initiatives long before I was actually a conference attendee or speaker. She appeared as a training course provider in "Software Test and Performance" magazine back in the mid 2000s. Point being, Dawn has been an expert in our field for quite some time, and thus, if Dawn is presenting on a topic, it's a pretty good bet it's worth your time to sit and listen. Dawn is the CEO and resident Testing Yogini at PerfTestPlus, so if you want to get a firsthand experience with her, I suggest doing so if you can. For now, you get me... try to contain your excitement ;).

One key area that Dawn and I are both aligned on and wholeheartedly agree with: as testers, quality professionals, whatever we call ourselves, we are individually responsible for creating our own careers, and if you have been in testing for an extended period, you have probably already had to reinvent yourself at least once or twice. Dawn wants to encourage all testers and quality professionals to actively develop their survival instincts. Does that sound dire? It should... and it shouldn't. Dawn's point is that testing is a flexible field, and what is required one day may be old hat and not needed the next. As testers, we are often required to take on different roles and aspects. During my career, I have transitioned a few times into doing technical support over active day-to-day testing. That's a key part of my active career curation. I've been hired as a tech support engineer, only for them to realize that I have had a long career in software testing, and the next thing I know, I'm back to doing software testing full time. In some cases, I have done both simultaneously, and that has kept me very busy. My point is, those are examples of ways that testing skills can be applied in many different ways and with many different jobs.

Consider automating things, doing DevOps, running performance or security audits, or looking at areas your organization may not be actively working towards and playing around with them. As you learn more and bring more to the table, don't be surprised if you are asked to do more of it, or to leverage those skills to learn about other areas.

Some areas are just not going to be a lot of fun all of the time. Sometimes it will take a while to get the skills you need. You may or may not get the time to do and learn these things, but even if you can just spend 20 minutes a day, those efforts add up. Yes, you will be slow, unsure, and wary at first. You may completely suck at the thing that you want to or need to learn. You may have deficiencies in the areas that you need to skill up on. The good news is that's normal. Everyone goes through this. Even seasoned developers don't know every language or every aspect of the languages they work with. If you are not learning regularly, you will lose ground. I like Dawn's suggestion of a 33/33/33 approach: learn something for work, reach out to people, and train and take care of yourself. By leveraging these three areas, we can be effective over time and have the health and stamina to actually leverage what we are learning. We run the risk of burning ourselves out if we put too much emphasis on one area, so take the time to balance those areas and also allow yourself to absorb your learning. It may take significant time to get good at something, but if you allow yourself the time (not to excess) to absorb what you are learning, odds are you will be better positioned to maintain and even grow those skills.

One of the best skills to develop is being collaborative whenever possible. Being a tester is great, but being able to help get the work done in whatever capacity we can is usually appreciated. A favorite phrase on my end is, "There seems to be a problem here... how can I help?" Honestly, I've never yet been turned down when I've approached my teams with that attitude.

Glad to have the chance to hear Dawn for a change. Well done. I'm next :).   



Tuesday, May 9, 2023

I Guess Nothing Lasts Forever: TESTHEAD At Large

I've recently found myself in an interesting position - after working for the same company for the past decade, I'm now looking for a new job. To be clear, this isn't entirely by choice. However, I have no ill will or hard feelings for the company that has chosen to put me at liberty. They are making choices that make sense for them and I've been here before.  Granted, it's been twenty years since I had to deal with this in such a stark way but this brings me to a realization. For the first time in a decade, I am free to explore and consider whatever career I want. I am literally unsupervised. To borrow from the old joke, yeah, it freaks me out a little bit, too, but the possibilities are endless.

My Motto for today. BTW, this shirt with this motto is available at:
https://www.etsy.com/listing/671632845/i-am-currently-unsupervised-i-know-it


I remember many conversations where people asked me: if I had the chance and the choice to go into exactly the line of work and the area that I wanted, what would it be? For anyone who has followed this blog for any length of time, the answer might seem obvious.

I would like to actively explore and advocate for better accessibility and Inclusive Design, whether that be in the digital or the physical world.

What is interesting to me is that when I started working with my previous company, accessibility was the first major project I was responsible for and worked towards. It developed in me a desire for advocacy and for speaking about the topic for the better part of a decade. However, due to shifting needs, I haven't worked hands-on with an active accessibility project since 2018. I miss being actively engaged with this at a level beyond speaking and writing about it.

Over the years, I've seen firsthand how important it is to design and build software that is accessible to everyone, regardless of their abilities. I've come to realize that accessibility isn't just something that's nice to have - it's a fundamental aspect of good design. It's good business and frankly, it's something every one of us will have to come to grips with at some point in some capacity.

Thus, to that end, I have decided to come back to my old friend, TESTHEAD, and recommit to sharing accessibility ideas, approaches, and methodologies, and hey, maybe dive deeper into some programming aspects and ways to make accessibility tools that I and others might want to use.

I'm excited to explore new opportunities, and if a good one comes along that's not specifically focused on accessibility, I'll certainly not dismiss it. However, this is a chance to put that very specific feeler out there, to see if someone would be interested in having a passionate accessibility advocate join their team, or even work peripherally with them. Regardless, this blog has been quiet for too long outside of live blogging of conferences. I hope you will join me in my journey to change that.

Wednesday, April 3, 2019

QA/QE Supporting DevOps: an #STPCon Live Blog Entry

The QA/QE Role: Supporting DevOps the Smart Way

First off, Melissa Tondi is doing something I fully intend to steal. There are varying thoughts and approaches to having an introductory slide that introduces the speaker. Some don't use one at all. Some are required to do so at certain conferences. Melissa does something that I think is brilliant, funny and useful. Her first slide after the title simply starts with "Why Me?"

In short, Melissa is spelling out not who she is, or what her credentials are, but rather "you are here because you want to learn something. I want to give you the reasons why I think I'm the right person for that job here and now for you." Seriously, if you see me doing this at a future conference, props to Melissa and you saw it here first ;).



One of the avenues that Melissa encourages is the idea of re-tuning the methodologies that already exist. One aspect that I appreciate is Melissa's emphasis on not just QA (Quality Assurance) but also QE (Quality Engineering). They are often seen as being interchangeable, but the fact is they are not. They have distinctive roles and software testers frequently traverse both disciplines. The title is not as important as what is being done. Additionally, a key part of this is the ability to balance both technical acumen and user advocacy. In short, push yourself closer to Quality Engineering so that you can be an influence on the building of the software, even before the software gets built.

Introducing DevOps to an organization can be a wild ride, since so many people don't even know what DevOps is. Melissa is using Anne Hungate's definition of "the collapse and automation of the software delivery supply chain". For many, that starts and ends with building the code, testing the code, and deploying the code. The dream is a push button, where we press the button, everything is magic, and the software rolls out without any human interference. Sounds great, and believe me, the closer we get to that, the better. We will step away from the fact that certain people won't be able to do that for practical business reasons, but having the ability in all of the key areas is still of value.

There are some unique requirements in some countries and companies to have a title of "Engineer". That's a term that has a certain level of rigor associated with it and it's understandable that some would shy away from using an Engineering extension where it's not formally warranted. For this talk, let's set that aside and not consider QE as an official title but more as a mindset and a touch point for organizing principles. In short, you can be a QE in practice while still holding a QA title. Engineering presupposes that we are developing processes and implementing approaches to improve and refine work and systems.

One area that is definitely in the spotlight is test automation. A key point is that test automation does not make humans dispensable or expendable. It makes humans more efficient and able to focus on the important things. Automation helps remove busywork, and that's a great place to apply it. Additionally, it's possible to automate stuff that nets little other than making screens flash and look pretty. Automating everything doesn't necessarily mean that we are automating important or intelligent items. Automation should get rid of the busy work so that testers can use their most important attribute (their brain) on the most important problems. Additionally, it's wise to get away from the "automate everything" mindset so that we are not building a monolithic monster whose sheer weight and mass makes it unwieldy. By parallelizing or parameterizing tests, we can organize test scripts and test cases to be run when it's actually important to run them. In short, maybe it makes more sense to have "multiple runs" come to a place of "multiple dones" rather than "run everything just because".
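As a side note from me (this is my own minimal sketch, not something from Melissa's talk), here is one way to organize those "multiple runs" with pytest markers, so a quick smoke pass can run on every commit while a fuller, parameterized regression set runs on its own cadence. The test names, prices, and marker names are all hypothetical.

# test_checkout.py - hypothetical tests, tagged so suites run when it matters,
# rather than "everything, every time".
import pytest

@pytest.mark.smoke
def test_cart_total_updates():
    # fast, high-value check suited to running on every commit
    assert 2 * 19.99 == pytest.approx(39.98)

@pytest.mark.regression
@pytest.mark.parametrize("quantity,expected", [(1, 19.99), (3, 59.97), (10, 199.90)])
def test_cart_total_for_quantities(quantity, expected):
    # broader, parameterized coverage that can run nightly or pre-release
    assert quantity * 19.99 == pytest.approx(expected)

With the markers registered in pytest.ini, "pytest -m smoke" can run on every commit and "pytest -m regression" nightly; the point is the organizing principle of "multiple runs, multiple dones", not the specific tool.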

Use automation to help define what is shippable. There shouldn't be an after-the-fact focus on automating tests if they are actually important. By focusing on automation earlier in the process, you get some additional valuable add-ons, too. You limit the accrual of technical debt. You shake out issues with unscripted testing first. More to the point, you can address testability issues sooner (yes, I've mentioned this multiple times during this conference; I completed "30 Days of Testability" and now I have it clearly on the brain). Testability should be addressed early and it should be addressed often. The more testable your application, the more Automizeable the application will be (oh, Alan Richardson, I have so fallen in love with that word ;) (LOL!)).


Thursday, August 6, 2015

Do You Have To Be...

I'm sure you all thought you were through with hearing me post comments and thoughts from CAST, seeing as the conference finished last night, and all that's left today is to clean up the last bits and go home. While the conference is over, the reverberations from sessions should continue for some time to come. I woke up this morning, checked my Twitter, and saw that I appeared in an interesting side discussion, and it's that side discussion, from a session I was not actually in, that prompted this particular post.

I've had the pleasure of getting together with and talking over dinner and at several other events with Jeff Morgan (@chzy) and Henrik Andersson (@henkeandersson) over the past few years. I have worked through Jeff's book "Cucumber and Cheese" and found it to be both great fun, and a great way to construct an automation framework. I have used Henrik's "Context-Driven Robots" module as a core component in teaching about context to new testers in the SummerQAmp curriculum. Both of these gentlemen took part in a spirited debate about whether or not software testers should code. Again, I could not attend this session because I was facilitating and co-presenting a session with Albert Gareev (@agareev) about how to design and how to test for Accessibility. Still, much was said and many tweets resulted from this discussion. I replied to some, retweeted others, and it was in this process and in light of one exchange that things got interesting :):


There's more to this conversation, and if you would like to see the rest,  I encourage looking at the other tweets, but the last message is what prompted this post today. I look forward to others who will post on the topic, too.

There are three aspects that I think are coming into play with this discussion:

1. Do we want to have a separate role and focus of energy within organizations for programmers and testers?

2. Do we want to see more testers become programmers in their own right?

3. Do we want to see programmers develop the skills to become better software testers?

Note: this may not be where the original conversation meant to go, and that's totally OK. These are my thoughts on this, based on talks we had at CAST (I'm specifically thinking of Jessie Alford's talk about Pivotal and Cloud Foundry and how they code and test).

Let's start with the first. Several organizations are either considering, or have already decided, that there will not be a separate role for software testing that is distinct from programming (role as in dedicated teams or members of staff that do that particular job). Jessie Alford presented a model that Cloud Foundry uses where programmers rotate into a role for a time as explorers, and then they go back to programming. The focus and activities are the same. There is definitely software testing happening, and it is testing that those of us who consider ourselves testers would recognize as what we want testing to be. It is not, however, being performed by strictly manual testers. It is being performed by the programmers on the team. It's an interesting model, and from Jessie's talk, it's working quite well for them. Can a programmer be an excellent tester? If Jessie's experience is to be believed, the answer is "yes". This certainly indicates that the third point I mentioned above is not only possible, it is happening.

There are arguments for and against a dedicated test team within an organization. My company has both roles and dedicated staff for them. We have dedicated programmers and dedicated explorers. We report to the same person, the VP of Engineering. At this time, the test team is integrated into the programming team, but we have a clear distinction between programmer and tester. That doesn't mean we don't program, especially in the area of automation. Each of us on the team has the skills and the experience to create and edit the automated tests that we use. I have the experience to be the release manager for the company, in addition to being a software tester. Thus, I certainly do feel that having programming skills can be a solid benefit to a tester. At the same time, I will not pretend that I have the same level of programming skill or experience as our production programmers do. I'm also not being asked to provide that. We have a dedicated person whose sole responsibility is to create automated tests. My teammates and I supplement their efforts, but generally, it's our efforts as explorers (to borrow from Jessie once again) that are desired most by our engineering team, at least at this time.

Michael Bolton makes a clear point in the exchange above and in other tweets that were part of the conversation. Do I have to be an expert photographer to edit National Geographic? Do I have to be an expert healer to be a pathologist? Do I have to be an expert mechanic to be a race car driver? Note, Michael didn't actually say that last one, I added it for the fun of it. The point is, we do not have to be any of those things to do the things that we excel at... but having a good knowledge of each area will certainly be helpful, as it will help us to understand better the domains we work in, and will help inform what we do. 

As testers, being able to understand code and what makes it work can give us a tremendous boost into ways we can test and ideas we can develop. At the same time, I've had experiences where the code I have written, and tested, has been enhanced by an external tester who can think of things I did not consider. I value that software tester's help and insights. I also realize that that software tester can be a programmer on my team. Exploration skills and mastery of testing approaches and ideas are important. If a team can do that by having programmers take on the role of explorers for a time, or reciprocate for one another in that capacity, it may prove to be a very workable solution. In other teams, it may make sense to have a dedicated testing group do that.

As was made clear in the debate, the processes of programming and testing are both vital. Both need to be done. There are many ways to accomplish those goals, and varying approaches will be used. Personally, I know enough programming to be dangerous, and enough testing to help mitigate danger. I'm not personally good enough at both to mitigate my own danger, but I'm working on it. My personal opinion as it relates to the debate is that no, software testers do not have to be production-level programmers in addition to being excellent software testers, but if you would like to get to know more about how systems work and what influences those systems to work, and if you have that base level of curiosity that all good testers have (in my opinion), adding some programming to your personal portfolio would certainly not be a detriment. It's possible that you might learn something and decide that it's not for you. Again, that is OK, but don't be surprised if the effort to learn starts to give you new ways to think about the testing you are already doing, and presumably quite good at performing. In my opinion, that seems to be a good trade :).

Saturday, August 1, 2015

Bringing Energy Back to Testing - Live from #TestRetreat

Picture the scenario. A tester has been in the game for a number of years. They know the details, they know the product, they've done countless stories, and at some point, the testing becomes rote and paint-by-numbers. The thrill is gone. The excitement level has left the building. Do you see yourself in this scenario? If not, fabulous, this talk is not for you ;).

If, however, you have been in this situation, or currently see yourself in this situation, then Phil McNeely's session on "Bringing Energy Back to Testing" is for you. I know how this feels. I've been there and done that. At a certain point, I lost the joy and fun that testing used to provide, and at some point, I was just going through the motions. It wasn't intentional on my part; I didn't intend to go on autopilot, but I did find that there have been times where I just did what I had to do, and often what I really didn't want to do. Often the reasons came down to doing the same thing over and over. Sometimes people just burn out. For many, their passion is somewhere else, like snowboarding, or knitting, or writing a novel. For those people, it helps a lot to accept that their true passions lie elsewhere and to encourage them to invest in those areas to the point where they feel they are getting satisfaction there, with the follow-on effect that they can then focus on what they do at work.

In my case, I often found myself overcommitted to too many good things. It's not that I was necessarily burned out on my work, but that I was committed to my day job, and to writing, and to teaching, and to community engagement. In short, I found I was spreading myself too thin in too many areas. At those times, it became apparent that I needed to "give myself a haircut" across the board. When a tester who was at one time productive seems to be less focused or engaged, it might be worth making a lunch date and just getting to see what is going on in that tester's life at that moment, both inside and outside of work. It's possible you might discover that they have recently become the PTA president at their school, or they have taken on an important but time-consuming position at their church, or there may be an illness or other family situation that is drawing upon their energy. I know when I have too many things happening in my life, every area suffers, and yes, that includes work. By realizing that energy commitments have changed, we can help that person (or ourselves) consider what options we have to make modifications.

I wrote about this a few days ago over at the ITKE Uncharted Waters blog, but something I am using to help me stave off this challenge is an Objective Journal. By having me consider what I am working on, and questioning it on a regular basis, I can keep myself engaged with the problem, rather than waiting for something to complete and going off and working on something else that's not on target or, sometimes, not even remotely productive. The trick for me with the Objective Journal is that it allows me to see small, everyday wins. By seeing those wins, I stay motivated and excited.

Ultimately, regardless of how engaged or not engaged we are, at the end of the day we need to realize that WE are the ones that need to develop our motivation. We can encourage and offer to help others achieve motivation, but externally motivated people tend to not stay motivated for too long. Internally motivated people can stoke that fire indefinitely, so work to encourage that spark in others, but more importantly, help them develop and maintain that spark in themselves.

What is the Career Path for a Tester? - Live from #TestRetreat

Carl Shaulis asked a simple question, or what seemed to be a simple question... what are some of the skills needed at various points of a career for a software tester?

There are many variations of software testing and approaches to software testing, but for many of us, it seems that there is a specific path. The first round is what we as a group called "the muscle", i.e. classic manual software testing. The starting point for this is, to borrow from Jon Bach, a need for curiosity. Jon has said that he can teach people technical stuff as needed, but he can't teach people how to be curious (I will come back and get the quote and post where he says this, but for now, forgive the live blogger and lack of immediate attribution ;) ).

Another aspect we discussed is that the skills themselves are not necessarily different at higher job levels. Many of the skills of a Level 1 tester are still applied at higher levels. Senior testers don't necessarily have more specialized knowledge, but they do have experience applying it over several years. What is expected at any level is the ability to evaluate a product, to look at the product with an eye toward a workflow and determine if something is out of place or not working in the way that people expect. Critical thinking skills are valuable at any level of the job. The differentiators tend to not be the skills, but the level of influence within the organization the individual has. Junior testers and senior testers are often differentiated not so much by their skill level, but by experience and leadership, as well as overall influence.

One obvious question to ask is "does a tester need to learn how to code and make coding part of their job if they want to advance?" My answer is that, if you want to be a toolsmith and work on test tooling, then yes, programming is essential. If you don't aim to be a toolsmith, or if you are not interested in focusing on automation or tooling, then programming may not be essential. Having said that, I think that many more questions are capable of being asked and evaluated when a tester can look at the underlying code and understand what is happening, even if just in a general sense.

Much of this discussion comes into play so that organizations can have a shorthand when it comes to job titles, compensation, and the ability to allocate people within an organization. I've primarily worked in smaller companies the past fifteen years, but during the first ten years of my career, I worked with Cisco Systems, which went from a smallish 300-person company when I joined it in 1991 to a 50,000-plus person company when I left in 2001. Early in the life of Cisco, job titles were less important than they were later. When dealing with a company that is much larger, titles and the shapes of the "cogs" start to matter. Within smaller teams, generalists and people able to cover lots of different areas are much more important, and there's a fluidity in the work that we do. Career path is less relevant in a smaller company, and the reward aspects are different. In many ways, in a smaller company, you are not rewarded with titles or advancement, you are rewarded with influence (and in some cases, with money or equity).

As a closer, it was suggested that we check out the Kitchen Soap article "On Being a Senior Engineer".  There is a lot of meat in this one so I might do a follow-on post just on this article :). ETA: Coming back to this two and a half years later, I was asked if I would consider adding "7 Reasons You Can't Get a Junior Web Developer Job" as a follow-on read. Granted, a little outside of the software testing space, but many of the same issues also fit this discussion, so yes, considered and added :).

Saturday, June 6, 2015

Learning With Repetitive Action

Yesterday I posted about the idea of developing a "mise en place" system for the things that you do, and I also mentioned that this is something we can do with any number of tasks and goals. One of the things that I have always wanted to do, admired, but never seemed to get any closer to doing, was Native American beadwork. This, I know, will probably come across as a strange subject, but it's something I've admired and wanted to learn about and practice for years. Like so many other things, though, the barrier to entry and to regular practice is real.

Below is a picture of what, on the surface, may look like a simple project. As far as bead working goes, this is relatively simple. This is also my "mise en place" for when I do this type of beadwork.



It's a "roach stick" or a "roach pin". For those who dance in Native American Pow Wows, the common head-dress is the "porcupine guard hair roach". These headdresses are held on by a lock of braided hair or through a base material that can be tied to the wearers head (me being bald, I opt for the latter). The roach pin is what is used to hold the headdress in place, either passing through the braid, or acting as an anchor for the shoestring used for tying. These pins are often beaded with rich colors and patterns.

The style of beading used here is called "three-drop gourd stitch". Unlike applique, loom, or lane stitch techniques that allow you to string lots of beads together and put them down at one time, gourd stitch is done one bead at a time. This is because the beads are literally woven into a spiral net, and each pass of the thread goes through an adjacent bead to make the pattern. Because of this, you have to plan your design both stylistically and mathematically. To have a design that truly repeats around the piece, you need to make sure that the number of beads wrapping around it is a multiple of three, and even better, also a multiple of six. This allows the pattern design to match up.

I use what is called "bead graphing", where I take a piece of paper with the bead spirals arranged in the pattern I wish to use (typically that means a clockwise pattern with the beads stacking in a right-to-left diagonal going up). On these papers, I make a design, and then I count the "circuits" needed to make the design, meaning the number of times around the object in a given row. As an example, this roach stick needs 24 beads to make a straight line around the project. Divided by three, that means I have eight beads in any given circuit. Divided by six, that means I have four regions in which I can create a repeating design. With these values, I can chart and decide what I want to make.
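For fun, here's a quick back-of-the-envelope sketch of that arithmetic (my own illustration; the function and its names aren't part of any formal bead graphing method): given the bead count around a piece, it works out beads per circuit and repeat regions.

# Rough planning arithmetic for a three-drop gourd stitch piece.
def gourd_stitch_plan(beads_around: int) -> tuple[int, int]:
    """Return (beads per circuit, repeat regions) for one full pass around the object."""
    beads_per_circuit = beads_around // 3   # three-drop stitch works in thirds
    repeat_regions = beads_around // 6      # multiples of six let patterns match up
    return beads_per_circuit, repeat_regions

# The roach stick above: 24 beads around -> 8 per circuit, 4 repeat regions
print(gourd_stitch_plan(24))  # (8, 4)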



I'm working with seed beads, which in this case are 11/0 cuts. 11/0 refers to the size of the beads: eleven strung beads will cover one inch. For every square inch, there will be around 110 beads. Multiply that by the total area of a project and, even for small projects, that's a lot of beads. These beads can go as small as 18/0 (eighteen in a one-inch series, or 324 beads per square inch).

The other tools needed for this are very thin but strong thread, thin needles, and clumps of beeswax to wax the thread (which helps it slip through the glass beads and not fray or, worse, break).

On a good day, if I'm in the groove, it takes me about an hour to do a single inch of beading. On days when things don't quite work out right, it may take me a lot longer to cover the same inch. Additionally, the pattern ideas and designs I work with now are fairly rudimentary. I stick to bands of color, zig-zags, or hexagons, but I would like to expand to do more unique shapes like feathers, flags, flowers, and birds. The challenge, of course, is that I won't actually know how to do them until I sit down and painstakingly, bead by bead, actually construct them.

This is where many aspiring bead-workers falter, get frustrated, and stop altogether. I know, I've reached this point several times over the years. The pictured roach pin is currently the largest piece I've ever made. I've planned to make four of them, each different, each using motifs that I want to get better at. If you look just behind my beading palette, you will see a flat fan with a white handle. After I finish the roach sticks, that will be my next project. I already know it will be huge. It may take me 40-plus hours to do it. What's been helpful is that I've gotten used to using the tools and thinking about the process, and actually losing myself in the process, and discovering ways that it can be done more quickly, more efficiently, and more effectively.

I also allow myself the ability to let my mind dictate what I can and will do in a given day. Right now, it's a simple goal: one hour each day. Some days, I put in a lot more. Some days, I literally just practice a pattern and then cut the threads loose to try another approach. As I've talked to other bead workers, they've all commiserated with me and said that they have been there and understand. We tend to be in awe of those who can do these things effortlessly, who seem to be so much better than we are. In truth, they are better than we are, but not because they have unerring instincts or some supernatural talent. Instead, it's because they have built a skill, honed over perhaps hundreds or thousands of hours of practice, often to meet needs they have, and frequently with false starts, frustrations, and cutting the whole thing apart to start again.

This is decidedly different than most of my posts, but the parallels to software testing are abundant. We can talk a mean game about the techniques and tools we use, and the ways in which we use them, but to become truly good at them, we need to put the time in, hone our craft, practice, get frustrated, throw in the towel and walk away, and then come back and start it all again. Can it be tedious? Sure. Is it always fun? Not remotely! Will we get better if we persist? Very likely. Will we wow people right out of the gate? Again, most likely not, but that's beside the point. The best we can do is to work and practice and make things that help us. As we get better at that, others will possibly notice our work, too, and over time, consider us the experts. In their minds, we may well be, but we'll know the truth, won't we ;)?

Friday, June 5, 2015

The Value of Mise en Place

I have to give credit for this idea to a number of sources, as they have all come together in the past few days and weeks to stand as a reminder of something that I think we all do without realizing it, and actually utilizing the power of this idea can be profound.

First off, what in the world is "mise en place"? It's a term that comes from the culinary world. Mise en place is French for "putting in place", or to set up for work. Professional chefs use this approach to organize the ingredients they will use during a regular workday or shift. I have a friend who has trained many years and has turned into an amazing chef, and I've witnessed him doing this. He's a whirlwind of motion, but that motion is very close-quartered. You might think that he is chaotic or frantic, but if you really pay attention, his movements are actually quite sparse, and all that he needs is right where he needs it, when he needs it. I asked him if this was something that came naturally to him, and he said "not on your life! It's taken me years to get this down, but because I do it every day, and because I do my best to stay in it every day, it helps me tremendously."

The second example of mise en place I witness on a regular basis is with my daughter and her art skills. She has spent the better part of the past four years dedicating several hours each day drawing, often late into the evening. She has a sprawling setup that, again, looks chaotic and messy on the surface. If you were to sit down with her, though, and see what she actually does, she gathers the tools she needs, and from the time she puts herself into "go" mode, up to the point where she either completes her project or chooses to take a break, it seems as though she barely moves. She's gotten her system down so well that I honestly could not, from her body language, tell you what she is doing. I've told her I'd really love to record her at 10x speed just to see if I can comprehend how she puts together her work. For her, it's automatic, but it's automatic because she has spent close to half a decade polishing her skills.

Lately, I've been practicing the art of Native American beading, specifically items that use gourd stitch (a method of wrapping cylindrical items with beads and a net of thread passing through them). This is one of those processes that, try as hard as I might, I can't cram or speed up. Not without putting in time and practice. Experienced bead workers are much faster than I am, but that's OK. The process teaches me patience. It's "medicine" in the Native American tradition, that of a rhythmic task done over and over, in some cases tens of thousands of times for a large enough item. Through this process, I too am discovering how to set up my environment to allow me a minimum of movement, an efficiency of motion, and the option to let my mind wander and think. In the process, I wring out fresh efficiencies, make new discoveries, and get that much better and faster each day I practice.

As a software tester, I know the value of practice, but sometimes I lose sight of the tools that I should have at my beck and call. While testing should be free and unencumbered, there is no question that there are a few tools that can be immensely valuable. As such, I've realized that I also have a small collection of mise en place items that I use regularly. What are they?

- My Test Heuristics Cheat Sheet Coffee Cup (just a glance and an idea can be formed)
- A mindmap of James Bach's Heuristic Test Strategy Model I made a few years ago
- A handful of rapid access browser tools (Firebug, FireEyes, WAVE, Color Contrast Analyzer)
- A nicely appointed command line environment (screen, tmux, vim extensions, etc.)
- The Pomodairo app (used to keep me in the zone for a set period of time, but I can control just how much)
- My graduated notes system (Stickies, Notes, Socialtext, Blog) that allows me to see which of the items I learn will really stand the test of time.

I haven't included coding or testing tools, but if you catch me on a given day, those will include some kind of Selenium environment (either my company's or my own sandboxes, to get used to using other bindings), JMeter, Metasploit, Kali Linux, and a few other items I'll play around with and, as time goes on, aim to add to my full-time mise en place.

A suggestion that I've found very helpful is attributed to Avdi Grimm (who may have borrowed it from someone else, but he's the one I heard say it). There comes a time when you realize that there is far too much out there to learn proficiently and effectively to be good at everything. By necessity, we have to pick and choose, and our actions set all that in motion. We get good at what we put our time into, and sifting through the goals that are nice, the goals that are important, and the goals that are essential is necessary work. Determining the tools that will help us get there is also necessary. It's better to be good at a handful of things we use often than to spend large amounts of time learning esoteric things we will use very rarely. Of course, growth comes from stretching into areas we don't know, but finding the core areas that are essential, and working hard to get good in those areas, whatever they may be, makes the journey much more pleasant, if not truly any easier.

Thursday, April 2, 2015

Delivering The Goods: A Live Blog from #STPCON, Spring 2015



Two days goes by very fast when you are live-blogging each session. It's already Thursday, and at least for me, the conference will end at 5:00 p.m. today, followed by a return to the airport and a flight back home. Much gets packed into these couple of days, and many of the more interesting conversations we have had have been follow-ups outside of the sessions, including a fun discussion that happened during dinner with a number of the participants (sorry, no live blog of that, unless you count the tweet where I am lamenting a comparison of testers to hipsters ;) ). I'll include a couple of after hour shots just to show that it's not all work and conferring at these things:


---

Today I am going to try an experiment. I have a good idea of the sessions I want to attend, and this way, I can give you an idea of what I will be covering. Again, some of these may matter to you, some may not. At least this way, at the end of each of these sessions, you will know if you want to tune in to see what I say (and this way I can give fair warning to everyone that I will do my best to keep my shotgun typing to a minimum). I should also say thank you (I think ;) ) to those who ran with the mini-meme of my comment yesterday with hashtag "TOO LOUD" (LOL!).

---

9:00 am - 10:00 am
KEYNOTE: THINKING FAST AND SLOW – FOR TESTERS’ EVERYDAY LIFE
Joseph Ours



Joseph based his talk on the Daniel Kahneman book "Thinking, Fast and Slow". The premise of the book is that we have two fundamental thinking systems. The first is the "fast" one, where we can do things rapidly and with little need for extended thought. It's instinctive. By contrast, there's another thinking approach that requires us to slow down and work through the steps. That is our non-instinctual thinking; it requires deeper thought and more time. Both of these approaches are necessary, but there's a cost to switch between the two. It helps to illustrate how making that jump can lose us time, productivity, and focus. I appreciate this acutely, because I struggle with context-switching in my own reality.

One of the tools I use if I have to deal with an interruption is that I ask myself if I'm willing to lose four hours to take care of it. Does that sound extreme? Maybe, but it helps me really appreciate what happens when I am willing to jump out of flow. By scheduling things in four-hour blocks, or even two-hour blocks, I can make sure that I don't lose more time than I intend to. Even good and positive interruptions can kill productivity because of this context switch (jumping out of testing to go sit in a meeting for a story kickoff). Sure, the meeting may have only been fifteen minutes, but getting back into my testing flow might take forty-five minutes or more to reach that optimal focus again.

Joseph used a few examples to illustrate the times when certain things were likely to happen or be more likely to be effective (I've played with this quite a bit over the years, so I'll chime in with my agreement or disagreement).

• When is the best time to convince someone to change their mind?

This was an exercise where we saw words that represented colors, and we needed to call out the words based on a selected set of rules. When there was just one color to substitute with a different word, it was easy to follow along. When there were more words to substitute, it went much slower and it was harder to make the substitution. In this, we found our natural resistance to changing our mind about what we are perceiving. The reason we did better than other groups who tested this was likely that we ran the exercise in the morning after breakfast, rather than later in the day when people are a little fatigued. Meal breaks tend to allow us to change our opinions or minds because blood sugar gives us energy to consider other options. If we are low on blood sugar, the odds of persuading someone to a different view are much lower.

• How do you optimize your tasks and schedule?

Is there a best time for creativity? I know a bit about this, as I've written on it before, so spoilers: there are such times, but they vary from person to person. Generally speaking, there are two waves that people ride throughout the day, and the way that we see things is dependent on these waves. I've found for myself that the thorniest problems and the writing I like to do get done early in the morning (read this as early early, like 4 or 5 am) and around 2:00 p.m. I have always taken this to mean that these are my most creative times... and actually, that's not quite accurate. What I am actually doing is using my most focused and critical thinking time to accomplish creative tasks. That's not the same thing as when I am actually able to "be creative". What I am likely doing is putting into output the processing I've done on the creative ideas I've considered. When did I consider those ideas? Probably at the times when my critical thinking is at a low. I've often said this is the time I do my busywork because I can't really be creative. Ironically, the "busy work time" is likely when I start to form creative ideas, but I don't have that "oh, wow, this is great, strike now" moment until those critical thinking peaks. What's cool is that these ideas do make sense. By chunking time around tasks that are optimized for critical thinking peaks and scheduling busy work for down periods, I'm making some room for creative thought.

• Does silence work for or against you?

Sometimes when we are silent when people speak, we may create a tension that causes people to react in a variety of different ways. I offered to Joseph that silence as a received communication from a listener back to me tends to make me talk more. This can be good, or it can cause me to give away more than I intend to. The challenge is that silence doesn't necessarily mean that people disagree, are mad, or are aloof. They may just genuinely be thinking, withholding comment, or perhaps they are showing they don't have an opinion. The key is that silence is a tool, and sometimes it can work in unique and interesting ways. As a recipient, it lets you reflect. As a speaker, it can draw people out. The trick is to be willing to use it, in both directions.

---

10:15 am - 11:15 am
RISK VS COST: UNDERSTANDING AND APPLYING A RISK BASED TEST MODEL
Jeff Porter

In an ideal world, we have plenty of time, plenty of people, and plenty of system resources, and assisting tools to do everything we need to do. Problem is, there's no such thing as that ideal environment, especially today. We have pressures to release more often, sometimes daily. While Agile methodologies encourage us to slice super thin, the fact is, we still have the same pressures and realities. Instead of shipping a major release once or twice a year, we ship a feature or a fix each day. The time needs are still the same, and the fact is, there is not enough time, money, system resources or people to do everything comprehensively, at least not in a way that would be economically feasible.



Since we can't guarantee completeness in any of these categories, there are genuine risks to releasing anything. We operate at a distinct disadvantage if we do not acknowledge and understand this. As software testers, we may or may not be the ones to do a risk assessment, but we absolutely need to be part of the process, and we need to be asking questions about the risks of any given project. Once we have identified what the risks are, we can prioritize them, and from that, we can start considering how to address or mitigate them.

Scope of a project will define risk. User base will affect risk. Time to market is a specific risk. User sentiment may become a risk. Comparable products behaving in a fundamentally different manner than what we believe our product should do is also a risk. We can mess this up royally if we are not careful.

In the real world, complete and comprehensive testing is not possible for any product. That means that you will always leave things untested. It's inevitable. By definition, there's a risk you will miss something important, and leave yourself open to the Joe Strazzere Admonition ("Perhaps they should have tested that more!").

Test plans can be used effectively, not as a laundry list of what we will do, but as a definition and declaration of our risks, with prescriptive ideas as to how we will test to mitigate those risks. With the push to remove wasteful documentation, I think this would be very helpful. Lists of test cases that may or may not be run aren't very helpful, but developing charters based on identified risks? That's useful and not wasteful documentation. In addition, have conversations with the programmers and fellow testers. Get to understand their challenges and the areas that are causing them consternation. It's a good bet that if they tell you a particular area has been giving them trouble, or has taken more time than they expected, that's a good indication that you have a risk area to test.
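To make that concrete, here is my own minimal sketch (not from Jeff's slides) of what "charters derived from risks" could look like as a lightweight structure; the risk entries and field names are invented for illustration.

# Hypothetical example: a small risk register that generates exploratory
# charters, instead of a laundry list of test cases.
risks = [
    {"area": "checkout", "concern": "payment gateway timeouts", "priority": 1},
    {"area": "reporting", "concern": "slow queries on large accounts", "priority": 2},
]

def charters_from_risks(risk_list):
    """Turn prioritized risks into charter statements for session-based testing."""
    for risk in sorted(risk_list, key=lambda r: r["priority"]):
        yield (f"Explore {risk['area']} with attention to {risk['concern']}, "
               f"reporting anything that would affect the release decision.")

for charter in charters_from_risks(risks):
    print(charter)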

It's tempting to think that we can automate much of these interactions, but the risk assessment, mitigation, analysis and game plan development is all necessary work that we need to do before we write line one of automation. All of those are critical, sapient tasks, and critical thinking, sapient testers are valuable in this process, and if we leverage the opportunities, we can make ourselves indispensable.

---

11:30 am - 12:30 pm
PERFORMANCE TESTING IN AGILE CONTEXTS
Eric Proegler

The other title for this talk is "Early Performance Testing", and many of the ideas Eric is advocating are about looking for ways to front-load performance testing rather than waiting until the end and then worrying about optimization and rework. This makes a lot of sense when we consider that getting performance numbers early in development means we can get real numbers and real interactions. It's a great theory, but of course the challenge is in "making it realistic". Development environments are by their very nature not as complete or robust as a production environment. In most cases, the closest we can come is an artificial simulation and a controlled experiment. It's not a real-life representation, but it can still inform us and give us ideas as to what we can and should be doing.



One of the valuable systems we use in our testing is a duplicate of our production environment. In our case, when I say production, what I really mean is a duplicate of our staging server. Staging *is* production for my engineering team, as it is the environment that we do our work on, and anything and everything that matters to us in our day to day efforts resides on staging. It utilizes a lot of the things that our actual production environment uses (database replication, HSA, master slave dispatching, etc.) but it's not actually production, nor does it have the same level of capacity, customers and, most important, customer data.

Having this staging server as a production basis, we can replicate that machine and, with the users, data and parameters as set, we can experiment against it. Will it tell us performance characteristics for our main production server? No, but it will tell us how our performance improves or degrades around our own customer environment. In this case, we can still learn a lot. By developing performance tests against this duplicate staging server, we can get snapshots and indications of problem areas we might face in our day to day exercising of our system. What we learn there can help inform changes our production environment may need.

Production environments have much higher needs, and replicating performance, scrubbing data, setting up a matching environment, and using that to run regular tests might be cost prohibitive, so the ability to work in the small and get a representative look can act as an acceptable stand-in. If our production system is meant to run on 8 parallel servers and handle 1000 concurrent users, we may not be able to replicate that, but creating an environment with one server, determining if we can run 125 concurrent connections, and observing the associated transactions can provide a representative value. We may not learn what the top end can be, but we can certainly determine if problems occur below the single-server peak. If we discover issues here, it's a good bet production will likewise suffer at its relative percentage of interactions.
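To make the scaling arithmetic explicit, here is a tiny sketch of my own, using only the example numbers above (8 production servers, 1000 concurrent users, one test server):

# Estimate the per-server share of production load to drive in a small environment.
def scaled_concurrency(prod_users: int, prod_servers: int, test_servers: int = 1) -> int:
    return prod_users * test_servers // prod_servers

print(scaled_concurrency(1000, 8))  # 125 concurrent connections on a single server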

How about performance testing in CI? Can it be done? It's possible, but there are also challenges. In my own environment, were we to do performance tests in our CI arrangement, what we would really be doing is testing the parallel virtualized servers. It's not a terrible metric, but I'd be leery of assigning authoritative numbers, since the actual performance of the virtualized devices cannot be guaranteed. In this case, we can use trending to see if we either get wild swings, or if we get consistent numbers with occasional jumps and bounces.
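Here's a minimal sketch of that trending idea (mine, with made-up numbers and thresholds): compare each CI run against a rolling baseline and flag only the wild swings, rather than treating any single number as authoritative.

# Hypothetical trend check for CI performance numbers on virtualized agents.
from statistics import mean

def is_wild_swing(recent_timings_ms, new_timing_ms, tolerance=0.5):
    """Return True if the new timing is more than 50% away from the recent average."""
    baseline = mean(recent_timings_ms)
    return abs(new_timing_ms - baseline) > tolerance * baseline

history = [510, 495, 530, 505]        # last few CI runs, in milliseconds
print(is_wild_swing(history, 540))    # False: normal bounce
print(is_wild_swing(history, 1200))   # True: worth investigating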

Also, we can do performance tests that don't require hard numbers at all. We can use a stop watch, watch the screens render, and use our gut intuitions as to whether or not the system is "zippy" or "sluggish". They are not quantitative values, but they have a value, and we should leverage our own senses to encourage further explorations.

The key takeaway is that there is a lot we can do, and there are a lot of options available, so that we can make changes and calibrate our interactions and the areas we are interested in. We may not be able to be as extensive as we might be with a fully finished and prepped performance clone, but there's plenty we can do to inform our programmers as to how the system behaves under pressure.

---

1:15 pm - 1:45 pm
KEYNOTE: THE MEASURES OF QUALITY
Brad Johnson

One of the biggest challenges we all face in the world of testing is that quality is wholly subjective. There are things that some people care about passionately that are far less relevant to others. The qualitative aspects are not readily quantifiable, regardless of how hard we want to try. Having said that, there are some areas where counts, values, and numbers are relevant. To borrow from my talk yesterday, I can determine if an element exists or if it doesn't. I can determine the load time of a page. "Fast" or "slow" are entirely subjective, but if I can determine that it takes 54 milliseconds to load an element on a page, as an average over 50 loads, that does give me a number. The next question, of course, is "is that good enough?" It may be if it's a small page with only a few elements. If there are several elements on the page that each take the same amount of time to load serially, that may prove to be "fast" or "slow".
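As an illustration of turning that into a reportable number, here is a small sketch of my own; load_element() is a stand-in for whatever measurement you would actually take, and the timings are simulated.

# Average the load time of one element over repeated loads (hypothetical example).
import random
import time

def load_element() -> float:
    """Placeholder measurement; returns elapsed milliseconds for one load."""
    start = time.perf_counter()
    time.sleep(random.uniform(0.04, 0.07))   # simulate a 40-70 ms load
    return (time.perf_counter() - start) * 1000

samples = [load_element() for _ in range(50)]
print(f"average over {len(samples)} loads: {sum(samples) / len(samples):.1f} ms")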



Metrics are a big deal when it comes to financials. We care about numbers when we want to know how much stuff costs, how much we are earning, and, to borrow an oft-used phrase, "at the end of the day, does the Excel line up?" If it doesn't, regardless of how good our product is, it won't be around long. Much as we want to believe that metrics aren't relevant, sadly they are, in the correct context.

Testing is a cost. Make no mistake about it. We don't make money for the company. We can hedge against losing money, but as testers, unless we are selling testing services, testing is a cost center, not a revenue center. To the financial people, any change in our activities and approaches is often looked at in terms of the costs those changes will incur. Their metric is "how much will this cost us?" Our answer needs to articulate "this cost will be leveraged by securing and preserving this current and future income". Glamorous? Not really, but it's essential.

What metrics do we as testers actually care about, or should we care about? In my world view, I use the number of bugs found vs. the number of bugs fixed. That ratio tells me a lot. This is, yet again, a drum I hammer regularly, and it should surprise no one when I say I personally value the tester whose ratio of bugs reported to bugs fixed is closest to 1:1. Why? It means to me that testers are not just reporting issues, but that they are advocating for getting them fixed. Another metric often asked about is the number of test cases run. To me, it's a dumb metric, but there's an expectation outside of testing that it is informative. We may know better, but how do we change the perspective? In my view, the better discussion is not "how many test cases did you run?" but "what tests did you develop and execute relative to our highest risk factors?" Again, in my world view, I'd love to see the ratio of business risks to test charters completed and reported be as close to 1:1 as possible.
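As a quick illustration (my own, with invented counts), both of those ratios are trivial to compute once the underlying numbers are tracked:

# Hypothetical advocacy and risk-coverage ratios from simple counts.
def ratio(numerator: int, denominator: int) -> float:
    return numerator / denominator if denominator else 0.0

bugs_reported, bugs_fixed = 42, 36
risks_identified, charters_completed = 12, 10

print(f"advocacy ratio (fixed:reported): {ratio(bugs_fixed, bugs_reported):.2f}")   # closer to 1.0 is better
print(f"risk coverage (charters:risks):  {ratio(charters_completed, risks_identified):.2f}")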

In the world of metrics, everything tends to get boiled down to Daft Punk's "Harder Better Faster Stronger". I use that lyrical quote not just to stick an ear-worm in your head (though if I have, you're welcome or I'm sorry, take your pick), but because it's really what metrics mean to convey. Are we faster at our delivery? Are we covering more areas? Do we finish our testing faster? Does our deployment speed factor out to greater revenue? Once we answer Yes or No, the next step is "how much or how little, how frequent or infrequent? What's the number?"

Ultimately, when you get to the C level execs, qualitative factors are tied to quantitative numbers, and most of the time, the numbers have to point to a positive and/or increasing revenue. That's what keeps companies alive. Not enough money, no future, it's that simple.

Brad suggests that, if we need to quantify our efforts, these are the ten areas that will be the most impactful.


It's a pretty good list. I'd add my advocacy and risk ratios, too, but the key to all of this is that these numbers don't matter if we don't know them, and they don't matter if we don't share them.

---

2:00 pm - 3:00 pm
TESTING IS YOUR BRAND. SELL IT!
Kate Falanga


One of the oft-heard phrases among software testers and about software testing is that we are misunderstood. Kate Falanga is in some ways a Don Draper of the testing world. She works with Huge, which is like Mad Men, just with more computers, though perhaps equal amounts of alcohol ;). Seriously, though, Kate approaches software testing as though it were a brand, because it is, and she's alarmed at the way the brand is perceived. The fact is, every one of us is a brand unto ourselves, and what we do or do not do affects how that brand is perceived.



Testers are often not very savvy about marketing themselves. I have come to understand this a great deal lately. The truth is, many people interpret my high levels of enthusiasm, my booming voice, and my aggressive passion and drive to be good marketing and salesmanship. It's not. It can be contagious, it can be effective, but that doesn't translate to good marketing. Once that shine wears off, if I can't effectively carry objectives and expectations through to completion, or encourage continued confidence, then my attributes matter very little, and can actually become liabilities.

I used to be a firebrand about software testing and discussing all of the aspects of software testing that were important... to me. Is this bad? Not in and of itself, but it is a problem if I cannot likewise connect this to aspects that matter to the broader organization. Sometimes my passion and enthusiasm can set an unanticipated expectation in the minds of my customers, and when I cannot live up to that level of expectation, there's a letdown, and then it's a greater challenge to instill confidence going forward. Enthusiasm is good, but the expectation has to be managed, and it needs to align with the reality that I can deliver.

Another thing that testers often do is emphasize that they find problems and that they break things. I do agree with the finding problems part, but I don't talk about breaking things very much. Testers, generally speaking, don't break things; we find where they are broken. Regardless of how that's termed, it is perceived as a negative. It's an important negative, but it's still seen as something that is not pleasant news. Let's face it, nobody wants to hear their product is broken. Instead, I prefer, and it sounds like Kate does too, to emphasize more positive portrayals of what we do. Rather than say "I find problems", I emphasize that "I provide information about the state of the project, so that decision makers can make informed choices to move forward". Same objective, but totally different flavor and perception. Yes, I can vouch for the fact that the latter approach works :).

The key takeaway is that each of us, and by extension our entire team, sells to others an experience, a lifestyle, and a brand. How we are perceived is both individual and collective, and sometimes one member of the team can impact the entire brand, for good or for ill. Start with yourself, then expand. Be the agent of change you really want to be. Ball's in your court!

---

3:15 pm - 4:15 pm
BUILDING LOAD PROFILES FROM OBJECTIVE DATA
James Pulley

Wow, last talk of the day! It's fun to be in a session with James, because I've been listening to him via PerfBytes for the past two years, so much of this feels familiar, but more immediate. While I am not a performance tester directly, I have started making strides toward getting into this world because I believe it to be valuable in my quest to be a "specializing generalist" or a "generalizing specialist" (service mark Alan Page and Brent Jensen ;) ).



Eric Proegler, in his talk earlier, discussed pushing performance testing earlier into the development and testing process. To continue with that idea, I was curious to get some ideas about how to build a profile to actually run performance and load testing. Can we send boatloads of requests to our servers and simulate load? Sure, but will that actually be representative of anything meaningful? In this case, no. What we really want to create is a representative profile of traffic and interactions that approaches the real use of our site. To do that, we need to think about what will actually represent our users' interactions with our site.

That means that workflows should be captured, but how can we do that? One way is to analyze previous transactions in our logs and recreate the steps and procedures. Another is to look at access or error logs to see what people want to find but can't, or to spot requests that don't make any sense (i.e. potential attacks on the system). The database admin, the web admin, and the CDN administrator are all good people to cultivate relationships with, so we can discuss these needs and encourage them to become allies.
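As a small illustration of that second idea, here's a sketch that scans an access log for failed (404) requests, which hint at what people want but can't find, or at probing that makes no sense for the site. The log path and the common-log-format assumption are mine, not something from the session.

```python
# Sketch: count the most-requested missing paths in an access log.
# "access.log" and the common log format are assumptions for illustration.
import re
from collections import Counter

LOG_LINE = re.compile(r'"(?:GET|POST|HEAD) (?P<path>\S+) HTTP/[\d.]+" (?P<status>\d{3})')

missing = Counter()
with open("access.log") as log:
    for line in log:
        match = LOG_LINE.search(line)
        if match and match.group("status") == "404":
            missing[match.group("path")] += 1

# The top of this list is either a navigation gap or someone poking at us.
for path, count in missing.most_common(10):
    print(f"{count:6d}  {path}")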

Ultimately, the goal of all of this is to steer clear of the "ugly baby syndrome" and the urge to cast or avoid blame, and to do that, we really need to be as objective as possible. With a realistic, representative load of transactions, there's less of a chance for people to say "that test is not relevant" or "that's not a real world representation of our product".

Logs are valuable to help gauge what actually matters and what is junk, but those logs have to be filtered. There are many tools available to help make that happen, some commercial, some open source, but the goal is the same: look for payload that is relevant and real. James encourages looking at individual requests to see who generated the request, who referred it, what request was made, and what user agent made it (web vs. mobile, etc.). What is interesting is that we can see patterns showing what paths users take to get to our system and what they traverse in our site to reach that information. Looking at these traversals, we can visualize pages and page relationships, and perhaps identify where the "heat" is in our system.
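To show what I mean, here's a rough sketch that pulls the request path, referrer, and user agent out of a combined-format access log and counts the most common page-to-page hops. The field layout and file name are assumptions of mine, not James's tooling, but the idea is the same: the hottest transitions are the workflows a load profile should probably model.

```python
# Sketch: find the "heat" in a combined-format access log by counting
# referrer -> path transitions and user agents. File name and format assumed.
import re
from collections import Counter

COMBINED = re.compile(
    r'"(?:GET|POST) (?P<path>\S+) HTTP/[\d.]+" \d{3} \S+ '
    r'"(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"'
)

hops = Counter()
agents = Counter()
with open("access.log") as log:
    for line in log:
        match = COMBINED.search(line)
        if not match:
            continue
        hops[(match.group("referrer"), match.group("path"))] += 1
        agents[match.group("agent")] += 1

print("Hottest page-to-page transitions:")
for (referrer, path), count in hops.most_common(10):
    print(f"{count:6d}  {referrer} -> {path}")

print("Top user agents:")
for agent, count in agents.most_common(3):
    print(f"{count:6d}  {agent}")
```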

---

Wow, that was an intense and very fast two full days. My thanks to everyone at STP for putting on what has been an informative and fun conference. My gratitude to all of the speakers who let me invade their sessions, type way too loudly at times (I hope I've been better today) and inject my opinions here and there.

As we discussed in Kate's session, change comes in threes. The first step is with us, and if you are here and reading this, you are looking to be the change for yourself, as I am looking to be the change in myself.

The next step is to take back what we have learned to our teams, openly if possible, by stealth if necessary, but change your organization from the ground up.

Finally, if this conference has been helpful and you have done things that have proven to be effective, had success in some area, or you work in an area that you feel is underrepresented, take the third step and engage at the community level. Conferences need fresh voices, and the fact is experience reports of real world application and observation have immense value, and they are straightforward talks to deliver. Consider putting your hat in the ring to speak at a future session of STP-CON, or another conference near you.

Testing needs all of us, and it will only be as good as the contributors that help build it. I look forward to the next time we can get together in this way, and see what we can build together :).


Tuesday, March 10, 2015

TESTHEAD Turns Five Today

With thanks to Tomasi Akimeta
for making me into "TESTHEAD" :)!!!
On March 10, 2010, I stepped forward with the boldest boast I had ever made up to that point. I started a blog about software testing, and in the process, I decided I would try to see if I could say something about the state of software testing, my role in it, and what I had learned through my career. Today, this blog turns five years old. In the USA, were this blog a person, we would say it's just about done with pre-school, and in the fall, it would be getting ready to go into Kindergarten ;).

So how has this blog changed over the past five years? For starters, it's been a wonderful learning ground for me. Notice how I said that: a learning ground for me. When I started it, I had intended it to be a teaching ground to others. Yeah, that didn't last long. I realized pretty quickly how little I actually knew, and how much I still had to learn (and am still learning). I've found it to be a great place to "learn in public" and to, in many ways, be a springboard for many opportunities. During its first two years, most of the writing that I did in any capacity showed up here. I could talk about anything I wanted to, so long as it loosely fit into a software testing narrative.

From there, I've been able to explore a bunch of different angles, try out initiatives, and write for other venues, the most recent being a permanent guest blog with IT Knowledge Exchange as one of the Uncharted Waters authors. While I have other places I am writing and posting articles, I still love the fact that I have this blog and that it still exists as an outlet for ideas that may be "not entirely ready for prime time", and I am appreciative of the fact that I have a readership that values that and allows me to experiment openly. Were it not for the many readers of this blog, along with their forwards, shares, retweets, plus-one's and mentions in their own posts, I wouldn't have near the readership I currently have, and I am indeed grateful to all of you who take the time to read what I write on TESTHEAD.

So what's the story behind the picture you see above? My friend Tomasi Akimeta offered to make me some fresh images that I could associate with my site. He asked me what my site represented to me, and how I hoped it would be remembered by others. I laughed and said that I hoped my site would be seen as a place where we could go beyond the expected when it comes to software testing. I'd like to champion thinking rather than received dogma, experimentation rather than following a path by rote, and champion the idea that there is intelligence that goes into testing. He laughed and said "so what you want to do is show that there's a thinking brain behind that crash test dummy?" He was referring to the favicon that's been part of the site for close to five years. I said "Yes, exactly!" He then came back a few weeks later and said "So, something like this?" After I had a good laugh and a smile at the ideas he had, I said "Yes, this is exactly what I would like to have represent this site; the emergence of the human and the brain behind the dummy!"

My thanks to Tomasi for helping me usher in my sixth year with style, and to celebrate the past five with reflection, amusement and a lot of gratitude for all those who regularly read this site. Here's to future days!

Thursday, February 26, 2015

Less Eeyore, More Spock

Today's entry is inspired by a post that appeared on the QATestLab blog called "Why Are Software Testers Disliked By Others?" I posted an answer to the original blog, but that answer started me thinking... why does this perception exist? Why do people feel this way about software testing or software testers?

There are a lot of jokes and commentary that go with being a software tester. One of the most common is "if no one is talking about you, you're doing a great job". Historically, Quality Assurance and Software Testing were considered somewhat like plumbing. If we do our jobs and we find the issues and they get resolved, it's like the plumbing that drains our sinks, tubs, showers, and toilets. Generally speaking, we give the pipes in our walls next to no thought; that is, until something goes wrong. Once there's a backup, or a rupture, or a leak, suddenly plumbing becomes tremendously present in our thoughts, and usually in a negative way. We have to get this fixed, and now! Likewise, software testers are rarely considered or thought about when things are going smoothly, but we are often looked at when something bad has happened (read this as "an issue has made it out into the wild, been discovered, and been complained about").

This long time perception has built some defensiveness for software testers and software testing, to the point where many testers feel that they are the "guardians of quality". We are the ones that have to find these horrible problems, and if we don't, it's our heads! It sounds melodramatic, but yes, I've lived this. I've been the person called out on the carpet for missing something fundamental and obvious... except for the fact that it wasn't fundamental or obvious two days before, and interestingly, no one else thought to look for it, either.

We can be forgiven, perhaps, for bringing on ourselves what I like to call an "Eeyore Complex". For those not familiar, Eeyore is a character created by A.A. Milne, and figures in the many stories of "Winnie the Pooh". Eeyore is the perpetual black raincloud, the one who finds the bad in everything. Morose, depressing, and in many ways, cute and amusing from a distance. We love Eeyore because we all have a friend that reminds us of him.

The problem is when we find ourselves actually being Eeyore, and for many years, software testing deliberately put itself in that role. We are the eternal pessimist. The product is broken, and we have to find out why. Please note, I am not actually disagreeing with this; the software is broken. All software is, at a fundamental level. It actually is our job to find out where, and advocate that it be fixed. However, I have to say that this is where the similarities must end. Finding issues and reporting/advocating for them is not in itself a negative behavior, but it will be seen as such if we are the ones who present it that way.

Instead, I'd like to suggest we model ourselves after a different figure: Spock. Yes, the Spock of Star Trek. Why? Spock is logical. He is, relatively speaking, emotionless, or at least has tremendous control over his emotions (he's half human, so not devoid of them). Think about any Star Trek episode where Spock evaluates a situation. He examines what he observes, he makes a hypothesis about it, tests the hypothesis to see if it makes sense, and then shares the results. Spock is anything but a black raincloud. He's not a downer. In fact, he just "is". He presents findings and data, and then lets others do with that information what they will. Software testers, that is exactly what we do. Unless we have ship/no ship decision making authority, or the ability to change the code to fix what is broken (make no mistake, some of us do), then presenting our findings dispassionately is exactly what we need to be doing.

If people disagree with our findings, that is up to them. We do what we can to make our case, and convince them of the validity of our concerns, but ultimately, the decision to move forward or not is not ours, it's those with the authority to make those decisions. In my work life, discovering this, and actually living by it, has made a world of difference in how I am perceived by my engineering teams, and my interactions with them. It ceases to be an emotional tug of war. It's not a matter of being liked or disliked. Instead it is a matter of providing information that is either actionable or not actionable. How it is used is ultimately not important, so long as I do my best to make sure it is there and surfaced.

At the end of the day, software testers have work to do that is important, and ultimately needs to be done. We provide visibility into risks and issues. We find ways in which workflows do not work as intended. We notice points that could be painful to our users. None of this is about us as people, or our interactions with others as people. It's about looking at a situation with clear understanding, knowing the objectives we need to meet, and determining if we can actually meet those objectives effectively. Eeyore doesn't know how to do that. Spock does. If you are struggling with the idea that you may not be appreciated, understood, or liked by your colleagues, my recommendation is "less Eeyore, more Spock".

Thursday, February 5, 2015

How I am Overcoming "Expectational Debt"

A phrase I have personally grown to love, and at times dread, is the one that Merlin Mann and Dan Benjamin coined in 2011 during their Back to Work podcast. That phrase is "expectational debt". Put simply, it's the act of "writing checks your person can't cash" (there's a more colorful metaphor that can be and is often used for this, but I think you understand what I mean ;) ).

Expectational Debt is tricky, because it is very easy to get into, and it is almost entirely emotional in nature. Financial debt is when you have to borrow money to purchase something you want or need, and it invariably involves a financial contract, or in the simplest sense a "personal agreement" that the money owed will be paid back. It's very tangible. Technical debt is what happens in software all the time, when we have to take a shortcut or compromise on making something work in the ideal way so that we can get a product out the door, with the idea that we will "fix it later". Again, it's also tangible, in the sense that we can see the work we need to do. Expectational debt, on the other hand, is almost entirely emotional. It's associated with a promise, a goal, or a desire to do something. Sometimes that desire is public, sometimes it is private. In all cases, it's a commitment of the mind, and a commitment of time and attention.

I know full well how easy it is to get myself into Expectational Debt, and I can do it surprisingly quickly. People often joke that I have a complete inability to say "no". That's not entirely true, but it's close enough to reality that I don't protest. I enjoy learning new things and trying new experiences, so I am often willing to jump in and say "yeah, that's awesome, I can do that!" With just those words, I am creating an expectational debt, a promise to do something in the future that I fully intend to fulfill, but I have not done the necessary footwork or put the time in to fully understand what I am taking on. Human beings, in general, do this all the time. We also frequently underestimate or overestimate how much these expectations matter to other people. Something we've agreed to do could be of great importance to others, or of minor importance. It's also possible that we ourselves are the only ones who consider that expectation to be valuable. Regardless of the "weight" of the expectation, they all take up space in our head, and every one of them puts a drag on our effectiveness.

Before I offer my solution, I need to say that this is very much something I'm currently struggling with. These suggestions are what I am doing now to rein in my expectational debt. It's entirely possible these approaches will be abandoned by yours truly in the future, or determined not to work. As of now, they're helping, so I'm going to share them.

Identify your "Commitments"

Get a stack of note cards, or use an electronic note file, or take a page-a-day calendar, whatever method you prefer, and sit down and write out every promise you have made to yourself and to others that you have every intention of fulfilling. Don't just do this for things in your professional life; do this for everything you intend to do (family goals, personal goals, exercise, home repairs, car repairs, social obligations, work goals, personal progress initiatives, literally everything you want to accomplish).

Categorize Your "Commitments"

Once you have done this, put them in priority order. I like to use the four quadrants approach Stephen Covey suggests in "The Seven Habits of Highly Effective People". Those quadrants are labeled as follows:

I. Urgent and Important

II. Important but Not Urgent

III. Urgent but Not Important

IV. Not Urgent and Not Important



My guess is that, after you have done this, you will have a few items that are in the I category (Urgent and Important), possibly a few in the III category (Urgent but Not Important), and most will fall into Categories II and IV.

Redouble or Abandon Your "Commitments"

These are questions you need to ask yourself for each commitment:

- Why do I think it belongs here?
- Will it be of great benefit to me or others if I accomplish the goal?
- Is it really my responsibility to do this?
- If I were to not do this, what would happen?

This will help you determine very quickly where each of the items falls. Most expectational debt will, again, fall into Categories II and IV.

For your sanity, as soon as you identify something that falls into Category IV (Not Urgent and Not Important), tell yourself "I will not be doing this" and make it visible. Yes, make a list of the things you will NOT be doing.

Next is to look at the items that fall into Category III. These are Urgent but Not Important, or perhaps a better way to put this is that they are Urgent to someone else (and likely Important). They may not be important to you, but another person's anxiety about it, and their visible distress, is making it Urgent to you. It's time for a conversation or two. You have to decide now if you are going to commit time and attention to this, and figure out why you should. Is it because you feel obligated? Is it because it will solve a problem for someone else? Is it because you're really the only one who can deal with the situation? All of these need to be spelled out, and in most cases, they should be handled with a mind to train up someone else to do them, so that you can get out of the situation.

The great majority of things you'll want to do, and want to commit to, will fall in Category II. They are Important, but they are not Urgent (if they were Urgent and Important, you'd be doing them... probably RIGHT NOW!!!). Lose weight for the summer. Learn a new programming language. Discover and become proficient with a new tool. Plan a vacation for next year. Read a long anticipated book. Play a much anticipated video game. These are all items that will, in some way, give us satisfaction, help us move forward and progress on something, or otherwise benefit us, but they don't need to be done "right now". Your goal here is to start scoping out the time to do each of these, and give it a quantifiable space in your reality. I believe in scheduling items that I want to make progress on. In some cases, getting a friendly "accountability partner" to check in on me to make sure I'm doing what I need to do is a huge incentive. A common tactic that I am using now is to allocate four hours for any "endeavor space". I also allocate four hours in case I need to "change tracks" and take care of something else. This may seem like overkill (and often, it is), but it's a shorthand I use so I don't over-commit or underestimate how long it will take to do something. Even with this model, I still underestimate a lot of things, but with experience I get a bit better each time.

This of course leaves the last area (Category I), the Urgent and Important. Usually, it's a crisis. It's where everything else ends up getting bagged for a bit. If you ever find yourself in an automobile accident, and you are injured, guess what: getting treatment and recovering rockets to Category I. In a less dire circumstance, if you are the network operations person for your company and your network goes down, for the duration of that outage getting the network back up is Category I.

I hate making promises I can't keep, but the truth is, I do it all the time. We all do, usually in small ways. Unless we are pathological liars, we don't intend to get into this situation, but sometimes, yes, the expectations we place on ourselves, or the promises we make to others, grow out of proportion to what we can actually accomplish. Take the time to jettison those things that you will not do. Clear them from your mind. Make exit plans for the things you'd really rather not do, if there is a way to do so. Commit to scheduling those items that provide the greatest benefit, and if at all possible, do what you can to not get into crisis situations. Trust me, your overall sanity will be greatly enhanced, and what's more, you'll start to develop the discipline to grow an expectation surplus. I'm working towards that goal, in any event ;).