
Tuesday, October 10, 2017

Product Ecology and Context Analysis: #PNSQC Live Blog

Ruud Cox is an interesting dude who has done some fascinating work. To lead off his talk, he described a product he was involved with testing: a deep brain stimulator, which is effectively a pacemaker for the brain. Ruud described the purpose of the device, the requirements, and issues that have come to light during the procedures. I've worked on what I consider to be some interesting projects, but nothing remotely like this.


Ruud was responsible for organizing testing for this project, and he said he immediately realized that the context for this project was difficult to pin down. In his words, the context was a blur. However, by stepping back and addressing who the users of the product are (both the doctors and medical staff performing the procedures and the individuals having the procedures performed), he was able to start bringing that context into focus.

One tool that Ruud found to be helpful was to create a context diagram. In the process, he was able to sketch out all of the people, issues, and cases where the context applied, and to see that the context was somewhat fluid. This is important because as you build out and learn about what people value, you start to see that context itself shifts and that the needs of one user or stakeholder may differ from, or even conflict with, another user's.



Patterns, and giving those patterns meaning, are individual and unique. Ruud points out that, as he was making his context diagram, he was starting to see patterns and areas where certain domains were in control and other domains had different rules and needs. Our brains are "belief engines", meaning we often believe what we see, and we interpret what we see based on our mental model of the world. Therefore, the more actively we work with diagramming context, the more we understand the interactions.

Ruud refers to these interactions and the way that we perceive them as "product ecologies". As an example, he described a person being asked if they can make an apple pie. Once the person says yes, they then look at the factors they need to consider to make the pie. The product ecology considers where the apples come from, what other ingredients are needed and where they come from, and the tools and methods necessary to prepare and combine everything into an apple pie suitable for eating. In short, there's a lot that goes into making an apple pie.

Areas that appear in a context analysis need to be gathered and looked at with regard to their respective domains. Ruud has the following approach to gathering factors:


  • Study available artifacts
  • Interview domain experts
  • Explore existing products
  • Hang out at the coffee machine (communicate with people, basically)
  • Make site visits
Another way to get insights into your context is to examine the value chain. Who benefits from the arrangement, and in what way? Who supplies the elements necessary to create the value? Who are the players? What do they need? How do they affect the way that a product is shaped and developed over its life?

User scenarios try to diagram the various ways that a user might interact with a product. The more users we have, the more unique scenarios we will accumulate, and the more likely we are to discover areas that are complementary or contradictory. Ruud showed some examples of a car park with lighting that was meant to come on when in use and go dark when not. As he was diagramming out the possible options, he realized that the plane and the angle of the pavement had an effect on the way that the lights were aligned, how they turned on or off, or even whether they turned on or off at all.

Currently, Ruud is working with ASML, which makes the machines that create tiny circuit elements on chips. One of the factors he deals with is that it can take months for a single wafer to be produced as it makes its way through the scanners and the rest of the fab. Testing a machine like this must be a beast! Gathering factors and requirements is likewise a beast, but it can be done once the key customers have been identified.

Thursday, August 6, 2015

Concentration on Information Radiation



Ever have an experience at a conference that just sticks with you, that one thing that seems like such a little thing and then, as you consider it more and more, you just slap your forehead and say to yourself "for cryin' out loud, why didn't I think of that?!"

On Monday while I was at CAST, I was the room helper for Dhanasekar Subramaniam's tutorial about "Using Mind Maps for Mobile Testing". Much of the session was around heuristics for mobile testing and the ability to capture heuristics elegantly inside of mind maps. As part of the process, we spent a bit of time creating mind maps in XMind to use later with our chartered test sessions. I've done this before. I've even created a full mind map of the entire James Bach Heuristic Test Strategy Model (yes, one mind map, and yes, when fully expanded it is massive. Probably too massive). As we were working to create nodes and sub-nodes, Sekar pointed out that there were many labels that could be applied to the nodes, and that the labels were additive. In other words, each node could have several labels applied to it.

As I was looking at this, and seeing labels such as pie chart fills, green check boxes, people silhouettes in several different colors, red X's, green, yellow, and red exclamation points, and many others, I started thinking about how, by color and proximity, we could gauge how much coverage we have given a particular mind map (or in this case, how completely we have applied a heuristic to testing) and what the results were. Instead of stopping to write down lots of notes, each node we were testing would get a label placed on it, and each label would have a semantic meaning. A green check box meant good, a red X would mean failed or something wrong, a quarter pie chart would mean a quarter done, a yellow square would mean a warning, but maybe not an error. Different color people icons would identify the person who performed that set of steps, and so on.
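
To make that idea concrete, here is a minimal sketch of what such a shared legend might look like if a team wrote it down in code. The marker names are hypothetical stand-ins, not XMind's actual marker identifiers; the meanings are the ones described above.

```python
# Hypothetical legend mapping marker names to their agreed team meaning.
# The keys are illustrative labels, not real XMind marker IDs.
MARKER_LEGEND = {
    "green_check":   "tested, looks good",
    "red_x":         "failed, or something is wrong here",
    "quarter_pie":   "roughly one quarter of this area covered",
    "yellow_square": "warning worth noting, but maybe not an error",
    "person_blue":   "steps performed by this tester (one color per person)",
}

# With an agreed legend, a glance at any node's markers tells the story.
for marker, meaning in MARKER_LEGEND.items():
    print(f"{marker}: {meaning}")
```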

As I was looking at this, I joked with Sekar that we could tell the entire testing story for a feature or the way we applied a heuristic to a story in one place with one relatively small map. We both chuckled at that, and went on to do other things.

The more I thought about this, though, the more I liked the idea. At a previous company, we set up a machine with a couple of flat screen monitors attached. These flat screens were placed in the main area and left on, cycling through the images that were shown; in this case, they were graphs and pages of results that were relevant to us. In short, they were acting as information radiators for our team. At a glance, we could know if the build had failed, if deployment was successful or not, and where the issue would be if there was one. We could use this same technique for information radiation with mind maps. Imagine a charter or set of charters, each with its own mind map. Each mind map could be cycled through presentation on the monitor(s). The benefit would be that, at a glance, the team would know how testing was going for that area, and we could update it all very quickly. I kept experimenting with it, and the more I did, the more I became convinced this just might work.
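
For the curious, here is a minimal sketch of what that cycling display could look like, assuming each charter's mind map has been exported as a PNG into a single directory. The directory name and the interval are assumptions for illustration, not part of any real setup I used.

```python
# A bare-bones information radiator: cycle exported mind map images
# full-screen on a spare monitor. Assumes at least one PNG export
# (e.g., from XMind) sits in a hypothetical "charter_maps" directory.
# PNG support in tk.PhotoImage requires Tk 8.6 or later.
import itertools
import tkinter as tk
from pathlib import Path

EXPORT_DIR = Path("charter_maps")  # hypothetical export location
SECONDS_PER_MAP = 15               # how long each map stays on screen

root = tk.Tk()
root.attributes("-fullscreen", True)
label = tk.Label(root, bg="black")
label.pack(expand=True)

# Load every exported map once; re-run the script after updating exports.
images = [tk.PhotoImage(file=str(p)) for p in sorted(EXPORT_DIR.glob("*.png"))]

def show_next(cycler=itertools.cycle(images)):
    label.configure(image=next(cycler))
    root.after(SECONDS_PER_MAP * 1000, show_next)

show_next()
root.mainloop()
```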

To that end, I am holding a Weekend Testing session this coming Saturday, August 8, 2015 at 10:00 a.m. PDT. We will look at mind mapping in general and XMind in particular, and we will develop a small heuristic for a feature (within XMind itself) to test and to update. I really like this idea, but I want to see if it can be tinkered with, and if it might be a useful approach to others.

If you think this might be a fun way to spend a couple of hours, come join us on Skype. Contact "weekendtestersamericas" and add us as a contact. On Saturday, get on Skype about 20 minutes before the session and say you want to be added to the session. Prerequisite, if you want to follow along and actually do the exercise, would be to download the XMind app for your platform.

 Again, I apologize for the short notice, but I hope to see you Saturday.

Wednesday, August 5, 2015

The Future Is Here - Live at #CAST2015

It's Wednesday, and to be honest, the events of the past several days have become a blur. I've been in Grand Rapids since Friday, and I've been moving non-stop since I've been here. Test Retreat, conference setup, facilitator meetings, elections, logistics, rooms, venue preparations... it's easy to lose track of where we are and why we are here. I joked a few days back that I and the rest of the board and conference committee were busy doing all we could "to make all your wildest conference dreams come true". I'm not sure how we've delivered on that, but from the tweets I have seen, and the comments directed to me thus far, I think we're doing pretty good on that front :).

I was excited to see that Ajay Balamurugadas was chosen to be Wednesday's keynote speaker. Ajay was one of the first software testers I had the pleasure to interact with when I chose to plug into the broader software testing community. Many testers were saying things and spouting ideas, but Ajay was rolling up his sleeves, doing stuff in real time, and sharing his results, both good and bad. Ajay introduced me to Weekend Testing, and then encouraged me to bring it to the USA. He stayed up late in his time zone to shadow me and offer suggestions for the first few sessions we did, and then he let me fly on my own. He has participated in many of our Weekend Testing sessions, including a session with flawk, which is a company my friend Matt Coalson has been building the past few years. Matt's literal words to me about the session were "Dude, that guy, Ajay? Wow, he's the real deal!" Ajay has put the time and the energy in to prove, time and again, that yes, he is indeed the real deal!

Ajay did something pretty bold for a keynote speaker. He put up a mind map of his talk and the details, titled "Should I listen to Ajay?" In a nutshell, he says that he will be covering learning opportunities, a trend in who tests, testing education, testing & other fields, standards and schools, and his own thoughts. He then said he invited those with more important things to do to leave, and he would be totally OK with that. Notice I'm still typing ;). Right now, this is the most important thing I can be doing :).

Ajay starts with a quote from Aldous Huxley: "try to learn something about everything, and everything about something". In a nutshell, to borrow from Alan Page (and yes, others say it too, but Alan is famous for talking about this on the AB Testing podcast), "be a generalizing specialist as well as a specializing generalist". Be T-shaped, a jack of all trades who makes a priority of getting genuinely geeky with a few areas that you enjoy and feel are valuable. Don't just be a software tester, actually learn about software testing. Some ideas are going to be better than others, but dig in and try ideas out. Learn as much as you can, even if it's to decide what you will discard and what you will keep. Why do we so often welcome new testers with test cases? Do we not trust them to be able to test, or do we just insist on them doing what we tell them up front, with hopes that their creativity will appear later? If they are given prescriptive test cases and told to execute them, don't be surprised if that hoped-for creativity does not appear.

There are several organizations that exist to help teach software testers, some obvious and some less so. Ministry of Testing, Weekend Testing, BBST testing courses, Udemy, Coursera, Test Insane's mind map collection, the Software Testing World Cup... there are *lots* of places we can learn and try out new ideas.

Ajay said something pretty cool (attribution missed, will fill in later): "If you would like to double your income, triple your learning!" We each need to take the opportunities we have, and we need to apply them. I personally believe that my blog exists for this purpose. Sometimes I have let several days go fallow without writing because I feel I don't have anything unique to share. However, I have had such a rush these past few days writing summaries and interpretations of each of the sessions I've been involved in since Saturday. Before August, my largest number of blog posts for any given month was nine, and sometimes I felt like I struggled to get those out. Right now, I'm writing my eighteenth blog post for August, all of them inspired by my being here in Grand Rapids and the activities I've been participating in. If all goes well, I may have four more to offer by the end of today. Seriously, that's twenty-two blog posts in five days! What's interesting is that, as I've written so many, I'm feeling energized, and I want to keep that energy going. That's the power of diving into your learning, and creating in the process. I want to see what it takes to keep it going.

Ajay asked: why do you need a title to be a leader? The truth is, you don't. You can lead right now, and you can be an example and a guide to others. You do not need to ask permission, you just need to act with conviction and determination. Figure out the things you can do without having to ask permission, and dig in. If a process is slow, try another one. In parallel, if you must, or totally replace the old efforts with a new approach if you can do so. People may feel frustrated if you go and do something without asking, but they will likely keep what you are doing if you deliver a better result than what they were getting before.

What do you say when someone says "I'd like to become a software tester, what do I need to know?" Do we tell them the truth, that it can be exceptionally hard, and that there is so much to learn? Do we tell them that there are a lot of things they can get involved with? Do we encourage their curiosity, and get them engaged where they are? Personally, I think we can do a lot of good by starting with where people are and showing them the fun and experience software testing can be. Granted, it's not always fun, but there are plenty of opportunities to explore and be curious. De-emphasize the mechanics of testing, encourage the curiosity. Software testing classes are still developing. I'm biased, but I'm pretty fond of the BBST courses and what they offer. Still, there's a need for more, and we have an opportunity to help make that happen. It will take time, of course, but there is a need for excellent software testing training. Let's do what we can to foster and develop it.

My thanks to Ajay and his devotion to our craft. He's a role model that I believe any software tester would do well to emulate, myself included. At this point I need to get ready for Open Season and help facilitate questions and answers. Thanks for playing along with me today, I'll be back in a bit :).


Tuesday, August 4, 2015

Leaping into Context - Live from #CAST2015

Erik Brickarp gets the nod for my last session today. After facilitating three talks, it feels nice to just sit and listen :).

Erik's talk focuses on "A Leap Towards Context-Driven Testing". When chaos and discord start raining down on our efforts, sometimes the best breakthrough comes with a break from the old way of doing things. In 2012, he joined a new team at a big multinational telecom company. That team had a big, clunky, old school system for both documentation and loads of test cases (probably based on ISO-9001, and oh do I remember those days :p ). What's worse, the team was expected to keep using these approaches. To Erik's credit, he decided to see if he could find a way out of that arrangement.

The team decided they needed to look at the product differently. Rather than just focus on features and functions, they also decided to look at the ways the product could be tested. In the process of considering what the test approach had to be, they moved from multiple spreadsheets to web pages that allowed collaboration. By using colors in tables (as they had used previously in spreadsheet cells), they were able to quickly communicate information by color and by comment (reminds me of Dhanasekar's tutorial yesterday ;)).

By stepping away from iron-clad rules and instead focusing on guidelines, they were able to make their testing process work more efficiently. Of course, changes and modifications like these invite criticism. The criticism was not based on the actual work; rather, some were upset that a junior team member had gone behind the organization's back to "change the rules". Fortunately, because the work was solid and the information being provided was effective, informative, and actionable, they let the team continue. In the following weeks, they managed to make the test team's deliverables slimmer and more meaningful, faster to create and easier to maintain. By using a wiki, they were able to make the information searchable, reports listable, and everything easy to find.

Erik admits that the approach he used was unprofessional, but he was fortunate in that the effort was effective. As a lesson learned, he said that he could have approached this with better communication and could have made these changes without going behind anyone's back. Nevertheless, they did it, and so they have a much more fun story to tell. The takeaway here is that there are a lot of things we can do to improve our test process that don't specifically require corporate sanction. It also shows that we can indeed make changes that could be dramatic and not introduce a ton of risk. Support is important, and making sure the team supports your efforts can help testers (or any team) make transitions, whether they be dramatic or somewhat less so.

Additionally, if you hope to change from one paradigm to another, it helps a great deal to understand what you are changing to and how you communicate those changes. Also, make sure you keep track of what you are doing. Keeping track doesn't mean having to adopt a heavy system, but you do have to keep track. Exploratory testing doesn't mean "random and do anything". It means finding new things, purposefully looking for new areas, and making a map of what you find. When in doubt, take notes. After all that, make sure to take some time to reflect. Think about what is most important, what is less important, and what you should be doing next. Changing the world is important, and if you feel the need to do so, you might want to take a page from Erik's book. I'll leave it to you to decide if it makes sense to do it in full stealth mode or with your company's approval. The latter is more professional, but the former might be a lot more fun ;).

Let's Move it Forward - Live from #CAST2015

Today the rubber meets the road. Day one of the full CAST 2015 conference is underway. We have had breakfast, we have introduced the program, and we have announced that the election is in progress. To that end, I want to remind all AST members that you have until 7:00 p.m. Eastern time TODAY to cast your vote for next year's Board of Directors.

Last year Karen Johnson and I had a discussion at CAST in New York where we commented on the fact that there was an "echo chamber" developing in the software testing world, and that the voices we most needed to hear from were the ones we were not hearing. She and I discussed the idea that the industry seems to value "rock stars", to which I laughed that those who use that term haven't known very many rock stars in real life (I have, and truth be told, they are not necessarily the most reliable people on the planet, but they are often fun to be around and listen to ;) ). Karen has been a solid voice in the testing world, and I was excited to see that she was the opening keynote for CAST 2015.

One of the great things about going to a conference for the first time is that the reaction we most often have is "Oh, wow, I'm not alone!" Getting that confirmation the first time can be huge, and it helps make it possible to frame our place in the world of software testing. Karen has been in the software testing world for 30 years, and like many of us, didn't have any intention of being a tester when she started out. She planned to be a journalist (which I think is really cool because I often consider software testing and journalism to be very kindred careers). Karen shared a lot of highlights from her career, and seeing them flashed across the screen made clear that she's had, and continues to have, a remarkable career! I recommend checking out the webCAST video of her keynote when it posts.

The theme of CAST 2015 is "Moving Testing Forward", which suggests that in many ways, testing is seen as not moving forward. Software development has changed radically these past twenty-five years (that's my time frame, since I started really thinking about it when I started working in IT in 1991). Many of the development techniques have changed, but the way that software has been tested, at least in a number of organizations I have been in, has changed very little. It's easy for testers to feel "stuck" at various points, and when we try to make those forward steps, we often receive pushback, and at times that pushback comes from our own colleagues. I had a similar experience in 2009, after nearly twenty years of software testing. I felt like I was doing the same things the same ways, and there was very little I felt I had to show for it beyond what I learned the first few years. Yes, I had twenty years' experience, but it felt like I had two years' experience repeated ten times.

Stepping forward takes courage; it takes a willingness to know who you are and what you are good at. It also means you have to be ready to accept that there are things you are not good at, and often, that's the hardest part. However, it's important to realize that the things you are not good at can be improved, and the things you are good at can be boosted even more by focusing a bit on what you don't feel you are good at. Karen and I look to be on the same page here: while we realize that there are so many things in the world we will never be amazing at, we can always improve our odds by working on the things we are good at. We can't do that exclusively, and yes, some things that are distasteful or uncomfortable come with the territory. Deal with them, but don't obsess over them.

Another valuable point comes in with who we work for. Karen recommends strongly to do all you can to not work for people you do not respect. If you work for someone you do not respect, your entire relationship will be off kilter. You will know it, and they will know it. When you don't respect who you work for, your best work rarely comes out. When you respect who you work for, it's not uncommon to walk through fire for those people. I've had a few of those experiences, most recently with my dearly departed Director of Quality Assurance, Ken Pier. I can truly say I would walk through fire for him, and I strive today to be worthy of the respect he had for me as well.

There will be office politics. Do not believe you can escape it. You can't. It's part of the culture, and to borrow a recent quote from @DocOnDev... "your office does not have a culture, your office IS a culture". Cultures are dynamic, they are lived, and they are managed, for good or ill, and every one of us is part of that reality whether we like it or not. We cannot choose to not deal with people unless we literally work for ourselves only. I don't have that reality, and I'm guessing you don't either ;).

Karen mentioned that there is value in having a manager/boss that you work with instead of for. If you can develop a relationship that is closer to that of a peer, you can make amazing strides. True, you do work "for" someone in the literal sense that they sign your reviews and approve your bonuses and pay raises, but outside of that, it is much easier and more enjoyable to work with people rather than under people. As I said before, one of the great experiences of my career was working with Ken Pier, because he emphasized the working with. He was my director, but he hated being a manager. He wanted to be a doer, and when it came to the work of our team, he shouldered as much work as the rest of us, and often more. He wasn't an office manager or a bureaucrat; he was in the trenches with us, every day, and that made working with him both easy and enjoyable.

Along with managers, we have co-workers: other testers, programmers, project managers, along with a myriad of other people. An important question to ask is "Would I want to work with this person again at another company? If I had to change jobs and companies tomorrow, who would I want to bring with me? Who would I want to leave behind?" For those people you identify as the ones you want to take with you, cultivate relationships with them, not in the sleazy "networking" way, but really get to know them and foster a relationship with them. Let them do the same with you.

Many people think that moving testing forward is about technical prowess only, but in truth it also requires people living and experiencing life. It might seem strange to think of work/life balance as a way to move testing forward, but it is important to keep moving and learning and evaluating to keep from becoming stagnant. The fact is, our living and interacting is what lets us actually excel. There's a story I remember hearing about juggling several balls, where all of the balls are rubber except for one, and that ball is made of glass. What do you do? The point of the story is that the glass ball must never be dropped. The other part of the story is that the glass ball is never the same thing. At a given time, the glass ball may be family. It may be work. It may be health. It may be leisure. The point is, everything will be bounced and dropped at various times, but we need to be alert and aware of when the glass ball's label has changed, and what it has changed to.

Outside of work, there are many opportunities to learn and interact. Conferences are an obvious one, but there are many other ways to get involved in the community at large. Meetups, message boards, Weekend Testing, organization involvement, even participating in conversations on Twitter all help to foster that sense of community, but for it to matter, we need to engage. I have often said there are many who are consumers, but few who are active producers. It takes some courage to become an active producer, but the great thing is, we all can, and we can all start right where we are and move forward from there.

OK, I'm going to go help handle open season at this point, so I'll be back with you in another post in a bit. Thanks for following along :).

Friday, June 5, 2015

The Value of Mise en Place

I have to give credit for this idea to a number of sources, as they have all come together in the past few days and weeks to stand as a reminder of something that I think we all do without realizing it, and that actually utilizing the power of this idea can be profound.

First off, what in the world is "mise en place"? It's a term that comes from the culinary world. Mise en place is French for "putting in place", or to set up for work. Professional chefs use this approach to organize the ingredients they will use during a regular workday or shift. I have a friend who has trained many years and has turned into an amazing chef, and I've witnessed him doing this. He's a whirlwind of motion, but that motion is very close quartered. You might think that he is chaotic or frantic, but if you really pay attention, his movements are actually quite sparse, and everything he needs is right where he needs it, when he needs it. I asked him if this was something that came naturally to him, and he said "not on your life! It's taken me years to get this down, but because I do it every day, and because I do my best to stay in it every day, it helps me tremendously."

The second example of mise en place I witness on a regular basis is with my daughter and her art skills. She has spent the better part of the past four years dedicating several hours each day to drawing, often late into the evening. She has a sprawling setup that, again, looks chaotic and messy on the surface. If you were to sit down with her, though, and see what she actually does, you'd see she gathers the tools she needs, and from the time she puts herself into "go" mode, up to the point where she either completes her project or chooses to take a break, it seems as though she barely moves. She's gotten her system down so well that I honestly could not, from her body language, tell you what she is doing. I've told her I'd really love to record her at 10x speed just to see if I can comprehend how she puts together her work. For her, it's automatic, but it's automatic because she has spent close to half a decade polishing her skills.

Lately, I've been practicing the art of Native American beading, specifically items that use gourd stitch (a method of wrapping cylindrical items with beads and a net of thread passing through them). This is one of those processes that, try as hard as I might, I can't cram or speed up. Not without putting in time and practice. Experienced bead workers are much faster than I am, but that's OK. The process teaches me patience. It's "medicine" in the Native American tradition, that of a rhythmic task done over and over, in some cases tens of thousands of times for a large enough item. Through this process, I too am discovering how to set up my environment to allow me a minimum of movement, an efficiency of motion, and the option to let my mind wander and think. In the process, I wring out fresh efficiencies, make new discoveries, and get that much better and faster each day I practice.

As a software tester, I know the value of practice, but sometimes I lose sight of the tools that I should have close at hand. While testing should be free and unencumbered, there is no question that a few tools can be immensely valuable. As such, I've realized that I also have a small collection of mise en place items that I use regularly. What are they?

- My Test Heuristics Cheat Sheet Coffee Cup (just a glance and an idea can be formed)
- A mindmap of James Bach's Heuristic Test Strategy Model I made a few years ago
- A handful of rapid access browser tools (Firebug, FireEyes, WAVE, Color Contrast Analyzer)
- A nicely appointed command line environment (screen, tmux, vim extensions, etc.)
- The Pomodairo app (used to keep me in the zone for a set period of time, but I can control just how much)
- My graduated notes system (Stickies, Notes, Socialtext, Blog) that lets me see which items I learn will stand the test of time.

I haven't included coding or testing tools, but if you catch me on a given day, those will include some kind of Selenium environment (either my company's or my own sandboxes, to get used to using other bindings), JMeter, Metasploit, Kali Linux, and a few other items I'll play around with and, as time goes on, aim to add to my full-time mise en place.

A suggestion that I've found very helpful is attributed to Avdi Grimm (who may have borrowed it from someone else, but he's the one I heard say it). There comes a time when you realize that there is far too much out there to learn proficiently and effectively to be good at everything. By necessity, we have to pick and choose, and our actions set all that in motion. We get good at what we put our time into, and sifting through the goals that are nice, the goals that are important, and the goals that are essential is necessary work. Determining the tools that will help us get there is also necessary. It's better to be good at a handful of things we use often than to spend large amounts of time learning esoteric things we will use very rarely. Of course, growth comes from stretching into areas we don't know, but finding the core areas that are essential, and working hard to get good in those areas, whatever they may be, makes the journey much more pleasant, if not truly any easier.

Wednesday, October 1, 2014

An Alternative Approach to Teaching History?

Over the past several years, I've found Dan Carlin to be one of the most entertaining and thought-provoking podcasters I've listened to. For some, he is grating, irritating, and frustrating. He doesn't use the standard narrative. In fact, he steadfastly refuses to. Both of his podcasts, Common Sense and Hardcore History, strive to look at current events and history from what he calls a "martian" perspective. In many ways, I consider Dan to be the most "testerly" of podcasters. He strives to take views and commentary from all sides, consider the possibilities and the arguments made, and then present them in a way that boils down to central themes and core ideas.

In his most recent podcast (Common Sense 281 – Controlling the Past), he makes a few points that I think would be immensely helpful with regard to not just K-12 history education, but the entire way that we teach any subject. Perhaps it's the contrarian in me, or perhaps it reflects my own frustrations and misgivings with the way that school is taught, but I think we do several things wrong, and the net result is that many children develop a serious aversion to actual learning and to discovering the joy of learning and education.

I am currently living this reality with my three children. I now have a college freshman, a high school sophomore, and an eighth grader. I see what they are currently trying to do to get through their days in their various schools, and the adaptations each has to make. Ultimately, though all three are slightly different, they tend to suffer from the same problem. We operate our schools on the notion of facts and figures and dates and formulas that need to be memorized and spit out on tests; an evaluation is made, and then we move on to the next bit. Sometimes this works well. Sometimes it doesn't. As an adult who works in an ever-changing landscape, I've had to embrace a different approach to learning. Also, as a software tester, I've had to approach learning from a skeptical and often even cynical viewpoint. I'm not paid to say the product works. I'm paid to try to find out where it might be broken. My entire workaday life is the process of disproving and refutation... and I get paid for that ;).

Back to Dan and this podcast... one of the things that Dan highlights, especially in history, is that we tend to go through waves of revisionism. Fifty years ago, the Founding Fathers were near-mythical deities. Today, in many circles, they are seen as greedy, despotic "white men" who built a society on a veneer of freedom at the cost of slavery and subjugation of others. Every few years, there seems to be some tug of war about whether we should be exposing everyone's sins or instilling virtue through printing hagiography. Dan's thought, and one I share, is "why are we doing either?" In other words, if we truly want to teach history and what has come before, why are we giving one narrative more air time than others? What if, instead, we did something similar to what the news magazine The Week does? For those not familiar, The Week is a journal that presents many of its stories and headlines as a distillation of a variety of views from different sources. If a topic is going to be presented, it takes headlines and stories from both "liberal" and "conservative" pundits, publications, and writers, and generally avoids making an editorial of its own, with the exception of actual editorials that it publishes and clearly labels as such.

What is the purpose of this type of presentation? It allows the reader to synthesize what they are reading, see the various viewpoints, the pros and the cons, and even the inherent biases of each side, and then leaves it up to the reader to reason out what they are reading and what it actually means. It also helps give a more balanced view of the events and the key players. Rather than force a viewpoint based on an ideology, it allows readers to process what they are seeing and apply their own litmus test to the material, and lets them look for the coherence or the inconsistencies, something that testers are very familiar with doing. Think about what history would look like if we allowed this same approach. We don't tell the story of George Washington or Geronimo or Martin Luther King from just one side. It isn't hagiography or character assassination. It isn't sanitized or prettied up to meet an agenda. It's given as is, with the idea that the reader discovers who the people actually are, and that they really are just that: people. Possibly extraordinary, possibly flawed, almost always misrepresented. Gather multiple views, present them as is, and then let the student actually practice some critical thinking skills, synthesize the data presented, and then (gasp!) actually give an opinion or discussion on what they've covered.

It's possible I may be completely insane proposing such a thing, but ultimately, I think the benefits would be huge. We talk a mean game about the importance of critical thinking. Wouldn't it be awesome to actually let students, I don't know... critically think?! Also, and I may just be speaking for myself here, but wouldn't this also make the idea of studying history (or any other subject) way more fun? As a tester, the ferreting out of the causes and effects, and advocating for the information discovered, is a huge part of the fun of testing. How great would it be to actually let students experience that in their everyday learning?

Again, it's a scary and bold proposition, but I'm just crazy enough to think teenage students are able to handle it, and might actually learn to enjoy these subjects in a way they've never really been able to before. What do you think? Realistic objective? Pie in the sky dream? If you had the chance to reshape how primary and secondary education were presented, what would you do?


Saturday, August 9, 2014

From Mid-Town Manhattan, it's #TestRetreatNYC


In the offices of Liquidnet, a group of intrepid, ambitious testers met to discuss what aspects of testing mattered to us. Test Retreat is an Open Space conference, where sessions begin (and end) when the participants want them to, the people who are there are the people who need to be there, and the law of two feet rules. We took some time to discuss topics, pull together similar topics and threads, and then headed off to talk about the stuff that is burning us up inside.

---

The first session I attended was based around developing a testing community within a city or region. As many of us were part of different communities around the world, there were a variety of experiences, ranging from very small markets with perhaps a hundred testers total, to large regions with hundreds of thousands of testers, but little in the way of community engagement. 

Rich Robinson led the discussion and shared his own experiences with growing and scaling the community in Sydney, Australia. Rich shared that there were three areas that were consistently asked for and looked at as the goals of the attendees: looking for work, finding out the latest trends, and honing and sharpening skills. Rather than try to be all things to all people, they developed a committee and separate initiatives, with committee members focusing on the initiatives that mattered to them. For the group that wanted to get work, they made an initiative called “opportunity seekers”, and focused energy on those who had that as their biggest objective. For those who want to focus on the latest trends, there is an avenue for that as well.

Different regions have unique challenges. In the Bay Area, we have an overabundance of meetups for technical topics, a handful of which relate to software testing. As a founder of Bay Area Software Testers (BAST), Curtis Stuehrenberger, Josh Meier and I have chosen to try to focus on areas that are not as often discussed (we tend to steer clear of automation tools and techniques, since there are dozens of other meetup groups that cover those topics). Another challenge is frequency. In some cases, regular and frequent meetings are key. In others, having them less frequently works, but the key is that they meet regularly (monthly, every six weeks, or quarterly seem to be the most common models).

Rich also shared that their initiatives branch away from the formal meetup sessions, and have other opportunities that they initiate that occur outside of the formal meetup times. By having each initiative have people committed to it and resources to help drive those initiatives, buzz gets generated and more people get involved. One of the key things that Rich emphasized was getting people involved and engaged for these initiatives. The Sydney tester group has a committee of ten members that helps make sure that these initiatives are staffed and supported.

Another challenge is the local regions themselves. Some cities sprawl; others are difficult to get to in a timely manner due to traffic and population density. For example, the San Francisco Bay Area has four general regions: San Francisco (and the upper San Francisco Peninsula), Silicon Valley (and the lower San Francisco Peninsula), the East Bay, and the North Bay. With a few exceptions, people who participate in one generally do not regularly participate in the others. Reaching out to a broader community in these sub-regions may require using technology and remote-access options for people to participate.

Ultimately, growing a community takes time, dedicated people, and a range of topics that matter to the attendees (including making sure that food and drink are there ;) ). To get the people you want involved, it helps to be very specific about what is needed. Saying “I need help with this” is less effective than saying “I need this specific thing to be done at this time for this purpose”. Specificity helps a lot when recruiting helpers.


The next session was “Sleep No More”, presented by Claire Moss, and focused on the model of the performance/play called Sleep No More (which is, in some ways, described as “an immersive performance of Macbeth”). It’s a darkened environment, all participants wear masks, no photography, no talking, just experience as the person sees it. Exploratory Testing, in many ways, has similarities to this particular experience. Claire used a number of cards to help display the ideas, and one of the first ideas she shared was “fortune favors the bold”. Curiosity and a willingness to go in without fear and deal with a substantial amount of “vague” is a huge plus. If you already have that, you have a strong advantage. If this is not natural for you, it can be developed.

Each room in the Sleep No More experience was part of the performance, and at any time, rooms could be empty or have people filter in during the performance. There are “minders” in the event that help to make sure that people don’t completely lose track of where they are. At times, there are very personal experiences that take place based on your tracks and where you go. Claire described a very intense experience of the performance based on where she went and what she observed and chose to follow up on. She also said that, up to this point, no one that she knew had anything like the experience she had.

The experience of Sleep No More was bizarre, creepy, full of strange triggers, and had the potential to go in wildly unexpected directions. Software testing in many ways mirrors this experience. While there may be familiar areas and ideas, very often our choices and angles may take us into very unexpected places. To give an example of the scope of the space, this was in a six-story building in an area that used to be a hotel (and whole areas of the building were gutted; in some spaces multiple floors were open to the air and visible).

Claire described a feeling of “amazement fatigue”, where the level of stimulus is so high that there is no way to take it all in. The participants have to make conscious choices as to where they will go, and many of the participants will have wildly different experiences. Sometimes, they would follow a character, only to watch them go through a door and close and lock it, so that they couldn’t be followed any longer. This reminds me of following threads of a feature, and being brought to a dead end. People will observe different things, and they will also observe what other people do, and what they focus on. This can give us clues as to areas we want to explore next. 

This experience sounds amazing, and I am definitely interested in going and doing it myself, if time and commitments permit me to do so. Looks like I will be attending the August 10 performance :).


The next session was based on “Leadership”, and Natalie Bennett led the session with the idea that she wanted to see where individuals felt their experiences or needs for leadership were, as opposed to her telling us what she felt about leadership and how to do it. Questions that Natalie wanted to discuss were:

- What is the purpose of a test team lead?
- What is it for?
- What makes it different than being a test manager?

The discussion shifted from there into ways that test team leads and test managers were similar and where they differed. Some of the participants talked about how they led by example, and how they divvied up the work among the group based on the people involved and what they were expected to do. Team leads in general do not have hiring/firing authority, and they typically do not write reviews or have salary decision input. In other environments, the team manager and team lead are one and the same. There are some who are cynical about the effectiveness of this arrangement, while some feel that it is possible to be both a team lead and a team manager. One attendee who is a Director of Q.A. for her company said that she was “the face of Q.A.” to the organization, and as such, she was setting the direction and expectations for the organization, as well as for her own direct reports.

Team leads are expected to teach and coach the members of their group, as well as be the point of contact for the group. It’s seen as important that they be able to focus on and develop their own role and make it responsive to their own environment. The team lead stands up for the group, and defends them from encroachment by issues and initiatives that are counter-productive to their success. Responsibility and authority tend to be on a sliding scale. Different companies allow a different level of authority for their leads. Some give a level of authority that is just short of being an actual manager. In others, the lead is considered a “first contact” among equals.

One of the bigger challenges is dealing effectively with team members who are failing. Failing in and of itself is not bad. It’s important to learn, and failing is how you learn, but when the failing is chronic or insurmountable, there needs to be a different level of interaction. Lean Coffee, direct mentoring, or even a serious reconsideration of experiences and goals can be hugely beneficial, both for the individual and the team as a whole.


Matt Heusser led a session about “Teaching Testing” and some of the challenges that we face when we teach software testing to others. When we have an engaged and focused person, this usually isn’t a problem. When the person in question isn’t engaged, or is just going through the motions, then it’s a little more difficult. The question we focused on at first was “what methods of teaching have worked for you?” Testing is a tactile experience, rather than an abstract question. We are familiar with questions like “how do you test a stapler?” or “how can you test a Rubik’s Cube?” The presentation of this challenge may be the most important aspect. Some might look at “how do you test a stapler?” as demeaning. They are professionals; what is this exercise going to teach them?

In my experience, one of the things I found to be helpful is to actually spell out how challenging the exercise could be. Rather than ask “How do you test a stapler?”, I might instead say “Tell me the 120 ways that you can test a stapler to confirm it’s fit for use?” This sets a very different expectation. Instead of saying “oh, this is trivial”, by seeding a high number, they may want to try to see how they might be able to meet or exceed that number. They become engaged.

To borrow a bit from Seth Godin, there are two primary goals for everyone, the important aspects we need to learn regardless of the discipline. The first is to focus on authentic problems. The second is to be able to lead. Domain knowledge is a huge factor in helping to identify authentic problems. It’s not the only means, but getting to really know the domain can help inform the testing ability. Another important aspect is to understand how people learn. Everyone goes about learning a bit differently, and helping each person learn how they learn can be a huge step in helping to teach them. Sometimes the ripest area of learning is to wade into an area where people disagree, or where there might be a number of people or groups with dysfunction, where team members don’t talk to each other, or there’s simmering hostility between people. If there’s hostility between two programmers, and they each write software that interacts with the other’s, it’s a good bet that there might be a goldmine of issues at their interaction points (I think this is a very interesting idea, btw :) ).

Key to teaching testing is the ability to reflect and confirm what has been taught and learned, and for me, I think that Weekend Testing does this very well. The benefit of Weekend Testing, beyond just doing the exercise, is that we can see lightbulbs turning on, and there’s a record of it that others can see and learn from. Creating HowTo’s can also be a helpful mechanism for this. 


This section covers the talk that Smita Mishra and I gave about “Hiring Testers and Keeping Them Engaged Once We’ve Hired Them”. I recorded this session, and I will transcribe it later ;).


Claire Moss led a session on “Communicating to Management” and we went through and considered a list of questions that are important to frame the conversation(s):

- What does quality look like to our organization?
- Why spend money on testing?
- What does testing do?
- What value are we getting out of testing?
- “I read this about QA, and it says we should do this… why aren’t we doing this?”

These are all questions that we need to be prepared to answer. The question is, how do we do that?

There are several methods we can use, but first and foremost, we need to determine what we need to speak with management about, and if possible, use the opportunities to help educate them about what it is we can do, and at the same time, get a clear understanding about what their view of the world is.

Looking to standards and practices can give us guidance, but they don’t always represent our reality. Information needs to be specific to explaining where we stand at a given time. Testing is primarily focused on giving quality information to the executives so that they can make qualified decisions. That is first and foremost our mission. Information that we can effectively provide includes:

- Framing of the ecosystem on a global scale (browser standards, trends, data usage histories)
- Impact on customers (client feedback, analytics data)
- Clarify issues and questions (heading off the executive freakout)
- Managing expectations (especially when dealing with something new)
- Explaining how likely it is that issues brought to their attention really are problems worth investing in
- Explaining risk factors and methods to mitigate those risks

---

At the end of the day we had a lot of new ideas, feedback for some new initiatives, an emphasis on better communication, more focused due diligence, and the sense that so many participants had a lot they felt they could contribute. This was a fun and active day, with a lot of learning and connecting. One of the things I am always impressed by when it comes to these events is that we really have a lot of solid people in the testing community, but we need even more.

I encourage every tester who admires craftsmanship, skill, and thinking to make it a point to come to these now-annual events (this is the third of these, so I think it’s safe to say that it’s a thing now ;) ). Once again, thanks Matt (Heusser) and Matt (Barcomb) for organizing what has become my favorite Open Space event. May there be many more.


Tuesday, December 31, 2013

Under the Rocks and Stones

I've deliberately taken some time to step away from blogging and post less these last few weeks. Part of that was because I had completed a large multi-part series (more about that in a bit), and part of that was a conscious decision to spend more time with my family during the holidays, but I can't let the year end without my "the year that was" post, and without seeing if I can find another line from the Talking Heads' "Once in a Lifetime" to keep the string going. For the fourth year, I'm still able to do that :).

This year was definitely one of digging into my mind and my experiences, and taking advantage of the fact that what I can learn, and what I struggle with, has value to others as well. I focused the first part of the year on running through the Practicum approach and Selenium 2, specifically David Burns' book. I found this process to be enjoyable, enlightening, and yes, it required me to be willing to dig further than the printed material. Plain and simple, even with the best guide, the best materials, and the most specific examples, the natural drift of software and software revisions means that we need to be alert to the fact that we have to do some digging of our own. Sometimes we get frustrated in our efforts, but if we continue to dig, and ask questions while we dig, we can do more and get better at what we are doing.

SummerQAmp was an important focus for me and others this year, and we expanded on the modules that were offered. We made inroads on what we covered, but also discovered that the materials and the model we were using were less effective as we tried to branch out and try different ideas. The biggest challenge? How to make the material engaging for self-directed readers and interactions. Much of the best software testing training out there that focuses on skills-based testing is best learned with a group of people discussing and debating the ideas and approaches. Taking that approach and making it work for a single individual who needs to learn the material is a challenge we are still working on, and I am hopeful we will make good on improving this process in 2014.

Weekend Testing in the Americas has had a great run, and it has been blessed with additional facilitators who are helping to take the weight off of my shoulders and bring in ideas and approaches that are different from mine (which is great :) ). Justin Rohrman and JeanAnn Harrison have been regular contributors and facilitators, and to both of them I owe a huge debt of gratitude. There's definitely more opportunities to dig for more and better ideas when there are additional facilitators helping to look for and pitch ideas.

If there was one concept or test idea/approach that became foremost in my thoughts this year, it would have to be "accessibility" or how to interact with systems and information for those with physical disabilities. Much of my work this year was associated with learning about and working with stories that focused on exactly how we can make our interactions better for those who cannot see or interact with systems in ways that most of us take for granted. I worked primarily with an intern to help make this a better focus for our product, and to learn how to continually ask questions and consider ways that we can better focus on what we do to deliver a usable and worthwhile experience to all of our users.

I participated in three conferences this year, two of them here in the United States, and one in Sweden. STP-CON in April (held in San Diego, CA) was a chance for me to talk about Agile testing and how I adapted to being a Lone Contributor on a team (a situation I am no longer specifically in, as I am now part of a larger testing organization, but still small enough that many of the lessons still apply). August 2013 saw me presenting in Madison, Wisconsin at CAST 2013 and Test Retreat and Test Leadership Camp (a continuous week-long event of learning, interacting, and developing ideas that I would present in other talks and places). Finally, I was invited to speak on two topics (balancing automated and exploratory testing, and how to "stop faking it" in our work lives) at the Øredev 2013 conference in Malmö, Sweden.

In addition to formal conferences, I participated in numerous Meetup events in and around the San Francisco Bay Area, and what's more, with Curtis Stuerenberg, helped to launch the Bay Area Software Testers Meetup group. This is a general interest software testing group, with the goal of expanding into topics that, we hope, go beyond just testing and into aspects that can help software testers get a better feel and understanding for more areas of the software development business.

An interesting challenge came my way in 2013. I've been blessed with additional outlets to write and present my ideas beyond this blog. SmartBear, Zephyr, Atlassian, Testing Circus, Tea Time With Testers, and StickyMinds have all been outlets where I have been able to present ideas and write to a broader audience, and my thanks go to the many readers of this blog who have seen those articles, shared them with others, and helped make it possible for me to keep writing for these sites. I appreciate the vote of confidence and the comments and shares of my work with others, and if you will keep sharing, I will keep writing :).

The project that, for some, will be the most recognizable for 2013 started, in many ways, as just another bold boast, one I figured would be some basic ideas I would write about each day. Instead, expanding the "99 Things You Can Do to Become a Better Tester" e-book offered by the Ministry of Testing in the UK into a series of workshops became a multi-month process, and one in which I had to do some significant digging to bring to completion. While the series is now finished, the ideas and the processes I learned are still having ripple effects. I think there is more where this came from, and I want to explore even more how I can expand on the ideas I wrote about, and make them better.

Through the process of being on the Board of Directors for the Association for Software Testing and working as the chair for the Education Special Interest Group, I learned a lot about how to better deliver software testing training, how to expand on the mission of AST, and how to support the people involved in that training. I took a step back this year to let others teach and become leaders, and I am grateful for the level of participation and focus given by the many people who stepped up to help teach others. It cements my belief that our broad, worldwide community of software testers contains some of the most giving and helpful people I've ever met.

This year saw me interacting with two additional initiatives, one I've been involved with for a few years now, and one that is very new to me. Miagi-Do had a banner year, in which we started a blog, developed more challenges, and sought to get more involved as a group in the broader software testing discussion. We brought on board many additional testers, many of whom are doing so much to put into practice ways to help share and grow the skills of the broader testing community (many of the current facilitators for Weekend Testing around the world are also Miagi-Do students and instructors). Additionally, I was invited to participate in a mentoring program through Per Scholas, and have interacted with a number of their STeP program graduates (many of whom have also come through and been participants in Weekend Testing as well).

All in all, this has been a year of digging, a year of discovery, a year of new efforts and making new friends. It's been a year of transition, of picking up, and letting go. A year of seeing changes, and adapting to them. It's been a year of learning from others, and teaching what I can to those interested in learning from me in whatever capacity I can teach. Most importantly, it has shown me that there are many areas in testing where I can learn more, perform better, and get more involved than I already am. What will 2014 bring? My guess is that it will be a year of new challenges, new ideas, and more chances to interact with my peers in ways I may not have considered. One thing's for sure, it will not be "Same as it ever was" ;).

Saturday, October 26, 2013

Try Always to See What Happens When You Violate the Rules: 99 Ways Workshop: Epilogue

The Software Testing Club recently put out an eBook called "99 Things You Can Do to Become a Better Tester". Some of them are really general and vague. Some of them are remarkably specific. My goal for the next few weeks is to take the "99 Things" book and see if I can put my own personal spin on each of them, and make a personal workshop out of each of the suggestions. 


Suggestion #100: Try always what happens when you violate the rules. So here's the second sentence also. - Teemu Vesala


The original eBook ended with a number 100. Why? Just because. It was meant to be a little poke in the ribs to say "ha, see, we put in 100, just because we wanted to". In other words, expect the unexpected.


Epilogue: Where I've been, and what I've learned


This has been a wild ride, to tell the truth. I started it a bit anxiously, then felt like I got into a bit of a groove, then I decided to go for it and put out two posts a day where I could. For some of the entries, it was easy to come up with examples. For others, it felt like pulling teeth to make examples that were coherent and understandable. Still, while I was pushing out two posts a day, I felt confident I could keep pushing out two posts a day. Then CAST (the Conference of the Association for Software Testing) came, and I had to travel, go and participate in the conference, and I decided to put down this project for a week to focus on other things. I figured it wouldn't be too hard to pick it back up again. Wow, was I wrong.


As I've said in other items I've written, the habit becomes the driver. When we create a habit and put it into practice, it becomes easier to keep the habit. When we put down the process, even if it's just for a week, it's so much harder to pick it back up again.


September and October also coincided with an increase in writing and preparation for the talks I will be delivering at Øredev in November. I found myself fighting to make time to finish this project. One stretch of working through the Excel example, and condensing it down so that I could put it into a blog post in a coherent way, took me almost two weeks to pull together.


One of the funny things I noticed as I was writing these examples: I would find myself talking to people in other contexts, and as I was talking with them, I had to stop and think "wait, is that something I wrote about?" If the answer was yes, I would go back and see if I agreed with what I wrote originally, or if I would want to modify what I had said. If I was discussing something that wasn't in the examples thus far, I would make notes and say "hey, that conversation would be great as a topic for number 78. Don't forget it!" 


Mostly, it feels really good to know that I made it from start to finish. Seeing re-tweets and favorites on Twitter, plus likes on Facebook and the comments to the blog posts themselves, shows me that a lot of you enjoyed following along with this. More to the point, this project has doubled my daily blog traffic. Now, of course, I feel a little concerned… will all those readers drop away now that this project is finished? Will they stick around to see what I have planned next? What do I have planned next?


I do have a new project, but it's going to take me longer to do it. Noah Sussman has posted on his blog what he feels would make a great "Table of Contents" for a potential book. The working title is "How to Become a More Technical Tester". I became intrigued, and said I would be happy to be a "case study" for that Table of Contents, and would he be OK with me working through the examples and reporting on them? He said that would be awesome, and thus, I feel it necessary to dive in and do that next.


So, did I just throw out a "bold boast" immediately after completing another one?! Didn't I learn anything from this experience? The answer is yes. What I learned most is that creativity strikes and skill grows when we actively work them. Without that up-front work, they stagnate and become harder to draw out. Therefore, I would rather "keep busy" and make more "bold boasts" so that I can keep that energy flowing. This will, however, be a more involved process. I am not going to make any promises as to how much I can update or how frequently. This may take a few months, it may take a year. It's hard to say on the surface. I do know that I want to give appropriate attention to it and do the content justice. Who knows, maybe Noah will be willing to consider me a collaborator for this book… but I'm getting ahead of myself again ;).


My overall goal for this project was to do more than I figured I ever would if I just read the list and said "hey, those are cool". Here's hoping that my example will encourage you to likewise reach inside and find something you can use. Your outcome may be remarkably similar to mine, or entirely different. If you do decide to take any of them on, please blog about them (and please, put a comment in my blog so I can see what you have written).


Now, however, it's time to close this project, at least for the time being. Time will tell if we've seen the last of me on this (hint: probably not ;) ).

Question the Veracity of 1-98, and Their Validity in Your Context: 99 Ways Workshop #99

The Software Testing Club recently put out an eBook called "99 Things You Can Do to Become a Better Tester". Some of them are really general and vague. Some of them are remarkably specific.


My goal for the next few weeks is to take the "99 Things" book and see if I can put my own personal spin on each of them, and make a personal workshop out of each of the suggestions. 


Suggestion #99: Question the veracity of 1-98, and their validity in your context - Kinofrost


Heh, I think it's somewhat appropriate that we close out this project (and land on #99) with a need to talk about context directly. Yes, I admit it, I consider myself an adherent and a practitioner who believes in and tries to follow the context-driven principles (they're below in the workshop, btw ;) ). Too often we talk about context-driven testing as though "it depends" solves all the problems. I'm going to do my best to not do that here. Instead, I want to give you some reasons why being aware of the context can better inform your testing than not being aware, or following a map to the letter.


Workshop #99: Take some time to apply the values and principles of Context-Driven testing, and call on them when it comes to determining whether any of these past 98 suggestions actually makes sense to use on what you are working on right now.


First, let's start with what are considered to be the guiding principles of context-driven testing (these are from context-driven-testing.com).


- The value of any practice depends on its context.

- There are good practices in context, but there are no best practices.

- People, working together, are the most important part of any project’s context.

- Projects unfold over time in ways that are often not predictable.

- The product is a solution. If the problem isn’t solved, the product doesn’t work.

- Good software testing is a challenging intellectual process.

- Only through judgment and skill, exercised cooperatively throughout the entire project, are we able to do the right things at the right times to effectively test our products.


The most likely comparison you will hear is something at the polar opposite ends of the spectrum, i.e. the differences between testing a medical device like a pacemaker or the control software for the space shuttle vs. a video game app for an iPhone. On the surface, this should feel obvious. The scope of a project like a medical device and the repercussions of failure are huge. The outcome is literally life or death for some people. When it comes to a video app or an inexpensive game, it hardly warrants comparison.


With that in mind, let's try something a little more direct in comparison. What level of testing should go into the actual control software for a pacemaker vs. a monitoring application for the pacemaker that resides on a computer? The pat answer doesn't work as well any longer, but there is still a question here that's not trivial. Are there differences in testing? The answer is yes. The pacemaker controller itself would still go through much greater levels of scrutiny than the monitoring software would. In the event of a system failure, the monitoring system can be rebooted or the program turned on or off, with no effect at all on the pacemaker itself. If the monitoring software did cause the pacemaker to malfunction, that would not only be seen as catastrophic, it would also be seen as intrusive (and inappropriate).

This opens up different vistas, and again, raises the question "how do we test each of the systems?". The first aspect is that the pacemaker is a very limited device. It has a very specific and essential function. There's less to potentially go wrong, but its core responsibility has to be tested. In this case, the product absolutely has to work, or has to work at an astonishingly high level of confidence for those who will be using it. For them, this is not a math theorem; it is literally life or death. The monitoring software is just that. It monitors the actions of the pacemaker, and while that's still important, it is of far secondary importance compared to the actual device.
  

This brings us back to our past 99 examples. The advice I've given may work fine for your project(s), but in some cases, it may not be wise to use the approaches I gave you. That is to be expected. I can't pretend I will know every variable you will need to deal with, and when you say "well, that may be fine for your project, but my manager expects us to do…", there you go; that's exactly why we tend to not spell things out in black and white when it comes to context-driven testing commentary. We need to look at our project first, our stakeholders next, and the needs of the project after that. If we are planning our testing strategy without first taking those three things into account, we are missing the whole point of context-driven testing.


Bottom Line:

In this last statement here, I'm going to borrow from my own website's "What it's all about" section. In it, I share a quote from Seth Godin that says "Please stop waiting for a map. We reward those who draw maps, not those who follow them." In this final post, I want to make sure that that is the takeaway this whole project gives. It would be so easy to just look at the 99 Things, assign the ideas to our work, and be done with it. I've strived to put my own world view and my own context into these posts, and to write my own map. I may have succeeded, I may not have, but if there's any one thing I want to ask of anyone who has followed this, it's to not follow any of these ideas too closely.


If these workshop ideas feel uncomfortable for what you are doing, don't get frustrated. Instead, focus on why they feel uncomfortable. What is different in your case? What could be modified? What approaches should be dropped altogether? It's entirely possible that there are better ways to do any and all of these suggestions than what I have spelled out here. I encourage you to find out for yourselves. I've drawn a map, but it may not be the best map for you. If it's not, please, sit down and draw your own map. The testing world is waiting to see where you will take it.

Test What Matters: 99 Ways Workshop #98


The Software Testing Club recently put out an eBook called "99 Things You Can Do to Become a Better Tester". Some of them are really general and vague. Some of them are remarkably specific.


My goal for the next few weeks is to take the "99 Things" book and see if I can put my own personal spin on each of them, and make a personal workshop out of each of the suggestions. 


Suggestion #98: Test what matters - Rosie Sherry

Ahhh, it all comes down to that, doesn't it? It's simple, elegant, really easy to understand, and yet, try as we might, it's so very hard to actually do (no, seriously, it is). See, what matters is tremendously subjective. Who are we talking about? Are we talking about our end users? Our management team? Our co-workers? Our shareholders? 


We'd love to believe that each and every one of those groups is aligned in purpose and intention, that they would all want the same things, and that what matters to one matters to all. Sadly, that's not true. Thus, to make sense of what matters, we first have to make a solid determination as to "who" matters.


Workshop #98: Get a feel for the five biggest customers that your organization wants to keep happy. Hint: they may not be end users of your product. Once you find them, get to the heart of the matter and discover what really matters to them (ask informally if you can't get direct answers from the ones who call the shots). Then structure your testing regimen to focus on what matters to those people (hint: they will not always be aligned).


A famous phrase that many of us have heard over the years is "Quality is value to some person" (thank you, Jerry Weinberg), often extended to "value to some person who matters." Therefore, what matters is what we can identify as important to the person or people who matter. Those people can shift, and they can have significantly different goals. Therefore, what should we do? Do we skip around from person to person and find out what matters most to them, and make sure that we deliver it to them? We could, but I would also hazard that it would make us look schizophrenic, and quite possibly untrustworthy. Quality is, indeed, "value to someone who matters," and the "who matters" part can be hard to pick out at times.


Therefore, rather than a fragmented and manic rush to figure out what is most valuable to any one person at any given time, I'd much prefer to go at it from another route, which is to provide information that will help those people that matter make the best decisions they can. I'm not there to "check off the list that makes my manager happy" or "work on the story that makes the director of development look good" or "deliver under cost or ahead of schedule so that we can maximize sales ahead of the upcoming holiday season". Those things are all valuable, and they are all, in their sphere, important. If I choose to chase any one of those, I will be doing a disservice to everyone else who matters. So what should we do?


It comes down to what I feel is the fundamental thing that testers do, and it's not find bugs, or prove that software is "fit for use". Instead, it's to provide information about the state of the product in ways that are meaningful, and to let those in other positions in the organization make the best decisions that they can based on what we have aggregated, analyzed and synthesized. In the end, the development team really doesn't care how many test cases I ran if I didn't find the issue that is most embarrassing to them. The CEO doesn't care that I was meticulous and covered multiple testing scenarios if, when they stand up and give the demo to customers, the program crashes. The customer doesn't care how many features were delivered if the one that they actually care about still doesn't work. 


We need to be more focused than that, and we need to contribute to more than just working in our predefined box and testing what we are told to test. If we are information providers, then we need to be bold and brave enough to provide information. Even when it isn't convenient. Even when it may embarrass some people. Even if it may mean we have to announce a delay. If we try to please all entities, we will end up pleasing none of them. If we are honest, and show integrity, we may still not make a whole lot of people happy, but we will do one thing for certain… we will be providing the key information for all parties to make the best decision possible. If we truly believe that is what matters, then that is what we need to deliver. That information, the kind that helps make an informed decision.

  
Bottom Line:

So much of what we do is laced with politics, cronyism, and what I often refer to as a "perverse reward system" that tends to honor short-term benefits over long-term health. If we focus too much on the short-term goals, we can win many battles, but ultimately lose the war. We can paint ourselves into a corner, and have no way to get out without causing a mess. Pick whatever metaphor you want, but realize that what will please one person may royally irritate someone else. Quality works the same way, and playing sides will ultimately win you few friends. Instead, pledge to make the story, the whole story, the most important thing that you can deliver. By doing so, you can make sure that you are delivering something of real value, and value that will last. Ultimately, that is what really matters, so go forth and do likewise :).

Friday, October 25, 2013

You Won’t Catch All the Bugs, and Not All the Bugs You Raise Will Get Fixed: 99 Ways Workshop #96 & 97

The Software Testing Club recently put out an eBook called "99 Things You Can Do to Become a Better Tester". Some of them are really general and vague. Some of them are remarkably specific.


My goal for the next few weeks is to take the "99 Things" book and see if I can put my own personal spin on each of them, and make a personal workshop out of each of the suggestions. 


Suggestion #96: Be prepared, you won’t catch all the bugs, but keep trying - Mauri Edo
Suggestion #97: Be prepared, all the bugs you raise won’t get fixed - Rosie Sherry


This is really two sides of the same coin, so it pays to focus on them together. In the world of testing, nothing rolls downhill faster than "blame". If everything goes great, the programmers are brilliant. If there are problems in the field, then the testers are marched in and demands are made to know why we didn't find "that bug". Sound familiar? I'm willing to bet it does, and if it doesn't sound familiar, then count yourself very lucky.


This comes down to the fact that, often, an unrealistic expectation is made of software testers. We are seen as the superhuman element that will magically make everything better because we will stop all problems from getting out. Show of hands, has that ever happened for anyone reading this? (crickets, crickets) … yeah, that's what I thought.


No, this isn't going to be about being a better shield, or making a better strategy. Yes, this is going to be about advocacy, but maybe not in the way that we've discussed previously. In short, it's time for a different discussion with your entire organization around what "quality" actually is and who is responsible for it.

Workshop #96 & #97: Focus on ways to get the organization to discuss where quality happens and where it doesn't. Try to encourage an escape from last-minute tester heroics, and instead focus on a culture where quality is an attribute endorsed and focused on from day one. Get used to the idea that the bug that makes its way out to the public is equally the fault of the programmer(s) who put it there and the tester(s) who didn't find it. Concentrate quality efforts on the areas that matter the most to the organization and the customers. Lobby to be the voice of that customer if it's not coming through loud and clear already.

Put simply, even if we were able to catch every single bug that could be found, there would not be enough time in the days we have to fix every single one of them (and I promise, the number of possible bugs is way higher than even a rough estimate could give). The fact of the matter is, bugs are subjective. Critical crash bugs are easy to make the case for. Hopefully, those are few and far between if the programmers are taking appropriate steps to write tests for their code, use build servers that take advantage of Continuous Integration, and practice proper versioning.


There are a lot of ways that a team can take steps to make for better and more stable code very early in the development process. Contrary to popular belief, this will not negate the need for testers, but it will help to make sure that testers focus on issues that are more interesting than install errors or items that should be caught in classic smoke tests.


Automation helps a lot with repetitive tasks, or with areas that require configuration and setup, but remember, automated tests are mostly checks to make sure a state has been achieved; they are less likely to help determine if something in the system is "right" or "wrong". Automated tasks don't make judgment calls. They look at quantifiable aspects and, based on those values, determine whether one outcome happened or another. That's it. Real human beings have to make decisions based on the outcomes, so don't think that a lot of automated "testing" will make you less necessary. It will just take care of the areas that a machine can sort through. Things that require greater cognitive ability will not be handled by computers. That's a blessing, and a curse.
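To make that distinction concrete, here's a minimal sketch of what a "check" actually is (the function and page title here are hypothetical, invented purely for illustration): it compares an observed value to an expected one and reports pass or fail, and deciding whether a mismatch actually matters stays a human judgment call.

    # A minimal, hypothetical automated "check": compare an observed value
    # to an expected one and report pass or fail. It cannot tell you whether
    # the page looks right, whether the wording is appropriate, or whether
    # the feature solves anyone's problem.
    def check_login_page_title(observed_title):
        expected_title = "Sign In"  # hypothetical expected state
        if observed_title == expected_title:
            return "PASS"
        return "FAIL: expected %r, got %r" % (expected_title, observed_title)

    print(check_login_page_title("Sign In"))  # PASS
    print(check_login_page_title("Log In"))   # FAIL; a human decides if this matters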


Many issues are going to be state specific; running automated tests may or may not cause those errors to surface, or at least, they may not do so in a way that will make sense. Randomizing tests and keeping them atomic can help with the ability to run tests in a random order, but that doesn't mean you will ever hit the state that appears when the 7,543rd configuration of that value is set on a system, or when the 324th concurrent connection is made, or when the access logs record over 1 million unique hits in a 12-hour period. The point here is, you will not find everything, and you will not think up every potential scenario. You just won't! To believe you can is foolish, and to believe anyone else can is wishful thinking on steroids.
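For what it's worth, the "randomize and keep atomic" half is the easy part. As a rough sketch (the test names are invented for illustration, and each test is assumed to set up and tear down its own state), shuffling a list of self-contained tests takes a couple of lines, and notice that nothing about it gets you anywhere near that 7,543rd configuration:

    import random

    # Three hypothetical atomic tests: each sets up and tears down its own
    # state, so the suite can run them in any order.
    def test_create_account():
        pass  # setup, exercise, verify, teardown would go here

    def test_update_profile():
        pass

    def test_delete_account():
        pass

    tests = [test_create_account, test_update_profile, test_delete_account]
    random.shuffle(tests)  # a different order on every run
    for test in tests:
        test()  # hidden order dependencies may surface... eventually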


Instead, let's have a different discussion.

- What are ways that we can identify testing processes that can be done as early as possible?
- Can we test the requirements?
- Can we create tests for the initial code that is developed (yes, I am a fan of TDD, ATDD and BDD processes)?
- Can we determine quickly if we have introduced an instability (CI servers like Jenkins do a pretty good job of this, I must say)?
- Can we create environments that will help us parallelize our tests so we know more quickly if we have created an instability (oh, cloud virtualization, you really can be amazing at times)?
- Can we create a battery of repetitive and data-driven checks that will help us see if we have an end-to-end problem (the answer is yes, but likely not on the cheap; it will take real effort, time, and coding chops to pull off, and it will need to be maintained; see the sketch after this list)?
- Can we follow along and put our eyes on areas, in interesting states, that we might not think to visit on our own (yes, we can create scripts that allow us to do exactly this; they are referred to as "taxis" or "slideshows", but again, they take time and effort to produce)?
- Can we set up sessions where we can create specific charters for exploration (answer is yes, absolutely we can)?
- Are there numerous "ilities" we can look at (such as usability, accessibility, connect-ability, secure-ability)?
- Can we consider load, performance, security, negative, environmental, and other aspects that frequently get the short end of things?
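To illustrate the data-driven item above, here is a small, hypothetical sketch of the pattern (the discount function and the data rows are invented for illustration): the check logic is written once, and the data that feeds it lives in its own table, so adding coverage means adding rows rather than code. A real battery would pull its rows from a file or database and drive a browser or an API rather than a plain function.

    # A minimal, hypothetical data-driven check: one routine, many rows.
    def apply_discount(price, percent):
        return round(price * (1 - percent / 100.0), 2)

    test_data = [
        # (price, percent, expected)
        (100.00, 10, 90.00),
        (19.99, 0, 19.99),
        (50.00, 100, 0.00),
    ]

    for price, percent, expected in test_data:
        observed = apply_discount(price, percent)
        status = "PASS" if observed == expected else "FAIL"
        print("%s: apply_discount(%s, %s) -> %s (expected %s)"
              % (status, price, percent, observed, expected))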


Even with all of that, and even with the most dedicated, mindful, enthusiastic, exploratory-minded testers you can find, we still won't ferret out everything. Having said that, if we actually do focus on all these things early on, and we actually do involve the entire development team, then I think we will be amazed at what we do find and how we deal with it. It will take a team firing on all cylinders, and it will also take focus and determination, a willingness to work through what will likely be frustrating setbacks, lots of discoveries, and a reality that, no matter how hard we try, we can't fix all issues and still remain viable in the market.


We have to pick and choose, we have to be cautious in what we take on and what we promise, and we have to realize that time and money slide in opposite directions. We can save time by spending money, and we can save money by spending time. In both circumstances, opportunity will not sit still, and we have to do all we can to somehow hit a moving target. We can advocate for what we feel is important, sure, but we can't "have it all". No one can. No one ever has. We have to make tradeoffs all the time, and sometimes, we have to know which areas are "good enough" or which battles we can punt on and fight another day.


Bottom Line:


No matter how hard we try, no matter how much we put into it, we will not find everything that needs to be found, and we will never fix everything that needs to be fixed. We will have plenty to keep us busy and focused even with those realities, so the best suggestion I can make is "make the issues we find count, and maximize the odds that they will be seen as important". Use the methods I suggested many posts back as they relate to RIMGEA, and try to see if many small issues might add up to one really big issue. Beyond that, like Mauri said at the beginning, just keep trying, and just keep getting better.