Tuesday, April 30, 2013

Becoming a "Certified" BBST Instructor

Disclaimer: this message is mainly being posted for the benefit of members of the Association for Software Testing. So why am I posting it on my blog, and not on AST's site? Well, because I want to do a double appeal, and this has to work on two levels.

First, I want to encourage those out there with a passion to teach about software testing to join AST.

Second, I want to encourage those people out there to become instructors in their own right and help us teach what I personally consider the "colossus of all software testing classes", the Black Box Software Testing courses, aka BBST.

I'm going to come right out and say it. BBST is huge. It's demanding. It's a massive time commitment. It has a high attrition rate among those who take it, mainly because the workload is tremendous. There's a lot to learn, and a lot to do. I also think it's one of the most powerful groups of software testing education classes currently available. Hearing the responses of participants who complete these courses solidifies my belief in that statement after every run.

Can anyone teach these classes? Truthfully, I'll have to say "no". Instructors have to be patient. They have to be willing to take a back seat and let participants learn from each other as well as from the instructors. They need to be coaches and mentors; in fact, I think that is the primary role they really perform. Participants tend to do best when they teach themselves. A BBST Instructor needs to know when to help, and when not to help. When to give a hand, and when to get out of the way. When to give someone encouragement, and sometimes, when to break the news to someone that they have been caught cheating and cannot continue in the courses. Yes, I've had to do that. No, I won't mention names.

So, if I haven't scared you off yet, you may just be wondering, "So, Michael, how can I create an exciting, thrilling and lucrative career as a BBST Instructor?" Well, first, if you think that third adjective applies, you're probably not a good candidate. All kidding aside, though it can be fun, and it can be extremely educational, it's a lot of work, and as of now, we do not pay our instructors. They are all volunteers.

"Um, yeah, OK, so why would I want to do this again ?"

You can be rewarded for your efforts in a variety of ways. First, even as an instructor, you learn something new every time you teach. Seriously, every single time I teach a class, I learn something new based on the insights of the participants and what they bring to the table. Second, it's really cool to be in a position to see people let go of antiquated, so-called "best practices" and embrace "good practices for the appropriate context". For some, just that is huge!

All right, so now that I've mentioned all of this, what does it take? Glad you asked.

If you want to become a full-fledged Lead Instructor (as defined by AST for teaching BBST classes):

1. Each Instructor has to have successfully completed the class they are interested in instructing (Foundations, Bug Advocacy, Test Design). If you've completed a course and feel you would like to be an Instructor for that course, let us know. We will do what we can to have you assist in an upcoming course. We have many spots for Assistant Instructors, and we often have classes with two to four Assistant Instructors participating.

2. Take the online Instructor's Course, so that we can show you the avenues we use and the methods that help instruct and coach participants. This class is currently only offered once a year, but if we were to get enough interested Instructors, we would be happy to offer it twice a year, so that more Instructors could get the training they need without having to wait. Also, while Assistant Instructors are strongly encouraged to take the Instructor's Course before teaching, it's not mandatory. If you want to assist but haven't taken the Instructor's Course, it's left up to each individual Lead Instructor to decide whether they want to take you on for that particular class and mentor you. Successfully completing the Instructor's Course clears that hurdle somewhat. If you want to be a Lead Instructor, the Instructor's Course is mandatory.

3. Assist in at least two runs of the course you would like to lead, and get good evaluations from your Lead and the other Assistant Instructors who follow and "shadow" the courses. If the reviews are good, and we feel you have the temperament to be a Lead Instructor, then we will set up the opportunity for you to lead a course (note: you can only lead courses in which you have already participated as an Assistant Instructor at least twice).

4. Each Lead Instructor gets a "shadow" lead the first time they lead a course. That shadow lead evaluates their performance and recommends whether they should lead future courses. If the reviews are favorable, then they get a chance to "fly solo" with a class. If they need a little more practice, we can offer to shadow them a second time. For most, two shadowed runs are sufficient.

5. After an Instructor has been the Lead for two classes, they can be "certified" for that course. This certification is a collection of reviews, both from fellow instructors and from participants. The certification is where AST states that we feel this person has the skills and the temperament to deliver a quality instruction experience. Also, the BBST certification allows that instructor to take the BBST materials and teach them anywhere they want to, with AST's approval. While that prospect may only appeal to a handful of people, the ability to teach and mentor participants through such a dense and multi-layered class series shows a solid commitment to being a good mentor and instructor, and should impress just about anyone who takes training seriously.

For those who have been keeping track, that means every "certified" Lead Instructor for any of the courses offered has, at bare minimum, been through the material five times (once as a participant, at least twice as an Assistant Instructor, and at least twice as a Lead Instructor).

So, if I haven't scared you off yet, there it is. In a nutshell, that's the BBST approach to developing and "certifying" Lead Instructors. My goal is to find a way to teach solid software testing skills to as many people as possible, without diluting or dumbing down the material. That's going to require a dedicated group of instructors, and those instructors are, quite frankly, those of you who care enough to want to see that level of instruction grow and thrive.

In other words, those of you actually reading this right now :).

Monday, April 29, 2013

TESTHEAD REDUX: Aikido and the Role of Certification

When I wrote the original post for this back in 2010, I think it was the first time I was willing to break out and say "I don't even know what I'm hoping to find with this". It was prompted by my looking at a variety of "certification" options out in the testing market at the time. Most of them I had just started to hear about, many of them were somewhat nebulous, all of them made me feel somewhat uneasy. At the time I said the following (note, I used my experiences with the martial art Aikido as an analog to my understanding of the certification landscape at the time):

In my mind, this is the big thing that is missing from most of the certification paths that I have seen to date. There is a lot of emphasis on passing a multiple choice test, but little emphasis on solving real world problems or proving that you are actually able to do the work listed on the exam, or that you genuinely possess the skills required to test effectively.  

The other issue that I have with this is that, just like in an actual real world confrontation, some of the best practitioners of Aikido may not be the best at articulating each and every step, but my goodness they are whirlwinds on the mat and on the street! This is because they are instinctive, and their training has been less on the intellectual explanation and more on the raw “doing”! 

The reason I mention these details is that, all these years later, I have still yet to find a true certification that actually leads to the goals I am after, but I have also found several examples of exactly what I want to see certification become. In short, I want to see a certification that really lives up to the principles of Aikido. I want to see testing as a martial art in its own right (with perhaps a de-emphasis on the "martial" aspect; perhaps a better phrase would be a "philosophical art").

Before I get too far into this, I will say up front that the three things I am going to suggest need to be taken with a very large grain of salt. Why? Because I have a vested interest in all of them, but not for the reasons you may be thinking. A disclaimer... I make no money from any of these endeavors. In fact, in some ways, I forego earning money in other ways so that I can champion them. If I wanted to make myself the equivalent of the impoverished warrior monk, or a Zatoichi, I may have found the perfect recipe in these three examples ;). Nevertheless, I do them because of the value that I believe they provide, and because of the anecdotal value that others come back and tell me they offer.


BBST - BBST is the series of Black Box Software Testing courses offered by the Association for Software Testing and others. Note, you can get very close to 100% of the benefits of BBST without ever taking a class. All of the materials (the lectures, the course notes, the readings, etc.) are available online. What's not readily available online are the course quizzes and exams, and the ability to be coached by other testers who help instruct the course. There is a cost associated with it ($125 for the Foundations course, $200 for the Bug Advocacy and Test Design courses), but the fees are used to pay for the hosting of the instances of the class, the servers, and administrative overhead. At this point in time, every Instructor for BBST is a volunteer, i.e. we don't get paid to do what we do.

Weekend Testing - while BBST is one of the best direct trainings out there, Weekend Testing is, IMO, one of the best organized skills workshops held on a regular basis for testers to sharpen their swords on a variety of topics. Weekend Testing works on a variety of levels. It has much to offer the beginner who wants to learn how to test. It has much to offer the intermediate tester who wants to mentor newer testers, and likewise learn more themselves. It has much to offer advanced testers who can work to develop their skills as leaders by facilitating sessions and designing interesting and unique content to talk about, learn from, and make for a positive influence in the broader testing community. Also, unique to Weekend Testing is the fact that every session is archived. If you would like to show someone just how much you contributed, it's there in black and white, for all the world to see.

Weekend Testing has several chapters in various stages of operation. Chapters have been formed in India, Australia/New Zealand, Europe and the Americas. Currently, the India and Americas chapters are the most active, but all it takes for more chapters to open up where they are needed is willing folks to facilitate, with a strong desire to help testers improve their craft (hint: I would have no problem seeing a South America chapter develop, and we have yet to see a chapter develop in Africa, so there's plenty of room to grow, as far as I can see :) ).

The Miagi-do School of Software Testing - this is the one that probably means the most to me, and yet it's the one that will, guaranteed, never make me rich. Well, none of them will, but unlike BBST and Weekend Testing, which could be used as a marketable option or product, Miagi-do cannot. Actually, I should say it will not, as long as the founders have anything to say about it. It's not a not-for-profit. It's a ZERO-profit. It's also a ZERO-income enterprise. No money ever changes hands, and likely, no money ever will. People have to seek out the school, have to show they are willing, have to face multiple testing challenges, and actually put in a lot of work that leads to the betterment of the software testing community at large.

Everyone's path is different, but my path was through signing up with AST and learning about the BBST classes, taking them as a participant, and then offering to teach them over the past three years. It included producing a podcast dedicated to software testing topics, and frequently researching and presenting my own findings in episodes where I was featured as a guest or a panelist. It involved getting into Weekend Testing as it was then offered in Europe, making enough of a commitment to be considered knowledgeable enough to facilitate, and then bringing Weekend Testing to the Americas, where I have fostered its growth and development (along with several others, to be fair) for the past two and a half years. It involved writing in many areas, including the book "How to Reduce the Cost of Software Testing", as well as multiple articles for other distribution channels (ST&QA, Techwell, Tea Time for Testers, The Testing Planet, plus guest blogs for numerous companies and outlets).

In between all of this, I have sat for and failed, and then later passed, several testing challenges, each one with the idea that I would demonstrate to my testing peers what I knew how to do, and what I didn't know. The Black Belt/Instructor level that I hold in Miagi-do may be laughed at by some. What does it mean, really? On paper, and in the eyes of HR departments, probably less than nothing. If, however, you and others out there feel that the talks I have given, the sessions I have facilitated, the courses I have taught, the articles I have written, and the podcasts I have recorded have contributed, in some small way, to the improvement and betterment of the software testing community, then my black belt speaks volumes. In fact, it's my hope that everything else I have done, and will continue to do, makes the mention of a black belt completely irrelevant.

In short, I am not my "certification". I am the ideas and the experiences that went into it. The fact that my certification is one that I have made for myself, surrounded by like-minded people that I respect and admire, means more to me than any certification that can be given to me by any "officially sanctioning body", and yes, I'll include my Bachelor's degree in that list of "inferior certifications".

While a "certification" may carry someone into a second round interview, I will, quite frankly, much prefer to see my fellow Miagi-do ka, BBST instructors and Weekend Testing facilitators on any project I would hope to lead and own. Why? Because I already know what they can do. I've seen it multiple times. In a dark alley situation, I already know they can fight, and I also know they won't run away :)!

Thursday, April 25, 2013

Final Day of #STPCON (Live Blog)

Today is bittersweet. It's the last day of STP-CON, and the last day of what has proved to be an intense, interesting, fascinating, and thoroughly enjoyable few days. I love the opportunity to get together with people from different industries, experiences, and world views, to learn about what works and doesn't work for them, and to get advice about what is and isn't working for me. Sleep tends to become a very limited resource, but that's because the time spent after talks and workshops conferring with others is what makes these events so valuable.

If I can offer any one piece of advice to anyone who attends a conference, it's this: make sure that your emphasis is on the "conferring", and understand that most of the real learning, breakthrough moments and epiphanies will not come in the Q&A periods in the closing minutes of a presentation. They will come in laughter and discussion over Red Sauced Pork Barbecoa at a wonderful Mexican restaurant, or in watching your conference mates look at you with both exasperation and understanding as they fall prey to another round of "The Pen Test".

Yep, this morning will be the closer, and more to the point, I will be the closing speaker. Well, OK, not exactly the closing speaker, but I'm in the last track session slot along with four other topics. Part of the dynamic today is going to be that the back of the room will be filled with suitcases and duffel bags, and people who may be asking me subtly with their eyes, "ummm, are you gonna' wrap up a little early by chance, because I've got a flight to catch?!" My answer is "I'll do my best, but I hope that I can offer something worthwhile enough to make you want to stick around at least until the end of the session." For obvious reasons, I will not be able to live blog my own session (maybe someone else will help me do that via Twitter and I'll pull in their comments), but I will post a synopsis after I'm done.

With that, I need to go and check out, as well as get set up for the three closing speakers I will be attending. Lynn McKee and Matt Heusser, y'all best get ready :).

Oh, a shout out to Mike Himelstein, a new friend from Atlanta. He's been drawing little sketches of the attendees and speakers, and he shared his "vision of me" while I've been here...

I think I have a new avatar :).

-----

And we're back. Breakfast is over and we are now getting underway with Lynn McKee. For those who don't know Lynn, I'm happy to count her as both a terrific colleague and a great friend. Lynn invited me up to facilitate the POST 2012 peer conference in Calgary, and additionally, she helped me complete a bucket list item by taking me snowboarding at Banff. Oh, and she's also an excellent Agile Coach and team mentor, which is part of why she's able to speak here and now. I'm excited that Lynn is getting a Keynote, though I must say I'm slightly sad that her "wonder twin" isn't here to see it (Nancy Kelln, we miss you!).


Change is easy, right? Sure it is. Oh, you mean effective change, one that is internalized by your team and organization? Yeah, sorry, that's hard. And frequently unsuccessful. Why? Often, it's because there are conflicting goals and visions. Often it's the blind leading the blind: "We don't know where we're going, but we're making great progress!"

Some people have an appetite for change. People who attend conferences, talk at them, stay up late talking about testing and gadgets, these are all likely early adopters or at least have an understanding or appreciation for change. Others have a different appetite or desire for change, ranging from totally willing to completely unwilling.

For change to happen, we have to create a sense of urgency. For those of us who are testers, what are we willing to do to create that sense of urgency? Ultimately, we need to show the business value to the organization as to why testing not only matters, but is vital. We also need to show what we can bring to the change and what our value truly is.

Additionally, there needs to be a coalition of the willing, and perhaps even the foolhardy. There needs to be expertise in this group so that the change can happen. There also needs to be credibility, and someone willing to run up San Juan Hill or be one of the Light Brigade (hopefully with results closer to the former rather than the latter).

Testers can be transformative by daring to place themselves in the crosshairs. We may need to prove we are willing and able to do what it takes to gain trust, or if nothing else, scare everyone else so badly that they treat us like Oda Nobunaga's soldiers, who walked through the forest with their matchlocks lit (Pete, that one's for you ;) ).

Key influencers are important. They don't necessarily have to be the visible ones, but if you can get those in the organization to believe in your view, and if they believe that what you are striving to offer will bring real value to the organization, that could be the catalyst to make it all happen.

Making testing great is a wonderful goal, it's a terrific vision, but what does it ultimately offer to the organization? If we cannot answer that question, then we are not likely to make headway, even if they can agree that "better testing" is a wonderful goal. Better testing for what? If it's just for the sake of kudos, or for team cohesion, neat goal, but maybe not compelling enough. We're a cost center. Testers don't make money for an organization. Sorry, but unless you are selling test services as your product, no testers at a company actually make cash for the company. What we do provide, however, is a hedge. We can safeguard revenue earned, and we can prevent revenue erosion. Make no mistake, that's huge! What is the value of a thoroughly tested product? It's hard to quantify, but there's no question what the value of a 1 or a 2 rating in an app store is. THAT is something that the business can understand!

So what are the obstacles that can get in the way of change? People can get in the way. Sometimes WE are those people. Processes can get in the way. Sometimes they are well intentioned but pointless. Sometimes other people's perceptions of us get in the way. Sometimes the technology stack we use can be an impediment (Rails is a great framework, but if your entire infrastructure and product is developed for Win32, that may be a real problem to change).

Buzzwords abound, and often the buzzwords start flying with little understanding as to what they mean. Avoid this. If you use a buzzword, make sure that you are clear on what it is, what it means, and what it's really going to provide. More to the point, what does the buzzword add to the bottom line? Also, if you are going to use mnemonics, make sure that you have spent time to show what they are, what they mean and how they are used. Heuristics are valuable, but if we cannot communicate what they help us do, no one will care or invest the time to make them work. Slogans, jargon, call it what you may, make sure that you are clear as to what they are, what they mean, and what they do.

One of the beautiful tools that Lynn mentions, and one I believe strongly in as well, is the "quick win". Erasing Technical Debt is a lot like erasing Financial Debt. It's a big challenge, and it's a long hard slog. To make things happen, and to build heart and morale, there need to be early wins and quick wins. Pick off a small problem and solve it before taking on the Colossus. Those of us who play video games understand: we battle the small fry first to level up so we can take on the level bosses later. Short term wins give us strength, flex our muscles, and give us confidence to take on bigger problems.

Trim the fat where you can. Don't focus on processes that hinder you; work on the activities that get you results. Learn where "good enough" really is. Snazzy looking docs are nice, but if you are spending all your time making snazzy looking docs, that is time you could be spending doing real and valuable testing. Learn what really matters for reporting, and provide just that. I dare you! See what would happen if you gave a trim, slim, solid summary of what you have done and what you have found. See what happens. Will you get yelled at because you didn't attach the cover page to your TPS report? Maybe the first time, but if you do it and show that your testing is happening and you are finding really great insights for the team, I'll bet you that the process will change (and yes, I am willing to take that bet, at least for most organizations).

There is risk in every project, and there is risk in change. How much and to what level varies between organizations, but nothing is fool-proof. We all need to focus on and show that we understand the risks. We have to give the information that will tell people that we are looking good, or that we are in significant trouble, or something in between. It's possible we may stop the train. It's possible the train will keep going. If we have made an impact and allowed our executives to sleep well at night, then good going. It certainly beats the alternative.

Change and transformations are not easy. Not if it's going to stick. Not if it's going to really change hearts and minds. Not if it's going to move the bottom line. Not if the group doesn't have a long view. Some people will not get on board. Some people might actually leave over the choices made to change. The phrase "you can change your organization, or you can change your organization" really rings true. Sometimes, the change that is needed is that YOU may need to go elsewhere. Are you brave enough to do that? Do you have the will to do that? Are you willing to "fall on your own sword"? Some people are, but many are not. There is a process in forging steel for swords called "the refiner's fire". To purify steel, you have to heat it, beat it, and then plunge it into water to cool and harden it. The steel goes through a lot, but the end result is a hard and strong metal, ready to cut through anything.

Lynn, thanks so much, you rocked this :).

-----


Now we are talking about "Where Do Bugs Come From", and Matt Heusser is going to be our ringleader. He's promised to make this different from all the other presentations we've seen this week. Instead of a lecture, we're going to have a discussion that everyone can get into.

Often, when we go to conferences, we come back with lots of ideas and enthusiasm, but when we present our ideas, we often get blank stares, crossed arms, and reasons why we can't do that. That happens, so what can we do so that we don't get into that position?

The first thing we need to do is stop asking permission. Just do what you plan to do. Make an experiment out of it. Decide to implement whatever key item you want to do, and figure out how you can put it into play on the first day you get back to work. Don't ask permission. Just go for it. That's at least my plan ;).

It's easy to say we try to find bugs. Every program has bugs. Even "Hello World" has bugs, if you dig deep enough. So yes, finding bugs is important, but beyond the trivial, where do bugs really come from? In truth, they all come from us, i.e. people. They come from programmers, but that's really only a small part of the story. Hardware can actually cause problems, voltages can flip bits, actual insects can fly into relays (Grace Hopper's legendary "bug" was exactly that, a moth that got caught in the circuitry and short-circuited something).

Bugs are not just glitches in code. They can be glitches in requirements, glitches in behavior, glitches in emotions, and glitches in markets. Seems a bit over-reaching? Maybe, but really, what is a bug? It's something that doesn't work the way we want it to, or the way someone else wants it to. Can a product work exactly the way it's "intended" and still have a bug? Absolutely! If the CEO decides that the paragraphs in a legal disclaimer need to be reversed, even though the disclaimer is written as intended, and they decide it needs to be changed, now, then yes: what was working as written can become a P1 bug by "royal decree". Actually, that's not the best characterization. The real reason it was a bug was because there was a hidden stakeholder who wasn't considered.


There are differences in the way that desktop, web and mobile display and process events. We have issues with usability and also with intuitiveness. There are also conditions that bring to light issues that you just won't find unless those conditions are met. How does your app work with a nearly dead battery on a mobile device? What happens when you plug in the power cord to start recharging? How about while you are walking around between cell towers? Different environments can bring to light many interesting anomalies. If we are aware of the possibilities, we can consider them, and perchance find them.

Cultural assumptions often come into play. When localization becomes part of a product, there's a real "lost in translation" issue that can come up. Not lost in translation of requirements, but genuinely an incorrect translation. One of the most interesting things I've seen was when one of my blog posts was summarized on a site in Japan. When I translated the site from Japanese to English, were I not the one who had written the original blog post, I would not have been able to make much sense of what the original article was actually saying. The reason? Not that the Japanese article was wrong, but that the literal translation genuinely didn't make sense to me as an English speaker. The grammar still followed Japanese rules of speech, which frankly just don't make sense in a direct word-for-word translation. Real localization efforts go deeper than that, of course, but it helps emphasize just how much of an issue this can become if we are not really doing our due diligence.

In addition to understanding where bugs come from, it's also important that we understand the risks those bugs represent, and we then have to decide if we genuinely care about them. Not all bugs are created equal, and what bothers one group of people may not bother another group at all. A bug may be so esoteric that the odds of it ever being expressed are 0.0001%. Do I care with my Facebook page? Of course not. Would I care with the computer that controls the landing gear on the airplane that's taking me home tonight? Absolutely!!!

OK, so we have risks. What can we do to mitigate those risks? There are lots of things. We can use prototypes to test ideas. We can manage expectations. We can iterate and examine stories in smaller increments. We can pair testers with devs. We can start early and test requirements. This leads into the "three amigos meeting" model (one that Socialtext uses actively with our kick-offs, I might add; Google "Three Amigos Meeting" and you'll find lots of stuff to look at :) ). The main takeaway for that is "bring the stakeholders together and make sure everyone agrees on the work to be done".


So some takeaways...

- Pick the latest critical bugs in production.
- Map them to techniques to mitigate risk.
- Discover what you are not doing enough of.

Oh, and just go and do this. Don't ask permission. You don't have to. Just delight them that you are doing it ;).

-----

And now it falls to me. The closer, the last man standing. I'm ready now to address the Lone Wolf, the Armies of One, the Lone Ranger... and hey, if you work with a team, this may still prove to be relevant, because all of us Lone Wolf it some of the time. What craziness will come out of my mouth over the next hour and fifteen minutes? I guess we will just have to see...

[Editor's note: Michael is talking right now, but the written words coming your way from here are courtesy of Chris Kenst].

Michael's talking about his background, about how he made the switch to an agile environment in January of 2011, and how not so long ago he moved away from being a Lone Wolf and now works with 5 other testers (for some reason he seems quite excited)!

He's been through the traditional waterfall approach, as a lone tester, and now, as part of a "wolf pack", he's working his way through an agile process. It's all about learning, exploring in small chunks and iterating. This involves Test Driven Development (TDD) to help design the code. The class is asked: does anyone think TDD is testing? No one raised their hands. It's not testing. It's a design tool. In fact, testing (not TDD) starts with testing the requirements, often during the kickoff meeting. During these three amigos meetings, the testers ask questions about the requirements in order to understand how the developer interpreted them.
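To make the "design tool" idea concrete, here's a minimal sketch of my own (not something shown in the session), assuming JUnit 4; the ShoppingCart class is a hypothetical example. The test is written first, and it won't even compile until ShoppingCart exists, which is exactly how the test drives the design:

```java
import java.util.ArrayList;
import java.util.List;

import org.junit.Test;
import static org.junit.Assert.*;

// Written first, this test decides what the class is called and what its
// methods take and return, before any implementation exists. That is the
// "design tool" aspect of TDD.
public class ShoppingCartTest {

    @Test
    public void totalSumsItemPrices() {
        ShoppingCart cart = new ShoppingCart();
        cart.add("book", 12.50);
        cart.add("pen", 2.25);
        assertEquals(14.75, cart.total(), 0.001);
    }
}

// The minimal "green" step: just enough code to make the test above pass.
class ShoppingCart {
    private final List<Double> prices = new ArrayList<Double>();

    void add(String name, double price) {
        prices.add(price);
    }

    double total() {
        double sum = 0;
        for (double p : prices) {
            sum += p;
        }
        return sum;
    }
}
```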


In agile teams, the whole team is responsible for quality. TDD during design helps identify the right design, acceptance tests are built, and automated testing is done throughout the entire cycle. This can be both a good and a bad thing for the Lone Tester, because it can dramatically help you cover the system, but it also requires a great amount of time. There's always something to do.

There's always a need for testers on an Agile team. Testing is always happening at all levels, which is a wonderful change (especially when done correctly), but there is no "quality police" - especially not the testers. This has changed the role of the testers; it requires a variety of skills, like domain knowledge and the technical competency to interact with the development team. According to Bret Pettichord, Agile Testing is the "headlights of the project - where are you now? Where are you headed?" It can "[p]rovide information to the team - allowing the team to make informed decisions." With the information we provide, managers can make the decisions they need to about the product / project.

The testing we do on projects is a hedge on the risk of the product. Sure, testing is a cost center, but it's a hedge against losing customers because the company doesn't know anything about the products they are shipping. In order to help with this hedge, Lone Testers on an Agile development team need to be agile themselves. There's plenty of room for "testing", but we need to broaden our toolkit - we (testers) need to adapt to different expectations. A good example of this is when you move from one software shop to another.

Lone Testers should automate where they can. If you don't have the skills, that's OK. You can start small while you learn. You don't need to create huge amounts of automation, just something along the lines of "Sunshine tests" (or smoke tests) - a small set of tests that can help you look at the broad picture of the software. This is the perfect thing for a Lone Wolf to attack first. Michael's a fan of the "when in Rome" strategy, which means you look at whatever languages and tools your development team uses, and you use the same. If they are using JUnit, you can use JUnit for your tests. Then you can share the results or problems with them, and they'll be more likely to help you because they see the common connection.
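For the curious, here's a rough sketch of what a "Sunshine test" might look like, assuming JUnit 4 and a hypothetical web app running on localhost (the URLs are stand-ins for whatever your own app exposes):

```java
import java.net.HttpURLConnection;
import java.net.URL;

import org.junit.Test;
import static org.junit.Assert.*;

// A minimal "Sunshine test" sketch: a handful of broad checks that tell you
// the application is basically alive before you invest in deeper testing.
// The URLs below are hypothetical stand-ins for your own application.
public class SunshineTest {

    @Test
    public void homePageResponds() throws Exception {
        HttpURLConnection conn = (HttpURLConnection)
                new URL("http://localhost:8080/").openConnection();
        conn.setRequestMethod("GET");
        // A 200 here doesn't prove the app works; it proves it's worth testing further.
        assertEquals(200, conn.getResponseCode());
    }

    @Test
    public void loginPageResponds() throws Exception {
        HttpURLConnection conn = (HttpURLConnection)
                new URL("http://localhost:8080/login").openConnection();
        conn.setRequestMethod("GET");
        assertEquals(200, conn.getResponseCode());
    }
}
```

Nothing fancy: if checks like these fail, you stop and ask questions before doing any deeper testing.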

The disadvantage to automation is that it's passive "checking" as opposed to active testing. When you spend time building automation, it means you aren't spending time exploring and/or testing. Tests can quickly become stale, and you have to consistently maintain those tests.

[Michael: Clarifying. I meant that having the machine just running the automated tests and accepting the passes at face value is passive checking. The process of developing the automation goes through lots of iterations, with its own debugging and learning, so to be clear, the creation process of automation involves a lot of direct, active testing.]

Testers are more than people looking through tests, trying to break things. Testers are like anthropologists:
  • Observe the development team
  • Look for feedback from actual users
  • Work with content developers
  • Discover the underlying and unspoken culture
Being a lone tester requires that you are a good communicator. You've got to be able to build bridges in the company, and talk with developers and product owners, and this includes all the different languages that each speaks. It's about playing well with others (think of pairing). Pairing can be tester/developer, tester/support person, tester/customer, tester/designer. Lone testers should participate in planning and standups with the development team, and they should become a domain expert about the customer needs.

Lone testers need to develop their craft. This is hard to do when you are the lone person; you'll have to reach outside your organization. Mentors will come from other sources; sure, they can be a manager or developer if they've got the time to work with you. Otherwise, you can reach out to Lone Testers in other organizations that you can learn from. Michael is reciting some of the values of the Context-Driven school of testing: good software testing is a challenging intellectual process. It requires judgement and skill exercised throughout the project.

Lone Wolves, you are not alone. (For some reason I hear the Michael Jackson song in my head now. Weird.) You have many allies; you just aren't aware of them. Take advantage of the meetups in your area. Beer, pizza and some similar challenges are the basis for many meetups. In-company people like developers, support people and designers can also provide feedback and support. Remember, everyone on the Agile team does testing - it's not all on you.


... and with that, I'm back. My thanks to Chris Kenst for being "virtual me" for a bit. I'm done. Deep breath... and now the room is being taken apart. This is the official end of the "formal conference", but as we all know, the conference may be over, but the conferring can still happen... and that's where I'm going now. I have a lot of new friends to talk to and learn from.

See y'all later :).

Wednesday, April 24, 2013

Day 2 at STP-CON (Live Blog)

Good morning everyone, STP-CON is once again underway. Rich is onstage now and getting today's program in order. There's a full and action-packed day in store, plus something interesting later today with Matt Heusser called Werewolf. It's limited to twenty people, and I'm going to help support the event, not specifically participate. For those at the conference who want to play Werewolf, come on out and participate at 5:30 p.m.

I want to give a shout out to the folks at the Sheraton, who have been fantastic dealing with logistics, food, drink breaks, etc. Seriously, they have been fantastic. Also, I want to thank everyone who has hung around for the amazing conversations that go on each night afterwards. The plus side: I learn so many interesting things from various perspectives. The down side is that I am going to bed way too late this week ;).

We're about to get ready for the second keynote, so I'll be back shortly.

Ever had a conversation with someone who says "I don't care where we go to eat", but when you actually make a suggestion, they always say "no"? If that annoys you, then the keynote speaker might actually strike a chord with you. In "Get What You Want With What You’ve Got", humorist Christine Cashen is in the middle of describing those people who always complain about the world around them, and how nothing works for them. How will they deal with situations that require doing more with less?

Christine is describing the personalities of people based on single words: the who people, the what people, the why people, the how people. If you want to get what you want, you have to realize that each of these people has their own language and their own way of dealing with things. If you want to get what you want from these people, you have to know what they need to hear and how that works for them.


One of the things I will already say about Christine is that she directly engages with individuals in the audience and draws them in. I had to laugh when she did the partner exercise with the fist; my immediate reaction was "oh, this is so Wood Badge!" However, it was fun to see the various reactions of the people in the audience.

One of the great tools that I use often, and have found to be greatly helpful, comes from a phrase that James Bach said in an interview a couple of years back: "I have strong convictions, but they are lightly held." What does he mean by that? It means that he genuinely believes or has come to certain conclusions, and he will battle and fight for them, but if new information comes to light, he can modify his understanding and see things differently. That's an extremely valuable tool.


With humor, a bit of silliness, and a lot of heart, this was honestly a way better talk than I was expecting. By the way, for those who want a taste of what Christine is like, check this out:


-----

I have wanted to participate in Henrik Andersson's talk "Now, what's Your Plan?" several times, but I have either been speaking or otherwise engaged each time he's given it. When I saw he was doing it as a two hour block, I knew I had to get in on it. Thus, much of today will be focused on Henrik's presentation (and thanks to Leonidas Hepas for helping facilitate the session). I think this will be fun :).

First thing we all did was write down a working definition of "context". Simple, right?

Hmmm... maybe not ;). Context is deceptively easy to define, but it's not as easy to come to an agreement on what it actually means. Henrik, of course, was not content to just have people consider a definition; we needed to internalize and understand it. When he pulled out the spider robots, I smiled and laughed... and then told my group that I would need to recuse myself from the exercise, since this exercise is the content of the "What is Context?" module that is being used in the SummerQAmp curriculum. Still, it's cool to see how the groups are addressing the various challenges.

Without spoiling the exercise (some of you may want to do it later if Henrik offers this again, and I recommend you go through it if you get the chance), it's interesting to see how many different scenarios and contexts can be created for what is essentially the same device.

As each team has gone through each round, changes in the requirements and the mission are introduced. Each change requires a rethinking and a re-evaluation of what is needed, and what is appropriate. This is where "context" begins to be internalized, and with it the ability to pivot and make changes to our testing approach based on new information. It's stressful, it's maddening, and it really shows that not only is context a consideration across different projects, it is also appropriate to consider that there can be different contexts within the project you are actually working on. The ability to change one's mind, ideas and goals mid-stream is a valuable skill to have.

What was interesting was to come back and see, based on this experience, whether or not the teams' ideas of context had changed. We can look at context as to the way we test. We can look at context as to the use of the product. We can look at context based on the people who will use it. Several of the teams came back to their initial definitions and decided to modify them. I could be a smart aleck right now and say that this is the moment everyone comes out and says "It depends" ;).

So... what did our instructors/facilitators use to define context? See for yourself:

The interesting thing is that all of the definitions differ, but none of them contradict each other. Here is why "best practices" are not helpful: the context changes, not just between projects but often within a project. Hearing the different teams discuss their own experiences as to how they match up with the exercise, many teams see that context is a genuine struggle, even in groups that profess to understand context. There is a tremendous variety in markets, needs, timelines and issues. Learning how to understand those elements, and how to address them as they come up, will go a long way in helping to drive what you test, how you test, when you test, and what prioritization you place on what you test.

Again, a deceptively simple, yet complex issue, and a seriously fun session. Thanks to Henrik and Leo for their time and energy to make it possible.

------

Lunch was good, and we are now into our afternoon Keynote. Matt Johnston from uTest is talking now about "the New Lifecycle for Successful Mobile Apps". We talk a lot about tools, processes and other details about work and what we do. Matt started the talk by discussing companies vs. users. Back in the day, companies provided product to users. Today, because of the open and wide availability of applications in the app store, users now drive the conversation more than ever. A key thing to realize is that testing is a means to an end. It's there to "provide information to our stakeholders so that they can make good decisions" (drink!).

Mobile is just getting started. We are seeing a transition away from desktops and laptops to mobile (all sources: tablets, phones, etc.). Mobile is poised to eclipse the number of desktop and laptop machines in the next three to five years. Mobile apps are also judged much more harshly than their desktop or web equivalents were judged at the same point in the product lifecycle. The court of public opinion is what really matters. App store ratings and social media will make or break an app, and it will do so in record time today.

Much of the testing approach we have used over the years has come from an inside-out perspective, starting with how the product is built. Mobile is requiring that our testing priorities invert, and that we focus on an outside-in approach: what the user sees and feels trumps what the product actually does, fair or not.

The tools available to mobile developers and mobile testers are expanding, and the former paucity of tools is being addressed. More and more opportunities are available to check and automate mobile apps. Analytics is growing to show us what the users of mobile devices are actually doing, and to see how and where they are voting with their feet (or their finger swipes, in this case ;) ).

A case study presented was for USA Today, a company that supports a printed paper, a website and 14 native mobile apps. While it's a very interesting model and a great benefit to its users, it's a serious challenge to test. They can honestly say that they have more uniques and more pageviews on mobile than on the web. That means that their mobile testing strategy really matters, and they have to test not just domestically, but worldwide. The ability to adapt their initiative and efforts is critical. Even with this, they are a company that has regularly earned a 4.5-star app store rating for all of their apps.

If your head is spinning from some of that, you are not alone. Mobile isn't just a nice to have for many companies, it's now an essential component to their primary revenue streams.


-----


One of the unfortunate things that can happen with conferences is when a presenter has to drop out at the last minute. It happened to me for PNSQC 2011 because of my broken leg, and it happened to one of the presenters scheduled today. In his place, Mark Tomlinson stepped in to discuss Performance Measurements and Metrics. The first thing that he demonstrated was the fact that we can measure a lot of stuff, and we can chew through a lot of data, but what that data actually represents, and where the values fit in with other values, is the real art form and the place where we really want to focus our efforts.

Part of the challenge we face when we measure performance is "what do we actually think we are measuring?" When a CPU is "pegged", i.e. showing 100% utilization, can we say for sure what that represents? In previous decades, we were more sure about what that 100% meant. Today, we're not so sure. Part of the challenge is to get clear on the question "What is a processor?" We don't really deal with a single CPU any longer. We have multiple cores, and each core can create child virtualization instantiations. Where does one CPU reality end and where does another begin? See, not so easy, but not impossible to get a handle on.
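As a tiny aside of my own (not from Mark's talk): even asking the JVM how many processors it has illustrates the fuzziness, since the answer counts logical processors, not physical chips:

```java
// The JVM reports logical processors -- hyperthreads, virtual cores, or
// hypervisor-granted slices -- not physical chips, which is part of why
// "100% CPU" is harder to interpret than it used to be.
public class ProcessorCount {
    public static void main(String[] args) {
        int logical = Runtime.getRuntime().availableProcessors();
        System.out.println("Logical processors visible to the JVM: " + logical);
    }
}
```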

Disk space is another beloved source of performance metric data. Parking the data that you need in the place you need it, in the optimal alignment, is a big deal for certain apps. The speed of access, and the feel of the system's response when presenting data, can be heavily influenced by how the bits are placed in the parking lot. Breaking up the data to find a spot can be tremendously expensive (this is why defragmenting drives regularly can provide such a tremendous performance boost). Different types of servers have different ways of handling I/O (Apps, DB, Caching, etc.).

RAM (Memory) is another much treasured and frequently coveted performance metric. Sometimes it gets little thought, but it can really mess up your performance if you run out of it. Like disk, if you reach 100% on RAM, that's it (well, there's the page file, but really, you don't want to consider that as being any real benefit. This is called a swapping condition, and yeah, it sucks).

The area where I remember doing the most significant metric gathering would be the network sphere. Networking is probably the most variable performance aspect, because now we're not just dealing with items inside of one machine. What another machine on the network does can greatly affect what my own machine's network performance is. Being able to monitor and keep track of what is happening on the network, including re-transmission, loss, throttling, etc., can be very important.

Some new metrics we are getting to be more interested in are:


  • Battery Power (for mobile)
  • Watts/hr (efficiency of power consumption in a data center, i.e. "green power")
  • Cooling in a data center
  • Cloud metrics (spun-up compute unit costs per hour)
  • Cloud storage bytes (Dropbox, Cloud Drive, etc.)
Other measures that are seen as interesting to a performance evaluation of systems are:

  • Time (end user response time, service response time, transaction response time)
  • Usage/Load (number of connections, number of active threads, number of users, etc.)
  • Multi-threading (number of threads, maximum threads, thread state, roles, time to get threads)
  • Queuing (logic, number of requests, processing time)
  • Asynchronous Transfer (disparate start/end, total events, latency)
At some point with all of this, you end up getting graphical representations of what you are seeing, and the most bandied-about graph is the "knee in the curve". Everyone wants to know where the knee in the curve happens. Regardless of the value, the knee in the curve is the one that people care about (second most important is where things go completely haywire and we max out, but the knee is the real interesting value... by some definition of interesting ;) ).

Correlative graphing is also used to help us see what is going on with one or more measurements. A knee in the curve may be interesting, but wouldn't it be more interesting to see what other values might be contributing to it?

This fits into the first talk that Mark gave yesterday, and here's where the value of that first talk becomes very apparent. Much of the data we collect, if we just look at the values by themselves, doesn't really tell us much. Combining values and measuring them together gives us a clearer story of what is happening. Numbers are cool, but again, testers need to provide information that can drive decisions (drink!). Putting our graphs together in a meaningful way will greatly help with that process.
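To illustrate the idea (my own sketch, not something Mark presented), here's one way to check whether two metric series move together before bothering to overlay their graphs, using a simple Pearson correlation; the sample numbers are hypothetical:

```java
// A small sketch of checking whether two metric series sampled at the same
// intervals -- say, CPU utilization and response time -- move together,
// using the Pearson correlation coefficient.
public class MetricCorrelation {

    static double pearson(double[] x, double[] y) {
        int n = x.length;
        double sumX = 0, sumY = 0, sumXY = 0, sumX2 = 0, sumY2 = 0;
        for (int i = 0; i < n; i++) {
            sumX += x[i];
            sumY += y[i];
            sumXY += x[i] * y[i];
            sumX2 += x[i] * x[i];
            sumY2 += y[i] * y[i];
        }
        double numerator = n * sumXY - sumX * sumY;
        double denominator = Math.sqrt((n * sumX2 - sumX * sumX) * (n * sumY2 - sumY * sumY));
        return numerator / denominator;
    }

    public static void main(String[] args) {
        double[] cpuUtilization = {20, 35, 50, 70, 85, 95};       // percent, hypothetical samples
        double[] responseTimeMs = {110, 120, 140, 220, 480, 900}; // ms, hypothetical samples
        // A coefficient near 1.0 suggests the two graphs are worth overlaying.
        System.out.printf("correlation = %.3f%n", pearson(cpuUtilization, responseTimeMs));
    }
}
```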

-----

What does it mean to push, or "jam", a story upstream? Jerry Welch is introducing an idea that, frankly, I never knew had a word before. His talk is dedicated to "anadromous testing", or more colloquially, "upstream testing". How can we migrate testing upstream? We talk about the idea that "we should get into testing earlier". Nice to say, but how do you actually do that?!


Let's think about the SDLC. We're all familiar with that, right? Sure. Have you heard of the STLC? The Software Testing Life Cycle? The idea is that just as there is a software development lifecycle, there is also a test lifecycle that works similarly to, and in many ways synchronously with, the SDLC.

One of the key ways that a test team can move upstream is to make sure that the team understands what they are responsible for and what they deliver. Place an emphasis on training, and plan to have your development team interact with your test team, and do so in a fun way (make play dates with developers and testers; sounds weird, but I like the spirit behind the idea a lot :) ).

Testers have to make the commitment to transition to being an extension of the development process. Testers need to learn more about what is necessary to get a product covered. If a company hires full stack developers, then likewise, full stack testers might be a valuable goal. While that doesn't mean that the testers need to become full stack developers themselves, they need to understand all of the moving parts that actually come into play. Without that knowledge, testing is much less effective. It's not going to happen overnight, but getting that skill level up will help the testers get farther upstream with the testing process.

Along with learning about what to test, make a commitment to focus on estimating what you can really get done in a given sprint or cycle. Stop saying that you cannot get your testing done in a given sprint. Instead, get a real handle on what you can get done in a given test sprint. 

Every organization's STLC is going to be a bit different. It will be based on the talent the team has and on the talent that they can develop. Just like a salmon swimming upstream, your team has to develop strength. More to the point, they need to be able to show what they know. Effective Bragging is a point that is emphasized; if you have a wiki, use it to show what you have learned, what milestones you have met, etc. Another aspect is Battle Elegance, which addresses such areas as people vs. projects, customers or team members, and developing goals to keep the testing team focused and moving forward (or swimming upstream).

I'm not sure I'm totally on board with this idea, but I admire its goals, and I think it's one of the few articulated ways I've seen to actually get people thinking about this process. We all want to move upstream, or more to the point, we wish we were involved earlier. The metaphor of "swimming upstream" works for me. It's muscular, demanding, and exhausting, but you will get stronger if you keep swimming. Of course, I'm not so fond of where the metaphor ends. Think about what happens to salmon when they finally get to the spawning grounds. They reproduce, then they become bird food. I like the reproduce idea. The bird food idea, not so much ;). I guess our real challenge is finding out how we can sync up so that we don't die at the end of the journey.

-----


The last "official" activity for Wednesday (and I use that term loosely ;) ) is a group of testers getting together to play a game called "Werewolf". Think of this as "networking with a twist". The participants are all seated around a large perimeter of tables, and the goal is to determine who are villagers and who is the werewolf. The point of this game is to observe others around the table and, based on conversations, clues and details, see how quickly the werewolf can be identified, without making false identifications. This has been fun in the sense that everyone is both laughing and trying to see if they can get it right the first time without a false identification. After the round of sessions today, a little levity is going a long way.

Tuesday, April 23, 2013

An STP-Con Bonus: Speed Geeking

This would otherwise be buried in the mix of everything going on today, so rather than having everyone backtrack for it, I'm posting it on its own.

The lunch time session is being made a little more interesting by the fact that several tables have what is called a "Speed Geeking" session. These are lightning-talk style presentations held at individual tables; when a session ends, the participants move to different tables.

Using an iPhone app called "Voice Recorder HD", I wanted to see if I could capture the essence of the discussions. It's up to you to determine if I succeeded:

Why Executives Often See Testing as an #EpicFail (Scott Barber)

The Top Ten Performance Testing Types ( Mark Tomlinson, James Pulley)

The Death Star Was An Inside Job (Matt Heusser)

Note: there were three time slots, and several more sessions were being held, but these were the three I actually attended.

STP-CON in Semi-Real Time (Live Blog)

Good morning everyone. Today is Tuesday, April 23, 2013, and I am sitting in a somewhat large ballroom waiting for a community of testers to arrive. Some I know, some I just met, and at the moment, the majority of faces I am seeing are completely new to me. I hope by the end of this three day excursion that I can change that.

We're starting out with breakfast at the moment (don't worry, I'll not post what I'm eating, other than to say it's real food, and for that I am appreciative). I'm currently having a chat with Becky Fiedler, and we've shared some horror stories and had a laugh about crises in classes and how we've had to deal with getting things together in less than ideal circumstances. I've been dealing with the fact that two BBST courses are running simultaneously (one of which I am leading) while I am here. Yes, it's been an entertaining few days ;). Henke Andersson, Andy Tinkham, Griffin Jones and a few others have joined us, and we've been chatting about my frustrations and the things that I've learned going through a crash course in accessibility testing (yes, hilarity has ensued).



We're getting ready for the first Keynote, a panel discussion between Rex Black, Bob Galen, and Cem Kaner on "What Will 2013 Mean for Software Testers?" So if you'd like to see what that might be about, I suggest you come back and see.




Rex Black started out the conversation on the challenging environments testers face, many of which are technology specific, and many others cultural. The advent of cloud computing, and the ability to create unique instances for a period of time, is radically changing the framework that testers are used to. The days of being able to say "well, we only have so many spaces to test these things", using a metaphor he described as the "cubic meter of devices", are over, but that opens up a new challenge. Now, every configuration is available, and the old excuse of "we don't have access to that" now becomes "do we have the time to do that?"

One theme Rex mentioned was that if 2013 had a single term to consider, it would be "retooling". With the explosion of open source and freeware tools in the space, we can no longer step back and say "oh, the tools are too expensive, therefore I can't be part of that". We also need to stop thinking about "what tool would you buy to solve X problem" and instead get an understanding of what we actually want to do. For me, Selenium is great, and I spent a lot of time talking about it. Not because it's an application that is the be-all and end-all, but because it fits a certain need and allows a lot of different interfaces to interact with it.

Rex quipped that if someone wants to become insanely rich, they should figure out how to anonymize large amounts of customer data from multiple disparate repositories while still retaining its value as test data and providing genuine business value through its use. That would be a good problem to solve.

In a nutshell, 2013 will not provide any silver bullets.



Cem Kaner started out with a compelling thought, coming as he does from working with students: he laments that one of our biggest challenges is that we don't seem to provide anything of genuine lasting value for students in their (fill in the blank) career. Also, let's stop talking about agile as some hip new thing. It's part of the culture, it's been here for 13 years, and it's another methodology.

One interesting factor, whether some of us want to admit it or not, is that the U.S.A. is no longer the undisputed center of the technological universe. Many of us have seen this coming for decades, and with that, we have had to come to grips with the fact that "the old order", while not completely gone, has been seriously disrupted. With this, new paradigms are now available to us, and new ways of thinking and organizing, that will change even more as we move through 2013 and beyond.

The development of virtualization, with models like EC2 and the other options available in the cloud, now makes massively parallel automated testing, something only the wealthiest companies could once perform, available to just about everyone. The bigger question, though, is whether or not the testing being performed is doing much more than just running a lot of tests a lot quicker. Quantitatively, yes, we are running more tests faster than ever. What we still don't know is whether or not the tests we are performing actually provide real, specific business value.
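
To make that quantity-versus-value point concrete, here's a minimal sketch (my own illustration, not anything shown by the panel) of the kind of test fan-out that cloud capacity makes cheap. The worker count and test names are hypothetical.

from concurrent.futures import ProcessPoolExecutor

def run_test(test_name):
    """Stand-in for invoking one automated test; returns (name, passed)."""
    # A real harness would shell out to a test runner here.
    return test_name, True

tests = [f"test_case_{i}" for i in range(1000)]   # hypothetical suite

if __name__ == "__main__":
    # Quantity is the easy part: 1000 tests across 32 workers finish quickly.
    # Whether those tests provide business value is the question the
    # parallelism itself never answers.
    with ProcessPoolExecutor(max_workers=32) as pool:
        results = list(pool.map(run_test, tests))
    print(sum(passed for _, passed in results), "passed of", len(results))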

Education is another avenue facing a major disruption. People are currently paying a tremendous amount of money to go to school, and many individuals who have gone through this process have found themselves second guessing the value proposition. Is it wise to be educated? Yes. Is it wise to take on a huge amount of debt to get a job that will not allow you to pay back what you have borrowed? That is an individual question. The real question is "do we have people out there who can actually do the work that is needed, and is the educational experience helping them actually do that?" Complex problems are not really covered in schools, and because of that, people are looking elsewhere to learn. Online classes are exploding, and some people are getting great benefit from this new model. Some, however, cannot really learn in this new manner. Ultimately, we may see a de-coupling of the educational process and the credentialing process. Could we see a world where someone works through all of the coursework on Coursera.com, and then goes to an independent body to validate and prove that they deserve to be credentialed with a degree? Cem thinks that may become a very real possibility.


Bob Galen says that he has about ten hopes and observations, not just for 2013 but for the near and not-so-near future.

Do less testing (or emphasize the testing that is most important). Testing doesn't automatically add value. The information you provide that allows stakeholders to make good decisions is much more important.

Become comfortable with asking for and giving slack time to think and innovate. Testers should take advantage of creative periods and adapt and get creative as well.

Critically think about risk. Bring valuation thinking, and focus on the right amount of risk. Testers are stakeholders and customer surrogates, so let's do what we can to internalize this.

Customer experience is critical: deliver and measure what the customer wants, and be able to pivot based on the feedback you receive.

Lone wolfing it is becoming passé. Testing, and the value it offers, is less and less the domain of just one person (as a recent Lone Tester, I can attest that this is finally taking hold in some organizations). It's no longer "why didn't test catch this?" Instead, it's now "why didn't our team (the whole team) find this issue?"

It's about people. Let people ask for help, let people admit their ignorance. Allow yourselves to be vulnerable and allow for experimentation... failure needs to be allowed... no, really, don't just say it's OK, mean it, as long as people can spring forward and learn from it. Failing shouldn't be a sin. Repeated failure in the same places should be.

Testers need to ask questions and partner with product owners to really understand if we are solving the right problems.

It's not about the tools. Analyze the problems. Don't lead with tools. People and data lead, tools follow. Switching the order causes heartache and frustration.

Courage is going to matter. Courage in all interactions. Challenge your team to work better. Challenge your product owner to think about what actually solves a customer's problem. Challenge yourself to learn more. Have the courage to talk transparently and be honest, no matter how painful that might prove to be (and yes, it certainly can be).

That's a pretty good list, and it's very actionable, not just for Agile teams but for any team.



The Q&A period started with, of course, the question "What do you think about the current meme that 'testing is dead'?" The three believe that the comment is coming from a "bomb thrower", but they also feel that the bomb thrower is doing a service. Testing is evolving, and if we do not adapt to that evolution, then yes, testing that cannot and will not adapt is dying and deserves to die. To say testing is dead implies that experiments are never run, and that we never learn from them. We can all say that that is ridiculous.

Another questioner asked about what skills are required to be effective in this "brave new world". The big myth is that we have to find the "golden tester", that person who has all of the neat buzzwords at a high enough level. The simple fact is, no jack of all trades who is a rock star in all areas exists. Developing a team that allows for specialization and distributes skills among multiple people follows the same reasoning as an investment portfolio: the key in a financial portfolio is diversification. Likewise, to develop a team that is resilient and not headed for the evolutionary scrap heap, evolve the skills of the entire team. Niches are good, as long as there is an understanding of the niches and they don't exist in silos.

Another question dealt with testers having to become intimately familiar with the code, as well as the growth of the SDET, the focus being programmer testing. The three agree that the skills are important, but there may be too much emphasis on the code, and not enough on what the users will experience. To a user, the code itself is almost irrelevant. They don't care what the code is, they care what the tool can do for them to help them solve a problem or perform a task. Bob recommends a simple approach... pair more. Not continuously, not always, but be inquisitive, pair with developers, look at code, and try to understand the perspectives of the programmers and what they are testing.

The last question focused on Exploratory/Rapid Software Testing and whether or not it's a fad. It's an activity, it's a tool, and it has its place. It has a benefit, it is effective, and it should be used by many teams. Exploratory testing is not a new thing. It's been around for decades, though the phrase is a recent development. The fact is, any time we investigate to learn about something, we are exploring. Does it make sense to drive all projects with Exploratory testing? It's possible, but there is a balance between free form exploration and verifiable accountability and reporting. It's not this or that, all or nothing; it's using the right tool, at the right time, in the right context.

And with that, the first keynote is over. Now you'll just have to follow along with me and see if you like my choices. I hope they prove to be of some value to others :).


The first session I decided was most relevant to me, and to what I came to get answers for, had to cover performance. That, and the fact that I've conversed and worked quite a bit with Mark Tomlinson via TWiST podcasts, and I listen to his own PerfBytes podcast.

Some people struggle with trying to understand the importance of software testing. We know it's important, we've been told it's important. We can, at a gut level, internalize that it matters... but why? What aspects of it, when you get right down to it, actually matter? The how is pretty well understood. The point of this presentation was to step away from the how, and focus on the whys. Capitalizing Performance allows us to "capitalize on Performance".

Here's a question... how can we find new ways to amplify the message about the benefits of performance testing? How can we make what we are doing relevant to the disparate stakeholders? How can we make the case for the business value of what we are doing?  OK, that's multiple questions, but they are all related.


Performance doesn't happen in just one place, and performance tests have different value aspects in different places. Database performance is different from front end interface performance, and they have different audiences.

Too often, we flood people with facts and figures, but we don't really explain what it is they are looking at. Pictures are helpful, but without the context of what they are looking at, much can be lost. More to the point, results can be hard to conceptualize if we don't understand which group or stakeholder may actually care about them. Benchmarks are great, but can we really explain what the benchmarks actually represent?

A key technique that can help make this more clear is to not look at values by themselves. Comparing two or three different data points and overlaying them can help make sure that the message that is important is conveyed. If there is a spike in a graph, will it make any sense by itself, or will it make more sense to see it with another variable? CPU cycles by themselves may be interesting, but we probably care a lot more about how many people are accessing a site at the time that those CPU cycles are being recorded.
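
As a sketch of what that overlay might look like in practice (my example, not one from the talk, and the numbers are invented), pairing CPU utilization with concurrent users on a second axis gives a spike its context:

import matplotlib.pyplot as plt

minutes = list(range(60))
cpu_pct = [20 + (m // 10) * 12 for m in minutes]    # hypothetical CPU climb
users = [100 + (m // 10) * 400 for m in minutes]    # hypothetical user ramp

fig, ax1 = plt.subplots()
ax1.plot(minutes, cpu_pct, color="tab:blue")
ax1.set_xlabel("minutes into the test")
ax1.set_ylabel("CPU utilization (%)")

ax2 = ax1.twinx()                # overlay the second metric on its own axis
ax2.plot(minutes, users, color="tab:red")
ax2.set_ylabel("concurrent users")

plt.title("CPU spikes when the users arrive; together they tell the story")
plt.show()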

Where do things like this matter? Imagine you are selling tickets to a concert. The site can perform very well 99% of the time, but the period that really matters is the first few minutes after an item goes on sale. In a spike, what can happen to a site? It can get clobbered, and if the site isn't set up and ready to take on that immediate spike, the 99% of the time the site ran beautifully doesn't really matter. Put financial information into the data where applicable. Why? People at the top of the company food chain don't really care about the esoteric details of the performance data. They care if they are making money or losing money. That's a message they can understand. If the performance details show that we can make more money if we keep our transaction time to less than 6 seconds, and we can roughly quantify it, now we can show where we need to be and why. Money talks, oh yes it does ;).
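
To illustrate that "roughly quantify it" step, here's a back-of-the-envelope sketch; every number in it is hypothetical, and the point is the shape of the argument, not the figures:

orders_per_hour = 1200
avg_order_value = 45.00        # dollars (hypothetical)
abandon_per_sec = 0.04         # assumed: 4% of buyers lost per second over target
target_seconds = 6.0
actual_seconds = 9.5

overage = max(0.0, actual_seconds - target_seconds)
lost_orders = orders_per_hour * abandon_per_sec * overage
print(f"~${lost_orders * avg_order_value:,.0f} lost per hour "
      f"at {actual_seconds}s vs the {target_seconds}s target")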

Color is important when used correctly. A blue line that steps up at a particular point in a graph may mean something technical, and be statistically interesting, but people outside of the tech details won't really understand what they are looking at. A spectrum graph that shows a lot of red and a little green, without any additional information, can help communicate a lot:



Without going into details about the graph above, the point is that the steadily climbing region of red communicates volumes. We may not know exactly what we are looking at, but we can see that as the graph progresses, the increased red means "not good", and "not good" is getting more prominent. Simple psychology, but it's effective.


When at all possible, you should use a copy of production data, or at least a very large representation of that data. There's a lot of pressure to prevent people from being able to use data in this capacity, which means we have to scrub the data and change values so that the data cannot be used nefariously. Sounds good on the surface, but as was mentioned in the keynote, that scrubbing very often makes the data much less effective, and no longer a true representation. Instead, consider scrambling the data, which preserves the cardinality of the data but randomizes the actual characters. The data then remains the same size and with the same parameters as the original, and that helps us determine if the performance we are seeing is genuinely representative.
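
Here's a minimal sketch of that scramble-not-scrub idea, assuming a simple character-class-preserving substitution (my illustration of the concept, not a production anonymizer): each distinct value maps to the same random string every time, so lengths, distinct counts, and joins on the column survive.

import random
import string

_mapping = {}

def scramble(value):
    """Replace value with same-length random text, consistently."""
    if value not in _mapping:
        _mapping[value] = "".join(
            random.choice(string.ascii_letters) if ch.isalpha()
            else random.choice(string.digits) if ch.isdigit()
            else ch                            # keep punctuation and spacing
            for ch in value
        )
    return _mapping[value]

names = ["Alice Smith", "Bob Jones", "Alice Smith"]
print([scramble(n) for n in names])   # same input -> same output, same lengths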

Other key things that help make the case for this approach to performance testing involve simulating genuine, real world tests. Do the pages created persist for any respective amount of time? Are we actually looking at the real data values that matter? Can we partition our data in a way that helps determine which data sets get the most demand?

Ultimately, we want to make sure our tests are convincing to our stakeholders. How do we do that? Let's make sure we can answer these questions:

-- Is our test credible?
-- Is it relevant?
-- Is it meaningful?
-- Can our test help them?
-- Are our results accurate?

Ultimately, we need to make sure that the data points we provide and present can be communicated in a way that everyday people can understand. Everyday people understand time and volume. The physical details such as CPU, disk, memory, networking, power and cooling are great for us geeks. To help people outside of the data center actually make connections with and internalize what is going on, we need to communicate things in time and volume. In short, the details need to make a compelling story as to "why" something is going on.




I had a great dinner last night with Lee Henson (see my comments yesterday about getting bags of crayfish shipped ;) ), so when I saw that he was speaking today on "Persona Testing and Risk", I decided this would be a fun place to spend the second session.

For those who follow my blog regularly, you're probably saying to yourself "but Michael, isn't persona based testing something you are already intimately familiar with?" Well, yes, but even in areas I know a lot about, I can learn something new and interesting from different perspectives. I teach BBST Foundations regularly, and I learn something new every single time I teach it.

The sad fact is that, while we want to believe that we "care" about quality, and that we want to provide the maximum amount of testing, getting something out into people's hands so that they can use it, and so that the company can make money, trumps how much we care about quality. There's no such thing as complete, fully tested, or error free. Nothing is. Nowhere. The fact is, we really have to approach anything we do with a sense of what is good enough, complete enough, and fast enough to be delivered effectively, and what risks we are willing to take to meet that point.

The earlier we can get together and talk about what the requirements are and how they can be expressed, the more likely we will be able to identify the areas that have the greatest risk. Key to that is understanding the roles that people play. We sometimes use a generic "person" in the equation, and truthfully, that's the wrong way to look at user interaction. We know that people are going to use the site, but more importantly, we care about what each person does and how they would interact with the system.

Personas help us define better what someone is doing, not just who they are. It's one thing to say "I am a middle aged woman". It's another to say "I am a middle aged woman who is very anxious about my personal information appearing in an application". That's much more specific, and that kind of a persona will be much more informative when it comes to how we want to interact with that person, and what options we provide to them.

When we say "as a user", we make a very nebulous and squishy entity. There's so much that a "user" can do, and much of it isn't helpful as to how important it is. If we can identify people, not just by role, but by behavior, preferences, and personality. When we have this information, now we can narrow down on who we are actually making our product for. Even more important, we can actually test the aspects that are important to that individual. Not a nebulous amorphous person, but someone we have actually named and may know as well as our next door neighbor (OK, yes, that's a weird analogy, but work with me here ;) ).

There are negligible risks, and then there are serious risks. As a way to drive this home, imagine you fly every week on a particular airline. Would you be happy to know that the pilots of this airline land their planes safely 89% of the time? If you are going in for open heart surgery, would you feel comfortable knowing that your surgeon has a 75% success rate with these operations? Of course not; for us, in these circumstances, anything less than 99.99% would probably be too risky. Do we then apply this same standard to a web site that plays videos? Again, of course not. It's easy and fun to joke about the fact that some risks are ridiculously applied, but risks are not ridiculous when looked at in the appropriate context (if you're playing the TESTHEAD live blog drinking game, go ahead and take a drink ;) ).

When we make changes in an organization, Lee talks about "calling the COPS". Huh? COPS is an acronym for:

Cultural change
Organizational understanding
Process adjustment
Strategic readiness

Every change goes through these steps, and everyone in the organization has to be on board with the COPS to make a change actually stick. The expectations need to be clear, but they also have to be reasonable.

No organization is going to test everything, cover everything, and be everything to everyone. They can provide good test coverage for the people that matter the most. User personas will help you find out who those people are, and what those people actually do.




The lunch time session is being made a little more interesting by the fact that several tables have what is called a "Speed Geeking" session. These are lightning talk style presentations held at individual tables, and when the session ends, the participants move to different tables.

After perusing, I've decided that I need to participate in:

The Death Star Was An Inside Job (Matt Heusser)

The Top Ten Performance Testing Types (Mark Tomlinson, James Pulley)

Why Executives Often See Testing as an #EpicFail (Scott Barber)

I'm trying a new technique in recording, so we shall see how well these work out (I'll post these later tonight if it works as I hope it will).



Jane Fraser is someone I've talked with and been part of TWiST podcasts with in the past. Additionally, a good number of my friends have worked either directly or peripherally with her, so I was very interested in seeing what her presentation, "Becoming an Influential Tester", would be about.

Jane has spent much of her career in games, and has recently moved to a company that focuses on robotics. As such, Jane has had to demonstrate her influence over many years, and she has learned how to get organizations to look at testers and testing in a proper light, and with the respect the work deserves.

Influence and leadership are somewhat intertwined. You don't have to be in a high profile position to be a leader or have influence. People respond to people based on the level of influence that they have. Those who try to influence by force tend not to get much in the way of buy-in from their teammates. They may do the work, but they will not give their all. Intimidation likewise gets confused with leadership, but it rarely makes people willing to follow your lead. Manipulation is also not very good for long term influence, though it can work in certain circumstances. The position we hold can provide some level of influence, but again, that is still on the negative side of the ledger.

So what goes on the positive side? When we Exchange information with others, that works in our favor. Trying to Persuade others to understand our views works on the positive side. Giving Respect often gets Respect back. Timing is important, and early is better than late, always. If we see issues or potential challenges, announcing them early enough for organizations to adapt helps build credibility and shows that we are aiming to be effective and know what we are doing. If we do all of the positive things mentioned previously, we can build the most important influencer, which is Trust. Trust requires that we have demonstrated enough influence over time to show that we are worthy of, and deserve, the confidence that has been given to us. Bringing Teamwork into the equation: for many people, being part of a team can either be for the moment in a company, or a permanent bond between people that spans jobs and careers.

Influence requires that we are responsible for the stewardship of our efforts. We are responsible for ourselves and others, and while influence can be hard won and slowly gained, it can be lost very quickly. When we make efforts to provide a positive influence, and when we are working to add value, trust is likely to rise. If you are negative, and if you damage other people's efforts, it's likely that your own influence will dwindle.

Integrity is critical. If you are transparent, and if you do what you say you will do and deliver, then your strength of integrity will grow. Do you have the courage to tell your CEO that you have messed up on something? Are you willing to take the fall for a mistake, and not assign blame? Chances are, if you do mess up, and you learn from those issues and don't make the same mistakes, then you are likely to recover well.

Jane uses the acronym LADDER to help us increase our influence. We can increase our influence when we Listen and do the following:

Look at speaker
Ask questions
Don't interrupt
Don't Change Subject
Emotion
Responsive listening

Jane recommends that we focus on Listening, Learning and Understanding. Being brave and asking questions may be hard, but it's easier than walking away without asking questions, and then having to go back later and ask the same questions anyway. Jane also recommends 360 degree leadership, in a process she calls leading up (helping make your boss or manager successful), leading across (working directly with and leading effectively among your peers), and leading down (being a mentor and providing the compass for junior team members so that they can be successful). It's important to understand what our co-workers, team members and direct reports need and value. Setting an example is huge.

Learning, and helping others learn, is hugely important. If your team members see that you are looking to keep learning and keep growing, and use your knowledge to help others learn, that will help you to communicate better with your team, and it shows that you respect their efforts and want to be able to communicate effectively with them. With learning comes growth, and as you grow in knowledge and skills, and you help your team grow and learn, again, you start building trust among your team.

Being an effective navigator helps a lot in the process of influencing others and giving others the confidence to see your leadership and influence grow. If you can help clear road blocks for others, they will look to you in the future. Understanding the history of where things went well, as well as where things went poorly, and demonstrating that you have learned from your experiences helps develop more credibility.

Lots of cool ideas and information. I definitely enjoyed this presentation :).



When I saw that Anna Royzman was teaching on Context, I knew I had to be there. Anna started off the session with James Bach's tester's version of "The Towering Inferno". If you haven't seen it, do a search; it's both funny and educational.

The point that James was making in the clip (and that Anna is making in the course) is the idea that the tester is the Steve McQueen character. He's the one asking the questions, and explaining why he's asking the questions he is asking.

Anna walked us through an example where we had to determine how to test an item when we couldn't get any information about the product. Three of us (yours truly included) tried to see if we could penetrate the wall and learn something about the product so that we could do some kind of effective testing.

Ultimately, testing comes down to three questions:

Huh?

Really?

So?

Those who have taken or have read through the Rapid Software Testing materials may recognize this. It may seem flip, but really these three questions are *very* important.  Why does something work (or not work)? Is what we understand actually true? Why does it matter?

There's a decision model used by the U.S. military called the OODA Loop (a rough sketch of how it maps to testing follows the list). OODA stands for:

Observe
Orient
Decide
Act
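
As a loose sketch of how the loop might map onto a testing session (the step functions here are placeholders of my own, not anything presented in the session):

def observe(product_state):
    return f"observed: {product_state}"            # gather what the product shows

def orient(observation, notes):
    return f"{observation}, given {len(notes)} earlier notes"

def decide(model):
    return f"most informative next test for ({model})"

def act(test):
    return f"result of {test}"                     # run it, record what happened

notes = []
for round_number in range(3):                      # a few turns of the loop
    observation = observe(f"state after round {round_number}")
    notes.append(act(decide(orient(observation, notes))))
print(notes[-1])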



The fact is, there are lots of reasons and issues that can drive different stakeholders into making the decisions that they do. There can be politics, there can be fear, there can be mistrust or a low level of overall communication. Thus, to be able to move forward and be effective, one of the things we need to do is ask questions that are relevant to the stakeholders and their individual spheres. We will ask different questions, and in different ways, if we are talking to someone in Sales, Administration, or Operations than we will if we are talking to a software developer or an architect. Each has a specific language, and each has their own concerns. We are much more likely to be able to determine how to test effectively if we can articulate our goals and our mission in a way that makes sense to diverse stakeholders. Sometimes we have to build trust in places where there isn't any.

To close out the session, Anna asked Andy, Griffin and me to come up with a product with a "hidden agenda" and let the rest of the participants ask us questions to see if they could determine what it was that we were developing. I will say that, while we did provide a lot of clarity about the project, the "hidden agenda" was not revealed until the very end, and it wasn't ferreted out by the questions asked. An additional item from this process is a book I think I need to get, called "Gamestorming" (Dave Gray, Sunni Brown, James Macanufo, published by O'Reilly).

And with that, it's 6:00 p.m. and I'm hungry. Catch you all later!