Showing posts with label terminology. Show all posts

Thursday, August 25, 2011

Weekend Testing Americas: Developing Effective Charters with James Bach

This Saturday, Weekend Testing Americas is having a special guest. During last week's session, both James Bach and Michael Bolton attended and participated with the group (talk about feeling like a T.A. and then having the professors walk in, LOL!).

Actually, I'm glad they did, because they helped me realize something I've been struggling with for some time. It's one thing to make a test charter or a test mission, but how can we make them more effective? More to the point, how can we craft them so that they are appropriate for the testing session at that time?

To help explain this, imagine you have a group of people you are going to send out to test something. They have one hour. They have to maximize that time, and therefore they need to be able to hit the ground running as fast as humanly possible. Can you adequately describe what needs to be done so that everyone in the group can do that? What if we had a whole group of testers specifically trained to do that, not just for Weekend Testing sessions, but all the time?

This is the skill that James Bach will be working with us to develop and improve this Saturday. Normally, I would be cautious to not over-hype or over-promote these sessions because we might be overrun, but in this case, I think it will prove worth it, so I'm breaking my "limited distribution" promotion (Twitter and direct email) to talk about and announce this session.

So do you want in? If so, here's what you need to know and what you've gotta do:

Date: Saturday, August 27, 2011
Time: 11:00 a.m. - 1:00 p.m. PDT (2:00 p.m. - 4:00 p.m. EDT)

To join the session:


1. Please send an email message to wtamericas@gmail.com with the subject line "WTA18" and a confirmation that you would like to attend the session.


2. Please add "weekendtestersamericas" to your Skype ID list if you have not already done so. Please ask us to add you to our contact list.

3. On Saturday, August 27th, approximately 20 minutes before the start of the session, I will set up the group. If you ping me on Skype at that time, I will add you to the session. If you have replied via email saying that you will attend and you are online at that time, I will add you automatically.

Remember, these sessions are "chat" only; no call-in is necessary, but you have to be on Skype to participate.

Look forward to seeing you there :).



Saturday, May 7, 2011

Foundations is Over, and the Class Survived :)!!!

So last Saturday, I had the chance to bring to a close my first AST BBST Foundations class with me as the Lead Instructor. This was a cool experience in the sense that, while I've been part of this class for five iterations, this was the first time where I was setting the pace and the expectations.

It's a strange feeling. On one hand, after five times through Foundations, I figured this would be easy to do, but the truth is, there's a lot to keep track of and a lot of effort needed to keep everything on track. For those who've taken the class, this may seem obvious, but for those who haven't, this is not an easy class. There's a lot to learn and accomplish, and a lot of encouraging students to work together, all in a surprisingly short period of time. It's a month long, but that month flies by very fast.

Each class is different, in the sense that each group of participants brings their own experiences to the course, and as such, each sees things a little bit differently. In each class, it seems, a different set of questions develops as people work out the answers for themselves. Oftentimes, I feel inadequate to answer those questions. Not because I don't understand the questions, but because I understand them as they relate to me and my experiences. My breakthroughs are mine, and other people's breakthroughs are theirs and come in their own time. My frustration is that, try as I might, I can't teach someone anything they themselves aren't ready or willing to learn.

I remember as a kid reading Guitar Player magazine. Back in 1980, when I was 12 years old, I read an interview with Leslie West (legendary blues rock guitar player in the 70's who went on to teach guitar in New York years later). Leslie said something that I have been thinking a lot about lately... he said "I can't teach you how to play guitar. I can show you how to play guitar, but that's it. I can teach you how to teach yourself!" I had that experience during this class.

Now, don't get me wrong, it's not like we had a bunch of people who were not already good at testing (yes, there were many different levels of experience, some long time practitioners balanced out by some junior testers, as always). What was clear, though, was that the class offers a framework for testing and understanding testing, and each person approached that framework a little bit differently. Were I to take the idea that I would teach them, then I would be teaching them my understanding of it. That's not the point, though, as the challenges I face are not the same ones they will face. The tools and the approaches are context driven, and understanding the context (I believe) is important.

I produce a podcast each week, and with literally every one, I learn some new trick or method that makes that episode better than the week before (subjectively speaking, of course). After more than 40 podcasts, I find it interesting that I still learn some new trick each time, often while doing something I've done dozens of times. Why is that? Is it because I've suddenly become aware of something that was obvious before, or is it because I experimented with something, saw the results, and finally put 1+1+1+1+1 together? I think it's more the latter, because we learn at our own rate, and that rate will often be different for different people, because they will focus on what is relevant to them over everything else. The other details are nice and interesting, but they will not be at the forefront of my mind if I'm not actually doing something with them beyond curiosity.

This is a long winded way for me to say that I've had a great time being the Lead Instructor for Foundations, and I'm happy that my participants enjoyed and learned through the experience. Whether it was because of me or in spite of me will remain to be seen (and is ultimately irrelevant, really). I think you all were awesome, and it was a pleasure and a privilege to lead your class. I hope to see you in future classes, and thanks for teaching me a lot more than I probably taught you :).

Friday, December 10, 2010

TWiST #23 with Mark Crowther




Is it noticeable? The new mic went into effect for this episode, so for the first time in 15 shows I had to re-record the intro and outro. Felt a little jarring at first; I’d gotten used to the other intros and their cadence. I also have to adjust to the sensitivity of the standalone mic; it’s much more live, so every pop, click and exhale is amplified and articulated way more distinctly. I even pulled out my old Pop Screen attachment I used to use when I recorded vocals in my home studio. It proved to be very helpful.


For today’s episode, Matt talks with Mark Crowther, a QA/Test Manager and tester who has experience on both the software side of things and the actual manufacturing side. Much of the conversation centered on software development and software testing trying to shoe-horn standardization and auditing tools designed for manufacturing into the development and test space, and how that often becomes a frustrating path (and perhaps not even a very lucrative one for the time and energy invested). I liked his discussion at the end of the piece about decreasing unnecessary and repetitive artifacts, and I found myself really appreciating his perspective on this. If you’d like to listen along, please go here to listen to Episode 23.


Standard disclaimer:


Each TWiST podcast is free for 30 days, but you have to be a basic member to access it. After 30 days, you have to have a Pro Membership to access it, so either head on over quickly (depending on when you see this) or consider upgrading to a Pro membership so that you can get to the podcasts and the entire library whenever you want to :). In addition, Pro membership allows you to access and download the entire archive of Software Test and Quality Assurance Magazine, and its issues under its former name, Software Test and Performance.


TWiST-Plus is all extra material, and as such is not hosted behind STP’s site model. There is no limitation to accessing TWiST-Plus material, just click the link to download and listen.


Again, my thanks to STP for hosting the podcasts and storing the archive. We hope you enjoy listening to them as much as we enjoy making them :).

Wednesday, August 25, 2010

Wednesday Book Review: Lessons Learned in Software Testing


It's funny, I've had this book on my short list for a long time, but for some reason never got around to reading it. It was always one of those "yeah, I need to get that one and read it" titles, but other books came and went, and I kept putting this one on the back burner. I think part of the reason is that, when I got involved with the Association for Software Testing and participated in their classes, both as a student and as an instructor, I felt it would be better to get some distance from this book until I got my bearings in the class. My concern was that I wouldn't be able to give a fully objective review, knowing that I effectively teach the very structure and mission of this book whenever I help teach the Black Box Software Testing Foundations class (or at least that particular component of it). Nevertheless, I will be impartial, honest and fair-minded about giving a truly unbiased review.

 
I LOVE THIS BOOK!!! …and Cem Kaner, James Bach and Brett Pettichord did not pay me anything to say that.

 
OK, with that out of the way…

 
If there was ever a book that could lay claim to being the "Software Testing Bathroom Reader", this is most definitely it. The book is organized into chapters, and each chapter has a number of individual lessons related to that chapter's topic. Each item is meant to be looked at as a standalone area to ponder, not rushed through as though you were simply reading each chapter. What I like best about this book is that the reader can skip around to whatever section they want to ponder, and there are likely several lessons that directly affect what they might need to consider.

 
This is not a "How-To" book per se. Rather, it is a collection of ideas that the authors have personally gone through and used. Many of the chapter titles and many of the ideas and lessons focus on the idea of context in testing. One lesson can counteract and discredit another, simply because different situations require different approaches.

Each chapter has nuggets of wisdom relevant to its scope. Some of the lessons are brief, and some are more in-depth. The book's subtitle is "A Context-Driven Approach", and that particular testing philosophy is seen on every page. For some, the lack of "absolutes" will possibly be frustrating, but again, I like the combination of well-thought-out commentary and personal experience, and the recognition and understanding that, yes, in some cases you will need to do B instead of A. What's more, there are lessons where the footnotes showcase contradictory views, proving the point that many of the methods described may work in some cases and just as likely may not in others. Some may find that hypocritical and double-speaking. I call it reality, and accept that, at times, that is exactly what happens in testing projects.
Below is the list of chapters included:


Chapter 1. The Role of the Tester
Chapter 2. Thinking Like a Tester
Chapter 3. Testing Techniques
Chapter 4. Bug Advocacy
Chapter 5. Automating Testing
Chapter 6. Documenting Testing
Chapter 7. Interacting with Programmers
Chapter 8. Managing the Testing Project
Chapter 9. Managing the Testing Group
Chapter 10. Your Career in Software Testing
Chapter 11. Planning the Testing Strategy
[Appendix] The Context-Driven Approach to Software Testing 


In addition, there is an extensive bibliography at the end of the book, and several of the lessons include the relevant reading right along with the text. The bibliography alone, for the jumping-off points it offers and the new subjects it opens up, is worth checking the book out of the library for. Actually reading through and "pondering" each of the topics, however, will yield the true value of this book. Some may find this comparison strange, but I think it's very apropos; "Lessons Learned" is like Scripture (put down the pitchforks, people, I'm not saying it *is* Scripture). How can I make that comparison? Just like Scriptural texts, it can be read multiple times, and a different lesson is learned each time, even when you are reading the same words. Does the book morph and change? No, but your own experiences do, and at different stages of a project, or a career, or an initiative, certain lessons are going to be directly relevant and others are not. Reading "Lessons Learned" like a linear book will be of limited use, but if you find yourself in a situation where you are looking at a specific issue and have specific questions, it's a good bet you'll find a method or an approach somewhere in the nearly 300 lessons listed in the book.


Bottom Line:
I'm already a fanboy of the authors of this book. I consider myself an advocate for the context-driven school of testing, and actually spend my time learning and teaching others many of the ideas espoused in this book. On that basis alone, I find it tremendously valuable, but even if I didn't, I would still recommend it for the fact that it is filled with practical wisdom and little to no sales pitch. It's also nice to see just how relevant the areas in this book are almost ten years after it was published. While some new techniques have been developed and moved onto center stage, many of the ideas in this book remain excellent strategies for testing and for improving one's testing methods. Not everything will be relevant immediately, and the reader may have to choose which areas and which lessons are relevant for that immediate point in time. For those looking for the "one true way", this isn't the title for you. For those who realize there is no such thing as "one true way" in software testing, but lots of potentially good ways depending on what the project and process needs, you'll find a good friend in this book. Like Testing Computer Software, I can see myself turning to this book ten years from now for fresh insights, and I'm willing to bet I will get them then as well.

Wednesday, August 18, 2010

Wednesday Book Review: Who Killed Homer?


Through the years, I have come across a number of books that I have used and valued. These may be new books, or they may be older ones. Each Wednesday, I will review a book that I personally feel would be worthwhile to testers.


This is a bit of a departure, but I found it to be fascinating and, actually, to have a lot to do with the testing profession, more so than I anticipated when I first picked it up out of a vague interest in why Classical education was on the decline and what that might mean. "Who Killed Homer?" by Victor Davis Hanson and John Heath was a book I picked up not because I'm in any way a Classicist, but because I'm a fan of the Classics (having been turned on to that distinction by Dan Carlin). I never had a true Liberal Classic education (in fact, I don't think most people have in the last 75 years), but I've always been fascinated by the Greek and Roman heritage and the development of the hallmarks of Western Civilization, both the good parts and the bad. I've also been curious about what made a classical education the hallmark of a truly educated person over the centuries: the knowledge of Greek and Latin, the ability to read Homer, Aeschylus, Xenophon, Virgil and Ovid in their original vernacular, the study of the trivium and the quadrivium... I realize that none of these is part of my everyday experience (I can recognize a few words in Greek and Latin and follow the gist, but I'm laughably far from having even a child's grasp of either language for extended reading). Nevertheless, I'm nerdy enough to enjoy the Iliad and Odyssey, the Aeneid, Trojan Women and Lysistrata, and I have a great fascination with and a genuine joy in reading about the Hellenic and Roman cultures, how they developed and morphed together, and how they helped to shape Western Culture today.

 
OK, yeah, that's great and all, but what does that have to do with testing? A lot, I think.


Chapter 1: Homer is Dead
Hanson and Heath are Classicists. They are true believers, and this rings out clearly. These are not dispassionate and theoretical wonks, but people who truly love the craft, history and passion that came out of Greece, and how that civil order and world view, both brilliant and dark, both beneficent and monstrous, had a huge hand in developing the world view of the West (the West in this case meaning the Greco-Roman West, and ultimately the Christianized West: basically everything from Central Europe to Spain, Portugal, the U.K., and their offshoots like the U.S., Canada, Latin America, Australia, etc.). In this world view, Homer and the classics of antiquity were seen as vital to the development of the human mind; the study of literature, sciences, philosophy, ethics, geography and history all worked together to create an inductive body of knowledge for reasoning through challenging issues, one that gave anyone who studied it the tools to reason through and learn any discipline. Hanson and Heath make the case that we have lost that, and that the college curriculum and the classicist professors themselves are ultimately to blame for its loss.

Chapter 2: Thinking Like a Greek

Hanson and Heath break down the attributes that they see as being inherently Greek and Western: the ability to see man as being made in God's image (or God in man's image, depending on the particular writer); that the polis, or people, were the pre-eminent institution; that three classes instead of two were vital for civic development; that yeomen with an equal responsibility for their fields, their homes, their political seats and their armed forces were essential (the Greek Hoplite owned his own land, hoed his own fields, passed laws in his assembly, and wore his own armor out to fight wars alongside his fellows); the ability to write and create philosophy separate from government and religious interference; the ability to reason and come up with solutions based on data and empirical evidence, not just the whims of a ruler or a priest; and a version of equality and egalitarianism that first starts in Greece (not a perfect egalitarianism; women and slaves certainly would disagree) but is much closer to our current world view than that of any other culture of their time. Hanson and Heath show that many of the hallmarks and attributes we take for granted in such things as political discourse, law, public relations and community first appear in the Greek world, and develop and spread through the Latin age.


Chapter 3: Who Killed Homer…and Why?
Here's where Hanson and Heath draw their daggers and go in for the kill... they say it's an inside job, and that the colleges and academia are the murderers. This is the section that I think testers will find very interesting. Why do I say that? Because so much of the infighting described reminds me of the testing wars that we are currently seeing today. I could easily transpose much of this text, removing the name Homer and replacing it with Deming or Kaner (sorry, Cem, I know you are not dead, and I certainly hope you will not be for a long time :) ). We would do well to heed the warning this chapter gives; otherwise, we might well see sometime in the future the warning that "testing education is dead", and that would be a shame, because I think it's just now, finally, starting to get a true breath of sustaining life.


Chapter 4: Teaching Greek Is Not Easy
This chapter goes into the challenges that Classicists face, and I must admit, it was a daunting chapter. And yet it felt familiar, in the sense that it could just as easily have replaced all of the terminology of Greek syntax with the terminology of Automation, or of Combinatorial and Orthogonal methods, ET and RST, or any other testing discipline we hope to see others learn. Perhaps the challenge we now face with testing and its true growth and development is that we are realizing that testing is hard work when it is done correctly, and it is the passionate few that really drive that learning along... how many in our profession just get along and do what they do without ever seeking out new ideas or new understanding, or even looking to the past to see what has come before? I know from my own experiences that the numbers are higher than we want to admit, and we're a young discipline, all told.

Chapter 5: What We Could Do
Hanson and Heath lay out a broad and prescriptive approach that they feel would help cure the declining and ailing world of Classical education. They said in 1998 that it was not too late, but that it was definitely on life support. It's interesting to note that the prescriptive suggestions they made have not, to date, been applied, but they definitely remind me a lot of what we are seeing happen in the world of testing education (and some of the infighting) today. Do we care more about certifications and accolades than we do about competency and effectiveness? Do we care more about speaking engagements, conferences and personal aggrandizement than we do about perfecting our craft and creating an environment where people actually learn? Do we sit comfortably on the idea that we have our standards and our best practices, or do we shake up the system and really go find better ways, always, to improve and develop our people?

The book ends with an appendix of suggested readings, ten titles from the ancient Greeks and ten titles about the ancient Greeks, with commentary about why they have value and what they can teach us about the Greeks then and ourselves today.

Bottom Line:

It may fairly be said that likening "Who Killed Homer?" to the state of testing education and practice is a bit of a stretch, and yes, I can also say that there is much about classical education that may not turn a lot of people on (and for that matter, there's a large population that's probably not all that enthusiastic about Greece and Rome to begin with, or Western Civilization, at that). Be that as it may, there's much to find interesting and intriguing in this book, and if you play the mental game of replacing "Classics" with "Testing", a lot of relevance pops out at you, in my opinion. Again, I come to this not as a classicist, but as a fan of the classics. There was much here to promote thought and reflection on what has become of classical education, and it is because I desire not to see the same thing happen to our developing testing education that I suggest giving "Who Killed Homer?" a read. If nothing else, it's neat to see those passionate about their endeavor make the case to try to save it. For those who are striving to see testing education gain a foothold in our universities, this book makes the case for what we may wish to never have happen to our discipline.

Friday, May 14, 2010

What Does Q.A. Mean To Me?

I think in the testing world, this is the most bandied about question that I have heard discussed, debated, and argued. Since I purport to have a blog dedicated to talking about testing, it’s only fair that I go on the record with my thoughts on this.


First and foremost, Quality Assurance is a nebulous description for testers, and is in many ways not helpful. I am opposed to the idea of a “Quality Assurance Team” that is separate from development (put down the pitchforks, people, lemme ’splain!). Quality Assurance is an empty promise; we cannot “ensure” quality. All that we can do is find issues with a product and call its quality into question. That’s it. We cannot magically bake quality into a product. We cannot wave a magic wand and exorcise bugs from a program. We can point out to developers the issues that we find when we test.


Quality Assurance is not just my team’s job. Rather, it has to be the mission of the entire company, and a dedication to making sure that we all spend the time and the energy to ensure that there are as few issues as possible in a product to be released. Testers provide an indication of how well the company is achieving that goal. Rather than a gate (or my favorite overused and abused metaphor, the “bug shield”), we are more closely aligned with the function of a gauge. Instead of looking at software as buggy data that drops into QA as though QA were a function that magically cleanses the code so that bug-free software comes out the other side, we can tell the story of what we have seen and give the company and development team information that says “here is where we are at”. The tester tells a story, and gives information to show the state of the application. From there, the developers can decide what they want to do based on that information (using a GPS as an example, they can stop, turn around, and make changes before continuing forward, or they can just keep moving forward).


Regardless of my personal feelings as to what my role is and how I would like to see myself in that role, the truth is, whether I like it or not, most other people in an organization do look at the QA tester or the QA team as “the last tackle on the field”. In my current environment, yes, that is the case, and it requires me to be very strategic and creative. While I may not be the one who put a problem in, I will certainly catch a fair share of the heat if a customer discovers the problem. Thus I have to embrace the fact that, whether or not I like or appreciate the “bug shield” metaphor, it’s the role that others see me playing, and I cannot just abandon it.


So what can we do? What is our mission, our real value to the organization? What’s the bottom line of what we offer? In general, my answer is that “I save the company money”. Every bug that I find, whether major or minor, has a hand in helping to determine whether a customer stays a customer, talks about our product in good or bad terms, purchases another seat for their company or “makes do” for the time being. It can be tricky to measure, and it’s not as hard and fast as a sale vs. no sale, but it does help to make clear what we as an organization provide (and in this case the “we” means me; remember, I’m a lone gun at the moment, but I have hopes that may change at some point). How about you? Where do you see yourself in the Q.A. picture?

Tuesday, April 27, 2010

The Dialect of Testing


Yesterday I had an interesting experience. I was talking with a co-worker who has a friend who is a recruiter. The recruiter was looking at a number of resumes they had received for testers, and she was trying to determine if the people in question would be a good fit for the job. My co-worker asked me if I could quickly review the job description and see if I could give any suggestions as to how to narrow down the list. On one side, I was able to do so, but on the other, I noticed that there was a vagueness to the original description. The description asks for people with UI experience, but it does not spell out whether that means experience developing and designing user interfaces, or experience testing user interfaces. Likewise, it asks for familiarity with test scripts. I explained that, given the vagueness of the description, they could be looking at resumes from an entirely black-box tester who writes literal test scripts (enter value A into input B, expect C=PASS, else FAIL) to test user interfaces, or from a white-box tester who understands user interface development, and hence can write unit tests to test functions and procedures. I gave him some suggestions to pass along to the recruiter: be specific about the technologies, methods, and language used when describing testing, because, to quote Inigo Montoya from "The Princess Bride"... "you keep using that word... I do not think it means what you think it means!"
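To make that ambiguity concrete, here's a small, entirely hypothetical sketch (the `login` function and its credentials are made up for illustration, not from any real job description) showing the two readings of "test scripts" side by side: a literal, scripted black-box check versus a white-box unit test exercising the same function.

```python
import unittest

def login(username, password):
    """Hypothetical function under test (a stand-in for some real UI or API)."""
    return username == "admin" and password == "secret"

# Reading 1: a literal black-box test script --
# enter value A into input B, expect C=PASS, else FAIL.
script = [
    {"username": "admin", "password": "secret", "expect": True},
    {"username": "admin", "password": "wrong",  "expect": False},
]
results = ["PASS" if login(s["username"], s["password"]) == s["expect"] else "FAIL"
           for s in script]
print(results)  # ['PASS', 'PASS']

# Reading 2: a white-box unit test exercising the same function directly.
class LoginTests(unittest.TestCase):
    def test_valid_credentials(self):
        self.assertTrue(login("admin", "secret"))

    def test_invalid_credentials(self):
        self.assertFalse(login("admin", "wrong"))

suite = unittest.defaultTestLoader.loadTestsFromTestCase(LoginTests)
outcome = unittest.TextTestRunner(verbosity=0).run(suite)
print(outcome.wasSuccessful())  # True
```

Both candidates could truthfully claim "test script" experience, which is exactly why the job description needed to say which one it meant.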

This is something I’ve started to notice more and more. Developers and testers tend to think that they speak the same language, but there are many examples where testing phrases and concepts that are well understood by testers are less understood, or even totally foreign, to developers. As an example, many testers are familiar with the concept of “pairwise testing”, where the tester creates a matrix of test options that covers every discrete pair of parameter values, as opposed to testing every combination of parameters exhaustively. The phrase “pairwise testing”, however, seems to be one of those “test dialect” statements: when I have spoken with software developers and stated that I was going to use pairwise testing to help bring down the total number of test cases, I have received a few blank stares and the inquiry “pairwise testing? What’s that?”. When I describe the process, I often get a comment back like “oh, you are referring to Combinatorial Software Testing”. Well, yes and no; pairwise testing is one method of combinatorial software testing, but it is not in and of itself combinatorial testing, as it’s not an exhaustive process, but rather a way to identify the pairs of interactions that are most relevant and, ideally, most beneficial for spotting issues.

Another testing technique that seems to have gotten a few heads scratching is “fuzzing”, or “fuzz testing”. The idea behind fuzz testing is that a user (be it a live human, an application program or a test script) provides unexpected or random data to a program’s inputs. If the program produces an error that corresponds to the input and appropriately flags it as invalid, that’s a pass, whereas input that causes the program to crash, or to present an error that doesn’t make any sense, is a fail. Again, when I’ve talked to software developers and brought up the notion of “fuzz testing”, they have looked at me like I’ve spoken a foreign language. When I’ve explained the process, again, I’ve been offered a corollary that developers use in their everyday speech (“syntax verification” has been a frequently used term; I’m not sure if that’s representative, but it’s what I’ve heard).
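As a minimal sketch of the idea (the `parse_quantity` function is a made-up toy target, not any real library): throw a pile of random strings at an input handler, and count anything other than a clean accept or a clean validation error as a crash worth reporting.

```python
import random
import string

def parse_quantity(text):
    """Toy input handler under test (hypothetical): accepts a string of
    digits as a quantity, rejects anything else with a clear error."""
    if not text or not text.isdigit():
        raise ValueError(f"invalid quantity: {text!r}")
    return int(text)

random.seed(7)  # make the fuzz run reproducible

crashes = []
for _ in range(1000):
    # Build a random, mostly-garbage input of random length.
    fuzz = "".join(random.choice(string.printable)
                   for _ in range(random.randint(0, 12)))
    try:
        parse_quantity(fuzz)      # pass: input was valid and accepted
    except ValueError:
        pass                      # pass: garbage rejected with a sensible error
    except Exception as exc:      # fail: any other exception is a crash to report
        crashes.append((fuzz, exc))

print(len(crashes))
```

Real fuzzers are far more sophisticated about generating and mutating inputs, but the pass/fail contract is the same: valid input accepted, invalid input rejected gracefully, and nothing allowed to blow up.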

So what’s my point? Do testers have a distinct dialect? If so, how did we get here? And now that we are here, what should we do going forward? Also, have you noticed this in your own interactions? How many out there have had these challenges, and what has been your experience with clearing up the communication gaps?