Yesterday I had an interesting experience. I was talking with a co-worker who has a friend who is a recruiter. The recruiter was reviewing a number of resumes she had received for testers, trying to determine whether the people in question would be a good fit for the job. My co-worker asked if I could take a quick look at the job description and offer any suggestions on how to narrow down the list. On one hand, I was able to do so, but on the other, I noticed a vagueness in the original description. The description asks for people with UI experience, but it doesn't spell out whether that means experience developing and designing user interfaces or experience testing them. Likewise, it requests familiarity with test scripts. I explained that, given the vagueness of the description, they could be looking at resumes from an entirely black-box tester who writes literal test scripts (enter value A into input B; if C, pass, else fail) to test user interfaces, or from a white-box tester who understands user interface development and hence can write unit tests to exercise functions and procedures. I gave him some suggestions to pass along to the recruiter: be specific about the technologies, methods, and language used to describe testing, because, to quote Inigo Montoya from "The Princess Bride"... "You keep using that word. I do not think it means what you think it means!"
This is something I've started to notice more and more. Developers and testers tend to think they speak the same language, but there are many cases where testing phrases and concepts that are well understood by testers are less understood, or totally foreign, to developers. As an example, many testers are familiar with the concept of "pairwise testing", where the tester creates a matrix of test options that covers every discrete pair of parameter values, as opposed to testing every combination of parameters exhaustively. The phrase "pairwise testing", however, seems to be one of those "test dialect" statements: when I have told software developers I was going to use pairwise testing to bring down the total number of test cases, I have received a few blank stares and an inquiry of "pairwise testing? What's that?". When I describe the process, I often get a comment back like "oh, you are referring to Combinatorial Software Testing". Well, yes and no: pairwise testing is one method of combinatorial software testing, but it is not combinatorial testing as a whole. It is not an exhaustive process, but rather a way to identify the pairs of parameter interactions that are, ideally, the most beneficial for spotting issues while keeping the test count small.
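To make the difference concrete, here is a minimal sketch of the idea in Python. The parameter names and values are made up for illustration, and the greedy selection below is only illustrative (real pairwise tools use smarter algorithms), but it shows the core trade: cover every *pair* of parameter values with far fewer tests than the exhaustive cross-product.

```python
from itertools import combinations, product

def pairwise_suite(params):
    """Greedy pairwise test selection: repeatedly pick the full combination
    that covers the most not-yet-covered value pairs, until every pair of
    parameter values appears in at least one selected test."""
    names = list(params)
    # Every pair of values (across two different parameters) we must cover.
    uncovered = set()
    for a, b in combinations(names, 2):
        for va, vb in product(params[a], params[b]):
            uncovered.add(((a, va), (b, vb)))

    # Candidate pool: the exhaustive cross-product (fine at this toy scale).
    all_tests = [dict(zip(names, vals)) for vals in product(*params.values())]
    suite = []
    while uncovered:
        def gain(test):
            return sum(1 for a, b in combinations(names, 2)
                       if ((a, test[a]), (b, test[b])) in uncovered)
        best = max(all_tests, key=gain)   # covers the most new pairs
        for a, b in combinations(names, 2):
            uncovered.discard(((a, best[a]), (b, best[b])))
        suite.append(best)
    return suite

# Hypothetical test parameters, purely for demonstration.
params = {"browser": ["Chrome", "Firefox", "Safari"],
          "os": ["Windows", "macOS"],
          "locale": ["en", "fr", "de"]}
tests = pairwise_suite(params)
print(len(list(product(*params.values()))), "exhaustive combinations")
print(len(tests), "pairwise tests cover every pair")
```

With these three parameters the exhaustive approach needs 18 combinations, while the pairwise suite gets every pair covered in roughly half that, and the gap widens dramatically as parameters and values are added.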
Another testing technique that seems to have gotten a few heads scratching is "fuzzing", or "fuzz testing". The idea behind fuzz testing is that a user (be it a live human, an application program, or a test script) provides unexpected or random data to a program's inputs. If the program produces an error that corresponds to the input and appropriately flags it as invalid, that's a pass; if those same steps cause the program to crash, or present an error that doesn't make any sense, that's a fail. Again, when I've talked to software developers and brought up the notion of "fuzz testing", they have looked at me like I've spoken a foreign language. When I've explained the process, again, I've been offered a corollary that developers use in their everyday speech ("syntax verification" has been a frequently used term; I'm not sure if that's representative, but it's what I've heard).
So what’s my point? Do testers have a distinct dialect? If so, how did we get here? And now that we are here, what should we do going forward? Have you noticed this in your own interactions? How many of you have run into these challenges, and what has been your experience with clearing up the communication gaps?