It's that time again. Welcome to the madness that will be the TESTHEAD Live Blog experience. I'm going to be a bit harried today, seeing as I have not one but two presentations to give. First will be my talk, and then the game that Bill Opsal and I will be presenting during lunch. However, you all don't really care about my trials and tribulations; you want to hear what the speakers have to say and my "hot take" on what they have to say. At least I'm guessing that's why you are reading this. It's possible you just want to see what dang fool thing I say next. I guess that's OK, too ;).
First out of the gate is Angie Jones, and seriously, I don't think I have to tell anyone reading this who she is. If you don't know, just follow her on Twitter and prepare to be amazed :).
Angie used an example of a campaign focused on women who are into "Space and Technology" and men who are into Art.
Does something sound odd there? Yeah, that's the point. Imagine having to test that campaign. How do you target those two groups? Is there an overlap? Are you actually getting the message to the people you intend to? How do you know you are targeting the right people? Are you even clear on who the right people are?!
What we are looking at here is an example of what Angie is calling "Testing the Untestable". This resonates with me because my own talk is based around testability (more on that later, I promise ;) ), but this would be a great example of a place where I might very actively be asking, "How in the world am I supposed to test this?" If it's advertising, odds are I'm not going to be able to test it in some quasi-staging environment. Advertising needs real eyeballs and real interactions. Is it possible to target literal individuals? Are there limitations to that approach?
Angie shared a few different examples of how real-world issues get in the way of actual testing. A minimum number of users may be required. Users that are too new may not be selectable. How do we get around these constraints? More to the point, do we even understand how the application works well enough to confirm that something is behaving as expected? Oh, does this resonate with me (LOL!). There are so many instances I can recall where I was sure I understood what was going on and then realized that I actually had no idea. Again, this is where asking all of those wonderfully annoying questions can at least get us on a similar page.
So often the greatest enemy of testing isn't desire, or understanding, or technology. So often, it comes down to time. We can point to so many instances where we wanted to do good testing but time just wasn't on our side. Again, I'm so happy to hear Angie talking about this because it makes me feel like my talk is maybe as coherent as I believe it to be (jury's out on that; I have a couple of hours to find out for sure). I will say this is an example of wanting to make sure we understand the problem and the testability issues as early as possible, even if it's just to get everyone on the same page of "oh yeah, this is totally testable" or "oh wow, this is going to be kind of ugly!"
Ultimately, we have to get granular enough to arrive at hypotheses that can actually be tested. It's not enough to say "my part works" or "that's outside of the components I need to care about". The entire team needs a philosophy of testability and the capability to verify hypotheses. Simple? Yes. Easy? Hardly. Ultimately, the team(s) need to work together with a shared focus to be successful. That won't necessarily end untestable situations, but it should help to minimize them.