Hey! Finally, I'm able to work that silly line a little more literally ;). Yesterday was the workshop day for CAST, but today is the first full general conference day, so I will be live blogging as much of the proceedings as I can, if the WiFi will let me.
CAST 2014 is being held at the New York University Kimmel Center, which is just outside the perimeter of Washington Square. The pre-game show for the conference (or one of them, in any event) is taking place in the lobby of the Marlton Hotel. Jason Coutu is leading a Lean Coffee event, in which all of the participants get together and vote on topics they want to talk about, and then discussion revolves around the topics that get the most votes.
This morning we started out with the topic of capacity planning and the attempt to manage and predict team capacity. Variance in teams can be dramatic (a three person test team is going to have lower capacity than a fifty or one hundred person team). One interesting factor is that estimation for stories is often wrong; story points and similar measures often regress to the mean. One attendee asked, "What would happen if we just got rid of the points for stories altogether?" Instead of looking at points for stories, we should be looking at ways to get stories to roughly the same size. It may be a gut feeling, it may be a time heuristic, but the effort may be better spent just making the stories smaller, rather than getting too involved in adding up points.
Another topic was the challenge of what happens when a company has devalued or lost the exploratory tester skill set due to focusing on "technical testers". A debate came up over what "technical tester" actually means, and in general, it was agreed that a technical tester is a programmer who writes or maintains automated testing suites, and who meets the same level/bar that the software engineers meet. The question is, what is being lost by having this be the primary focus? Is it possible that we are missing a wonderful opportunity to work with individuals who are not necessarily technical, or for whom that is not their primary focus? I consider myself a somewhat "technical tester", but I much prefer working in an environment where I can do both technical and exploratory testing. A comment was raised that perhaps "technical tester" is limiting; "technically aware" might be a better term, in that the need for technical skills is rising everywhere, not just in the testing space.
The last topic we covered was "testing kata", and this is a topic of great interest to me because of ideas that we who are instructors in Miagi-do have been considering implementing. My personal desire is to see the development of a variety of kata that we can put together and use in a larger sphere. In martial arts (specifically, Aikido), there is the concept of "Randori", which is an open combat scenario where the participant faces multiple challengers and needs to use the kata skills they have learned. For the kata part, we have a lot of examples. The randori, that's an open area that is ripe for discussion. The question is, how to put it into practice? I'd *love* to hear other people's thoughts on that :).
James Bach is the first keynote, and he has opened up with the reality that talking about testing is very challenging. It's hard enough to talk about testing with other testers, but talking to people who are not testers? Fuhgeddaboudit! Well, no, not really, but it sure feels that way. Talking about testing is a challenging endeavor, and very often there is a delayed reaction (Analytical Lag Time), where the testing you do one day comes together and gives you insights an hour or a day later. These are maddening experiences, but they are very powerful if we are aware of them and know how to harness them. The title of James' talk is "Testing is Not Test Cases (Toward a Performance Culture)". James started by looking at a product that would allow a writer working on a novel to change and modify "scene" order. The avenues that James was showing looked like classic exploratory techniques, but there is a natural ebb and flow to testing. The role of test cases is almost negligible. The thinking process is constantly evolving. In many ways, the process of testing is writing the test cases while you are testing. Most of the details of the test cases are of minor importance; the actual process of thinking about and pushing the application is hugely complex. An interesting point James makes is that there is no test for "it works". All I know for sure is that it has not failed at this point in time.
The act of testing is a performance. It can't truly be automated, or put into a sequence of steps that anyone can do. That's like expecting a sequence of steps that would let anyone step in and be Paul Stanley of KISS. We can all sing the lyrics or play the chords if we know them, but the whole package, the whole performance, cannot be duplicated, not that there aren't many tribute band performers who really try ;).
James shared the variety of processes that he uses to test. He shared the idea of a Lévy Flight, where we sample and cover a space very meticulously, then "fly" to some other location and do another meticulous search. The Lévy Flight heuristic is meant to represent the way that birds and insects scour an area, then fly off in what looks like a random manner, and then meticulously search again for food, water, etc. From a distance, it seems random, but if we observe closely, we see that the seemingly random flying around is not random at all; instead, it's a systematic method of exploration. Other areas James talked about are modeling from observations, factoring based on the product, experiment design, and using tools that can support the test heuristic.
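To make the idea a bit more concrete, here's a minimal sketch (my own illustration, not something James showed) of what a Lévy flight looks like in code, assuming a made-up two-dimensional "coverage space": step lengths are drawn from a heavy-tailed distribution, so most moves are small and local, with an occasional long jump to a new region.

```python
# A minimal Lévy flight sketch over a hypothetical 2-D "coverage space".
# Step lengths follow a heavy-tailed (Pareto) distribution: many short,
# local moves punctuated by occasional long jumps to a new region.
import math
import random

def levy_flight(steps=50, alpha=1.5, seed=42):
    random.seed(seed)
    x, y = 0.0, 0.0
    points = [(x, y)]
    for _ in range(steps):
        # Pareto-distributed step length: usually small, occasionally huge.
        length = random.paretovariate(alpha)
        angle = random.uniform(0, 2 * math.pi)
        x += length * math.cos(angle)
        y += length * math.sin(angle)
        points.append((x, y))
    return points

if __name__ == "__main__":
    for px, py in levy_flight(steps=10):
        print(f"({px:7.2f}, {py:7.2f})")
```

Swap the coordinates for areas of a product and you get the testing analogy: dig deep in one area, then jump somewhere seemingly unrelated and dig deep again.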
James created a "spec" based on his observations, but recognizes that his observations could be wrong, so he will look to share them with a programmer to make sure that his observations match the intended reality. There is a certain amount of conjecture here. Because of that, precise speech is important. If we are vague, we give leeway for others to say "yes, those are correct assumptions about the project"; the more specific we are, the less likely that wiggle room will be there. Issues will be highlighted and easier to confirm as issues if we are consistent with the terms we use. The test cases are not all that interesting or important; the tester's spec and the questions we develop and present at the end are what matter. However, just as Paul Stanley's performance of "Got to Choose" at Cobo Hall in Michigan in 1975 won't sound exactly the same as the performance of the same song in Los Angeles in 1977, a testing round a week later may produce a totally different document, with similarities, but perhaps fundamental differences, too.
Does this all mean that test cases are completely irrelevant and useless? No, but we need to put them where they actually belong in the hierarchy. There is a level of focus and there are ways that we want to interact with the system. Having a list of areas to look at, so as to not forget where we want to go, certainly helps. Walking through a city is immensely easier if we have a map, but a map is not entirely essential. We can intuit from street names and address numbers, we can walk and explore, we can ask for directions from people we meet, etc. Will the map help us get directly where we want to go? Maybe. Will following the map show us everything we want to see? Perhaps not. A willingness to go in various directions because we've seen something interesting will tell us much more than following the map to the letter. So it is with test cases. They are a map. They are a suggested way to go. They are not *THE* way to go.
This comes down to Tacit and Explicit Knowledge. I remember talking about this with James when we were at Øredev in Sweden, where we discussed how to teach something we can't express in words. There is a level of explicit knowledge that we can talk about and share, but there's a lot of stuff buried underneath that we can't explain as easily (the tacit knowledge). Transferring that tacit knowledge comes down to experience and shared challenges. Most important, it comes down to actively thinking about what you are looking at and doing what you can to make for a performance that is both memorable and stands up to scrutiny.
In my own experience, I have found that persuasion is much easier if you have already done something on your own and experienced success with it. Sometimes we need to explore options on our own and see if they are viable. Perhaps we can find one person on our team who can offer a willing ear. I have a few developers on my team who are often willing to help me experiment with new approaches, as long as I am prepared to explain what I want to do and have done my homework up front. People are willing to give you latitude and the benefit of the doubt if you come prepared. If you can show the person who needs to be persuaded that you have done the work to be ready for them to do their part, they are much more likely to go along with it. Start small, and get some little successes. That will often get these first few "adopters" on board with you, and then you can move on to others. Over time, you will have proven the worth of your idea (or had it disproved), and you can move forward, or you can refine and regroup to try again.
One of the key areas where people fail when it comes to persuasion is "compromise". Compromise has become a bad word to many. It's not that you are losing; it's that you are willing to work with another person to validate what they are thinking and to let them see what you are thinking. It also helps to start small: pick one area, or a particular time box, and work out from there.
During the lunch break, Trish Khoo stepped on stage to talk about the ideas of "Scaling Up with Embedded Testing". Trish described a lot of the testing efforts she had been involved in, where the code that she had written was disregarded by the programmers, since they felt what she was doing was just some other thing that had to be done so the programmers could do what they needed to do. Fast forward to her working in London, where the programmers were talking about how they would test the code they were writing. This was a revelation to her, because up to that point, she had never seen a programmer do any type of testing. Since many of the efforts she was used to doing were now being taken care of by the programmers, her role became more challenging, and she had to be a lot more inquisitive and aggressive in looking for new areas to explore.
One of the people Trish talked to was Elisabeth Hendrickson of Cloud Foundry/Pivotal. Interestingly, Cloud Foundry does not have a QA department. That's not to say that they do not have testers, but they have programmers who test, and testers who program. There is no wall. Everyone is a programmer, and everyone is a tester. Elisabeth has a tester on the team by the name of Dave Liebreich (Hi Dave ;) ). While he is a tester, he does as much testing as the programmers, and as much code writing as the programmers.
The third case study was with Michael Bachman at Google. In a previous incarnation, Google would outsource a lot of the manual testing to vendors, mostly to look at the front end UI. The coverage those testers were providing ignored about 90% of the code in play. For Google to stay competitive, they opted to change their organization so that Engineering owned quality as a whole. There was no QA department. Programmers would test, and there was another team called Engineering Productivity, who helped to teach about areas of testing, as well as investing in Software Engineers in Test (SETs), who could then help instruct the other programmers in methods related to software testing. The idea at Google was that "Quality is Team Owned, not Test or QA Owned".
So what does that mean for me as a tester? It means the bar has been raised. We have some new challenges, and we should not be afraid to embrace them. Does that mean that exploratory testing is no longer relevant? Of course not; we still explore when we develop tool-assisted testing. We do end up adding some additional skills, and we might be encouraged to write application code, too. As someone who doesn't have a "developer" background, I'm not automatically at a disadvantage, but it does mean I would be well served to learn a bit about programming and get involved in that capacity. It may start small, but we all can do some of it if we give it a chance. We may never get good enough at it to become full time programmers, but this model doesn't really require that. Also, that's three companies out of tens of thousands. It may become a reality for more companies, but rather than be on the tail end of the experience and have it happen to you, perhaps it may help to get in front of the wave and be part of the transition :).
When Jan and Ben Kelly were embedded within the European teams, there was an initial experience of Testers vs. Programmers, but over time, developers became test infected, and testers became programming savvy along with it. This prompted other teams to say "hey, we want a tester, too". In this environment, testers and programmers both win.
Additionally, testers can help teach programmers some testing discipline and an understanding of testing principles. Testers bring technical awareness of other domains. They also have the ability to help guide testing efforts in early stage development, and to inform and encourage practices that programmers might not set up were there not a tester involved. It sounds like an exciting place to be a part of, and an interesting model to aspire to.
We often treat software testing and computer science as though they are hard sciences like mathematics or physics or chemistry. They have principles and components that are similar, but in many ways, the systems that make software are more akin to the social sciences. We think that computers will do the same thing every single time in exactly the same way. The fact is, timing, variance, user interactions, congestion and other details all get in the way of the specific factors that would make "experiments" in the computer science domain truly repeatable. I've seen this happen in continuous integration environments, where a series of tests failed on one run and passed on the next, without any parameters changing. Something caused the first run to fail and the second to pass. There can be lots of reasons, but usually they are not physics or mechanical details, but coding and architectural errors. In other words, people making mistakes. Thus, social rather than hard sciences.
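As a concrete illustration of that kind of nondeterminism, here's a minimal sketch (my own example, not from the talk) of a race condition: two threads updating a shared counter without a lock. Run it a few times and the result can change with no change to the code or its inputs, which is exactly the "same test, different outcome" behavior that shows up in CI.

```python
# A minimal sketch of a timing-dependent failure: two threads update a
# shared counter without a lock. The read-modify-write is not atomic, so
# concurrent updates can be lost, and the final count varies run to run.
import threading

counter = 0

def increment(times):
    global counter
    for _ in range(times):
        tmp = counter   # read
        tmp += 1        # modify
        counter = tmp   # write -- may clobber another thread's update

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"expected 200000, got {counter}")  # often less, and rarely the same twice
```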
Science and research inform a great deal of what a software tester actually does. Sadly, very few software testers are really familiar with the scientific method, and without that understanding, many of the options that can help inform test design are missing. I realized this myself several years ago when I stopped considering a long list of test cases to be effective testing. Going back to the scientific method gave me the ability to reframe testing as though it were a scientific discipline in and of itself. However, we do ourselves a tremendous disservice if we only use hard science metaphors and ignore the social sciences and what they can teach us about how we communicate and interact.
We focus so much attention on trying to prove we are right. That's misguided; we cannot prove we are right in anything. We can disprove, but when we say we've proven something, we are really saying we have not found anything that disproves what we have seen. Over time, and with repeated observation, we can come close to saying it is "right", but that's only until we get information that disproves it. The theory of gravity seems to be pretty consistent, but hey, I'll keep an open mind ;).
Humans are not rational creatures. We have serious flaws. We have biases we filter everything through. We are emotional creatures. We often do things for completely irrational reasons. We have gut feelings that we trust, even if they fly in the face of what would be considered rational. Sometimes they are right, and sometimes they are wrong, yet we still heed them. Testing needs to work in the human realm. We have to focus on the sticky and bumpy realities of real life, and our testing efforts likewise have to exist in that space.
OK, that's all cool, but what does this have to do with testing?
The test subjects had their brains scanned while they were playing games, and the results showed that gaming had a measurable impact on the brain. Those who played games frequently showed cooler areas of the brain than those who did not play games, which suggests that gaming optimizes neural networks in many cases. Martin also took this process further, focusing on a more specific game, i.e. Mastermind, and what that game did to his brain.
So are games good for testers? The jury is out, but the small sample set certainly seems to indicate that yes, games do indeed help testers, and that the culture of tester games, and other games, is indeed healthy. Hmmm, I wonder what my brain looks like on Silent Hill... wait, forget I said that, maybe I really don't want to know ;).
A great first day, so much fun, so much learning, and now it's time to schmooze and have a good time with the participants. See you all tomorrow!