Wednesday, April 3, 2019

One Thing Testers Can Learn from Machine Learning: an #STPCon Live Blog Entry


Today's keynote comes to us courtesy of Mary Thorn. Mary's been in the game about twenty years, including a stint as a COBOL programmer working on the Y2K changes that were happening at the time. Mary is also giving a second talk a little later, so she'll be in this blog twice today.

All right, let's get a show of hands. How many people have seen a marked increase in hearing the term Machine Learning? How many people feel that they understand what that is? It's OK if you are not completely sure; I feel less sure each time I see a talk on this topic. Let's start with a definition. Arthur Samuel defined it as: “Machine Learning is the field of study that gives computers the ability to learn without being explicitly programmed.” The key here is that the machine can learn and can then execute actions based on that learning.

Two terms come up constantly in this space: supervised and unsupervised learning. Supervised learning is the process of learning a function that maps an input to an output based on example input-output pairs, where each example is a pair consisting of an input and the desired output. Unsupervised learning, by contrast, takes unlabeled and unclassified data, uses cluster analysis to identify commonalities in the data, and reacts based on the presence or absence of those commonalities.
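To make the distinction concrete, here's a minimal sketch. The library (scikit-learn) and the toy data are my choices for illustration, not anything from the talk: the supervised model is handed input-output pairs, while the clustering model has to find structure on its own.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Supervised: learn a function that maps inputs to outputs from labeled pairs.
X_train = np.array([[1.0], [2.0], [3.0], [10.0], [11.0], [12.0]])
y_train = np.array([0, 0, 0, 1, 1, 1])  # the desired output for each input
clf = LogisticRegression().fit(X_train, y_train)
print(clf.predict([[2.5], [10.5]]))  # -> [0 1]

# Unsupervised: no labels at all; cluster analysis groups the data by
# commonalities it discovers on its own.
X_unlabeled = np.array([[1.0], [1.2], [0.9], [10.1], [9.8], [10.3]])
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X_unlabeled)
print(km.labels_)  # two groups, inferred purely from the data's structure
```

Same data shapes, very different contracts: the first model is graded against answers we supplied, the second invents its own groupings.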

OK, that's fascinating, but what does this actually mean? It means our systems are not just the dumb input/output systems we tend to treat them as. There's a change in the skills we need to have and in the ways we focus our testing efforts. More to the point, there's a variety of newer skills that testers need to develop. It may not be necessary to learn all of them, but testers (or at least test teams) would be well served to develop levels of understanding around automation, performance, DevOps, exploratory testing, and CI/CD pipeline skills. Those skills may or may not reside in the same team member, but they definitely need to reside in the team.

There are a couple of things that help make sure this process leads to the best results. The first is having a specific goal or set of goals to point toward. Along the way, we need to look at the outputs of our processes, examine what the data tells us, and follow it where it leads. To be sure, we may learn things we don't really want to know. Do we have defined tests for key areas? How are they being used? Do they matter in the first place? What are the interesting things that jump out at us? How can we determine if there is a cluster of issues?

This is where exploratory testing can be a big help. Automation can help us consolidate the busywork and gather results together by area; from there, we as individual testers can look at the data output and look for patterns. Additionally, and this is apropos to the 30 Days of Testability focus I worked through in March, we can use the data we have gathered and analyzed to help us determine the testability of an application. Once we identify areas where testability might be lacking, we should do what we can to increase the overall testability of our applications.
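As a rough illustration of what "consolidate the busywork" might look like, here's a hypothetical sketch. The test records and field names are invented; the point is that automation tallies the results so a tester can spot where issues cluster and aim an exploratory session there.

```python
from collections import Counter

# Invented records standing in for output from an automated test run.
test_results = [
    {"test": "login_smoke", "area": "auth", "passed": False},
    {"test": "password_reset", "area": "auth", "passed": False},
    {"test": "checkout_total", "area": "billing", "passed": True},
    {"test": "session_timeout", "area": "auth", "passed": False},
    {"test": "invoice_pdf", "area": "billing", "passed": True},
]

# Tally failures per functional area; a lopsided count hints at a cluster
# of issues worth a closer, human look.
failures = Counter(r["area"] for r in test_results if not r["passed"])
for area, count in failures.most_common():
    print(f"{area}: {count} failing test(s)")
# -> auth: 3 failing test(s)
```

The machine does the grouping; deciding whether that cluster matters, and why, is still the tester's job.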

Analytics are hugely helpful here. By examining the analytics, we can determine what platforms are used, which features actually matter, and which interactions are the most important. In short, let's make sure we learn what our app is actually doing, not just what we think or want it to do.
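Here's one hypothetical way that might look in practice. The event log below is invented, and in real life the data would come from whatever analytics tool is already in place; the questions, though, are exactly the ones above.

```python
import pandas as pd

# Invented usage events; a real export would have far more rows and columns.
events = pd.DataFrame({
    "platform": ["iOS", "Android", "iOS", "web", "iOS", "Android"],
    "feature":  ["search", "search", "export", "search", "search", "login"],
})

# What platforms are our users really on?
print(events["platform"].value_counts())

# Which features actually get used? Weight test effort accordingly.
print(events.groupby("feature").size().sort_values(ascending=False))
```

If "search" dwarfs everything else in the real data, that's where the testing attention belongs, regardless of which feature the team finds most interesting.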
