Sunday, October 9, 2016

The Humans In the Machine - Talking Machine Learning

This weekend was one of the more interesting Weekend Testing Americas sessions I've hosted. Hurricane Matthew was making itself known, and people were dealing with getting in and out of a broad section of the Southeastern United States on Saturday, as well as checking whether or not their homes were OK. Under those circumstances, I can understand that getting together to talk testing may not have been a high priority. We had several people ask to attend, but by the time the session started, there were just two of us, Anna Royzman and myself. Anna and I both decided "hey, we're here, Anna's never done a Weekend Testing session before, let's make the most of it", and so we did :).

Our topic this go-around was a chance to look at a new feature of LoseIt called SnapIt. SnapIt lets you take a picture of a food item; based on what the app thinks the picture shows, it selects the food item in question and gives you a macronutrient breakdown. Since this is a new feature, I anticipated that there might be some gaps in the database, or that we might get some interesting tags to appear. We were not disappointed. In many of the pictures, well-known food items were easy to identify (apples, bananas, etc.), and some a little less so (a small pear variety with a darkish green skin was flagged as Guacamole, which isn't really too far of a stretch, since I could see the app interpreting it as a small avocado):

[Screenshots: SnapIt identifying foods, including the dark green pear flagged as Guacamole]
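
Watching SnapIt's behavior, my guess is that under the hood it returns a ranked list of candidate labels with confidence scores, which would explain the Guacamole call: a dark green pear plausibly scores close to an avocado. Here's a minimal Python sketch of that kind of top-k flow; classify_image, the labels, the scores, and the threshold are all hypothetical, since LoseIt hasn't published how SnapIt actually works.

```python
# Hypothetical sketch of a top-k food classification flow. The function names,
# labels, scores, and threshold are made up for illustration; LoseIt has not
# published how SnapIt actually works.

def classify_image(photo_path):
    """Stand-in for the real model: returns (label, confidence) pairs."""
    return [("guacamole", 0.41), ("avocado", 0.38), ("pear", 0.12)]

def suggest_foods(photo_path, top_k=3, threshold=0.10):
    """Show the user up to top_k candidates that clear a confidence floor."""
    ranked = sorted(classify_image(photo_path), key=lambda c: c[1], reverse=True)
    return [(label, conf) for label, conf in ranked[:top_k] if conf >= threshold]

for label, conf in suggest_foods("dark_green_pear.jpg"):
    print(f"{label}: {conf:.0%}")
# guacamole: 41%
# avocado: 38%
# pear: 12%
```

A gap that small between the top guesses is exactly where a bar code scan (for packaged foods) or a human double-check beats the classifier.
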
With complex and packaged foods it struggled a little more, but in those cases, if an item had a bar code, reading the bar code would usually deliver the information we needed, so SnapIt mattered less in that setting. Still, it was interesting to see what it flagged things like granola bars or shelled walnuts as:

[Screenshots: SnapIt's guesses for packaged foods such as granola bars and shelled walnuts]

During the session, we did discover one interesting bug. On my iPhone, if a user takes two pictures, discards both, and then tries to take a third picture, the camera button on the screen appears as a half circle; the bottom of the button is missing. Exit the camera and open it again, and the camera button appears complete again, but shoot two pictures and throw both out, and you will get the half button on the third try.

[Screenshot: the half-circle camera button]

Outside of the actual testing of SnapIt, we had a pretty good discussion of machine learning in general, and of the idea that many of the algorithms used for these processes are pretty good but can often have unintended consequences. The past few weeks, I've been listening to a number of podcasts that have featured Carina C. Zona and her talk "Consequences of an Insightful Algorithm" (talk and slides). She has appeared on both the Code Newbie podcast and the Ruby Rogues podcast, and both treatments made me want to explore this topic further. One comment from Carina's presentation that stood out to me is the idea that math (and algorithms) may be objective conceptually, but their implementation hardly ever is, because it's people who create them, and we create algorithms with our prejudices, biases, and fallacies intact. In short, we do not see our algorithms as they are, we see them as we are (a paraphrase of Anaïs Nin, but you get the point, I hope ;) ).
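
To make that point concrete, here's a toy Python sketch of my own (not from Carina's talk): the "algorithm" below is perfectly neutral math, but a skewed training set bakes a bias straight into its predictions, and the headline accuracy number hides it.

```python
from collections import Counter

# Toy illustration (mine, not from Carina's talk): neutral math, biased data.
# 90 training examples come from group A, only 10 from group B.
training = [("A", "approve")] * 90 + [("B", "deny")] * 10

def fit_majority(data):
    """'Learn' the single most common label: objective math, skewed inputs."""
    majority, _ = Counter(label for _, label in data).most_common(1)[0]
    return lambda group: majority  # predicts the same label for everyone

model = fit_majority(training)

overall = sum(model(g) == label for g, label in training) / len(training)
group_b = [(g, label) for g, label in training if g == "B"]
b_rate = sum(model(g) == label for g, label in group_b) / len(group_b)

print(f"overall accuracy: {overall:.0%}")  # 90%, looks fine
print(f"group B accuracy: {b_rate:.0%}")   # 0%, wrong for them every time
```

Nothing in that code is prejudiced; the 90% headline number simply hides that the model is wrong for the underrepresented group every single time, which is why testing these systems means slicing the results by group, not just reading the aggregate.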

I'd encourage anyone who wants a better understanding of the potential dangers of relying too heavily on machine learning, as well as the human aspects we need to bring to both coding and testing these algorithms, to check out Carina's talk. For those who want to see some of the terrain that Anna and I riffed on, please feel free to read the chat transcript of the WTA session.
