Tuesday, October 9, 2018

Rise of the Machines - a #pnsqc live blog


All right, it's day 2, the last day of the technical program, and we are starting off with Tariq King's talk "Rise of the Machines". The subtitle of this talk is "Can Artificial Intelligence Terminate Manual Testing?" In many ways, the answer is "well, kind of..."

In a lot of ways, we are looking at machine learning and AI through a lens that Hollywood has conditioned us to use. Our fears and apprehensions about robotic technology outstripping humanity have been part of our common lore for the past 100 years or so. Counter to that is the idea that computers are extremely patient rocks that will only do exactly what we tell them to. My personal opinion, for whatever it's worth, is somewhere in between. It's not a technological problem; it's an economic one. We are already watching a world develop where machines have taken the place of people. Yes, there are still people maintaining these machines and handling their care and feeding, but it's a much smaller percentage of people than were doing that work as little as ten years ago.

Recently, we have seen articles about software developers who have automated themselves out of their jobs. What does that tell us? Does it mean we are ultimately reaching a point where our software is outstripping our ability to contribute? I don't think so. I think in many cases we may have reached a point where a machine can replace a person who has ceased to look for broader and greater questions. Likewise, is it possible for machines to replace all manual testing? The answer is yes if we are just looking at the grunt work of repetition. The answer is more nuanced if we ask "will computers be able to develop an explorer's sense and think of new ways to look for more interesting problems?" Personally, I would say "not yet, but don't count the technology out". It may take a few decades, maybe more, but ultimately we will be replaced if we stop looking for wider and more interesting problems to solve.

We focus on Deep Blue beating the grand master of Chess, AlphaGo beating the grand master of Go, and Watson beating Ken Jennings at Jeopardy (not just beating him, but being so much faster to the buzzer that Ken never got the chance to answer). Still, is that learning, or is that brute force and speed? I'd argue that, at this point, it's the latter, but make no mistake, that's still an amazing accomplishment. If machines can truly learn from their experience and become even more autonomous in their solutions, then yes, this can get to be very interesting.

Machine learning is in the process of reinventing how we view the way cars are driven and how effective they can be. Yes, we still hear about the accidents, and they are capitalized on, but in the process we forget about the 99% of the time that these cars are driving adequately or exceptionally, and in many cases better than the humans they are being compared to. In short, this is an example of a complex problem that machines are tackling, and they are making significant strides.

So how does this relate to those of us who are software testers? What does this have to do with us? It means that, in a literal brute-force manner, it is possible for machines to do what we do. Machines could, theoretically, do exhaustive testing in ways that we as human beings can't. Actually, let me rephrase that... in ways that human beings won't.
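To make that "exhaustive" point concrete, here's a minimal sketch (my own illustration, not anything from the talk) of a machine grinding through every input combination over a small bounded domain to check a property. The `clamp` function is a hypothetical stand-in for any code under test; the point is that a human would never hand-check 700+ cases, while a machine does it instantly:

```python
from itertools import product

# Hypothetical function under test: clamps a value into a range.
def clamp(value, low, high):
    return max(low, min(value, high))

# Exhaustively check every combination over a small bounded domain --
# tedious for a human, trivial for a machine.
domain = range(-5, 6)
checked = 0
failures = []
for value, low, high in product(domain, repeat=3):
    if low > high:
        continue  # skip invalid ranges
    checked += 1
    result = clamp(value, low, high)
    if not (low <= result <= high):
        failures.append((value, low, high, result))

print(f"checked {checked} cases, {len(failures)} failures")
```

Of course, exhaustive enumeration only scales to tiny domains; the interesting question the talk raises is what happens when the machine gets smarter about *which* cases to try.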

The Dora Project is an example of bots doing testing using methods very similar to those humans use. Granted, Dora is still quite a ways away from being a replacement for human-present testing. Make no mistake, though, she is catching up and learning as she goes. Dora goes through the processes of planning, exploring, learning, modeling, inferring, experimenting, and applying what is learned to future actions. If that sounds like what we do, that's no accident. Again, I don't want to be an alarmist here, and I don't think Tariq is trying to be one either. He's not saying that testers will be made obsolete. He's saying that people who are unwilling or uninterested in jumping forward and trying to find those next bigger problems are the ones who should probably be concerned. If we find that we are those people, then yes, we very probably should be.
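The plan/explore/model/experiment loop described above can be sketched in a few lines. To be clear, this is my own toy illustration of that style of agent, not Dora's actual code or target; the "app" here is a hypothetical UI modeled as states and actions, and the agent builds its own model of it by preferring actions it hasn't tried yet:

```python
import random

# Toy system under test: a tiny UI modeled as states and actions.
# (A hypothetical stand-in -- not the Dora Project's actual target.)
APP = {
    "home":      {"login": "dashboard", "help": "help"},
    "help":      {"back": "home"},
    "dashboard": {"logout": "home", "settings": "settings"},
    "settings":  {"back": "dashboard"},
}

def explore(start="home", steps=50, seed=0):
    rng = random.Random(seed)
    model = {}  # learned model: state -> {action: observed next state}
    state = start
    for _ in range(steps):
        actions = list(APP[state])                 # explore: observe options
        known = model.get(state, {})
        untried = [a for a in actions if a not in known]
        action = rng.choice(untried or actions)    # plan: prefer the unknown
        nxt = APP[state][action]                   # experiment: take the action
        model.setdefault(state, {})[action] = nxt  # learn/model the outcome
        state = nxt                                # apply it to the next step
    return model

model = explore()
print(model)
```

A real exploratory bot would of course drive an actual UI, infer oracles, and flag surprises; the sketch just shows how the loop of planning, experimenting, and feeding observations back into a model maps onto code.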
