Monday, October 12, 2020

PNSQC 2020 Live Blog: Of Machines and Men with Iryna Suprun

As is often the case at PNSQC, several of the talks are from people I have not seen speak before. Iryna Suprun is focusing her talk on areas of AI and Machine Learning. As she starts her talk, we look at the fact that there are few tools available where AI and Machine Learning are prominent and prevalent for individual users. Some hallmarks of AI-based tools, and what is being marketed, are codeless script generation, the ability to self-heal as the environment changes (meaning the script can collect data about elements of the application itself), and the ability of Natural Language Processing to convert documentation into actual tests (this is a new one to me, so hey, I'm intrigued). 
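To make that "self-healing" idea a little more concrete, here's a minimal sketch of the underlying technique as I understand it: record several attributes for an element and fall back to alternates when the primary locator stops matching. This is purely my own illustration in Python with Selenium, not how Testim, Mabl, or TestCraft are actually implemented, and the locators and URL are placeholders.

# Sketch of a "self-healing" style locator: try recorded attributes in order.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException

def find_with_fallbacks(driver, locators):
    """Try each (By, value) pair in order; return the first element found."""
    for by, value in locators:
        try:
            return driver.find_element(by, value)
        except NoSuchElementException:
            continue  # the recorded locator "broke"; try the next attribute
    raise NoSuchElementException(f"No locator matched: {locators}")

driver = webdriver.Chrome()
driver.get("https://example.com/login")  # placeholder URL
submit = find_with_fallbacks(driver, [
    (By.ID, "submit-btn"),                     # preferred: id recorded at authoring time
    (By.NAME, "submit"),                       # fallback: name attribute
    (By.XPATH, "//button[text()='Log in']"),   # last resort: visible text
])
submit.click()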

Iryna Suprun
Comparison of visual output against the expected design is becoming more sophisticated. More tools support these features, and additional levels of comparison are being applied (not just pixel-to-pixel comparison these days). 
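As a rough sketch of why "beyond pixel-to-pixel" matters: an exact diff flags any anti-aliasing or rendering noise, while a tolerance-based comparison ignores differences below a threshold. The example below uses Pillow, and the file names and threshold are placeholders of my own, not anything from the talk or these tools.

# Compare a baseline screenshot against an actual screenshot.
from PIL import Image, ImageChops

baseline = Image.open("baseline.png").convert("RGB")
actual = Image.open("actual.png").convert("RGB")

diff = ImageChops.difference(baseline, actual)

# Naive pixel-to-pixel check: any single differing pixel fails.
exact_match = diff.getbbox() is None

# Slightly more forgiving check: fail only if some channel differs by a lot.
threshold = 30  # per-channel tolerance (0-255), arbitrary for illustration
max_channel_diff = max(band_max for _, band_max in diff.getextrema())
tolerant_match = max_channel_diff <= threshold

print(f"exact: {exact_match}, within tolerance: {tolerant_match}")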

So while we have these changes coming (or already here), how can we leverage these tools, or learn how to use them in the first place?

Example tools to try out for these comparisons were Testim, Mabl, and TestCraft. What did they provide? All three allowed for a quick start, making it possible to learn each one and automate the same basic initial test case. All of the tools had recording implemented, which allows initial test cases to be created (TestCraft had a few extra setup steps, so it was not quite as easy to start with as the other two). Modifying and inserting/deleting steps was relatively fast.
So what challenges were discovered with these tools? As could be expected, codeless script generation (recording) is good to get started, but its usefulness diminishes the more complex the test cases become. This is to be expected, IMO, as this has been the same issue with most automation tools that promise an easy entry. It's a place to start, but getting further will require proficiency and experience beyond what the recorder can provide. Self-healing is a useful feature, but we are still at a point where we have to be somewhat explicit as to what is actually being healed, so calling it self-healing may still be a misnomer, though that is the goal. So how about self-generated tests? What data is actually being used to create these self-generated cases? This didn't seem to be very self-evident (again, this is me listening, so I may be misinterpreting). An example is checking that links work and point to legitimate destinations. That tests that a link exists and can be followed, but it doesn't automatically mean the link is useful to the workflow, or validate that the link is relevant. People still need to make sure that the links go somewhere that makes sense. 
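Here's a small sketch of the kind of check a self-generated test can plausibly do on its own: confirm each link on a page resolves to something. It cannot tell whether the destination is relevant to the workflow; that judgment still needs a person. This is my own illustration (using requests and BeautifulSoup, with a placeholder URL), not anything these specific tools do internally.

# Crawl one page and verify that each absolute link resolves.
import requests
from bs4 import BeautifulSoup

page = requests.get("https://example.com", timeout=10)  # placeholder URL
soup = BeautifulSoup(page.text, "html.parser")

for anchor in soup.find_all("a", href=True):
    url = anchor["href"]
    if not url.startswith("http"):
        continue  # skip relative and mailto links for this simple illustration
    try:
        status = requests.head(url, allow_redirects=True, timeout=10).status_code
    except requests.RequestException:
        status = None
    ok = status is not None and status < 400
    print(f"{'OK ' if ok else 'BAD'} {url}")  # "OK" only means the link resolves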
So even though we keep hearing that AI and Machine Learning are on the horizon, and are even here changing the landscape, there's still a lot of underlying knowledge needed to make these tools work effectively. There's definitely a lot of promise here and there's an interesting future to look forward to, but we do not have anything close to a magic wand to wave yet. In other words, the idea that AI is going to replace human testers might be a possibility at some point, but that promise/scare is not quite ready for prime time yet. Don't be complacent; take the time to learn how these tools can help us and how we can then leverage our brains for more interesting testing work.
 
