Monday, October 8, 2018

Automating Next-Generation Interfaces - a #pnsqc Live Blog



Normally, I'm not all that interested in attending what I call "vendor talks," but I made an exception this time because I've been curious about this topic for a while.

In "How to Automate Testing for Next-Generation Interfaces" Andrew Morgan of Infostretch covers a variety of devices, both common and emerging. Those of us who are most comfortable with working with web and mobile apps need to consider that there are a variety of devices we are not even interacting with (think bots, watches, voice devices like Siri and Alexa. These require a fundamentally different approach to testing. Seriously, how does one automate testing of Siri short of recording my voice and playing it back to see how well it does?

Additionally, we have a broader range of communications (WiFi, Bluetooth, biometric sensing, etc.). Seriously, how would someone automate testing of my FitBit Surge? How do we test face detection? How about Virtual Reality?

To accomplish this, your project team must be able to pair device hardware capabilities with intelligent software technologies such as location intelligence, biometric sensing, and Bluetooth. Testing these systems and interfaces is becoming an increasingly complex task, and traditional testing and automation processes simply don't apply to next-generation interfaces.

OK, so that's a good list of questions, but what are the specifics? What does a bug look like in these devices and interfaces? Many of these issues are around user experience and the usefulness of the information. If you are chatting with a bot, how long does it take the bot to figure out what you are talking about? Does it actually figure it out in a way that is useful to you? Amazon Alexa has a Drop In feature that lets you connect to another Alexa device and interact with it. Can features like that be abused? Absolutely! So what level of security do we need to be testing?
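
The "how long does it take" and "did it understand me" questions are at least automatable when the bot is reachable over HTTP. Here's a rough sketch of that kind of check; the endpoint, payload shape, expected intent, and timing threshold are all made up for illustration, not anyone's real API:

```python
# Sketch: time a chatbot's intent resolution over a hypothetical REST endpoint.
import time
import requests

BOT_URL = "https://example.com/chat"  # placeholder endpoint

def check_intent(utterance: str, expected_intent: str, max_seconds: float = 2.0) -> None:
    start = time.monotonic()
    resp = requests.post(BOT_URL, json={"message": utterance}, timeout=10)
    elapsed = time.monotonic() - start

    resp.raise_for_status()
    body = resp.json()
    # Two separate questions: did it understand, and did it do so quickly enough?
    assert body.get("intent") == expected_intent, f"got {body.get('intent')!r}"
    assert elapsed <= max_seconds, f"intent resolution took {elapsed:.2f}s"

if __name__ == "__main__":
    check_intent("I'd like to reschedule my appointment", "reschedule_appointment")
```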

Other things to consider:

- How are we connecting?
- How are we processing images?
- How are we testing location-specific applications?
- Is the feature handling date and time correctly?
- How do we handle biometric information (am I testing the fingerprint itself, or the app's interaction with that fingerprint)? See the sketch after this list.

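On that last point, the distinction matters: in most cases you aren't testing the sensor, you're testing how the app reacts to an authentication event. On an Android emulator you can fake that event from the command line. A small sketch, assuming adb is on the PATH and an emulator with an enrolled fingerprint is running; the finger ID is arbitrary:

```python
# Sketch: simulate a fingerprint touch on a running Android emulator so we can
# test the app's response to the authentication event rather than the sensor.
import subprocess

def simulate_fingerprint(finger_id: int = 1) -> None:
    """Send a fake fingerprint touch to the only running emulator via adb."""
    subprocess.run(
        ["adb", "-e", "emu", "finger", "touch", str(finger_id)],
        check=True,
    )

if __name__ == "__main__":
    simulate_fingerprint()
    # The real assertions belong in the UI test that verifies the app unlocked
    # (or correctly rejected the attempt) -- omitted in this sketch.
```
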
At this point, we move into an explanation of what Infostretch provides and some examples of how they are able to interact with these devices (they have dedicated libraries that can be accessed via REST). The key takeaway is that there are a lot of factors that are going to need to be tested, and I'm intrigued by the question of how to start addressing these new systems.
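
I don't have the details of their libraries, but the general pattern of driving a device through a REST-style interface is familiar from tools like Appium, whose protocol is HTTP under the hood. A hedged sketch, assuming an Appium 2.x server at localhost:4723, a recent appium-python-client, and an entirely hypothetical app under test, to show the shape of a location-specific check:

```python
# Sketch: fake a device's GPS position through Appium so a location-specific
# feature can be exercised from the desk. App package/activity are hypothetical.
from appium import webdriver
from appium.options.android import UiAutomator2Options

options = UiAutomator2Options()
options.platform_name = "Android"
options.app_package = "com.example.geoapp"   # hypothetical app under test
options.app_activity = ".MainActivity"

driver = webdriver.Remote("http://127.0.0.1:4723", options=options)
try:
    # Portland, OR (fitting for PNSQC); latitude, longitude, altitude.
    driver.set_location(45.5152, -122.6784, 10)
    # UI assertions about the app's behavior at that location would go here.
finally:
    driver.quit()
```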


