Tuesday, October 12, 2021

GUI Test Automation for EDA Software with @rituwalia20 (#PNSQC2021 Live Blog)

 


Photo of Ritu Walia


We are already at the last track talk of the conference. Wow, that went quickly. It's funny how fast things go when you are typing incessantly. I should say my fellow conference attendees probably appreciate me not pounding away on my keyboard (I confess, my keystrokes often sound like howitzers ;) ).

Chances are, unless we are lucky or work with some middleware component (heh heh, that's my current active area), we are going to work with a GUI. In Ritu's case, she is describing Electronic Design Automation (EDA) tools.

EDA fascinates me, as it feels very much like the analog of the CNC machines used to create items in the physical space (I'm most familiar with musical instrument manufacture, which now uses these tools). The neat thing is that these tools have some amazing capabilities and some very complex software processes. I can only imagine EDA tools fit a similar space.




This is an example of a product very much outside of my wheelhouse, and my immediate question would be "How in the world would I test the software that does this stuff?" More to the point, how does someone go about automating this? Apparently, the answer is "the same way we automate any other software with a front end". EDA software is complex, sure, but it still has a front end, and that front end can be interacted with using a mouse and keyboard. That means we can synthesize those mouse and keyboard actions and automate them like any other application.
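As it happens, these tools are built with Tcl/Tk (more on that below), so here is a minimal sketch of what synthesizing those interactions can look like in Tk. The widget and its labels are my own stand-ins, not anything from Ritu's talk; the point is just that the toolkit can generate the same events a user's mouse would.

    package require Tk

    # A toy "front end": a button that counts clicks, standing in for an
    # EDA tool's GUI. Everything here is illustrative, not from the talk.
    set clicks 0
    button .run -text "Run Checks" -command {incr clicks}
    pack .run
    update                            ;# make sure the widget is mapped

    # Drive the GUI the way a user would, by synthesizing mouse events.
    # The <Enter> event matters: Tk's default button bindings only fire
    # the command if the pointer has "entered" the widget first.
    event generate .run <Enter>
    event generate .run <ButtonPress-1>
    event generate .run <ButtonRelease-1>

    puts "button was clicked $clicks time(s)"
    exit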

To that end, the areas the talk says need to be considered in EDA software are very similar to what we might see when automating any other software environment (one of them is sketched in code after the list):

• Error identification: determine the common user mistakes likely to be made when using the GUI
• Character risk level: determine which characters may create problems and when/where (e.g., using reserved characters incorrectly)
• Operation usage: determine if/when operations are used incorrectly in the application (e.g., loading an invalid rule file)
• Element relationships: determine if/when different settings or combinations of related elements create problems
• Limitations and boundary values: determine what issues are created when limits are exceeded, or boundary values not observed
• Performance and stress testing: typically observing time and memory consumption performance under extreme conditions
• Smoke testing: finding fundamental instability within builds, to prevent superfluous testing
• Real-world data: using actual data (e.g., customer data) that is not refined or limited, to ensure adequate coverage of customer-critical issues
• Exploratory testing: when bugs are found, performing random testing in the general area, or of elements created by the same developer, to look for additional bugs
• Efficient bug reporting: giving back a clear bug report that can drive the efficiency of the bug fix
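To make the "character risk level" idea concrete, here is a small, entirely hypothetical Tk sketch in that spirit: it pushes risky strings into a validated entry field, data-driven style, and reports what the widget actually accepted. The field, the validation rule, and the test strings are my own inventions, not anything from the talk.

    package require Tk

    # A toy "cell name" field with validation, standing in for an EDA
    # dialog field. The rule (alphanumerics and underscore only) is
    # illustrative, not the talk's.
    entry .name -validate key \
        -validatecommand {regexp {^[A-Za-z0-9_]*$} %P}
    pack .name
    update

    # Data-driven probe: try each risky string and report what the
    # widget actually let through.
    foreach risky {good_name bad*name {spa ced} {semi;colon}} {
        .name delete 0 end
        .name insert end $risky   ;# rejected wholesale if any char is bad
        puts [format "%-12s -> accepted: '%s'" $risky [.name get]]
    }
    exit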



Also, hearing that they use Tcl/Tk brought a very nostalgic smile to my face, as I used to use Tcl/Tk back when I was at Cisco in the 90s. It's neat to hear that the framework and language are still being used. I wonder if they use Expect, too :)?
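For anyone who hasn't crossed paths with it, Expect is a Tcl extension for scripting interactive command-line programs. A classic session looks something like the sketch below; the tool name, prompt, and responses are pure speculation on my part, not anything from the talk.

    #!/usr/bin/expect -f
    # Drive a hypothetical interactive shell ("eda_shell" is made up)
    # by waiting for its prompt and sending commands, Expect-style.
    spawn eda_shell
    expect "> "
    send "load_rules demo.rules\r"
    expect {
        "loaded" { puts "rule file accepted" }
        "error"  { puts "rule file rejected"; exit 1 }
        timeout  { puts "no response from tool"; exit 1 }
    }
    send "exit\r"
    expect eof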
 
