I realize I've been quiet most of today, but that's because, outside of my own talk, I was moderating the Process and Tools Track. I'm here for the final talk, though, to hang out with Señor Performo (aka Leandro Melendez).
For those who are familiar with Leandro, you know he has made a name for himself in the performance space, and if there's one thing he wants people to understand, it's that performance goes way beyond load testing. Performance means that systems can withstand various stresses, handle numerous concurrent connections, and, perhaps most important, maintain their functionality as the system scales. Performance also encompasses the availability and speed of transactions. In other words, we want to keep our users engaged at the highest possible level.
What are we looking for? We want to ensure great response times and efficient, stable interactions, and to do so in all situations. Big ask? Sure. A better question is: how do we actually achieve this?
Well, we may want to stop thinking about performance the way we always have. We need to step away from just doing load tests, and away from performance testing being tacked on at the end of development as an afterthought. We also need to run multi-step tests that actually represent real-world traffic and conditions. In other words, we need to see what a real production environment will have to work through. That means freezing environments and playing hands-off with systems doesn't tell the whole story. There really is no such thing as a pristine performance test. Systems have to survive all interactions, not just those in sterile, theoretical environments.
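To make the "real-world traffic" idea concrete, here's a minimal sketch of the difference between hammering a system at one constant rate and ramping concurrency in stages the way real traffic builds up. Everything here is hypothetical: `handle_request` stands in for an actual endpoint, and the stage sizes are arbitrary illustrations, not recommendations.

```python
import concurrent.futures
import random
import time

def handle_request(n):
    """Hypothetical request handler -- stands in for a real endpoint."""
    time.sleep(random.uniform(0.001, 0.005))  # simulated variable work
    return n * n

# Ramp load in stages rather than at one constant rate, roughly
# imitating how production traffic grows instead of arriving all at once.
results = []
for workers in (2, 4, 8):  # increasing concurrency per stage
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        # Each stage issues a batch of requests at that concurrency level.
        results.extend(pool.map(handle_request, range(workers * 5)))

print(f"completed {len(results)} simulated requests")
```

Dedicated load tools shape traffic far more realistically than this, of course; the point is only that the load profile itself is something you design, not a constant you set once.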
Ultimately, Leandro thinks we need to change our perspective on performance testing. We need to do performance testing the same way we do our releases, at least if we're using an agile or DevOps-style CI/CD approach. Instead of big, heavy tests on everything, we should be running small but targeted tests on the areas that have actually changed. In short, we want and need to get a reading on our performance issues not just at the end of a project or release cycle, but as we make each incremental change. There are also many more areas we need to focus on, such as instrumentation, telemetry, and active monitoring. We don't just want to know when the system buckles under the weight; we want to see when and where performance hits are occurring, well before we get into danger territory.
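As a rough illustration of what a "small but targeted" per-change check might look like, here's a sketch of a latency gate that could run in CI against just the code that changed. The function name `checkout_total`, the run count, and the 50 ms budget are all made-up placeholders, not anything Leandro prescribed.

```python
import time

def checkout_total(items):
    """Hypothetical function under test -- stands in for whatever code just changed."""
    return sum(price * qty for price, qty in items)

def measure_p95_ms(fn, *args, runs=200):
    """Time repeated calls and return the 95th-percentile latency in milliseconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn(*args)
        samples.append((time.perf_counter() - start) * 1000)
    samples.sort()
    return samples[int(len(samples) * 0.95)]

cart = [(9.99, 2), (4.50, 1), (19.95, 3)]
p95 = measure_p95_ms(checkout_total, cart)
# A CI gate: fail the build if the changed code blew its latency budget.
assert p95 < 50, f"p95 latency {p95:.2f} ms exceeds 50 ms budget"
```

The appeal of something this small is exactly what the talk argues for: it runs in seconds on every change, so a regression surfaces with the commit that caused it rather than at the end of the release cycle.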
What Leandro makes a point of focusing on is that performance engineering requires Renaissance persons. We need to grow and develop performance skills broadly, and rather than looking for one super performance-testing unicorn, we should be creating teams of performance (I loved the comparison to a mariachi band :) ). In a true appreciation of the shift-left philosophy, we need performance to be a goal and a focus as early in the development process as possible, since every aspect of development (languages, tools, databases, infrastructure, network, cloud speed, and access) can have an impact on the overall performance of the system. Likewise, every change and modification can affect the system's performance.
On the whole, this is an exciting premise and focus for modern performance testing. It sounds like a lot of work, but it could also be quite interesting to implement.