Take a situation where performance test teams need to understand the performance issues from client stakeholders as well as from their end-customers. Here, patience is a virtue: teams must gather detailed, exhaustive information on the performance issues. This could also involve analysing production support tickets that complain about performance. In such scenarios, it is more fruitful to focus on the most commonly occurring issues and observed patterns, as these give better returns for the effort invested in improving performance.
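As an illustration, a first pass over exported support tickets might look like the minimal Python sketch below. The file name, column name and keyword list are all assumptions for the example, not part of any particular ticketing tool:

```python
# Minimal sketch: surface the most frequently reported performance symptoms
# from a hypothetical CSV export of production tickets with a "summary" column.
from collections import Counter
import csv

KEYWORDS = ["timeout", "slow", "latency", "5xx", "memory", "deadlock"]

def top_issue_patterns(path: str, limit: int = 5) -> list[tuple[str, int]]:
    """Count keyword hits across ticket summaries."""
    counts: Counter[str] = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            summary = row.get("summary", "").lower()
            counts.update(k for k in KEYWORDS if k in summary)
    return counts.most_common(limit)

if __name__ == "__main__":
    for pattern, hits in top_issue_patterns("prod_tickets.csv"):
        print(f"{pattern}: {hits} tickets")
```

Even a crude frequency count like this helps the team decide where the effort invested will pay back first.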

For existing systems, it is also important to understand the noise that surrounds the core system. Knowing which associated operations run on the same infrastructure, and how they impact the overall performance of the system, is very helpful in deriving concrete approaches to performance improvement.

Once the requirements are made available, you should factor in the anticipated business usage patterns, encompassing the different roles, transaction flows, APIs and batch jobs that might be involved.
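One way to make such a usage model concrete is to derive per-transaction throughput targets from it, as in the sketch below. The roles, traffic shares and peak-hour volume here are illustrative assumptions, not figures from any real system:

```python
# Minimal sketch: turn an anticipated business usage model into rough
# per-transaction throughput targets. All numbers are assumptions.
PEAK_TRANSACTIONS_PER_HOUR = 36_000  # assumed peak business-hour volume

# role -> (share of traffic, transactions exercised by that role)
USAGE_MODEL = {
    "customer": (0.70, ["search", "view_item", "checkout"]),
    "agent":    (0.25, ["search", "update_order"]),
    "batch":    (0.05, ["nightly_export"]),
}

for role, (share, transactions) in USAGE_MODEL.items():
    tps = PEAK_TRANSACTIONS_PER_HOUR * share / 3600
    per_txn = tps / len(transactions)  # naive even split within the role
    print(f"{role}: {tps:.1f} TPS total, ~{per_txn:.1f} TPS per transaction")
```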

The other approach is to measure every operation at a granular level: APIs, UI screen loads and user transaction times, building baselines that scale from small, isolated runs up to larger volumes, with separate measurements for APIs, for UI transactions (based on the UI workload model) and for batch jobs. Once the baselines have been established, the team can move on to a combined workload model that simulates real-life usage encompassing APIs, UI and batch jobs. This is more of a ground-up approach to supporting development from the performance perspective.
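As a sketch of what such a combined workload model might look like, the example below uses Locust (one possible simulation tool, not necessarily the one your strategy will choose). The endpoints, weights and think times are placeholder assumptions:

```python
# Minimal Locust sketch of a combined workload model mixing API-style and
# UI-style traffic by weight. Hosts, paths and weights are placeholders.
from locust import HttpUser, task, between

class ApiUser(HttpUser):
    weight = 3                    # assumed API share of the combined mix
    wait_time = between(0.5, 2)   # little think time for machine clients

    @task
    def get_orders(self):
        self.client.get("/api/orders")  # hypothetical endpoint

class UiUser(HttpUser):
    weight = 1                    # assumed UI share of the combined mix
    wait_time = between(3, 10)    # think time from the UI workload model

    @task
    def load_dashboard(self):
        self.client.get("/dashboard")   # hypothetical page
```

For the baseline phase, each user class can be run in isolation first (Locust accepts user class names on the command line, e.g. `locust -f workload.py ApiUser`), before running both classes together as the combined model.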

Your test approach should clearly call out the prerequisites/entry criteria, for example:

- a functionally stable, vetted build ready to be deployed;
- performance test environment needs, such as production-like scale and production-like data volume, including factoring in the next three years' year-on-year growth;
- appropriate permissions on the cloud between the load generators and the application under test; and
- availability of, and access to, monitoring solutions.

In addition, your test strategy should clearly call out the performance test exit criteria, such as the types of performance test completed, results analysis, performance bottlenecks identified, and SLAs and KPIs measured.
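The year-on-year growth factoring mentioned above is simple compounding. A minimal sketch, with the current volume and growth rate as assumed inputs:

```python
# Minimal sketch: project test data volume three years out at an assumed
# year-on-year growth rate. Both inputs below are illustrative assumptions.
current_rows = 50_000_000   # assumed current production data volume
annual_growth = 0.20        # assumed 20% year-on-year growth

target_rows = current_rows * (1 + annual_growth) ** 3
print(f"Test data volume to provision: {target_rows:,.0f} rows")
# 50M rows at 20% YoY growth -> ~86.4M rows after 3 years
```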

Your performance test strategy should also cover the tools to be used for simulation, performance measurement and monitoring, and how these would be set up and accessed by the different stakeholders.

Finally, one key way to ensure that you have covered every performance goal/requirement is to include a mapping section after the test approach, showing which performance test goal/requirement is addressed in which section of the performance test strategy document. This not only ensures that all aspects of the performance test objectives have been covered; it also gives the reviewers/stakeholders signing off the test strategy ready answers to their queries on the performance tests.
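Such a mapping section can be as simple as a two-column table; the goals and section names below are purely illustrative:

  Performance goal/requirement                 Strategy section
  G1: Checkout p95 response time under 2 s     Workload model; load test design
  G2: Nightly batch completes within 4 hours   Batch job test design
  G3: Sustain peak concurrent user load        Load and endurance test design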