JMeter is a popular open source performance testing solution, and it is widely used by organizations to simulate high-load scenarios. Being a Java-based solution, it has its own advantages and disadvantages; that said, its free and open source nature outweighs most of the disadvantages. Given its wide acceptance today, it is fair to say that JMeter has beaten much of the commercial competition hands down. Having been a performance testing practitioner for the last few years, I thought it was time to briefly pen down the top 5 best practices for using JMeter.

Workload modelling – Many a time, performance testing is done at the individual API level rather than at the complete end-to-end scenario level (by defining a workload). While an API may handle a good amount of concurrent load when tested in isolation, the same cannot be said when the entire system is exercised by multiple threads, each making distinct API calls. So it is always ideal to have performance tests replicate the real-world usage of the system.
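As a rough illustration (plain Java, not JMeter API code, and the scenario names and numbers are assumptions), here is how a transaction mix could be translated into per-scenario thread counts before wiring up the corresponding thread groups:

// Illustrative sketch: derive per-scenario thread counts from an assumed
// end-to-end transaction mix, instead of hammering a single API in isolation.
import java.util.LinkedHashMap;
import java.util.Map;

public class WorkloadModel {
    public static void main(String[] args) {
        int totalConcurrentUsers = 200;          // assumed peak concurrency
        Map<String, Double> mix = new LinkedHashMap<>();
        mix.put("Browse catalogue", 0.50);       // 50% of users
        mix.put("Search product",   0.30);
        mix.put("Checkout",         0.20);

        mix.forEach((scenario, share) -> {
            long threads = Math.round(totalConcurrentUsers * share);
            System.out.printf("%-20s -> %d threads%n", scenario, threads);
        });
    }
}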

Thread group selection – There are plenty of thread groups available in JMeter, some of which come from plugins. While a banking application may see a fairly uniform load throughout the day and probably less load at night, the same won't be true of an e-commerce application that rolls out a new offer every hour. Beyond the plain vanilla Thread Group, there are the Stepping Thread Group, Ultimate Thread Group, and others that offer different capabilities depending on the concurrent load model you need.
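For instance, a stepped load profile of the kind the Stepping Thread Group produces can be sketched as below; the thread counts and step interval are assumptions for illustration only:

// Illustrative sketch of a stepped ramp-up, similar in spirit to the
// Stepping Thread Group: start small and add a batch of threads at a
// fixed interval until the target concurrency is reached.
public class SteppedRamp {
    public static void main(String[] args) {
        int targetThreads = 100;   // assumed peak concurrency
        int stepSize      = 10;    // threads added per step
        int stepEverySec  = 30;    // interval between steps

        int active = 0, elapsed = 0;
        while (active < targetThreads) {
            active = Math.min(active + stepSize, targetThreads);
            System.out.printf("t=%4ds -> %3d active threads%n", elapsed, active);
            elapsed += stepEverySec;
        }
    }
}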

Think time – This is an often-ignored item, especially while using JMeter. It is important to define the right think time between requests so that the request frequency is closer to what real users generate. Too little think time may overload the system and demand a far higher amount of resources, be it memory, processing power, or storage.
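Conceptually, JMeter's Uniform Random Timer applies a constant delay plus a uniformly random extra delay between requests; the sketch below (plain Java, with assumed values) shows the idea:

// Illustrative sketch of the think-time calculation: a constant offset plus
// a uniformly random extra delay, so requests are not fired back-to-back.
import java.util.concurrent.ThreadLocalRandom;

public class ThinkTime {
    public static void main(String[] args) throws InterruptedException {
        long constantDelayMs = 2000;   // assumed base think time
        long randomRangeMs   = 1000;   // assumed extra random spread

        for (int i = 0; i < 3; i++) {
            long pause = constantDelayMs
                    + ThreadLocalRandom.current().nextLong(randomRangeMs + 1);
            System.out.println("Pausing for " + pause + " ms before next request");
            Thread.sleep(pause);
        }
    }
}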

Test data – Using the same user ID or the same test data often does not push the system to its limits. For instance, against a heavily cached database, running the same query with the same parameters may not even load the database adequately. It is important to use diverse test data that is closer to reality.
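One simple way to get there is to generate a data file up front and feed it to a CSV Data Set Config element; the sketch below (the file name and columns are assumptions) writes 1,000 distinct rows:

// Illustrative sketch: generate a CSV of distinct users and search terms that
// a CSV Data Set Config element can feed to each thread, so no two iterations
// hit the cache with identical parameters.
import java.io.IOException;
import java.io.PrintWriter;
import java.nio.file.Files;
import java.nio.file.Paths;

public class TestDataGenerator {
    public static void main(String[] args) throws IOException {
        String[] searchTerms = {"laptop", "headphones", "monitor", "keyboard"};
        try (PrintWriter out = new PrintWriter(
                Files.newBufferedWriter(Paths.get("users.csv")))) {
            out.println("userId,searchTerm");                 // header row
            for (int i = 1; i <= 1000; i++) {
                out.printf("user%04d,%s%n", i, searchTerms[i % searchTerms.length]);
            }
        }
        System.out.println("Wrote 1000 distinct rows to users.csv");
    }
}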

Monitoring – JMeter offers some amount of client-side monitoring capability. However, there are plugins like the PerfMon Metrics Collector that can be used to collect server-side metrics and correlate them with the client-side load. There are also solutions like BlazeMeter that integrate with New Relic, Amazon CloudWatch, Dynatrace, AppDynamics, etc. and can collect server-side data. Leveraging them helps identify bottlenecks much more easily.
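Even without those integrations, correlation is easier if you can summarize the client side quickly. The sketch below is a simplification that assumes a default CSV-format JTL with a header row, a results.jtl file name, and no embedded commas in the fields; it computes the 95th percentile response time per sampler label so it can be lined up against server-side graphs for the same window:

// Illustrative sketch: read a CSV-format JTL results file and report the
// 95th percentile response time per sampler label.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.*;

public class JtlPercentiles {
    public static void main(String[] args) throws IOException {
        List<String> lines = Files.readAllLines(Paths.get("results.jtl"));
        List<String> header = Arrays.asList(lines.get(0).split(","));
        int labelIdx   = header.indexOf("label");
        int elapsedIdx = header.indexOf("elapsed");

        Map<String, List<Long>> byLabel = new TreeMap<>();
        for (String line : lines.subList(1, lines.size())) {
            if (line.isEmpty()) continue;                     // skip blank lines
            String[] cols = line.split(",");
            byLabel.computeIfAbsent(cols[labelIdx], k -> new ArrayList<>())
                   .add(Long.parseLong(cols[elapsedIdx]));
        }

        byLabel.forEach((label, times) -> {
            Collections.sort(times);
            long p95 = times.get((int) Math.ceil(times.size() * 0.95) - 1);
            System.out.printf("%-30s p95 = %d ms%n", label, p95);
        });
    }
}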