
In today’s digital age, the performance of digital applications is critical for customer satisfaction.
Companies now prioritize customer experience (CX) like never before to keep customers loyal and
prevent them from flocking to competitors. With Gen Z being quick to switch to other options if left
unsatisfied, nailing application performance has never been more important.
Performance testers face immense pressure to execute flawless tests and detect major issues
before go-live, providing a seamless user experience from the start. This becomes even more
challenging when go-lives are rushed to stay ahead of the competition.
Here are a few fail-safe pointers to apply in performance testing that help ensure your
application is a success.

  1. Test Planning/ Test Strategy
  2. Scripting (JMeter Checklist)
  3. Test Execution and Analysis

A performance tester should be familiar with all of the checklist items below, seek the required inputs, and follow them diligently for successful performance testing.

1. Test Planning / Test Strategy

Checkpoints

Entry Criteria

  • Have the project release timelines and scope been communicated? If not, seek the details of the project scope, the types of tests to be covered, the test cases/user traversal flows to be covered as part of scripting, and the execution timelines.
  • Have the test cases/user traversal flows been identified from the Requirement Document and is the screen flow for critical transactions available for the same?
  • Is the production mix data available?
  • Are the application architecture, app server, web server, and database details available? Are the architecture diagrams (logical and physical, for both production and test environments) and deployment diagrams available?

Test Plan / Strategy Document Contents – Checkpoints

  • Have the release, project(s), project manager(s) details been updated?
  • Has the Revision History section been updated with the latest version#, Date, Originator and Modification details?
  • Do all the links in the reference documentation section (Requirement Document) point to the latest versions of the respective documents?
  • Has the Project Roles & Responsibilities section been updated with all the project managers' and lead engineers' names and phone numbers?
  • Does the Document Approvals / Signoff section include the QA/Client managers/relevant stakeholders involved in the project for the current Project release?
  • Is the Introduction Section (Purpose, Project Summary) updated with details from the Requirement Documents as applicable to the current Project release?
  • Is the correct System Architecture Diagram in place in the Architecture Overview Section?
  • Does your Test Strategy section cover all the different types of tests that are to be executed for the Project/Release?
  • Have all the test scenarios to be scripted been detailed out in the Test Cases section?
  • Is the Test Cases section in format discussed?
  • Is the scope of testing in-sync with the project’s scope for the project release (with respect to functionality, system upgrades, etc)?
  • Has the Assumptions/Constraints/Risks section been updated with proper data and ownership?
  • Are the dates in the Deliverable section reflective of the current project release schedule and is the right link provided to the test reports?
  • Does the Project Requirements section have the right link to the baseline reports?
  • Are the TPS requirements given accurately (2x, 3x, 4x) for the test sections (Capacity, Longevity, Max TPS, etc.)?
  • Has the Test Dependencies section been updated with the latest Production Mix data and TPS values?
  • Has the Overall Production Mix % split been specified with accurate weightage in a multiple-scenario case? (A small worked example follows this checklist.)
  • Are the data requirements (User ID etc) given for all the applicable scenarios?
  • Check whether a consolidated (transaction-level) form of the response time of the modules is needed; if not, avoid adding it in the script. Otherwise, manual work will be required on the final report and the total row will need to be recalculated manually.
  • Are the scripting tool and monitoring tool specified with the correct versions in the Test Tools section?
  • Have the app server, web server, DB server, etc., along with sizing details, been provided in the Environment/Server List section, as applicable?
  • Has the Logs and Configurations section been updated with application logs, webserver logs and config file locations?
  • Are all the tests mentioned in the Test Strategy section explained in the Performance Tests section, with details about their purpose, objectives, etc.?
  • Are the estimated/actual milestone dates for each phase of the project with the performance test deliverables mentioned accurately in the Performance Testing Schedule?
  • Have the dates for builds schedule been updated in the Environment Schedule, if applicable?
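
As a rough illustration of the production mix and TPS checkpoints above, here is a minimal Java sketch of splitting a target TPS across scenarios by their production-mix weightage; the scenario names, percentages, and the 40 TPS target are assumed values, not figures from any real project.

```java
// Minimal sketch with assumed numbers: split an overall target TPS across
// scenarios according to their production-mix weightage (percentages).
import java.util.LinkedHashMap;
import java.util.Map;

public class TpsSplit {
    public static void main(String[] args) {
        double targetTps = 40.0;                        // e.g. 2x of a 20 TPS baseline
        Map<String, Double> mixPercent = new LinkedHashMap<>();
        mixPercent.put("Login",    20.0);               // hypothetical production mix
        mixPercent.put("Search",   50.0);
        mixPercent.put("Checkout", 30.0);

        for (Map.Entry<String, Double> e : mixPercent.entrySet()) {
            double scenarioTps = targetTps * e.getValue() / 100.0;
            System.out.printf("%-9s -> %.1f TPS%n", e.getKey(), scenarioTps);
        }
    }
}
```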

Formatting Checks 

  • Is the TOC updated to reflect the correct page numbers/section names and are the links on the TOC working correctly?
  • Are the header/footer appropriately placed as specified?
  • Has a spell check been performed?
  • Is the indentation uniform throughout the document?
  • Have page breaks been appropriately given to the starting of new sections, if applicable?
  • Is the font uniform throughout the document?
  • Have the section headers been formatted appropriately so as to reflect properly in the TOC?
  • Is the Test Plan named appropriately according to conventions, if applicable?
  • Have the document properties been updated as required?
  • Is the Alignment proper throughout the document?
  • Has a Self-Review been performed before sharing the Test Plan?

2. Scripting (JMeter Checklist)

Some general standards to follow in JMeter

  • Keep scripts modular: Create separate scripts for different functionalities or test scenarios. This will make it easier to manage and maintain the scripts.
  • Use variables: Use variables for any test data that will change, such as user IDs, passwords, and URLs. This will make it easier to update the test data later on.
  • Use assertions: Use assertions to verify that the server's response is correct, but only for debugging purposes. This will help you catch any errors that occur during scripting.
  • Use correlation: Correlate any dynamic values that are returned by the server, such as session IDs or CSRF tokens. This will ensure that subsequent requests are valid.
  • Use realistic load profiles: Use realistic load profiles to simulate the expected user behaviour. This will help you identify any performance issues before they occur in production.
  • Use timers: Use timers to simulate user think time and pacing between requests. This will help you create more accurate load profiles. Choose the right combination of timers (Constant Timer, Gaussian Random Timer, etc.) depending on the arrival rates to be simulated.
  • Use CSV data sets: Use CSV data sets to read test data from external files. This will make it easier to manage and maintain large amounts of test data. Keep all CSV data files in the bin folder of the JMeter instance and do not use full paths; this makes it easy to move scripts across Windows/Mac/Linux instances.
  • Use non-GUI mode: Run JMeter in non-GUI mode to reduce the overhead on the system and improve performance.
  • Use distributed testing: Use distributed testing to simulate larger loads and distribute the load across multiple machines.
  • Use version control: Use version control to manage and maintain scripts over time. This will help you track changes and revert to previous versions if needed
  • Enable "Delay Thread creation until needed" in the Thread Group.
  • Use the WorkBench for rough work, debugging, and recording, so that its contents are not executed as part of the test.
  • Use "Patterns to Exclude"; avoid "Patterns to Include" {check this while recording: in JMeter, Patterns to Include gives better control over what is captured}. Always use the Recording Template and capture the request/response data in XML files via the View Results Tree. Do the recording at least twice to ensure no request/response captures are missed.
  • Avoid GUI mode for large load. Don’t use listeners that are heavy
  • Avoid the View Results Tree and View Results in Table listeners while executing the test; use them only for debugging.
  • Avoid the use of assertions in JMeter, as they are likely to be memory hoggers.
  • Use Loop Controllers for repeated samplers.
  • Use dynamic test data with CSV
  • For saving listener output, use CSV instead of XML; XML output is considered heavy.
  • Ensure that the protocol, server name, and port number entries are cleared in the individual requests; only the HTTP Request Defaults element should contain them. Note that HTTP Request Defaults should hold variables, with no hard-coded values permitted, which makes it easy to switch environments between tests (see the sketch after this list).
  • Ensure the HTTP Header Manager also has no hard-coded values.
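
As an illustration of keeping environment details out of individual samplers, here is a minimal JSR223/Beanshell-style sketch. It assumes JMeter's standard script bindings (props, vars, log), and the property names env_host/env_port are placeholders, not a prescribed convention.

```java
// Read environment details from JMeter properties (e.g. passed on the command
// line as -Jenv_host=perf.example.com) and expose them as variables, so that
// HTTP Request Defaults can reference ${env_host}/${env_port} instead of
// hard-coded values.
String host = props.getProperty("env_host", "perf-test.example.com"); // assumed default
String port = props.getProperty("env_port", "443");

vars.put("env_host", host);
vars.put("env_port", port);
log.info("Target environment: " + host + ":" + port);
```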

Checkpoints

Recording Options 

  • Decide on the appropriate sampler for scripting.
  • Uncheck the Auto Correlation option, if it’s checked. Sometimes this may be required and is left to the judgment of the scripter {Auto Correlation is not ideally recommended}
  • Add appropriate comments, while recording or afterwards, that explain the function of every transaction. Always add an elaborate description in the comments column so that the script is easy for anyone to follow or maintain in the future.
  • Keep all the values to be entered, along with their field names, ready in another document (Notepad/WordPad/etc.); this prevents re-scripting the flow due to wrong inputs or forgotten values (in the case of complex or very large workflows).
  • Decide the script name for the flow in advance.

 Scripting Content 

  • Check whether the script starts with an appropriate header that contains the following information:
  1. Copyright notice
  2. Script Name
  3. Author
  4. Version information
  5. Script Version
  6. Modified Log
  7. Associated files
  8. Script Overview
  9. Function TOC
  10. [URL TOC]

Use proper Hungarian notation for variable names.
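
A minimal sketch of such a header, kept as a comment block at the top of a JSR223/Beanshell element, followed by Hungarian-style variable names; every name and value below is a placeholder.

```java
/*
 * Copyright (c) <company>                        -- copyright notice
 * Script Name   : PI_Associate_01_01_Process
 * Author        : <author>
 * Version       : 1.2
 * Modified Log  : 1.1 - corrected login correlation; 1.2 - added QC flow
 * Associated    : users.csv, env.properties
 * Overview      : Associate processes a work item end to end
 * Function TOC  : login(), processItem(), logout()
 */

// Hungarian-style variable names (illustrative); vars is JMeter's variable binding
String strUserId  = vars.get("userId");
int    intRetries = 3;
```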

  • All transactions should have checkpoints to verify page/function accuracy.
  • The checkpoints must be unique between pages and must distinguish one page from another.
  • All action and transaction names should follow a standard format: ApplicationName_FunctionalityName_FunctionalityNumber_PageNumber_PageName/TransactionName

For example, suppose the application name is PI and five scripts exist: Associate, Supervisor, Workflow, QC and Index. Sample transaction names would be:

PI_Associate_01_01_Process

PI_01_PreLogin

PI_02_Login

PI_03_Logout

PI_QC_04_02_WorkTab

Note that transactions common across the application, such as login and logout, must have the same transaction name in all scripts.

  • Always parameterize URL, UID, and PWD. 
  • PWD may need to be encrypted.
  • Parameterize the following values in the script depending upon the requirement.
  1. Input Data (Like User ID, Password, Date, Timings etc )
  2. Login server URL
  3. Application Server URL, Content
  4. Think Time
  5. Other Values applicable
  • Correlate the dynamic values in the script manually, using the Regular Expression Extractor (see the sketch after this list).
  • The best way to identify values that need correlation is to record the script twice with different inputs (including user ID, password, and any other values you feed into the system) and compare what changes. Anything that changes in a request and is not input by you requires correlation.
  • There should not be any orphan requests in the script
  • If multiple Vusers cannot log in with the same ID, set "unique" as the user ID/password data file setting.
  • Run the script for multi-iteration with multiple data values between each iteration
  • Maintain a tabular structure that will help you analyze what went wrong while trying to replay.
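
Correlation itself is normally configured through the Regular Expression Extractor in the GUI; purely to illustrate what it does, here is an equivalent JSR223/Beanshell PostProcessor-style sketch. It assumes JMeter's standard bindings (prev, vars, log), and the token name and regex are made up for the example.

```java
// Pull a dynamic token out of the previous response so that later requests
// can send ${csrfToken} instead of the recorded, now-stale value.
import java.util.regex.Matcher;
import java.util.regex.Pattern;

String body = prev.getResponseDataAsString();    // response of the previous sampler
Matcher m = Pattern.compile("name=\"csrf_token\" value=\"([^\"]+)\"").matcher(body);
if (m.find()) {
    vars.put("csrfToken", m.group(1));           // later samplers reference ${csrfToken}
} else {
    log.warn("csrf_token not found - the next request is likely to fail");
}
```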

A script is COMPLETE only when it passes the conditions below, with all test data, standardizations, response handling, error handling, etc. incorporated:

Script | Single User Single Iteration | Single User Multi Iteration | Multi User Multi Iteration
S1     | Passed                       | Failed                      | Failed
S2     | Passed                       | Failed                      | Failed
S3     |                              |                             |

Here, multi-user refers to multiple Vusers.

Finally, all scripts should have a Passed status in all the columns of the above table.

  • Understand the nature of users getting into the system and choose appropriate controllers for implementing login. For example, internet facing end users are more likely to login, perform operations and logout. In this kind of scenario, use a simple controller for the login. In the case of back office users, they are likely to login at their shift start time, perform operations and finally log out when shift ends. In this kind of scenario, use a once only controller and then rest of it could be in loop controller.
  • Always use conditional controllers, such as the If Controller, to move to the next step only if the previous step's variables are available and populated properly. This will help minimize erroneous requests being sent when there is no proper data available in the previous response (a small sketch follows at the end of this list).
  • For if controllers, ensure “Interpret Condition as Variable Expression?” is deselected
  • Note that whenever a series of If Controllers is used, ensure they are nested as deemed necessary for the situation.
  • After the normal Thread Group, once multi-user tests pass, make use of the Ultimate Thread Group as needed.
  • Trial the controllers with dummy samplers to verify the flow of each request through the different paths, especially when a Throughput Controller or similar is involved.
  • Avoid using any dummy samplers in the script during actual test runs; a dummy sampler's response time adds to the overall response time and skews the results. Prefer a Beanshell pre/post-processor; in the worst case, use a Beanshell sampler.
  • Before starting any scripting work, set the JMeter heap size to 80% of your physical memory size to avoid issues while running higher user loads.
  • Always take a copy of your recordings and work on the copied set; keep all original sets in a secure place until the project ends.
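
As a sketch of the If Controller guard described above, the following JSR223/Beanshell-style snippet sets a flag that the controller's condition can test; the variable names are illustrative and JMeter's standard bindings (vars, log) are assumed.

```java
// Set a flag that an If Controller can test (condition: ${orderReady}), so the
// dependent request only fires when the previous step actually produced data.
String orderId = vars.get("orderId");            // extracted from an earlier response
boolean ready = (orderId != null && orderId.trim().length() > 0);
vars.put("orderReady", String.valueOf(ready));
if (!ready) {
    log.warn("orderId missing - skipping the dependent request");
}
```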

3. Test Execution & Analysis

Entry Criteria 

  • Test plan/Strategy should be completed and signed-off.
  • Test Data set up should be completed.
  • Make sure that the application is functionally stable. {ideally before the start of scripting}
  • Make sure that the scripts/scenarios have been “tested” to ensure no data / application / script issues exist.
  • The Scripts have been reviewed and signed off.
  • The environment is available exclusively for performance testing, or as agreed in the Test Plan.

Execution 

  • Check that the duration of the test run is set up as per the requirement.
  • Check the runtime settings have been set up as per requirement. 
  • Confirm the system utilization before commencement of the test; the utilization should be within the limits specified in the test plan.
  • Confirm that all the log levels are set to ERROR/INFO/etc. as per the scenario specifications.
  • If the test requires specific configuration changes, ensure that the changes are made and documented.
  • Make sure that the runtime settings are set appropriately and reviewed by another person.
  • Check whether there is enough disk space in the server file systems and the database.
  • Confirm that the load generator(s) and the machine hosting the results folder have adequate disk space.
  • Configure a monitoring tool such as AppDynamics, New Relic, Wily Introscope, or AWS CloudWatch to collect data during the test run.
  • For Java applications, use JVisualVM to monitor heap usage and GC. As a prerequisite, work with the dev team to open the ports for remote connection.
  • Monitor the required processes/metrics (prstat, vmstat, GC, netstat, etc.) and collect the data from the server while the test is running (if required and if you have access).
  • Take a thread dump if required, and kill the process in case of any issue with the application.
  • Monitor the test properly and check whether the requirements have been met or not
  • Before the actual execution, do a dummy run to confirm that all scripts are working and there are no issues with user ramp-up.
  • Once the scenario (for the test run) is ready, save it locally and also take a backup on another machine.
  • Check the Load generator connectivity to application URL (whether any ports need to be opened) and network utilization before commencement of Test.
  • Remove the log dump on the server (if required)
  • Check whether any unnecessary processes running on the Controller and load generator are, or may be, occupying memory during the test.
  • Ensure that there are no batch jobs or processes running on the server which may lead to incorrect results.
  • Do not open any application on the Controller or Load generator when the test is running
  • Do not try to add too many Vusers while the test is running; this can lead to an LR crash.
  • Turn on the snapshot log level for time consuming scenarios (Ex: endurance)
  • Look for any Cold Starts.
  • Take one API and request it at a 10-, 15-, and 20-minute interval, 5 to 6 times, and observe the latency; each time you should obtain the same response time (a small probe sketch follows this list).
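
A minimal standalone sketch of such a probe; the endpoint URL and the 10-minute interval are assumptions, and any HTTP client would serve equally well.

```java
// Hit one API several times at a fixed interval and print the latency, to spot
// cold starts or caching effects before the real test run.
import java.net.HttpURLConnection;
import java.net.URL;

public class LatencyProbe {
    public static void main(String[] args) throws Exception {
        URL url = new URL("https://perf-test.example.com/api/health"); // hypothetical endpoint
        long intervalMs = 10L * 60 * 1000;                             // e.g. a 10-minute gap
        for (int i = 1; i <= 6; i++) {
            long start = System.currentTimeMillis();
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("GET");
            int status = conn.getResponseCode();                       // sends the request
            long elapsed = System.currentTimeMillis() - start;
            conn.disconnect();
            System.out.printf("Probe %d: HTTP %d in %d ms%n", i, status, elapsed);
            if (i < 6) Thread.sleep(intervalMs);
        }
    }
}
```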

Analysis 

  • Analyze the application, web services logs for potential issues (connections, memory, cpu etc.) 
  • Analyze the Garbage Collection logs for Memory leaks / excessive collections
  • Create a template to save time when opening the analysis file in the Analyser (e.g., the percentile value, excluding think time, and the graphs required).
  • Set the Granularity of graph to improve readability
  • Change the X-axis and Y-axis values for the graphs generated by the Analyser (if required).
  • Confirm that the application processes have returned to normal after the test (in case of any hanging issues).
  • Check for transactions with response times beyond acceptable limits.
  • Identify DB bottlenecks by providing the top queries that are the most time/resource consuming.
  • Check the overall TPS achieved for constant duration.
  • Analyze the system and application-level process CPU utilization to be within permissible limits
  • If the error rate is greater than the acceptable limit (5%?), classify the problem into one of these categories: environment, application, or script (see the sketch at the end of this section).
  • Save the Analysis file in .JTL/ .CSV  format to save time for future use.
  • Once the template is applied and the required graphs are generated, save the report in HTML or Word format for reference.
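
As a rough sketch of the error-rate check above, the following standalone program reads a JMeter results file saved as CSV, locating columns by the header row (since the column layout depends on jmeter.properties); the file name is an assumption and the naive comma split ignores quoted fields.

```java
// Compute the error rate and average response time from a CSV-format JTL file.
import java.io.BufferedReader;
import java.io.FileReader;
import java.util.Arrays;
import java.util.List;

public class JtlSummary {
    public static void main(String[] args) throws Exception {
        try (BufferedReader in = new BufferedReader(new FileReader("results.jtl"))) {
            List<String> header = Arrays.asList(in.readLine().split(","));
            int successCol = header.indexOf("success");
            int elapsedCol = header.indexOf("elapsed");

            long total = 0, errors = 0, elapsedSum = 0;
            String line;
            while ((line = in.readLine()) != null) {
                String[] cols = line.split(",");      // naive split: assumes no quoted commas
                total++;
                if (!"true".equalsIgnoreCase(cols[successCol])) errors++;
                elapsedSum += Long.parseLong(cols[elapsedCol]);
            }
            System.out.printf("Samples: %d, error rate: %.2f%%, avg elapsed: %d ms%n",
                    total, total == 0 ? 0.0 : 100.0 * errors / total,
                    total == 0 ? 0 : elapsedSum / total);
        }
    }
}
```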