In our past engagements for performance validation, the aspect most often taken for granted at the start of performance testing is the infrastructure. That is largely because a sizable portion of the performance testing community is unfamiliar with infrastructure. Most performance testers approach performance issues purely from an application perspective, not from an infrastructure standpoint - they simply assume nothing can go wrong with the infrastructure and that the application code or the database must be the culprit behind a performance issue. On the other side, development teams have pushed back on us, insisting that the code was in top shape and that all the client needed was some extra spend on infrastructure (increased RAM, CPU processing power, cache, etc.) to see ideal application performance.
Here’s our take on this. Performance validation has to include the infrastructure - I/O, hardware configurations, software configurations, memory, CPU processing power - so that the team can determine whether the sizing is right and rule out scalability issues from an infrastructure standpoint. We have seen extremes in both directions: enterprises that invested in oversized hardware, and others that spent too little on infrastructure. There have also been cases where the web, application, and data tiers were deployed on the same server and the application was then validated for performance. In all these scenarios, wrong sizing led to significant delays in the application's release calendar, since all the teams involved (QA, architects, developers) faced long cycles of rework to reach the desired performance.
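Before any test run, it helps to record the basic sizing of the host each tier runs on, so results can later be tied back to the hardware that produced them. The sketch below is our own illustration (the field names and fallbacks are assumptions, not from any particular tool), using only the Python standard library:

```python
import os
import platform

def capture_sizing():
    """Record basic host sizing so test results can be tied to hardware."""
    sizing = {
        "host": platform.node(),
        "os": platform.system(),
        "cpu_count": os.cpu_count(),
    }
    # Load average is only exposed on Unix-like systems.
    try:
        sizing["load_avg_1m"] = os.getloadavg()[0]
    except (AttributeError, OSError):
        sizing["load_avg_1m"] = None
    # Total physical memory, where the platform exposes it via sysconf.
    try:
        pages = os.sysconf("SC_PHYS_PAGES")
        page_size = os.sysconf("SC_PAGE_SIZE")
        sizing["mem_total_mb"] = pages * page_size // (1024 * 1024)
    except (AttributeError, ValueError, OSError):
        sizing["mem_total_mb"] = None
    return sizing
```

Capturing this snapshot alongside every test report makes it obvious when two runs were executed on differently sized boxes.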
We recommend keeping the following pointers in mind in any performance testing engagement.
[A] Find out the recommended hardware for each layer / tier as specified by the software vendor.
[B] Find out the recommended software settings for each of the components involved, such as the application server, web server, and database server, and adhere to industry best practices.
[C] Isolate the performance of the application by focusing on the I/O for a single user, then scale this up to multiple users.
[D] Isolate the performance across the hardware associated with each tier, to find the bottleneck.
[E] Run performance tests iteratively, to measure improvement against the baseline.
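Pointers [C] and [E] above can be sketched as a small harness: establish a single-user baseline first, then rerun the same transaction at increasing concurrency and compare latency statistics between levels. This is a minimal sketch under our own assumptions - the transaction here is a stand-in sleep, and in a real engagement it would be an actual request against the system under test:

```python
import time
import statistics
from concurrent.futures import ThreadPoolExecutor

def transaction():
    """Stand-in for a real request; replace with a call to the system under test."""
    time.sleep(0.01)

def timed(fn):
    """Return the wall-clock duration of one invocation, in seconds."""
    start = time.perf_counter()
    fn()
    return time.perf_counter() - start

def run_at_concurrency(users, iterations_per_user=5):
    """Run the transaction at a given concurrency; return latency stats in ms."""
    with ThreadPoolExecutor(max_workers=users) as pool:
        futures = [pool.submit(timed, transaction)
                   for _ in range(users * iterations_per_user)]
        latencies = [f.result() * 1000 for f in futures]
    return {
        "users": users,
        "median_ms": statistics.median(latencies),
        "max_ms": max(latencies),
    }

def scale_up(levels=(1, 5, 10)):
    """Iterate over concurrency levels, single user first (pointer [C])."""
    return [run_at_concurrency(u) for u in levels]
```

If median latency grows sharply between levels while the hardware is far from saturated, the bottleneck is likely in the application; if it grows in step with CPU, memory, or I/O saturation, the sizing deserves a closer look.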
Start by looking for fundamental performance attributes before looking at larger complexities. A couple of examples:
[A] Frequent paging/swapping of processes results from inadequate physical memory or from low I/O caching. It is worth checking the caching configuration, as it can improve the application's throughput and responsiveness.
[B] Curtailed application throughput is often caused by insufficient CPU time or insufficient I/O capacity, and shows up as poor scalability.
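On Linux, the swap activity from point [A] can be checked directly from /proc/vmstat: if the pswpin/pswpout page counters keep climbing between two samples, the box is actively swapping during the test, and more physical memory or better caching is likely needed. A rough sketch (Linux-only by assumption; it returns an empty dict on platforms without /proc):

```python
import time

def read_swap_counters(path="/proc/vmstat"):
    """Read cumulative swap-in/swap-out page counters from /proc/vmstat."""
    counters = {}
    try:
        with open(path) as f:
            for line in f:
                key, _, value = line.partition(" ")
                if key in ("pswpin", "pswpout"):
                    counters[key] = int(value)
    except OSError:
        pass  # not Linux, or /proc not mounted
    return counters

def swap_activity(interval=1.0):
    """Delta of swap counters over an interval; non-zero deltas mean active swapping."""
    before = read_swap_counters()
    if not before:
        return {}
    time.sleep(interval)
    after = read_swap_counters()
    return {k: after.get(k, 0) - v for k, v in before.items()}
```

Sampling these deltas while a load test runs separates "the machine is genuinely short on memory" from "the application itself is slow" before anyone asks for a hardware upgrade.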