Step 1 - Identify Objectives
The purpose of this step is to identify and write the performance objectives of your application. The key question you should ask yourself is:
“How should my application behave under load?”
The main parameters we should consider are:
Response time – The time it takes the application to display a certain output or perform a certain calculation.
Example: the product catalog must be displayed in less than 3 seconds.
Throughput – The rate of successful message delivery over a communication channel.
Example: the system must support 100 requests per second.
Resource utilization – A frequently overlooked aspect, resource utilization defines how many resources your application consumes, in terms of CPU, memory, disk I/O, and network I/O.
Maximum user load – How many concurrent users your testing hardware configuration can support.
This is probably the most important step!
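The four objectives above can be captured in a small structure so that every load-test run can be checked against them automatically. This is a minimal sketch; the class and field names are illustrative, and the thresholds are the examples from this step:

```python
from dataclasses import dataclass

@dataclass
class PerformanceObjectives:
    """Hypothetical container for the four objectives from Step 1."""
    max_response_time_s: float   # e.g. catalog must display in < 3 s
    min_throughput_rps: float    # e.g. sustain 100 requests per second
    max_cpu_utilization: float   # fraction of the CPU budget, e.g. 0.75
    max_user_load: int           # concurrent users the hardware must support

    def is_met(self, response_time_s, throughput_rps, cpu_utilization, user_load):
        """Return True if a measured sample satisfies every objective."""
        return (response_time_s <= self.max_response_time_s
                and throughput_rps >= self.min_throughput_rps
                and cpu_utilization <= self.max_cpu_utilization
                and user_load <= self.max_user_load)

# Example objectives for the product catalog scenario:
catalog = PerformanceObjectives(3.0, 100.0, 0.75, 5000)
```

Writing the objectives down as data, rather than prose, makes the later "pass/fail" decision in Step 6 mechanical.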
Step 2 - Identify Key Scenarios (or profiles)
What are scenarios?
Scenarios are anticipated user paths that generally incorporate multiple application activities.
How do you identify scenarios?
Key scenarios are those for which you have specific performance goals or those that have a significant performance impact. These scenarios represent business activity of users over time.
e.g. - opening the 'About' window consumes fewer resources than performing the 'Buy' action.
The “Buy” action, as opposed to “Open About Window”, involves multiple operations: SQL queries, credit card validation, and IIS processing.
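As a sketch, a scenario can be modeled as a named, ordered list of application activities, which makes the heavier scenarios visible at a glance. The scenario and activity names below are illustrative:

```python
# Hypothetical scenario definitions: each scenario is an ordered list of
# application activities, so relative weight is visible at a glance.
SCENARIOS = {
    "open_about_window": ["render_dialog"],
    "buy": ["load_cart", "sql_update_order",
            "validate_credit_card", "render_confirmation"],
}

def heaviest_scenario(scenarios):
    """Pick the scenario with the most activities as a crude cost proxy."""
    return max(scenarios, key=lambda name: len(scenarios[name]))
```

Activity count is of course only a rough proxy for cost, but even this simple model helps you rank candidate key scenarios before measuring anything.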
Step 3 - Identify the workload
Identify the distribution / ratio of the work - For each key scenario, identify the distribution / ratio of the work. The distribution is based on the number of users executing the scenario (according to their profile).
For an existing application this information can be provided from IIS log/counters as described in step 1.
For a new application this information can be based on market research, historical data, market trends and prototypes.
Calculate the user load per scenario - Based on the previous data, calculate the maximum possible number of concurrent users for the application. Using the work distribution for each scenario, calculate the percentage of user load per key scenario.
e.g. the distribution of load across key scenarios might be: Browse 50%, Search 30%, Buy 20% (hypothetical figures).
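The per-scenario calculation is straightforward arithmetic; a small sketch, using a hypothetical distribution of the kind you would extract from IIS logs or market research:

```python
def users_per_scenario(total_concurrent_users, distribution):
    """Split the maximum concurrent user load across scenarios.

    distribution maps scenario name -> fraction of total work
    (the fractions must sum to 1).
    """
    assert abs(sum(distribution.values()) - 1.0) < 1e-9, "ratios must sum to 100%"
    return {name: round(total_concurrent_users * ratio)
            for name, ratio in distribution.items()}

# Hypothetical distribution for 1,000 concurrent users:
load = users_per_scenario(1000, {"browse": 0.5, "search": 0.3, "buy": 0.2})
# load == {"browse": 500, "search": 300, "buy": 200}
```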
Step 4 - Identify Metrics
Metrics are a derivative of your performance objectives. They are used to measure your application’s real time performance in comparison with your performance objectives. In addition, they also help you to identify problems and bottlenecks within your application.
Network-specific metrics: This set of metrics provides information about the overall “health” and efficiency of your network, including routers, switches, and gateways.
System-related metrics: This set of metrics helps you identify the resource utilization on your server. The set includes CPU, memory, disk I/O, and network I/O metrics.
Platform-specific metrics: Platform-specific metrics are related to software that is used to host your application, such as the .NET Framework common language runtime and ASP.NET-related metrics.
Application-specific metrics: These include custom performance counters embedded in your application code that monitor the application’s “health”. You might use custom counters to determine the number of concurrent threads waiting to acquire a particular lock or the number of requests queued to make an outbound call to a Web service.
Service level metrics: Service level metrics can help to measure overall application throughput and latency, or they might be tied to specific business scenarios.
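A custom application-specific counter of the kind described above can be as simple as a thread-safe integer. This is a sketch, not a real performance-counter API; the class name and the "queued outbound requests" use case are illustrative:

```python
import threading

class HealthCounter:
    """Minimal thread-safe custom counter, e.g. for tracking requests
    queued to make an outbound Web service call."""

    def __init__(self):
        self._value = 0
        self._lock = threading.Lock()

    def increment(self):
        with self._lock:
            self._value += 1

    def decrement(self):
        with self._lock:
            self._value -= 1

    @property
    def value(self):
        return self._value

# Example: two requests enter the outbound queue, one completes.
queued_outbound = HealthCounter()
queued_outbound.increment()
queued_outbound.increment()
queued_outbound.decrement()
```

In a real system you would publish such a value to your monitoring stack (e.g. Windows performance counters or an equivalent) so it appears next to the system and platform metrics during the run.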
Step 5 – Pick Load Test Tool
Before writing the tests, we need to pick the right load-testing tool.
To select the proper tool for our application, and for our team, we need to do some research. Load-testing tools require specific knowledge, and each tool has its advantages and disadvantages.
Most tools have very good functionality and advanced features. One of the first parameters you should compare is the user limit (or lack thereof) and the scalability of that limit. As a rule of thumb, try to avoid tools that cap the number of simulated users via licensing (more users = more money).
JMeter is one of the best open source tools, but its performance starts to deteriorate once the number of threads exceeds roughly 1,000 on a single machine.
Step 6 - Create Test Cases
What is a test case?
- A group of activities involved in a scenario/user profile.
- The test cases are created based on the scenarios and the profile mix identified in the previous steps.
Each test case should include the expected results in such a way that each test case can be marked as a 'pass' or 'fail' after execution.
Test Case: Search phone owner
Create an automated test and set a specific load configuration for it.
Example: 400 users for 1 hour
The expected load-test results for the 'Search phone owner' test case should be defined up front, so the run can be marked as a 'pass' or 'fail'.
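The pass/fail decision for a test case can be expressed as a simple comparison of measured results against the expected results defined with the test case. A sketch, with illustrative field names and thresholds for the 'Search phone owner' example above:

```python
def evaluate_test_case(expected, measured):
    """Mark a test case 'pass' or 'fail' by comparing measured results
    against the expected results defined in Step 6."""
    ok = (measured["avg_response_s"] <= expected["max_avg_response_s"]
          and measured["error_rate"] <= expected["max_error_rate"])
    return "pass" if ok else "fail"

# Hypothetical expected results for 'Search phone owner',
# run with 400 users for 1 hour:
expected = {"max_avg_response_s": 3.0, "max_error_rate": 0.01}
```

Keeping the criteria in data rather than in a tester's head is what makes each run unambiguously a 'pass' or a 'fail'.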
Step 7 – Prepare/Understand Your Load Environment
- Our setup environment should duplicate the architecture of our production environment as closely as possible.
- It is important to simulate the load in an environment similar to production, because even the smallest hardware or configuration difference can have a big impact on the results.
- Creating a testing environment that duplicates the production environment exactly can be hard and isn’t always feasible; however, we need to do our best, because this environment will help us estimate the load results of our product.
- We must understand the hardware limits of our environment and find the bottlenecks before starting the Test.
Example: I have created a load environment with two computers that will run 10,000 concurrent users, and the ISP provides 1 Mbps. Each user will open a browser and perform a search for a dynamic word on www.live.com.
Opening 5000 browsers on one machine?
I've never tried it, but CPU usage will almost certainly hit 100% for a while.
When CPU usage is at 100%, the machine is not generating the required load and we will get false results, so this is our first bottleneck.
Do you think 1 Mbps is enough bandwidth for 10,000 users?
No! 10,000 users cannot work together over 1 Mbps of bandwidth. This is the second bottleneck.
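The bandwidth bottleneck is simple arithmetic worth doing before the test starts. A quick sketch, using the figures from the example above:

```python
def bandwidth_per_user_bps(link_bps, concurrent_users):
    """Average share of the link each simulated user gets."""
    return link_bps / concurrent_users

# 1 Mbps shared by 10,000 users:
share = bandwidth_per_user_bps(1_000_000, 10_000)  # 100 bits/s per user
```

At 100 bits per second per user, even a single small search result page (tens of kilobytes) would take minutes to transfer, so the link saturates long before the target load is reached.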
Step 8 – Run It Step by Step
• Begin load testing with a small number of users distributed according to the user profile, and then increase the load incrementally. It is important to allow sufficient time between steps, so that the system can stabilize before the next set of user connections executes the test case.
• Increasing the number of users slowly makes it easier to find the exact point/threshold where the system crashes or hangs under load. Starting the tests with a large number of simulated users would prevent us from detecting that point/threshold efficiently.
• Run the load test in cycles. Each cycle should achieve a certain load increment, with analysis and fixing time in between. Check the metrics for each cycle and document them, so you can show evidence that each load level was achieved.
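The incremental ramp-up described above can be sketched as a simple schedule generator. Parameter names and the example figures are illustrative:

```python
def ramp_schedule(start_users, max_users, step, settle_time_s):
    """Yield (user_count, settle_time_s) increments so the system has
    time to stabilize between load steps."""
    users = start_users
    while users <= max_users:
        yield users, settle_time_s
        users += step

# Ramp from 50 to 200 users in steps of 50, settling 2 minutes per step:
steps = list(ramp_schedule(50, 200, 50, 120))
# [(50, 120), (100, 120), (150, 120), (200, 120)]
```

In practice most load tools express this natively (e.g. JMeter's ramp-up period on a thread group); the point of the sketch is only that the schedule should be explicit and repeatable.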
Step 9 - Run
After successfully completing step 8, the system is considered stable, and we can run the full load test with the pre-planned workload.
Before running the test we must make sure we are monitoring both the computer running the load test and the computer taking the load. This will help us find bottlenecks on both sides of the test.
Step 10 - Analyze and Evaluate the Results
After each run, and of course after the full run, we analyze the results, check them against the metrics, and make sure our objectives were achieved.
We save the results of each run and compare them with the results of previous runs. In this way we may notice improvement or deterioration in the application's performance under load.
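Run-to-run comparison usually boils down to the relative change of each metric against a baseline. A sketch, with hypothetical response-time figures:

```python
def percent_change(baseline, current):
    """Relative change between two runs' values of a metric.
    For latency-style metrics, a negative result is an improvement."""
    return (current - baseline) / baseline * 100.0

# Comparing average response time between two saved runs:
delta = percent_change(2.0, 1.5)  # -25.0 -> 25% faster than the baseline
```

Tracking these deltas over time turns the saved results into a trend line, which is also the easiest form to share with non-technical stakeholders, as the next point suggests.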
Make sure you can share your load and performance test results and evaluations - performance and load testing is a serious discipline, but it also needs to be understood by other key business disciplines such as business operations, senior management and maybe finance.
Try to simplify things so they can be understood by those audiences: it helps to have some kind of graphical reporting ability (in the tool) that allows us to share our test results with other parts of the organization.