
11 posts tagged with "performance testing"


· 5 min read

Performance testing stands as a critical phase in the software development lifecycle, aiming to identify potential bottlenecks and ensure that the application meets its performance criteria under anticipated user loads. At the heart of performance testing lies the concept of the degradation curve, a powerful tool that provides insights into how an application's performance changes under various load conditions. This comprehensive guide delves into every facet of degradation curves in performance testing, equipping you with the knowledge to leverage this tool for enhancing application performance and user satisfaction.

Response Time Degradation Curve

What is a Degradation Curve?

A degradation curve, in the context of performance testing, plots the relationship between load (number of users) and response time for a system. It is pivotal in understanding how an application's performance degrades, or worsens, as the load increases. The curve typically features several key regions: the single-user region, performance plateau, stress region, and the knee in performance.

The Role of Degradation Curves in Performance Testing

Degradation curves serve multiple purposes in performance testing, including:

  • Identifying Performance Plateaus and Stress Areas: These curves help testers pinpoint the load levels at which an application maintains steady performance (performance plateau) and the points at which performance starts to degrade significantly (stress areas).
  • Determining "Good Enough" Performance Levels: By understanding where performance starts to degrade, teams can make informed decisions about acceptable performance levels for their applications.
  • Correlating Performance with User Experience: Degradation curves offer insights into how performance issues might affect end-user experience, helping teams prioritize performance improvements.

Analyzing Degradation Curves

Components of a Degradation Curve

  1. The Single-User Region: This part of the curve represents the response time when only a single user is accessing the system. It provides a baseline for optimal performance.
  2. The Performance Plateau: This region indicates the range of a user load under which the application performs optimally without significant degradation.
  3. The Stress Region: Here, the application begins to degrade gracefully under increasing load, marking the onset of performance issues.
  4. The Knee in Performance: This critical point signifies where performance degradation becomes severe, indicating the maximum load the application can handle before experiencing unacceptable performance.

Interpretation of Degradation Curves

Interpreting degradation curves requires understanding the nuances of each region:

  • Single-User Region: Ideal response times here set the expectation for the application's best-case performance.
  • Performance Plateau: Identifying this area helps in understanding the optimal load range and setting realistic performance benchmarks.
  • Stress Region and Knee in Performance: These indicate the limits of acceptable performance, guiding performance tuning efforts and capacity planning.

Building Performance-Degradation Curves

Creating a degradation curve involves a series of steps, starting with setting up the performance testing environment and culminating in the analysis of gathered data. Key tools for generating degradation curves include load testing tools like JMeter, LoadRunner, and Gatling, which simulate various user loads on the application and measure the response times at each load level.

Step-by-Step Process for Creating a Degradation Curve

  1. Setting Up the Performance Testing Environment: This involves configuring the test environment to mimic the production environment as closely as possible.
  2. Executing the Test and Collecting Data: Tests are run at incremental load levels to gather data on response times and other relevant metrics.
  3. Plotting the Degradation Curve: Using the collected data, a curve is plotted with load levels on the x-axis and response times on the y-axis.
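To make the analysis step concrete, here is a minimal sketch in Python. The measurements and the 2x threshold are purely hypothetical: it takes (load, response time) pairs such as those collected in step 2 and naively locates the knee as the first load level at which response time more than doubles compared to the previous step.

```python
# Hypothetical measurements: (concurrent users, average response time in ms)
measurements = [
    (1, 120), (10, 125), (25, 130), (50, 140),
    (100, 180), (200, 450), (400, 2000),
]

def find_knee(points, factor=2.0):
    """Naive knee detection: return the first load level at which the
    response time grows by more than `factor` relative to the previous step."""
    for (_, prev_rt), (load, rt) in zip(points, points[1:]):
        if rt / prev_rt > factor:
            return load
    return None

print(find_knee(measurements))  # 200 -> response time jumps from 180 ms to 450 ms
```

The same data structure feeds directly into the plot itself: load levels on the x-axis, response times on the y-axis, as described in step 3.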

Complex Performance-Testing Scenarios

Understanding and analyzing degradation curves becomes even more critical when dealing with complex performance-testing scenarios. These scenarios might involve varying user behaviors, concurrent access patterns, or the introduction of new application features that could potentially alter performance dynamics.

Modeling User Behavior and Workload Distribution

Creating sophisticated models that simulate real-world user interactions with the application is key. By incorporating these models into performance testing, teams can generate more accurate degradation curves that reflect a wide range of user behaviors and workload distributions. This approach enables a deeper understanding of how different user types impact application performance.

Applying Degradation Curves to Complex Scenarios

In complex scenarios, degradation curves can illustrate how changes in user behavior or workload distribution affect application performance. For example, an increase in the number of users performing data-intensive operations might shift the performance plateau earlier in the curve, indicating a need for optimization in handling such operations.

Strategies for Performance Improvement

Once degradation curves have been analyzed, the next step involves using this data to guide performance improvement strategies. This might include identifying and addressing bottlenecks, optimizing code, or scaling infrastructure.

Degradation curves can highlight performance bottlenecks by showing where response times begin to degrade significantly. Identifying these bottlenecks is the first step toward implementing fixes, which might involve code optimization, database indexing, or enhancing server capacity.

The goal of performance tuning is often to shift the knee in the degradation curve to the right, thereby increasing the maximum load the application can handle before performance degrades ungracefully. This can be achieved through various strategies, including optimizing application code, improving database performance, and scaling out infrastructure.


Degradation curves are a powerful tool in the performance tester's arsenal, offering detailed insights into how applications behave under load. By understanding and applying the principles outlined in this guide, testing teams can enhance application performance, meet user expectations, and ultimately contribute to the success of their software projects.

Generate Degradation Curve With JtlReporter

Traditionally, the degradation curve was created in Excel or a similar tool. This is a very manual and poorly scalable solution: for each test scenario, outcomes had to be copied from the output of tools like JMeter, Locust, Gatling, etc. into Excel, and with every new test result the procedure had to be repeated. With JtlReporter you get the degradation curve for each scenario out of the box, with no manual steps needed.

· 6 min read

Locust is a highly effective open-source performance testing tool designed to help developers ascertain how their systems will function under the stress of multiple users. By simulating simultaneous users, Locust provides comprehensive insights into system performance and potential points of failure. It is Python-based and allows developers to write test scenarios as Python scripts, which offers a significant degree of flexibility when modeling specific user behaviors. The software is easy to use and offers efficient load-testing capabilities, including an informative HTML report feature. This article delves into how to generate and understand these HTML reports to make the most of Locust for optimum system performance.

Locust HTML Report

Procedure to Generate an HTML Report Using Locust

Creating an HTML report in Locust is a relatively straightforward process that delivers insights into your system's performance. Follow these steps to create your own HTML report:

This article assumes Locust is installed on your machine and that you have an existing Locust script.

  1. After writing your test cases, run Locust from the command line using the following command: locust -f your_test_file.py --html=report.html. Replace "your_test_file.py" with the name of your Locust script, and "report.html" with the desired name of your output file.
  2. Open the Locust web interface, typically running at http://localhost:8089. Set the total number of users to simulate and the spawn rate, then start swarming to initiate the test.
  3. Once you stop the test, the HTML report will be generated automatically.
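For reference, a minimal Locust script that the steps above could be run against might look like the sketch below; the simulated behavior (a single GET on the root page) and the wait times are placeholders, not a recommendation for real test design.

```python
from locust import HttpUser, task, between

class WebsiteUser(HttpUser):
    # Simulated users wait 1-3 seconds between tasks.
    wait_time = between(1, 3)

    @task
    def load_homepage(self):
        # Each simulated user repeatedly requests the root page
        # of whatever host is passed on the command line or in the UI.
        self.client.get("/")
```

Running locust -f your_test_file.py --html=report.html against a file like this produces the HTML report discussed in the next section.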

Generating these reports regularly is vital for assessing performance over time, allowing developers to catch potential problems early and avoid system breakdown under high loads. It helps in monitoring system behavior under various load patterns and in detecting bottlenecks, capacity constraints, and opportunities for optimization. By comprehending these reports, one can better maintain system stability and ensure an excellent user experience.

Understanding the Report

Understanding Locust's HTML report is crucial to extracting useful insights about your system's performance. A test run with Locust results in an HTML report with several data fields and sections. Here's how to interpret the key sections.

Statistic Table

The report opens with a statistic table that includes the number of requests made, their distribution, and frequency. The key parameters here are:

  • Requests/sec: The number of completed requests per second.
  • Fails: The count and percentage of failed requests.
  • Failures/sec: The number of failed requests per second.
  • Median & Average Response Time: These figures indicate how long it took to process the requests, with the median being the middle value of the set. Note, however, that the average is not the best metric to follow; it can be misleading.

Distribution Stats

This table shows the distribution of response times, which is vital to understanding the user experience at different percentiles of the load. The most often considered percentiles are p50 (median), p90, p95, and p99. If the concept of a percentile is new to you, please check Performance Testing Metric - Percentile, as this metric is crucial in performance testing.


At the end of the report, there are three charts displaying the number of users, response times, and requests per second. These charts provide a visual reference for the system's performance over time. Note that they are aggregated across all requests; there are no charts for individual requests.

These data points collectively provide a basic view of how the system performed under the simulated load. The charts can offer some insight into application performance over time, but they won't reveal subtle nuances, such as performance drops in individual requests, which can easily be hidden in the aggregated charts.

Key Metrics Measured by Locust

Key metrics are fundamental to assessing how well your system performed under testing. Some of the crucial metrics measured by Locust include the following.

  • Response Time: The time taken by the system to respond to a user request. Locust's report provides it in various forms: percentiles, average, min, and max response time. Lower response times generally indicate better performance. Unfortunately, Locust does not provide the standard deviation, which is helpful in assessing the stability of system performance. Instead, you can look at the difference between the min and average response times; a large gap between them might indicate a performance bottleneck.
  • Error Rate: Represented as 'Fail' in the report, this measures the number and percentage of failed requests in relation to total requests made. In an ideal situation, the error rate should be zero; however, when performing intense load-testing, it's common to see some errors which can help identify potential weak points or bugs in the system.
  • Requests Per Second: This denotes the number of requests a system can handle per second. A higher number indicates better system performance. It plays a crucial role in determining if your system can handle high traffic while still providing decent response times. Please refer to our other article if you would like to know the difference between the number of virtual users and RPS.

These metrics, in conjunction with the others provided in Locust's HTML report, give a basic overview of your system's performance under load. By regularly monitoring them, developers can ensure their systems are always ready to handle actual user traffic.

Decoding Performance Metrics with Locust & Glimpsing Beyond with JtlReporter

In conclusion, Locust provides a robust and reasonably detailed approach to performance testing, with its capacity to simulate thousands of users and generate insightful HTML reports. Its easy-to-understand report format allows developers to interpret key metrics such as response time, error rate, and requests per second effectively. Regular report generation is also vital to continually improve system performance and catch potential problems early.

However, while Locust's HTML report offers neat features, alternatives like JtlReporter provide more flexibility and functionality. JtlReporter offers rich analytic features, supportive visual charts, and even storage options for test results. Its user-friendly interface and detailed analysis can provide a comprehensive overview of system performance, which can be a perfect fit for highly complex, large-scale systems. Therefore, while utilizing Locust for performance testing, give JtlReporter a try.

· 4 min read

JMeter, a popular open-source software tool designed for load testing and performance measurement, provides a built-in reporting feature known as the 'Dashboard Report'. The report collates the results of performance tests, presenting them in an easy-to-comprehend tabular format and graphs. In this article, we will have a look at the "Statistics" table.

Although the detailed process of generating this report is beyond the scope of this article, we have another post where you can find out how to generate the JMeter Dashboard Report.

The Importance of the Statistics in JMeter Report

The Statistics table in JMeter Dashboard Report is an integral part of performance testing analysis due to its comprehensive view of test results. It presents summarized information, including the average, median, and percentiles of response times, error percentage, throughput, and more, all of which help identify bottlenecks in application performance. Understanding the Statistics Report is crucial as it provides valuable insights into application behavior under different load conditions; thus, it aids in determining scalability, reliability, and capacity planning. It forms the basis to uncover potential performance issues, optimize system performance, and ensure a seamless user experience.

JMeter Statistics in Dashboard Report

Detailed Analysis of the Aggregate Report

The detailed analysis of the Aggregate Report in JMeter involves examining various columns that provide information about the performance of the application. Key metrics include:

  • Label Name: name of a sampler.
  • Number of Samples: the total number of requests made.
  • Average, Min, Max, Median, 90th, 95th, and 99th percentiles: These response-time figures provide a clear perspective on overall application performance.
  • Throughput: Number of requests per unit of time that your application can handle.
  • Number of failed requests and Error %: This presents the total number of failed requests and their rate as compared to the total requests, signaling issues if the value is high.
  • Network - Received and Sent: The amount of data being transferred in both directions, represented as KB/sec.
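As a quick illustration of how these summary figures derive from the raw samples, here is a small sketch in Python; the sample data and the test duration are invented for the example.

```python
# Invented raw samples for one label: (elapsed_ms, success)
samples = [(120, True), (135, True), (128, True), (900, False), (140, True)]
test_duration_s = 10.0  # hypothetical duration of the test

n = len(samples)
errors = sum(1 for _, ok in samples if not ok)
error_pct = 100.0 * errors / n          # the "Error %" column
throughput = n / test_duration_s        # requests per second
avg_ms = sum(ms for ms, _ in samples) / n  # the "Average" column

print(error_pct, throughput, avg_ms)  # 20.0 0.5 284.6
```

Note how the single 900 ms failure inflates the average well above the typical sample, which is one reason the report also provides the median and percentiles.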

Each of these columns in the Statistics Report furnishes a different piece of the performance puzzle. They collectively give us a well-rounded view of the system's performance under assorted load conditions. Detailed analysis of these metrics helps to detect weak attributes and areas that need further improvement to ensure an optimized and seamless user experience. This analysis also helps us establish a foundational understanding of the system requirements, guiding strategic improvement plans and facilitating better performance.

Interpreting the Results From the Statistics Report

Interpreting results from the JMeter Statistics Report involves deciphering the data in each column to gain insights into application performance. For instance, prolonged response times indicate potential performance hiccups, while large variations between Min and Max response times could imply inconsistent performance. A high Error % could be a red flag reflecting issues with server capacity or backend programming. A low throughput value together with long response times most likely means a bottleneck in the application or infrastructure. By correctly reading and interpreting this data, you can identify potential problem areas, such as system stress points, bottlenecks, or areas of inefficiency. These insights provide a useful foundation for defining corrective measures and performance optimization strategies.

It helps you develop a forward-looking perspective and create an action plan to enhance your performance strategy, ensuring a robust and seamless user experience.

Limitations of the Statistics Report

While the Statistics Report in the JMeter Dashboard is indispensably useful, it has limitations. Primarily, it cannot display values over time; for that, we need to look at the included graphs. For instance, the throughput could seem acceptable, but the graph might reveal drops in performance worth further investigation. This applies to most of the provided metrics: we need the graphs to spot the patterns of potential performance hiccups. The Statistics table also lacks the standard deviation, a measure of how much the data deviates from the mean or average value, which provides valuable insight into the consistency and reliability of a given metric. Another drawback is that finding the graph for a given label requires switching to another tab and locating the correct label among the others. Last but not least, it is not easy to compare these metrics with another report, for instance when you want to assess new changes in your application against the state before those changes. That is where JtlReporter can be handy: it addresses all the above-mentioned issues with easy test report comparison, configurable request statistics including the standard deviation, graphs integrated into the request statistics table, and much more.

· 5 min read

JMeter, also known as Apache JMeter, is a powerful open-source software that you can use to perform load testing, functional testing, and performance measurements on your application or website. It helps you understand how your application behaves under different levels of load and can reveal bottlenecks or issues in your system that could impact user experience.  This article will guide you on how to generate a JMeter Dashboard Report, ensuring that you utilize this critical tool productively and effectively for your application performance optimization.

Software Requirements

To generate a JMeter Dashboard Report, certain software prerequisites must be met. Firstly, Apache JMeter, the load-testing tool, should be installed on your system, with the latest stable release preferred. Secondly, given JMeter's Java base, you'll also need to install the Java Development Kit (JDK), preferably the latest version. Don't forget to set your JAVA_HOME environment variable to your JDK installation path. Lastly, depending on your testing needs, additional plugins or applications may be necessary for data analysis or software integration with JMeter.

Detailed Step-by-step Guide on How to Generate JMeter Dashboard Report

Setting up the Environment

First, confirm that JMeter and the JDK are installed correctly. You can do this by opening a command prompt (or terminal on Linux/Mac) and typing jmeter -v and java -version; these commands should return the installed JMeter and JDK versions, respectively. Next, open the JMeter application and choose your preferred location to store the output, a place where JMeter can generate results and graphs. Then set up your test plan. A test plan specifies what to test and how to run the test; you can add a thread group to it and configure the number of users, ramp-up period, and loop count, among other parameters.

Planning and Executing the Test

Add the necessary samplers to the thread group. Samplers tell JMeter to send requests to a server and wait for a response. Now add listeners to your test plan; listeners provide access to the data JMeter gathers as each sampler executes. Finally, execute your test plan by clicking the "Start" button (the green triangle) on JMeter's top toolbar.

Generating the Report

To create your Dashboard Report from the JTL file, go to the command line, navigate to your JMeter bin directory, and use the following command:

jmeter -g [path to JTL file] -o [folder where dashboard should be generated]

After running this command, JMeter generates a Dashboard Report in the specified output folder. This report includes various charts and tables that present a visual analysis of your performance test.

jmeter report summary

Understanding JMeter Dashboard Report

  1. Top Level (Summary): This section provides an overview of the test, including test duration, total requests, errors, throughput (requests per second), average response time, and more. 
  2. APDEX (Application Performance Index): This index measures user satisfaction based on the response times of your application.
  3. Graphical representation of Results: JMeter includes various charts such as throughput-over-time, response-time-over-time, active-threads-over-time, etc. Each of these graphs provides a visual representation of your test's metrics over different time spans.
  4. Request Summary: This table provides more detailed information for each sampler/request, such as median, min/max response times, error percentages, etc.

jmeter report statistics

Key Metrics in the Report

Some of the essential metrics you will come across in a JMeter Dashboard report include:

  1. Error %: The percentage of requests with errors.
  2. Throughput: Number of requests per unit of time that your application can handle.
  3. Min / Max time: The minimum / maximum time taken to handle a request.
  4. 90% line: 90 percent of the response times are below this value.

Interpreting the Report

Interpreting the Dashboard Report involves looking at these metrics and evaluating whether they meet your application's performance requirements.

  1. The Error % should ideally be zero. Any non-zero value indicates problems in the tested application or the testing setup.
  2. High throughput with low response time indicates good performance. However, if response time increases with throughput, it might signal performance issues.
  3. The 90% line is often taken as the 'acceptable' response time. If most of the response times (90%) are within this limit, the performance is generally considered satisfactory.
  4. The APDEX score, ranging from 0 to 1, should ideally be close to 1. A value less than 0.7 indicates that the performance needs improvement.

By understanding these key points, you can interpret the JMeter Dashboard Report effectively, enabling you to draw conclusions about your application's performance and plan improvements accordingly.
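The APDEX score can be illustrated with a short sketch. The standard Apdex formula counts requests as satisfied (at or below a threshold T), tolerating (between T and 4T), or frustrated (above 4T); the threshold and sample times below are hypothetical.

```python
def apdex(response_times_s, t=0.5):
    """Apdex = (satisfied + tolerating / 2) / total, where 'satisfied'
    means <= T seconds and 'tolerating' means <= 4T seconds."""
    satisfied = sum(1 for rt in response_times_s if rt <= t)
    tolerating = sum(1 for rt in response_times_s if t < rt <= 4 * t)
    return (satisfied + tolerating / 2) / len(response_times_s)

times = [0.2, 0.3, 0.4, 0.6, 1.1, 2.5]  # seconds
print(round(apdex(times), 2))  # 0.67 -> below 0.7, performance needs improvement
```

JMeter computes this score for you per request label; the sketch only shows where the 0-to-1 range and the 0.7 rule of thumb come from.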


The JMeter Dashboard Report is a powerful tool that provides insights into the performance of your website or application. This extensive and visual report allows you to ascertain the performance bottlenecks and potential room for optimization, thereby enabling you to enhance the end-user experience.

Alternatively, you can get performance testing reports with JtlReporter. With JtlReporter, you can quickly and easily create comprehensive and easy to understand performance test reports for your system with metrics, such as requests per second, various percentiles, error rate, and much more. Additionally, you can compare test runs side-by-side, create custom charts with any metrics available, and set up notifications for external services to be informed when a report is processed and more.

Try JtlReporter today and get detailed performance test reports with ease!

· 4 min read

In performance testing, it is integral to have detailed, accurate methods for trend and variability analysis. A useful visual tool in this regard is the histogram. Using histograms in performance testing reports can aid in breaking down the data distribution, revealing the shape and spread of performance test data and thus enhancing understanding of test results.

Relationship Between Histograms and Standard Deviation in Performance Testing

A histogram complements other measures of data distribution, such as standard deviation. While standard deviation elucidates the degree of data dispersion from the mean, histograms visually portray data grouping in intervals, clearly highlighting the frequency of data occurring within these intervals. The marriage of histograms and standard deviation can lead to a more comprehensive understanding of data distribution.

The histogram uses its visual prowess to depict the position and range of all data points grouped into bins, while standard deviation uses its mathematical sharpness to assess the dispersion and deviation of the data. Together, they provide a nuanced understanding of the distribution of the collected data samples.
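To see how the binning works in practice, here is a small from-scratch sketch in Python; the response times are invented for the example. Each key is the lower edge of a bin and each value the number of samples falling into it.

```python
def histogram(samples, bin_width):
    """Group samples into fixed-width bins keyed by the bin's lower edge."""
    counts = {}
    for s in samples:
        edge = int(s // bin_width) * bin_width
        counts[edge] = counts.get(edge, 0) + 1
    return dict(sorted(counts.items()))

response_times_ms = [105, 110, 112, 118, 121, 125, 127, 133, 140, 410]
print(histogram(response_times_ms, 10))
# {100: 1, 110: 3, 120: 3, 130: 1, 140: 1, 410: 1}
```

The lone 410 ms bin, far from the cluster around 110-130 ms, is exactly the kind of outlier a histogram makes visible at a glance.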

Understanding Normal Distribution in Histograms

A critical concept represented by histograms in performance testing is normal distribution, which often emerges as a likely outcome. Normal distribution, depicted as a bell curve, signifies that most data points cluster near the mean, with the frequency gradually declining as they diverge from the center. This renders the normal distribution pattern symmetric.

In performance testing, such normal distribution might indicate a stable system where the majority of response times congregate around a central value. Observing an abnormally shaped histogram or one with significant skewness, conversely, may reveal system performance issues.

Hence, understanding normal distribution in the performance testing context is an essential skill for testers. It is a visual cue in the histogram that carries great weight. Moreover, it aids in forming informed expectations about system behavior. So, as testers, our eye on the histogram should always look out for the bell curve of normal distribution.

Utility and Relevance of Histograms in Performance Testing Reports

Given their ability to elucidate data patterns visually, histograms hold high utility in performance test reports. They furnish testers with an easy-to-understand, intuitive breakdown of data distribution over a range of response times.

With bins on the horizontal axis representing data ranges and bars on the vertical axis signifying the frequency of data within these ranges, histograms aptly illustrate the concentration and dispersion of response times. Consequently, they yield important insights into system response behavior under different workloads.

Additionally, reviewing histograms over sequential test runs can help identify trends, and brief or sustained changes, aiding system fine-tuning and optimization.

The profound relevance of histograms extends to insights into extreme values, too. A sudden tall peak in a histogram could indicate outliers: data points that deviate significantly from the mean and need to be investigated and analyzed to ensure the robustness and reliability of a system. Likewise, a histogram that is heavily skewed or has multiple peaks may signify issues with system balance or the existence of multiple user groups with different behavior. A narrow histogram indicates a potentially healthier system with consistent response times; a broad, flat histogram, on the other hand, may point toward a system with unpredictable response times, highlighting areas that need improvement for better performance consistency.

Histogram with outlier data

With the ability to illuminate the grey areas of data distribution, spotlight outliers, and, above all, present complicated data in an accessible, user-friendly format, histograms hold high relevance in the world of performance testing reports.

Closing Thoughts

In sum, histograms play a pivotal role in the domain of performance test reports. As a visualization tool, they work in tandem with measures like standard deviation to offer a comprehensive perspective on data distribution. By illustrating patterns such as normal distribution and highlighting outliers, they significantly assist in performance testing analytics and deepen the understanding of test results.

A histogram chart is available in JtlReporter for every label, so you can analyze each sampler individually, and it also gives you the ability to compare histograms from two different performance testing reports. Get started with JtlReporter today!

· 3 min read

Performance testing is a crucial element in software development, revolving around evaluating and validating the efficiency, speed, scalability, stability, and responsiveness of a software application under a variety of workload conditions. Conducted in a controlled environment, performance testing is designed to simulate real-world load scenarios to anticipate how the application will behave and respond to real traffic and user actions.

What's a standard deviation?

Standard deviation is a commonly used statistical measure of the variability or spread of a set of data points: how much the data deviates from the mean or average value. It provides valuable insights into the consistency and reliability of a given metric, which can be useful in spotting potential performance bottlenecks. A low standard deviation indicates that the data is tightly clustered around the mean, while a high standard deviation indicates that the data is spread out over a wider range.
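A tiny sketch with Python's standard library makes the contrast concrete; both data sets are invented for illustration.

```python
import statistics

steady = [200, 205, 198, 202, 201]   # ms: tightly clustered around the mean
erratic = [120, 480, 90, 700, 210]   # ms: spread over a wide range

print(statistics.stdev(steady))   # low  -> consistent response times
print(statistics.stdev(erratic))  # high -> unstable performance

# Rule of thumb discussed later in this post: a standard deviation greater
# than half the mean suggests the data is likely not normally distributed.
print(statistics.stdev(erratic) > statistics.mean(erratic) / 2)  # True
```

Since neither JMeter's summary nor Locust's report always exposes this figure, computing it yourself from raw samples like this can be a useful complement.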

Importance of Standard Deviation in Performance Testing

The role of standard deviation in performance testing is profound. It provides an objective measure of the variations in system performance, thus highlighting the stability of the software application. A higher standard deviation indicates a high variation in the performance results and could be symptomatic of inherent problems within the software, while a lower or consistent standard deviation reflects well on system stability.

Thus, the inclusion of standard deviation in performance testing is not just informative but also crucial for a focused and efficient optimization of system performance. It serves as a compass for test engineers, guiding their efforts towards areas that show significant deviations and require improvements. This makes the power of Standard Deviation indispensable when conducting performance testing.

Practical Examples of Standard Deviation in Performance Testing

For instance, if the software's response-time observations have a lower standard deviation, that conveys consistency in the response times under variable loads. If there is a higher standard deviation, as a tester you would need to delve further into performance analysis, pinpointing the potential bottlenecks. It essentially acts as a roadmap, directing you toward the performance-related fixes required to achieve an optimally performing website or application. The standard deviation also hints at the distribution pattern of the data: if it is greater than half of the mean, the data most likely does not follow a normal distribution pattern. The closer the data is to the normal distribution pattern (bell curve), the higher the chances that the measured data does not include any suspect behavior.

Incorporating Standard Deviation in Performance Testing Reports through JTL Reporter

In this digital era, leveraging analytical tools to assess software performance has become essential. JTL Reporter is one such platform: it aids in recording, analyzing, and sharing the results of performance tests. It effectively integrates standard deviation measurement into performance testing, offering a holistic overview of system performance and stability, thereby proving invaluable for making informed testing decisions.

· 3 min read

Performance testing is a critical process that ensures the quality, reliability, and optimal performance of software applications under specific workloads, speed, and stability. One of the key metrics used in performance testing is "Percentile." This article aims to provide a detailed insight into percentiles and how they contrast with averages in the context of performance testing.

Understanding Percentiles

A percentile is a measure in statistics that indicates the value below which a given percentage of data falls. In performance testing, percentiles give testers an indication of the distribution characteristics of response times. It helps to quantitatively assess the load handling capacity, stability, and responsiveness of the system under testing. A 95th percentile, for instance, means that 95% of the observed data fall below that value.

How Percentiles are Used in Performance Testing

In performance testing, percentiles are used to provide a more nuanced picture of how a system performs across a range of loads. For instance, if in load testing, a system's 95th percentile response time is 2 seconds, it means that 95% of the users are experiencing response times of 2 seconds or less. This leaves 5% who experience more than 2 seconds.

In real-world usage we want several percentiles at our disposal: performance testing reports usually include the 50th, 90th, 95th, and 99th percentiles. Percentiles are also very often used to establish performance KPIs.

Difference Between Percentiles and Averages

While percentiles and averages are both statistical measures used in performance testing, they depict different aspects of the data. The average, or mean, is the sum of all values divided by the number of values. It acts as the balance point of the data set, but it may not necessarily represent a "typical" user experience.

Percentiles, on the other hand, show the distribution across the range of responses, which makes them more useful for understanding the consistency of system performance. For instance, if a small number of server requests take a long time to complete, the average response time will increase even if most requests complete quickly, potentially giving a misleading picture of overall performance. With percentiles, you can clearly see that most of the responses are quick, with only a few long ones. For this reason, the average is not a recommended metric for KPIs.

By understanding and interpreting these statistical measures properly, organizations can enhance the quality, reliability, and usability of their software applications, leading to improved user experience and business productivity. Performance testing, backed by accurate data interpretation, is hence the key to deriving maximum value and efficiency from any software application.

· 3 min read

As a performance tester, one of the most important tasks is to correctly analyze performance testing results. Although it might look like an easy task, the opposite is true. When looking at performance test report metrics and charts, there are many hidden traps. The biggest one is that the data you are looking at is aggregated. The problem with aggregated metrics is that they hide information from you (and averages are among the worst offenders), such as very small spikes in response times. Yet those spikes can still pose a performance bottleneck that needs to be solved.

One of the most effective ways to visualize performance testing data is through scatter charts. They are particularly useful in performance testing because they can help you identify patterns and trends in your raw data, as well as potential performance issues. Look at the following example of an aggregated chart displaying the average response time of a web application:

Average Response Time

As you can see, the information we can get from this chart is limited. It shows almost no pattern in the data. The only thing we can read from it is that there was initially a spike in response times (still worth investigating further, as it looks like a performance bottleneck), but besides that, that's all this chart can tell us. Now, let's look at the same data, but this time in a scatter chart:

Scatter Chart

The scatter chart is more informative than the average response time chart. It shows us that the response times are grouped into three clusters. A banding pattern is usually fine, but in this case the spacing between the clusters seems bigger than desired: the clusters are roughly defined around 0-100ms, 1000-2000ms, and 12000-20000ms. Another pattern we can see is that on some occasions the response times form an almost vertical line. This might signify a performance bottleneck in the application, as something might be blocking request processing. And last, but not least, we can see that there are some outliers in the data: points that are far away from the rest of the data. The question here is whether they are mere outliers or whether they have statistical significance. Again, we would need to investigate further and run the test multiple times to see if the outliers are consistent.

In this quick introduction, we have learned how scatter charts can help us analyze performance testing outputs and reveal patterns and trends in the data that are otherwise hidden in aggregated charts. Luckily, the scatter chart is included in the latest version of JtlReporter, so you can get even more out of your performance testing data and make better decisions about your application's performance.

· 4 min read

Performance testing is a crucial step in ensuring that a software application can perform optimally under stress. Taurus is an open-source performance testing tool that simplifies performance testing, offering developers and testers a complete performance testing environment. This tool supports different protocols such as HTTP, JMS, JDBC, MQTT, and others. In this article, we will look at Taurus, its features, and how to use it.

Features of Taurus

Taurus has numerous features that make it a great tool for performance testing. Below are some of its key features:

  1. Support for Multiple Protocols: Taurus supports various protocols, including HTTP, JMS, JDBC, MQTT, and others, making it a versatile tool.
  2. Easy Test Creation: With Taurus, creating a test script is easy. You can create your script using YAML or JSON format, or use existing scripts from popular performance testing tools like JMeter, Gatling, and Locust.
  3. Cloud Integration: Taurus supports integration with cloud-based testing platforms such as BlazeMeter. This feature allows you to run performance tests on the cloud, helping you save on hardware costs.
  4. Real-Time Results and Reporting: Taurus provides real-time results and reporting, allowing you to analyze your test results as they happen. This feature is critical in identifying performance issues quickly.
  5. Compatibility with CI/CD: Taurus is compatible with Continuous Integration/Continuous Delivery (CI/CD) systems such as Jenkins and Travis. This compatibility allows for easy integration with the development pipeline.

How to Use Taurus

Using Taurus is relatively easy and straightforward. Here's a step-by-step guide on how to use Taurus:

Step 1: Create a Test Scenario

To create a test scenario, you need to define a YAML file that contains the test configuration. A YAML file is a human-readable text file that uses indentation to indicate the structure of data. In the case of Taurus, YAML files define the test scenario, which includes the testing tool to be used, the location of the test script, and the test configuration parameters. Here's an example of a simple test scenario for testing a web application using the JMeter testing tool:

  - concurrency: 10
    ramp-up: 1m
    hold-for: 5m
    scenario: with_script

      script: path/to/test_script.jmx

In the above example, the test scenario contains the JMeter test script located at path/to/test_script.jmx.

The test will be executed with a concurrency of 10 users, a ramp-up time of 1 minute, and a hold time of 5 minutes.

Step 2: Run the Test

To run the test, you need to execute the following command in the terminal:

bzt path/to/test_scenario.yml

Optionally, you can override any value from the YAML in the CLI command. Let's say we want to increase the concurrency:

bzt path/to/test_scenario.yml -o execution.concurrency=50

The test will now be executed with a concurrency of 50 users. This -o switch can also be leveraged in CI, where it lets us easily parameterize the execution variables.

Step 3: Monitor the Test Results

Taurus provides real-time test results and reporting, allowing you to monitor the test results as they happen.

Step 4: Analyse the Test Results

After the test is completed, thanks to Taurus' modularity, you have several reporting options at your disposal:

  1. Console Reporter - provides a nice in-terminal dashboard with live test stats and is enabled by default.
  2. BlazeMeter Reporter - uploads test results to the BlazeMeter application, which stores your data and generates an interactive UI report with many metrics available; note that its free version is rather limited.
  3. Final Stats Reporter - this rather simple reporter outputs a few basic metrics in the console log after test execution, such as the number of requests and failures, various percentiles, and latency.

Alternatively, you can integrate Taurus with JtlReporter. With JtlReporter, you can quickly and easily create comprehensive performance test reports for your system with metrics, such as requests per second, various percentiles, error rate, and much more. Additionally, you can compare test runs side-by-side, create custom charts with any metrics available, and set up notifications for external services to be informed when a report is processed and more.

· 5 min read

Load testing is an important part of software development, as it helps determine an application's performance, scalability, and reliability under normal and expected user loads. Locust is an open-source load testing tool that allows developers to quickly and easily create application performance tests.

Why Use Locust

Locust is an excellent choice for load testing because it is easy to set up and use. It is also very flexible, allowing you to write your own custom test scripts. Furthermore, it can be run distributed across multiple machines, allowing you to simulate large volumes of user traffic and requests.

The main advantages of Locust are its scalability, flexibility, and ease of use. It is designed to be easy to learn and use, so developers can get up and running quickly. It also provides the ability to scale up the number of users and requests quickly and easily, making it an excellent choice for performance testing.

To get started with Locust, you'll first need to install it on your system (e.g. with pip install locust). Locust is available for all major operating systems, including Windows, Mac OS X, and Linux. Once installed, you can use the Locust command line interface to create a test simulation. This is where you define the number of users and requests you want to simulate. You can also configure each user's behavior, such as the time between requests and the duration of each request.


For this example, we will use a simple FastAPI application and a Locust test script to simulate user traffic and requests.

FastAPI App

First, we will create a FastAPI application to serve as the system under test.

import uvicorn
from fastapi import FastAPI, Body

app = FastAPI()

def root():
    return {'message': 'Hello, world!'}'/user')
def create_user(username: str = Body(...)):
    return {'message': f'User {username} created!'}

if __name__ == '__main__':, host='', port=8000)

Locust Test Script

Now, we will create a Locust test script to simulate user traffic and requests to the FastAPI application.

from locust import HttpUser, task

class MyUser(HttpUser):
    def index(self):

    def create_user(self):'/user', data={'username': 'test_user'})

This test script will simulate a single user making a request to the root URL of the FastAPI application and creating a user.

Test Script Execution

You execute the test script with the following command: locust -f <locustfile>

Locust will bring up a user interface available at: http://localhost:8089. From there you can execute the test script. But before the actual execution, you need to enter the number of users you want to simulate and the spawn rate. Spawn rate means how many users will be started per second. And last but not least, you will need to enter a host.

If everything was set up correctly, you have just executed your first load test with Locust! Awesome!

Alright, this was a very basic execution, which most likely won't be enough for a real-world load testing scenario. To get there, we will need to generate a bigger load, and for that reason we'll need to run Locust in distributed mode.

Running Locust Distributed

Locust can also be run and distributed across one or multiple machines. This allows you to simulate a larger amount of user traffic and requests.

Locust's architecture consists of two nodes - the master and worker nodes. The master node collects statistics from the worker nodes and presents them in the web UI. The worker nodes are responsible for generating user traffic and requests.

To run the same test in the distributed mode we would use the following commands:

locust -f <locustfile> --master

locust -f <locustfile> --worker --master-host=localhost

The first command starts the Locust master node, while the second connects a worker node to it. It's recommended to run at most one worker node per CPU core to avoid any issues. Of course, the worker nodes can be started from different machines, but be aware that the test script must be available on all of them.

For convenience, Locust can also be run in a Docker container. This allows you to spin up a distributed load test environment quickly, either using docker-compose or k8s.



Retrieving Test Statistics in CSV Format

Once the test is complete, you can retrieve the test statistics in CSV format by running the following command:

locust -f <locustfile> --csv <output_file_name>

Once the test simulation is configured, you can start running the test. Locust will then run the test simulation and provide results about the application's performance. This includes metrics like average response time, requests per second, and error rates.

HTML Report

The HTML report can be downloaded from the Locust UI during or after the test script execution. It provides basic charts and request stats that include metrics like requests per second, error rates, and various percentiles for response times. You can also use the results to identify bottlenecks in your application and make changes to improve performance.


Overall, Locust is an excellent choice for performance testing. It is easy to install and use and provides detailed performance metrics and debugging capabilities. It is also highly scalable to test applications with many users and requests.

Are you looking for an easy way to measure the performance of your application and create detailed performance test reports? Look no further than JtlReporter!

With JtlReporter, you can quickly and easily create comprehensive performance test reports for your system with metrics, such as requests per second, various percentiles, error rate, and much more. Additionally, you can compare test runs side-by-side, create custom charts with any metrics available, and set up notifications for external services to be informed when a report is processed.

JtlReporter is the perfect way to measure and analyze performance when load testing your application using Locust. Try JtlReporter today and get detailed performance test reports with ease!