Breaking Limits: Load Testing Simplified with Artillery

20 Nov 2024

What Is Load Testing? 

Load testing is a type of performance testing where a system, application, or software is subjected to a specific amount of simulated or actual user traffic to evaluate its behaviour and performance under expected or peak load conditions. It is a critical process for assessing the robustness, scalability, and reliability of a system. By mimicking real-world scenarios, load testing helps uncover performance issues before they impact end-users.

The goals of load testing include: 

Identifying Performance Bottlenecks 

  • Detecting areas of the system that slow down under heavy load, such as inefficient database queries, unoptimized code, or inadequate server configurations. 

Ensuring System Stability Under Heavy Traffic 

  • Verifying that the system can handle sudden or sustained increases in user traffic without crashing, freezing, or becoming unresponsive. 

Determining Capacity Thresholds 

  • Establishing the maximum number of concurrent users or transactions the system can support before performance degrades. 

Assessing Resource Utilization 

  • Monitoring the consumption of resources such as CPU, memory, disk I/O, and network bandwidth to ensure they are within acceptable limits under load. 

Validating Scalability 

  • Ensuring the system can scale up or down effectively, such as adding more servers to handle higher traffic or optimizing resources during lower demand periods. 

Ensuring User Experience Consistency 

  • Verifying that key functionalities, such as login, checkout, or data retrieval, remain responsive and efficient even under peak load conditions. 

Identifying System Breaking Points 

  • Pushing the system beyond its designed load limits to understand its failure point and determine how gracefully it handles overload scenarios. 

Testing Real-World Traffic Patterns 

  • Simulating traffic spikes, such as flash sales or special events, to mimic realistic usage scenarios and ensure readiness for such occurrences. 

Optimizing Load Balancing 

  • Evaluating the effectiveness of load balancers in distributing traffic evenly across servers or application instances. 

Reducing Downtime and Outages 

  • Identifying potential issues early to prevent unplanned downtimes that could negatively impact users and business operations. 

Load Testing: Server vs. Serverless

Load testing is crucial to ensure applications can handle real-world user traffic effectively. Broadly, it is performed against two kinds of architecture, server-based and serverless, and each has distinct approaches and tools tailored to its characteristics.

1. Server Load Testing

Focuses on fixed server infrastructures with defined resources (CPU, memory, disk).

Purpose: Measures server performance under specific user loads to ensure stability and avoid downtime.

Tools:

  • JMeter

  • Gatling

  • Locust

2. Serverless Load Testing

Targets cloud-managed platforms such as AWS Lambda and Azure Functions, which scale dynamically with traffic.

Purpose: Evaluates scalability, latency, and cost efficiency of serverless systems.

Tools:

  • Artillery

  • AWS Lambda Power Tuning

  • Serverless Framework

  • Step Functions Workflows

Why Use Artillery? 

Artillery is a versatile and efficient tool for load testing APIs, web services, and real-time applications. Here’s why it stands out: 

  • Ease of Use: YAML-based configuration simplifies complex test scenarios. 

  • Versatility: Supports protocols like HTTP, WebSocket, and more. 

  • Extensibility: JavaScript hooks allow custom logic in test scenarios. 

  • Detailed Reporting: Provides insights into response times, throughput, and error rates. 

Setting Up Artillery 

Step 1: Installation 

Start by installing Artillery globally: 

bash
npm install -g artillery

Step 2: Writing a Test Scenario 

Artillery tests are defined in YAML files.

Below is a simple example for testing an API: 

yaml

config:
  target: "https://example.com/api"
  phases:
    - duration: 60
      arrivalRate: 10
      rampTo: 50
scenarios:
  - flow:
      - get:
          url: "/endpoint"
      - post:
          url: "/endpoint"
          json:
            key: "value"

Config Section

target: The base URL of the API you are testing. Every request path in the scenarios is resolved relative to this URL.

phases: Specifies how the test load is applied over time.

  • duration: The length of the test phase in seconds.

  • arrivalRate: The number of virtual users started per second at the beginning of the phase.

  • rampTo: The arrival rate the phase ramps up to by its end (here, from 10 to 50 new virtual users per second).

Scenarios Section

scenarios: Describes the sequence of actions that virtual users will perform during the test.

  • flow: A list of steps (HTTP requests) that define the actions for a single user.

  • get: Defines a GET request.

    url: The specific endpoint to which the GET request will be sent.

  • post: Defines a POST request.

    url: The specific endpoint to which the POST request will be sent.

    json: The payload data sent in the POST request, formatted as JSON.
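
Steps in a flow can also pass data between requests. The snippet below is a hypothetical sketch that assumes the cart endpoint returns a JSON body containing a cartId field; Artillery's capture option stores that value so the checkout request can reuse it via the {{ cartId }} template:

yaml

scenarios:
  - flow:
      - post:
          url: "/cart"
          json:
            productId: 123
          # Capture a field from the JSON response (assumes the API returns { "cartId": "..." })
          capture:
            - json: "$.cartId"
              as: "cartId"
      - post:
          url: "/checkout"
          json:
            # Reuse the captured value in the next request
            cartId: "{{ cartId }}"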

Key Concepts

  • Load Phases: Define the intensity and progression of the test load over time.

  • Actions (GET, POST): Simulate the behavior of real users interacting with the API.

  • Payload (JSON): Enables you to send data with your requests, useful for testing POST endpoints.
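
Load phases can be chained to model realistic progressions, for example a warm-up, a ramp, and a sustained peak. The snippet below is a minimal sketch with assumed durations and rates; adjust them to match your own traffic profile:

yaml

config:
  target: "https://example.com/api"
  phases:
    # Warm-up: 5 new virtual users per second for 30 seconds
    - duration: 30
      arrivalRate: 5
      name: "Warm-up"
    # Ramp from 5 to 50 new virtual users per second over 60 seconds
    - duration: 60
      arrivalRate: 5
      rampTo: 50
      name: "Ramp-up"
    # Hold 50 new virtual users per second for 2 minutes
    - duration: 120
      arrivalRate: 50
      name: "Sustained peak"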

Step 3: Running the Test 

Execute the test with the following command: 

bash 
artillery run test-config.yml

Step 4: Analysing Results 

Artillery generates a report with metrics like: 

  • Latency: How quickly your system responds. 

  • RPS (Requests Per Second): The number of requests sent to the target per second. 

  • Error Rates: Percentage of failed requests. 
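
These metrics can also act as pass/fail criteria. Recent Artillery versions include an ensure plugin that fails the run when thresholds are breached; the sketch below assumes that plugin is available in your version, and the 500 ms threshold is a placeholder value:

yaml

config:
  target: "https://example.com/api"
  plugins:
    ensure: {}
  ensure:
    thresholds:
      # Fail the run if the 95th-percentile response time exceeds 500 ms (placeholder)
      - http.response_time.p95: 500
    conditions:
      # Fail the run unless at least one request returned HTTP 200
      - expression: http.codes.200 > 0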

Real-World Examples of Load Testing with Artillery 

Example 1: E-Commerce Stress Test 

Description: E-commerce sites experience traffic surges during sales events. This test simulates a typical user journey: browsing products, adding items to the cart, and checking out. 
Configuration: 

yaml

config:
  target: "https://example.com"
  phases:
    - duration: 60
      arrivalRate: 20
scenarios:
  - flow:
      - get:
          url: "/home"
      - get:
          url: "/products"
      - post:
          url: "/cart"
          json:
            productId: 123
      - post:
          url: "/checkout"
          json:
            cartId: "abc123"

Output:

Explanation of Key Metrics

  • Scenarios launched: Number of test scenarios initiated (total virtual users simulated across the duration).

  • Scenarios completed: Total users who completed the full flow (GET /home, GET /products, POST /cart, POST /checkout).

  • Requests completed: Total HTTP requests made (sum of all GET and POST requests).

  • RPS sent: Requests per second sent during the test.

  • Request latency:

    min: Fastest request latency observed.

    max: Slowest request latency observed.

    median: Midpoint of all latency measurements.

    p95/p99: 95th and 99th percentile latencies.

  • Codes: Breakdown of HTTP response codes.

    200: Success responses.

    201: Created (e.g., cart/checkout successful).

    400/500: Client or server-side errors.

  • Errors: Network or application errors encountered during the test (e.g., timeouts).

  • Objective: Validate the site’s performance under high user traffic.

  • Outcome: Metrics such as response times during cart and checkout operations.

Load Testing with Authorization

The Authorization header carries a Bearer token, here a JWT, for API authentication. The JWT is a Base64URL-encoded token whose signature lets the server verify the client's identity and grant access to resources. It consists of three parts:

1. Header: Metadata (e.g., token type, algorithm).

2. Payload: User-related claims (e.g., roles, ID).

3. Signature: Ensures token integrity and authenticity.
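
In Artillery, the token can be attached through default headers in the config so that every request in the scenario carries it. The sketch below is illustrative: the /charging-sessions endpoint matches the test analysed next, while the target URL, phase settings, and token value are placeholders (in practice, load the token from an environment variable or payload file rather than hard-coding it):

yaml

config:
  target: "https://example.com"
  phases:
    - duration: 1
      arrivalRate: 1
  defaults:
    headers:
      # Placeholder JWT; supply a real token via an environment variable or payload file
      Authorization: "Bearer <your-jwt-here>"
scenarios:
  - flow:
      - get:
          url: "/charging-sessions"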

Output:

Summary Analysis of the Load Test Result

Here’s a breakdown and analysis of the results from this Artillery load test.

1. HTTP Status Codes (http.codes.200)

  • Total 200 OK responses: 1

Description: Only one request was sent, and it received a successful response with a status code 200. This indicates that the target API endpoint /charging-sessions responded correctly.

2. HTTP Request Metrics

  • Total requests: 1

Description: Only one HTTP request was made during the test. This indicates that the test was minimal (perhaps due to configuration, e.g., arrival rate).

  • Request rate: 1 request per second

Description: The system was configured to send 1 request per second, but since only one request was completed, the rate remains at 1/sec.

3. Response Time (http.response_time)

  • Min response time: 584 ms

  • Max response time: 584 ms

  • Mean response time: 584 ms

  • Median response time: 584.2 ms

  • p95 (95th percentile): 584.2 ms

  • p99 (99th percentile): 584.2 ms

Description:

  • All response times were identical for this single request at around 584 milliseconds.

  • At roughly 584 ms, the response is acceptable for an API call, but a single request gives the metrics little statistical weight.

  • The median, p95, and p99 are effectively identical because only one response was received; the small difference from the raw min/max (584 vs. 584.2 ms) comes from how Artillery aggregates latency values into histogram buckets.

4. Response Time Breakdown (2xx responses)

  • Since the only response was a successful 200 status code, these values mirror the earlier response time breakdown.

  • 2xx responses: All responses were in the 2xx range, indicating success.

5. Virtual User (VU) Metrics

  • vusers.completed: 1

  Description: One virtual user completed the scenario during the test.

  • vusers.created: 1

   Description: One virtual user was created for the test.

  •   vusers.failed: 0

  Description: No virtual users failed during the test, which is good.

6. Virtual User Session Length (vusers.session_length)

  • Min session length: 758.4 ms

  • Max session length: 758.4 ms

  • Mean session length: 758.4 ms

  • Median session length: 757.6 ms

  • p95 session length: 757.6 ms

  • p99 session length: 757.6 ms

Description:

  • The session length measures how long each virtual user remained active; Artillery reports it in milliseconds. With a single virtual user issuing a single request, the session lasted roughly 758 ms, slightly longer than the 584 ms response time once connection setup and scenario overhead are included.

  • The negligible variation between min, mean, median, p95, and p99 is expected when there is only one session to measure.

Conclusion: Embrace Load Testing for Better Performance 

Load testing is critical for ensuring a system's reliability and scalability. Tools like Artillery simplify this process, providing actionable insights into your system's performance. Whether testing APIs, e-commerce platforms, or real-time applications, load testing ensures your system can meet user expectations—even during traffic surges. 

Start load testing today and make your systems bulletproof! 

Sandeep R

Graduate Developer