Example 3 - Distributed Load Test Simulation

Learn how to run a simulation against a service deployed on multiple hosts, and how to run the simulation script from multiple client machines.

Running the simulation against multiple hosts

In this simulation example, we will learn how to distribute the load equally across multiple target URLs or endpoints.

Gatling provides this feature out of the box, as long as we define a global HTTP protocol configuration that all requests will use.

Consider the following simulation for a demonstration:

import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

class SampleSimulation extends Simulation {

  val protocols = http
    .disableCaching // disable caching of responses
    .disableWarmUp // disable the initial warm-up request before sending the actual requests
    .baseUrls("http://localhost:8080", "http://localhost:8080") // set the base URLs (both point to the same local instance in this demo; normally each would be a different host)
    .contentTypeHeader("application/json") // set the Content-Type header
    .acceptHeader("application/json") // set the Accept header

  val scn = scenario("sample scenario")
    .exec(http("fetch all the users")
      .get("/api/users") // make a GET request to fetch all users
      .check(
        status.is(200), // assert that the HTTP status code is 200
        jsonPath("$.data[*].id").exists // extract the list of ids from the response and check that it exists
      )
    )

  // Create a user injection profile that executes the scenario at a
  // constant rate of 2 users per second over a duration of 5 seconds,
  // using the global HTTP configuration declared above.
  setUp(scn.inject(constantUsersPerSec(2) during (5 seconds))).protocols(protocols)
}

Most customer-facing URLs have load balancers behind them that distribute the load to the multiple instances of the application deployed on different hosts. This load balancing of requests across multiple hosts is what we delegate to Gatling here.
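
When the individual application instances are reachable directly, the same technique applies: list each instance in baseUrls and Gatling spreads the virtual users equally across them. Below is a minimal sketch, assuming three hypothetical instances at app-host-1, app-host-2, and app-host-3 (the class name and host names are illustrative, not part of the example above):

import io.gatling.core.Predef._
import io.gatling.http.Predef._

class MultiHostSimulation extends Simulation {

  // Hypothetical hosts used only for illustration; substitute the real
  // addresses of your application instances.
  val protocols = http
    .baseUrls(
      "http://app-host-1:8080",
      "http://app-host-2:8080",
      "http://app-host-3:8080"
    )
    .contentTypeHeader("application/json")
    .acceptHeader("application/json")

  val scn = scenario("multi-host scenario")
    .exec(http("fetch all the users")
      .get("/api/users")
      .check(status.is(200)))

  // Each virtual user is assigned one of the base URLs, so the total load
  // is spread equally across the three instances.
  setUp(scn.inject(atOnceUsers(30))).protocols(protocols)
}

Only the list of base URLs changes when instances are added or removed; the scenario and the injection profile stay the same.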

This simulation is no different ...