Peer-to-Peer Load Balancing

Learn how to scale applications using peer-to-peer load balancing.

Using a reverse proxy is almost a necessity when we want to expose a complex internal network architecture to a public network such as the Internet. It helps hide the complexity, providing a single access point that external applications can easily use and rely on. However, if we need to scale a service that’s for internal use only, we can have much more flexibility and control.

Let’s imagine having a service, Service A, that relies on Service B to implement its functionality. Service B is scaled across multiple machines and it’s available only in the internal network. What we’ve learned so far is that Service A will connect to Service B using a load balancer, which will distribute the traffic to all the servers implementing Service B.

However, there’s an alternative. We can remove the load balancer from the picture and distribute the requests directly from the client (Service A), which now becomes responsible for load balancing its own requests across the various instances of Service B. This is possible only if Service A knows the details of the servers exposing Service B, and in an internal network, this is usually known information. With this approach, we’re essentially implementing peer-to-peer load balancing.
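To make the idea concrete, here is a minimal sketch of client-side round-robin balancing in Python. The instance addresses and class name are hypothetical; in a real system, the list of Service B instances might come from a configuration file or a service registry.

```python
import itertools

# Hypothetical addresses of the Service B instances known to Service A.
SERVICE_B_INSTANCES = [
    "10.0.0.1:8080",
    "10.0.0.2:8080",
    "10.0.0.3:8080",
]


class RoundRobinBalancer:
    """Client-side (peer-to-peer) balancer: the client picks the next
    Service B instance itself, with no intermediate proxy."""

    def __init__(self, instances):
        # itertools.cycle yields the instances in order, repeating forever.
        self._cycle = itertools.cycle(instances)

    def next_instance(self):
        return next(self._cycle)


balancer = RoundRobinBalancer(SERVICE_B_INSTANCES)
# Each outgoing request is sent to the next instance in turn.
targets = [balancer.next_instance() for _ in range(4)]
```

Round-robin is only one possible strategy; the client could just as well pick instances at random or track per-instance health and latency, which is one of the flexibility advantages of doing the balancing in the client.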

The illustration below compares the two alternatives we just described.
