Grzegorz Piechnik

What are the concepts of availability, load balancing, failover and session persistence

When working in performance testing or as a developer, we often use terms whose meaning is not immediately obvious. Such terms include availability, load balancing, failover and session persistence. In today's article, we will focus on explaining them as clearly as we can.

Availability

Generally speaking, when we talk about availability, we usually mean a metric that tells how long the application has been available for use. More specifically, it is expressed as the percentage of time the application was up.
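
To make the percentage concrete, here is a minimal sketch of the calculation, assuming we only know the total observation window and the accumulated downtime (the numbers are purely illustrative):

```python
def availability_percent(total_minutes: float, downtime_minutes: float) -> float:
    """Availability as the percentage of time the system was usable."""
    uptime = total_minutes - downtime_minutes
    return uptime / total_minutes * 100

# Example: 43 minutes of downtime in a 30-day month (43,200 minutes total)
print(round(availability_percent(43_200, 43), 3))  # -> 99.9, roughly "three nines"
```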

We should not confuse availability metrics with metrics from performance tests (for example, stability tests). Why? For two reasons:

  1. Not all errors can be seen in the performance testing tool.
  2. Response times and even the responses themselves in performance tests are not always indicative of whether a system is available.

Load Balancing

Load balancing is a complicated topic that would take a whole book to cover. We can talk about load balancing for the cloud, DNS, HTTP, Layer 3 and Layer 4, and discuss other aspects such as its types. What we need to know at this point, and what we will discuss here, is the general premise of load balancing.

Load balancing means distributing incoming network traffic evenly among a group of backend servers, also referred to as a farm or pool of servers. Why do we do this? One reason is that we do not want to put 100% load on one server while the others are only 10% utilized; that would have a direct impact on application performance. In addition, a load balancer makes it easy to scale the application. Another argument is that when one server fails, traffic is redirected to the remaining servers, ensuring constant access to the application.
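
As an illustration of that general premise, below is a minimal sketch of one of the simplest strategies, round robin; the server addresses and the route helper are made up for the example:

```python
from itertools import cycle

# Hypothetical pool ("farm") of backend servers
backend_pool = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
next_server = cycle(backend_pool)

def route(request_id: int) -> str:
    # Round robin: each incoming request goes to the next server in the pool,
    # so no single server has to absorb all of the traffic.
    server = next(next_server)
    print(f"request {request_id} -> {server}")
    return server

for request_id in range(6):
    route(request_id)
```

Real load balancers offer many more strategies (least connections, weighted distribution, and so on), but the goal is the same: spread the work across the pool.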

In summary, a load balancer ensures application reliability, distributes requests across multiple servers and provides flexibility in scaling applications.

Failover

Failover is the ability of a system to automatically switch over to a redundant, standby system. For example, when a component or the entire system fails, redundancy should ensure that the system switches to failover mode and reduces the negative impact of the failure on the user experience as much as possible.
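
Here is a minimal sketch of the idea, assuming a primary and a standby endpoint; the URLs and the health-check path are placeholders, not part of any real system:

```python
import urllib.error
import urllib.request

# Hypothetical primary and standby endpoints
PRIMARY = "http://primary.example.com/health"
STANDBY = "http://standby.example.com/health"

def fetch_with_failover(timeout: float = 2.0) -> bytes:
    """Try the primary first; if it fails, automatically switch to the standby."""
    for url in (PRIMARY, STANDBY):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as response:
                return response.read()
        except (urllib.error.URLError, TimeoutError):
            continue  # this endpoint failed, fall through to the next one
    raise RuntimeError("both the primary and the standby are unavailable")
```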

With failover, an important related issue is failover testing. This is a technique that checks how quickly the system is able to recover, allocate additional resources and so on at the time of failure. At this stage, we also find out how vulnerable the system is to failure. You can learn more about failover testing from here.

Session Persistence

Session persistence (sometimes also called sticky sessions) is a term referring to the routing of a user's requests to the same backend server for the duration of the session. When users communicate with a load balancer, it redirects their traffic (individual requests) to a specific server. Standard load balancing algorithms may direct each request to a different server, so session data and cached state on a given server cannot be reused, which hurts performance.

Some load balancers can instead maintain a persistent session (session persistence). This means directing all of a user's requests to the same server. That server may already hold cached information, so faster responses are returned to the user. In this way, we achieve greater efficiency.

The binding to a particular server is made through cookies or the user's IP address, among other things.
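
As a minimal sketch of the IP-based variant, hashing the client's address always maps it to the same backend; the pool below is hypothetical:

```python
import hashlib

backend_pool = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

def sticky_server(client_ip: str) -> str:
    """Hash the client's IP so the same user always lands on the same backend."""
    digest = hashlib.sha256(client_ip.encode()).hexdigest()
    return backend_pool[int(digest, 16) % len(backend_pool)]

# The same client IP is always routed to the same backend server
print(sticky_server("203.0.113.7"))
print(sticky_server("203.0.113.7"))  # identical result
```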
