Originally published at devgraph.com

Load Balancing Methods Explained

By Ravi Duddukuru

The internet can seem like the simplest thing in the world. If you want to watch a video and share it with other people, a few clicks is all it takes. But web applications are far more complicated than a few buttons. One reason you can watch hundreds of videos while other users are also accessing content is load balancing. To understand what load balancing is, we first have to look at how a network system is layered.

The Open Systems Interconnection (OSI) Reference Model is a framework that divides data communication into seven layers of networking, which we will describe briefly:

  1. Layer One (L1) – This is the Physical layer, responsible for how data is transported over a physical medium.
  2. Layer Two (L2) – The Data Link layer manages the raw data obtained from the physical layer. It is subdivided into two sublayers: Logical Link Control (LLC) and Media Access Control (MAC).
  3. Layer Three (L3) – The Network layer is responsible for organizing data and transmitting it across multiple other networks.
  4. Layer Four (L4) – The Transport layer provides delivery of data packets, along with error control, flow control, and congestion control, to ensure packets are delivered safely.
  5. Layer Five (L5) – The Session layer, as the name implies, manages a user’s session between hosts. This includes opening, closing, and re-establishing sessions, as well as authentication and authorization between servers.
  6. Layer Six (L6) – The sixth layer, Presentation, makes sure that data can be read between servers. Responsibilities include data conversion, character code translation, data compression, encryption, and decryption.
  7. Layer Seven (L7) – The Application layer interacts directly with end-users. Some of the application layer protocols include File Transfer Protocol (FTP), Simple Mail Transfer Protocol (SMTP), and Domain Name System (DNS).

Load balancing operates on layers L4 – L7, where it handles incoming network load. This is crucial for web applications, because it keeps application servers performing at their best.

Importance of Load Balancing

There is a reason that load balancing can be found in virtually all large web applications. Load balancing is the distribution of network traffic across multiple back-end servers, and a load balancer makes sure that no single server is overloaded. Because the application load is spread across different servers, web applications become more responsive, which makes for a better user experience.

A load balancer manages incoming requests flowing between servers and end users’ devices. Those servers could be on-premises, in a data center, or in a public cloud. Load balancers also conduct continuous health checks on servers to ensure they can handle requests. If necessary, the load balancer removes unhealthy servers from the server farm until they are restored. Some load balancers even trigger the creation of new virtualized application servers to cope with increased demand.
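A health check can be as simple as probing whether a back-end accepts a TCP connection. Here is a minimal sketch in Python; the server addresses are hypothetical placeholders:

```python
import socket

# Hypothetical back-end pool; hosts and ports are placeholders.
SERVERS = [("10.0.0.1", 8080), ("10.0.0.2", 8080), ("10.0.0.3", 8080)]

def is_healthy(host, port, timeout=1.0):
    """A basic TCP health check: 'healthy' means the server accepts a connection."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def healthy_pool(servers):
    """Return only the servers that currently pass the health check."""
    return [s for s in servers if is_healthy(*s)]
```

Real load balancers typically probe on an interval and require several consecutive failures before removing a server, to avoid flapping.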

Several critical tasks of load balancing help web applications remain stable and handle network traffic: managing traffic spikes and preventing the load from overwhelming any one server, minimizing client response time, and ensuring the performance and reliability of computing resources.

Benefits of Load Balancing for Applications

These are some big advantages associated with load balancing.

  1. Scalability – Because a load balancer spreads work evenly across all available servers, capacity can be increased simply by adding servers to the pool.
  2. Redundancy – When application traffic is sent to two or more web servers, and one server fails, then the load balancer will automatically transfer the traffic to the other working servers.
  3. Flexibility – With load balancing, at least one server is always available to pick up the workload, so admins are free to take other servers down for maintenance. This staggered maintenance approach ensures the site’s users never experience an outage.
  4. Security – Load balancers can also act as an extra measure of security. An application load balancer can help absorb distributed denial-of-service (DDoS) attacks: network and application traffic aimed at the corporate server is “offloaded” to a public cloud server or provider, shielding the corporate infrastructure from the attack.
  5. Session Persistence – This is the ability to make sure that a user’s requests go to the same server throughout the user’s session. If the handling server changes mid-session, performance suffers and session data can be lost.

Load Balancing Techniques & Optimizations

Load balancing algorithms take into account whether traffic is being routed at the network layer (Layer 4) or the application layer (Layer 7) of the OSI model described earlier. This determines what information the load balancer can use when deciding which server receives an incoming request.

Load Balancing Methods

Each load balancing method relies on a set of criteria, or algorithms, to determine which of the servers in a server farm gets the next request. Here are some of the most common load balancing methods:

Round Robin Method

This method relies on a rotation system to sort incoming requests, with the first server in the server pool fielding a request and then moving to the bottom of the line, where it awaits its turn to be called upon again. This helps in ensuring each server handles the same number of new connections.
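The rotation can be sketched in a few lines of Python; the server names are hypothetical:

```python
from itertools import cycle

# Hypothetical server pool for illustration.
servers = ["server-a", "server-b", "server-c"]
rotation = cycle(servers)

def next_server():
    """Each call hands the next request to the next server in the rotation."""
    return next(rotation)
```

After the last server in the pool takes a request, the rotation wraps back to the first, so every server handles the same number of new connections over time.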

Weighted Round Robin

As the name implies, with this method each server is assigned a weight, usually based on its capacity. The higher the weight, the larger the share of requests the server receives.
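One simple way to honor the weights is to repeat each server in the rotation once per unit of weight. A minimal sketch, assuming hypothetical weights where server-a has twice the capacity of server-b:

```python
from itertools import cycle

# Hypothetical weights reflecting relative server capacity (assumption).
weights = {"server-a": 2, "server-b": 1}

# Each server appears in the rotation once per unit of weight.
schedule = cycle([s for s, w in weights.items() for _ in range(w)])

def next_server():
    """server-a receives two requests for every one sent to server-b."""
    return next(schedule)
```

Production balancers typically use a smoother interleaving so a heavily weighted server doesn't receive long bursts back to back, but the proportions are the same.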

Least Connections

The least connections method directs traffic to whichever server has the fewest active connections. This method assumes all requests generate roughly equal server load.
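The selection itself is a simple minimum over the connection counts. A sketch with a hypothetical snapshot of active connections:

```python
# Hypothetical snapshot of active connections per server.
active = {"server-a": 12, "server-b": 7, "server-c": 9}

def pick_least_connections(active):
    """Route the next request to the server with the fewest active connections."""
    return min(active, key=active.get)
```

In a real balancer the counts are updated as connections open and close, so the choice shifts continuously toward the least busy server.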

Weighted Least Connections

In this method, a weight is added to a server depending on its capacity. This weight is used with the least connection method to determine the load allocated to each server.
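One common formulation picks the server with the lowest ratio of active connections to weight, so a higher-capacity server is allowed proportionally more connections. A sketch with hypothetical numbers:

```python
def pick_weighted_least_connections(active, weight):
    """Choose the server with the lowest connections-to-capacity ratio."""
    return min(active, key=lambda s: active[s] / weight[s])

# Hypothetical snapshot: server-a has more connections but far more capacity.
active = {"server-a": 4, "server-b": 3}
weight = {"server-a": 4, "server-b": 1}
```

Here server-a's ratio is 4/4 = 1.0 while server-b's is 3/1 = 3.0, so the next request goes to server-a despite its higher raw connection count.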

Source IP Hash

Source IP hash is a load balancing algorithm that combines the source and destination IP addresses of the client and server to generate a unique hash key, which is used to allocate the client to a particular server. Because the key can be regenerated if the session is broken, the client’s requests are directed back to the same server it was using previously. This is useful when a client must reconnect to a session that is still active after a disconnection.
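Because the hash is deterministic, the same client/destination pair always maps to the same server. A minimal sketch (the hash function choice here is illustrative, not prescriptive):

```python
import hashlib

def pick_by_ip_hash(client_ip, dest_ip, servers):
    """Hash the source/destination IP pair and map it onto the server pool."""
    key = hashlib.sha256(f"{client_ip}-{dest_ip}".encode()).hexdigest()
    return servers[int(key, 16) % len(servers)]
```

Note that a plain modulo mapping reshuffles most clients when the pool size changes; consistent hashing is often used instead to limit that disruption.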

Least Response Time

In the least response time algorithm, the back-end server with the fewest active connections and the lowest average response time is selected. Using this algorithm ensures quick response times for end-users.
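Since the two criteria are ordered (connections first, then response time as the tie-breaker), tuple comparison expresses the selection directly. A sketch with hypothetical stats:

```python
def pick_least_response_time(stats):
    """stats maps server -> (active_connections, avg_response_ms).
    Fewest connections wins; ties are broken by the faster average response."""
    return min(stats, key=lambda s: stats[s])

# Hypothetical snapshot: a and b are tied on connections, b responds faster.
stats = {"server-a": (3, 50), "server-b": (3, 20), "server-c": (5, 5)}
```

Whether connection count or response time dominates varies between implementations; this sketch assumes connections take priority.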

Least Pending Request

The load balancer monitors pending (queued) requests and distributes them to the most available servers. Because it tracks every connected server’s workload in real time, it can adapt instantly to an abrupt inflow of new connections.

Optimizations

If you want to make sure that your web application runs perfectly well with your load-balanced setup, you can make some optimizations:

Network & Application Layer Optimizations

As mentioned earlier, the load balancing methods base their decisions on the layer that the traffic is being routed to. This means that each method is also based on a specific layer, either network or application layer. With this in mind, you can make optimizations that go along with your chosen load balancing algorithm.

Network Load Balancing, also known as L4 load balancing, manages traffic at Layer 4 and tends to be the more efficient option, because routing decisions can be made faster than at Layer 7, where the balancer must inspect application-layer data. Layer 4 optimizations work well with network layer algorithms like Round Robin and Least Connections. But there are exceptions, and L4 load balancing can’t always be relied on.

L7 load balancing, or HTTP(S) Load Balancing, has access to HTTP requests, SSL session ID, uniform resource identifiers, and more to make routing decisions. The benefit here is that it uses buffering to offload slow connections from the upstream servers, which improves performance. L4 on the other hand can only make limited routing decisions by inspecting the first few packets in the TCP stream. Application layer algorithms like Least Pending Request go well with Layer 7 Load Balancing.
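The practical difference is what the balancer can see. An L7 balancer can route on the HTTP request itself, for example by URL path; an L4 balancer never sees the path. A sketch of path-based routing with a hypothetical routing table:

```python
# Hypothetical L7 routing table: request-path prefix -> back-end pool.
ROUTES = {"/api/": "api-pool", "/static/": "cdn-pool"}
DEFAULT_POOL = "web-pool"

def route_request(path):
    """An L7 balancer inspects the HTTP path to pick a pool; L4 cannot."""
    for prefix, pool in ROUTES.items():
        if path.startswith(prefix):
            return pool
    return DEFAULT_POOL
```

This is exactly the kind of decision that requires parsing the HTTP request, which is why L7 balancing costs more per request than L4.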

Session Persistence

Configuring your load balancer for session persistence is one of the more effective optimizations you can make for your web applications. Least Connections works well with configurations that rely on traffic pinning and/or session persistence, so optimizing your setup for this combination can be powerful.

SSL Decryption

Working with encrypted connections like HTTPS is hard, so having your load balancer configured to handle them can come in handy. There are three common configurations: SSL Passthrough, Decryption Only, and Decryption & Re-encryption. SSL Passthrough tends to be the favorite because it requires the least work from the load balancer, but it isn’t always the best choice: L7 load balancing, for example, requires the balancer to inspect (and therefore decrypt) the data.

ScaleArc

If you want a taste of what a load balancer can do, it doesn’t hurt to try out offerings from some of the leading companies bringing load balancing to the forefront. Among these is ScaleArc, database load balancing software that provides continuous availability at high performance levels for mission-critical database systems deployed at scale.

The ScaleArc software appliance is a database load balancer. It enables database administrators to create highly available, scalable — and easy-to-manage, maintain, and migrate — database deployments. ScaleArc works with Microsoft SQL Server and MySQL as an on-premises solution, and in the cloud for corresponding PaaS and DBaaS solutions, including Amazon RDS and Azure SQL.

• Build highly available database environments

• Ensure zero downtime during database maintenance, and reduce risk of unplanned outages by automating failover processes and intelligently redirecting traffic to database replicas

• Effectively balance read and write traffic to improve overall database throughput dramatically

• Consolidate database analytics into a single platform allowing administrators and production support to make more efficient and intelligent decisions, thus saving time and money

• Seamlessly migrate to the cloud and between the platforms with zero downtime.

Try ScaleArc or read our whitepaper to find out if ScaleArc is for you.



Top comments (1)

Andrei Gatej

Thanks for sharing!

I have a question. What happens when the load balancer reaches its limits too?
Assuming it can’t take in more than 1,000 requests at the same time, what happens when there are 3,000 requests at the same time?
I assume there might be an approach where you “vertically scale” the LB, but at some point those limits will be reached as well.

Thanks!