You have to be more specific: why open source? Do you want to modify it? Is there a corporate licensing restriction?
How much traffic must it handle, and how distributed is the app?

For now, the best overall LB is HAProxy. But I presume it will be overkill for your app, so I would suggest running Nginx in Docker in front of 2-3 Docker instances of your app for higher availability.
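As a minimal sketch of that Nginx-in-front-of-Docker setup (the upstream names and ports here are assumptions, not anything from your app):

```nginx
# nginx.conf (http context): round-robin load balancing
# over a few Docker containers running the same app.
upstream app_backend {
    # hypothetical container names on a shared Docker network
    server app1:8080;
    server app2:8080;
    server app3:8080;
}

server {
    listen 80;

    location / {
        proxy_pass http://app_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

If one container dies, Nginx marks that server as unavailable and keeps serving from the others.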

If you do not want to handle the sysadmin (DevOps) side, I suggest using the cloud: Google Cloud has built-in load balancers for whatever service you use, and AWS has 3 solutions.

PS: all of them are easy to set up: one config file, or one web form for the AWS ones.


I had a question: how do you route traffic to your LB instances, especially when they may autoscale? Do you add the public IPs of the instances to the DNS dynamically?

  1. One way to do it is to add their IPs to the DNS.
  2. The other is that when an instance fails, the one that replaces it takes over its IP.

Either way you get the disadvantages of DNS: clients cache the records, and changes take anywhere from 1 minute to 24 hours to propagate around the globe.

You can use a managed LB (AWS, Google Cloud) and they will take care of that. Either way, a single LB instance can handle a lot of traffic, especially if it only proxies the connections.

That's kind of the problem I'm facing.

I wanted to avoid AWS since it costs a lot for our current needs. And having an AWS ELB forward requests to our own HAProxy instances sounds like double the work, since we would still be paying for the AWS ELB.

With the DNS solution, we tried to dynamically register instances with Route 53 as they came up during autoscaling. New instances would add their IP to Route 53, and instances that were shutting down would remove theirs.
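For context, that kind of registration can be done with `aws route53 change-resource-record-sets` and a change batch along these lines (the zone name, record name, IPs, and TTL below are made up; a low TTL reduces, but does not eliminate, client-side caching):

```json
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "lb.example.com",
        "Type": "A",
        "TTL": 60,
        "ResourceRecords": [
          { "Value": "203.0.113.10" },
          { "Value": "203.0.113.11" }
        ]
      }
    }
  ]
}
```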

We then ran into the IP caching problem, so we tried to create a pool of Elastic IPs and attach them to new instances dynamically.... you can see this is getting complicated.

My question is: how would you handle IP assignment to new HAProxy instances? Giving a new instance a new IP is not acceptable, since we would lose a lot of connections from clients that cached the old IP. Any solutions for this?

We can take this offline if you want, but my guess is that if you need more than 2 HAProxy instances and you cannot afford the AWS LB, something is wrong.

From what I've read, each should handle thousands of QPS and should not have uptime problems, so there is no need to add or remove them.

You can register the autoscaled app instances with them using Consul or similar tools.
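A minimal sketch of what that looks like with Consul (the service name, port, and health-check path here are assumptions): each app instance registers itself with a service definition, and HAProxy's backend list can then be rendered from the Consul catalog with consul-template.

```json
{
  "service": {
    "name": "app",
    "port": 8080,
    "check": {
      "http": "http://localhost:8080/health",
      "interval": "10s"
    }
  }
}
```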


Thanks for the detailed information.

But may I know what you mean by "HAProxy will be overkill for my app", and why?


Because it is a more niche technology that solves only the LB problem; it's built to handle a very large amount of traffic (tens of thousands of connections per node). The downsides probably are (I just presume):

  • you will have to install caching and web server solutions behind it
  • if you have a problem with it, it's harder to find a solution, it being less widely used
  • you will probably use it at <1% of its potential

This depends on what you want to use this for.
I am assuming HTTP based Microservices or a Web Portal.

  1. HAProxy (here you can pair it with Corosync for a highly available + disaster-recoverable system)
  2. Nginx (easy to configure, and I think the best fit, based on the assumptions I stated above). Feel free to reach out to me directly.
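For comparison with the Nginx config above, a minimal haproxy.cfg for plain HTTP load balancing looks something like this (the backend addresses are placeholders):

```haproxy
# haproxy.cfg: round-robin HTTP load balancing with health checks
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend http_in
    bind *:80
    default_backend app_servers

backend app_servers
    balance roundrobin
    server app1 10.0.0.11:8080 check
    server app2 10.0.0.12:8080 check
```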

What are your requirements? ;) We actually use HAProxy here too, in front of multiple instances of servers running in Docker, which works reasonably well at that level. The whole infrastructure is exposed through an apache2/mod_proxy installation that essentially does some caching, static content delivery, and reverse-proxying to the various HAProxy'd applications. My personal experience is that HAProxy is rather "friendly" to start with, also because it focuses on the load balancing use case and doesn't do much more. ;)
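A rough sketch of that Apache front layer, assuming mod_proxy, mod_cache, and mod_cache_disk are enabled (the hostnames, paths, and backend address are invented for illustration):

```apache
<VirtualHost *:80>
    ServerName example.com

    # static content served directly by Apache
    DocumentRoot /var/www/static

    # simple disk cache for proxied responses (mod_cache_disk)
    CacheEnable disk /app
    CacheRoot /var/cache/apache2

    # reverse-proxy the application path to the haproxy'd backends
    ProxyPass        /app http://127.0.0.1:8080/
    ProxyPassReverse /app http://127.0.0.1:8080/
</VirtualHost>
```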


I sometimes use a custom http proxy using NodeJS, or just nginx.


Interesting! How high is the traffic on your application?


HAProxy. Though at my job we're moving to kubernetes.


Kubernetes seems to be a widely used solution now.


I personally use nginx for most things. It's pretty simple to copy an existing config and tweak it if you're not already familiar with the options.
