
Mike Sedzielewski


Migrating from Heroku to AWS with Kubernetes and without stopping production

A couple of months ago, we successfully migrated a large part of our infrastructure from Heroku to AWS. Now that the dust (or should I say the cloud) has settled, we’d like to share the main driver behind our decision and how we approached the transfer without stopping the Voucherify API, even for a minute.

Architecture summary

To better understand our reasoning here, let’s take a quick look at what Voucherify is and what the architecture looks like.

Voucherify offers programmable building blocks for building coupon, referral, and loyalty campaigns. It’s basically an API-first platform which devs can use to build complex and personalized promotional campaigns, such as sending a customer an email with a specific coupon code when they join a “premium” segment. It also allows companies to track coupon redemptions to figure out which promotions work best. Lastly, it provides a dashboard for marketers to take the burden of maintaining and monitoring promotions off developers’ shoulders.
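
To give a flavour of that API-first approach, here is a minimal TypeScript sketch of the “premium segment” scenario. The endpoint path, payload shape, auth header, and the sendEmail helper are placeholders for illustration only, not the actual Voucherify API.

```typescript
// Hypothetical sketch (not Voucherify's real API): when a customer enters the
// "premium" segment, request a unique coupon code and email it to them.
// Assumes Node 18+ for the built-in fetch.

const API_BASE = "https://api.example-promotions.io"; // placeholder base URL

async function onSegmentEntered(customerId: string, segment: string): Promise<void> {
  if (segment !== "premium") return;

  // 1. Ask the promotions API for a single-use code tied to this customer.
  const res = await fetch(`${API_BASE}/coupons`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-App-Token": process.env.APP_TOKEN ?? "", // placeholder auth scheme
    },
    body: JSON.stringify({ customer: customerId, discount: { type: "PERCENT", value: 10 } }),
  });
  const coupon = (await res.json()) as { code: string };

  // 2. Deliver it; sendEmail stands in for whatever mailing service is used.
  await sendEmail(customerId, `Welcome aboard! Your personal code: ${coupon.code}`);
}

async function sendEmail(customerId: string, message: string): Promise<void> {
  console.log(`(stub) email to customer ${customerId}: ${message}`);
}
```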

The platform basically consists of 3 components:

  • Core application exposing the API
  • Website serving the dashboard
  • Supporting microservices for non-API related jobs

When it comes to data storage, we employ a Postgres, Mongo, and Redis trio.
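
As a rough illustration of how the core application ties these stores together, here is a minimal TypeScript sketch assuming the common Node.js clients (pg, mongodb, ioredis). The connection strings, table, and cache key names are placeholders, not our production schema.

```typescript
// Illustrative wiring of the Postgres / Mongo / Redis trio behind the core API.
// Connection strings, table, and key names are placeholders.
import { Pool } from "pg";
import { MongoClient } from "mongodb";
import Redis from "ioredis";

const postgres = new Pool({ connectionString: process.env.DATABASE_URL });            // transactional data
const mongo = new MongoClient(process.env.MONGO_URL ?? "mongodb://localhost:27017");  // event-style data
const redis = new Redis(process.env.REDIS_URL ?? "redis://localhost:6379");           // cache and queues

export async function init(): Promise<void> {
  await mongo.connect(); // pg's Pool and ioredis connect lazily on first use
}

export async function getCampaign(id: string): Promise<unknown> {
  // Hot path: try the cache first, fall back to Postgres, then repopulate for 60 s.
  const cached = await redis.get(`campaign:${id}`);
  if (cached) return JSON.parse(cached);

  const { rows } = await postgres.query("SELECT * FROM campaigns WHERE id = $1", [id]);
  if (rows[0]) await redis.set(`campaign:${id}`, JSON.stringify(rows[0]), "EX", 60);
  return rows[0];
}

export async function logEvent(event: Record<string, unknown>): Promise<void> {
  // Append-heavy analytics events land in Mongo.
  await mongo.db("analytics").collection("events").insertOne(event);
}
```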

This is how it looks after the migration:

The load: we serve over 100 customers who send a couple of million API calls monthly, including both regular requests and more resource-intensive ones like bulk imports/exports or syncs with 3rd-party integrations.

Why Heroku in the first place and why did we migrate? Read more on our blog.

Top comments (3)

Rohit Akiwatkar

Good case study!! Was just curious about auto-scaling and failover as you had mentioned in the blog.

Are you not using multi-AZ configuration? Making the database and load balancer available in multiple availability zones would be a good solution for failover protection.

Also, for the microservices running non-API-related jobs you could use AWS Lambda. It would be a cost-effective solution.

Tomasz Sikora

The diagram from the blog doesn't show it, but we do, of course, use multi-AZ for our databases and load balancer.

Although we love Lambda, keep in mind that we came from outside AWS already having a quite nicely scalable and cheap worker solution. True, it is a custom solution, but for now there is no business value in moving that process to Lambda.

Regarding auto-scaling: based on our observations and predictions of our platform's utilization, we handle peaks quite well with only k8s pod auto-scaling (under normal conditions that free resource buffer is used for fast, smooth deployments, which happen quite often). For now, there is no reason to focus on improving that part either.

Zois Pagoulatos

Hey, you have a typo in K8s: it's Kubernetes, not Kubernates!