Sam

Posted on • Originally published at e2e.utopiops.com

Heroku Database VS AWS RDS

A database is the heart of almost any real-world software system, and when it comes to choosing the right platform to host yours, two of the best-known names are AWS RDS and Heroku PostgreSQL.

In this article I briefly compare the key factors of these two offerings without going too deep into the technical details, while hopefully keeping things clear enough for you to draw a conclusion from the facts. I also leave Redis out of the comparison: like many of you, I'm not yet convinced that Redis is a database, and in any case it's a completely separate service in AWS, which I'll cover in another post.

To begin with, let's take a look at the different database offerings the two platforms provide.

Supported engines

| Database | AWS | Heroku |
| --- | --- | --- |
| PostgreSQL | Yes | Yes |
| MySQL | Yes | No |
| MS SQL Server | Yes | No |
| Aurora | Yes | No |
| Oracle | Yes | No |

Winner -> AWS

Considering that Heroku supports only one of these engines, let's just compare the database engine that is provided by both platforms: yup, our lovely elephant DB, PostgreSQL!


Again, let's simplify the comparison and start with a few side-by-side comparisons, demonstrated in tables.

Versions

| Version | AWS | Heroku |
| --- | --- | --- |
| 9.6 | Yes (deprecated) | Yes (deprecated) |
| 10 | Yes | Yes (deprecated) |
| 11 | Yes | Yes |
| 12 | Yes | Yes |
| 13 | Yes | Yes |
| 14 | Yes | No |

Winner -> AWS

Failover

| Feature | AWS | Heroku |
| --- | --- | --- |
| URL change* | No (good -> seamless) | Yes (bad -> restarts applications) |
| Supported plans | All | Only Premium, Private and Shield tier plans |
| Read-only support** | Yes (if more than two AZs selected) | Yes (with extra setup) |

*URL change refers to whether the URL/endpoint of the database changes after a failover.

**Read-only support is about whether the failover/follower instances can be used as read-only databases.
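
To make the difference concrete, here is a rough sketch of what triggering a failover looks like on each platform from the CLI; the instance, app and attachment names (`my-db`, `my-app`, `HEROKU_POSTGRESQL_SILVER_URL`) are hypothetical:

```sh
# AWS: with Multi-AZ enabled, a forced failover keeps the same endpoint,
# so applications reconnect without any configuration change.
aws rds reboot-db-instance \
  --db-instance-identifier my-db \
  --force-failover

# Heroku: promoting a follower changes DATABASE_URL,
# which triggers an application restart.
heroku pg:promote HEROKU_POSTGRESQL_SILVER_URL --app my-app
```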

Winner -> AWS

Backup

Although you might think you won't need backup and restore functionality that often, you can find yourself restoring production databases into development environments many times. Also, RPO, RTO and your disaster recovery strategy in general are directly impacted by the effectiveness and simplicity of the backup/restore functionality available to you.

| Feature | AWS | Heroku |
| --- | --- | --- |
| Point-in-time recovery | Yes (by default) | Yes (conditionally) |
| Supported plans | All | Only Standard, Premium and Private tier plans |
| Rollback history | 7-35 days (no limitations) | 4-7 days (depending on your plan) |
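
As a sketch of what point-in-time recovery looks like in practice (instance names, plan and timestamps below are hypothetical):

```sh
# AWS: restore a new instance to any point within the retention window.
aws rds restore-db-instance-to-point-in-time \
  --source-db-instance-identifier my-db \
  --target-db-instance-identifier my-db-restored \
  --restore-time 2022-03-01T08:45:00Z

# Heroku: create a "rollback" database at a given point in time
# (available on the eligible plans mentioned above).
heroku addons:create heroku-postgresql:standard-0 \
  --rollback DATABASE_URL \
  --to '2022-03-01 08:45 UTC' \
  --app my-app
```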

Winner -> AWS

Pricing

This one can be a bit misleading. Here we're comparing only the price of the service itself, exclusive of labour and/or setup and maintenance costs. The list is not exhaustive either, but it's a good indicator of the actual difference between the two platforms.

| Memory size | AWS | Heroku |
| --- | --- | --- |
| 4 GB | $53.19 - $59.60 (without/with Multi-AZ storage) | $50 - $350 (based on features) |
| 8 GB | $118.50 - $144.10 (without/with Multi-AZ storage) | $200 - $750 (based on features) |
| 16 GB* | $277.00 - $288.20 (without/with Multi-AZ storage) | $400 - $1200 (based on features) |

*AWS's 16 GB memory was the closest match to Heroku's 15 GB memory.

It's worth mentioning that the price range in the AWS column only reflects the extra storage cost when the Multi-AZ option is enabled; otherwise, irrespective of price and size, RDS instances come with all the features.

Winner -> AWS

Simplicity of use and operational overhead

This section is more of a qualitative analysis than a quantitative comparison.

Provisioning a database
Heroku gives you far fewer options, meaning far fewer decisions to make and faster progress, knowing that most of the decisions Heroku takes for you are already decent ones.
AWS, on the other hand, gives you heaps of options, from choosing the VPC and subnets to setting the security groups, parameter groups and option groups. While this might sound appealing at first, it comes at the price of increased complexity: a lot more time to spend, and many more ways to make the wrong decisions.
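
To illustrate the gap in the number of decisions, here is a minimal provisioning sketch for each platform; the app name, instance identifiers, security group and subnet group below are all hypothetical:

```sh
# Heroku: one command, essentially one decision (the plan).
heroku addons:create heroku-postgresql:standard-0 --app my-app

# AWS: every flag below is a decision you have to make yourself.
aws rds create-db-instance \
  --db-instance-identifier my-db \
  --engine postgres \
  --engine-version 13.4 \
  --db-instance-class db.t3.medium \
  --allocated-storage 20 \
  --master-username postgres \
  --master-user-password 'change-me' \
  --vpc-security-group-ids sg-0123456789abcdef0 \
  --db-subnet-group-name my-db-subnet-group \
  --multi-az
```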

Backup and Restore
Heroku has simple options to create backups and restore databases, but again doesn't give you much control, even over how long you want to retain the backups.
AWS, on the contrary, gives you plenty of options and decision points, but it requires a good understanding of how RDS works; even knowing that your RDS snapshots are stored on S3 helps you understand the logic behind some of its behaviours.
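
A rough sketch of the day-to-day commands, with hypothetical names (`my-app`, `my-db`, backup id `b101`):

```sh
# Heroku: capture a logical backup and restore it in two commands.
heroku pg:backups:capture --app my-app
heroku pg:backups:restore b101 DATABASE_URL --app my-app

# AWS: manual snapshots give you full control over retention,
# but restoring always creates a brand-new instance.
aws rds create-db-snapshot \
  --db-instance-identifier my-db \
  --db-snapshot-identifier before-migration
aws rds restore-db-instance-from-db-snapshot \
  --db-instance-identifier my-db-copy \
  --db-snapshot-identifier before-migration
```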

Monitoring
Heroku provides very limited monitoring options out of the box, which can be enough in many cases, but as your use cases get more complicated they definitely lack many of the metrics that help you troubleshoot the system.
AWS, on the other hand, gives you the ever-growing set of CloudWatch metrics, which can help you fine-tune even the applications using the database, based on the behaviour that becomes clear from the extensive metrics available to you. As you can guess, you again have to navigate different parts of the AWS console to get the metrics you want at the level of detail you like, although some metrics are available on the RDS page with limited options.
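
For example, compare how you would check on database health from the command line (instance and app names are hypothetical):

```sh
# Heroku: a quick, opinionated health overview.
heroku pg:info --app my-app
heroku pg:diagnose --app my-app

# AWS: pull any CloudWatch metric you like, here CPU utilisation
# averaged over 5-minute periods.
aws cloudwatch get-metric-statistics \
  --namespace AWS/RDS \
  --metric-name CPUUtilization \
  --dimensions Name=DBInstanceIdentifier,Value=my-db \
  --start-time 2022-03-01T00:00:00Z \
  --end-time 2022-03-01T06:00:00Z \
  --period 300 \
  --statistics Average
```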

On top of all these differences, setting up read-replica instances, failover and many optimisations on RDS are far more involved than their Heroku counterparts, meaning you have to shift focus from building the product and providing a better service towards operational activities, as the sketch below shows.
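
With hypothetical identifiers: a Heroku follower is a single command, while an RDS read replica is a whole new instance whose endpoint you then have to wire into your application yourself:

```sh
# Heroku: create a follower that stays in sync with the primary.
heroku addons:create heroku-postgresql:standard-0 \
  --follow DATABASE_URL --app my-app

# AWS: create a read replica, then point read traffic at its endpoint.
aws rds create-db-instance-read-replica \
  --db-instance-identifier my-db-replica \
  --source-db-instance-identifier my-db
```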

Winner -> Heroku

Choosing AWS as your platform, while it might look like buying directly from the factory, comes at the cost of a lot of DIY, which can heavily impact where you spend your time (product vs. operations) and can even impact your hiring.

It's very typical for teams that run their solution on AWS to have multiple DevOps engineers (in larger teams) or to hire developers specifically with AWS experience to handle operations alongside their coding tasks (in smaller teams). Obviously, DevOps engineers, permanent or on contract, in most cases cost far more than the savings you were planning on, if that was the reason you chose AWS over Heroku. And asking developers to manage AWS means a lot of distraction, sub-optimal infrastructure utilisation, and in most cases less skilled developers (you're hiring someone who knows AWS, and deep down you know you're willing to sacrifice development skills to some extent).

Overall, if in the middle of the article you were completely convinced that picking AWS is a no-brainer, I believe you might now factor more real-life parameters into your assessment.

The best of both worlds

The simplicity of Heroku means less operational cost and more agility in the development of the product, with a developer-first experience and more focus on shipping a quality product that directly brings value and generates revenue.

The high-quality service that AWS provides has a much lower cost for the raw material (yup, I'm calling RDS raw material compared to Heroku; please forget the pre-historic datacenter era!), along with excellent integrations with the rest of your platform and advanced options to make the solution far more secure irrespective of the instance size.

Utopiops is a solution that gives you the simplicity Heroku provides, on AWS, meaning you can now have the best of both worlds. Not only that: since the solution is provided on your own AWS account, lots of advanced configurations are included without your involvement. You get the extremely popular RDS with the kind of setup the most experienced DevOps engineers and solution architects would provide, yet simple enough for any developer to use.

What does it mean for DevSecOps engineers?

While Utopiops is starting a new era that we call no-ops and low-ops, it doesn't mean the DevOps role will become redundant, at least not in the near future.

Companies' operational expenses will certainly decrease in the future thanks to platforms and services like Utopiops, which at the same time help teams achieve even more.

Auto-tuning of parameters, settings and scaling, and even diagnosis, are a few of the things you can expect to see in this era, meaning the DevOps role will shift more and more towards solution-architect activities, and experts in the field will help teams achieve more by designing more complex and efficient solutions without having to go through unnecessary, boring, low-level grunt work.

Top comments (23)

Filip Oščádal

don't run on AWS!

devs running AWS support slavery

Sam

It's very accurate. That's the whole idea behind Utopiops.

Using AWS is like buying from the factory, but it's extremely costly and time-consuming to manage it. Utopiops is all about fixing that issue.

Filip Oščádal

My story is like this:

1) buy a dedicated server (40 Eur/m.)
2) setup with Ubuntu or Debian
3) git clone and run setup scripts
4) deploy containers
5) setup backup and monitoring (20 Eur/m.)

It's the most reliable and cheapest solution ever.
No need for 3rd party devops.

Sam

Do you implement all of this on that single server?

Is it reliable at all, let alone the most reliable? Is it highly available? Any load-balancing by any chance?

What are the RTO and RPO?
What container orchestrator do you use? Just please don't tell me docker swarm.

And on top of that, is your time less valuable than $9 per month?

Filip Oščádal

6 CPU AMD Ryzen (12 threads) / 64 GB RAM / 500 GB NVMe RAID1

1) reliable 100 % so far (10 months), better than DigitalOcean or GoDaddy
2) I don't need load-balancing - average load is 0.04, no need for a clone
3) recovery of a failed container under 2 minutes or max. 10 minutes if done manually from a daily backup
4) customers pay $4 a month so they don't need much (it's a managed hosting, so we take care of their needs, setup changes, plugins, debugging problems)
5) it's not a professional business, it's a hobby
6) my time is my time, I prefer to know what's happening on the server
7) I have my own orchestration, I've been a programmer for fucking 35 years

Filip Oščádal

BTW I take care of customers' solutions in under 1 hour a month, it works on its own, everything is automated (I <3 cron)

Sam

Well, number 5 tells it all.

And regarding number 7, actually your time is then 10x more valuable because you can make money out of each second of it with your experience.

But again, if it works for you it's good.

Filip Oščádal

no, my time is not valuable, I am nearly dead

Lucca Biagi de Paula Prado

how?

Filip Oščádal

There have been many complaints from employees at Amazon's fulfillment centers. Workers alleged that they are given back-breaking tasks in the warehouses. They also vent their dismay over intrusive surveillance technologies, including automated tracking systems and cameras that monitor their every move.

Filip Oščádal

get a DigitalOcean VPS or a dedicated server from Hetzner
run Ubuntu and Docker, it's much cheaper anyway (1 beer a day)

Sina

this is a solid and detailed comparison. thank you!

Eduardo Cookie Lifter

honestly, avoid comparisons

Sam

I'm curious to know why you think so.

mehdikamrani

interesting, thanks

Filip Oščádal

```sh
docker run --detach --name some-mariadb \
  --env MARIADB_USER=example-user \
  --env MARIADB_PASSWORD=my_cool_secret \
  --env MARIADB_ROOT_PASSWORD=my-secret-pw \
  mariadb:latest
```

is it too hard??

Sam

I'm not sure if you really mean this can be used in production, and I really doubt that's your intention.

Of course it's a great idea to use a simple Docker container in your local dev. We always have a docker-compose file in each repository as well, along the lines of the sketch below.
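
For illustration, a minimal local-dev setup of that kind might look like this (the image tag, credentials and volume name are just placeholders):

```sh
# Write a throwaway compose file for a local PostgreSQL and start it.
cat > docker-compose.yml <<'EOF'
services:
  db:
    image: postgres:14
    environment:
      POSTGRES_PASSWORD: example
    ports:
      - "5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
EOF
docker-compose up -d
```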

Filip Oščádal

this works just fine:
docker-compose

Filip Oščádal

Actually we run many simple containers in production, using docker-compose.
Where's the problem?

Speed? nope! Reliability? nope! Security? nope!

So if you find a problem here I am all ears...

Sam

I just say it's good if it works for you!

pbl.gllgs

In my opinion you'll still need to manage a container, the database, backups, disk space... AWS RDS > relational database services in the cloud.

Marco Colli

I agree that Heroku is simple, but extremely expensive. We save thousands of dollars per month by using 10 bare servers directly on DigitalOcean: it's reliable but you need good DevOps.

Now I am building Cuber and we plan to move to it: github.com/cuber-cloud/cuber-gem. It makes the deployment of apps on Kubernetes extremely simple, and you can choose any cloud provider.

Other projects related to your idea are Dokku, Kuby (for Ruby), Coolify and probably many others (?).

Sam

Best of luck, Marco. We have to make the cloud a lot more user-friendly.
We're trying our best and have gained amazing feedback so far.