Ryan Batchelder for AWS Community Builders

Originally published at rdbatch.dev

AWS ECS Managed Instances

If you've been particularly focused on the Kubernetes world, you may have missed AWS continuing to roll out new features for its (imo) excellent ECS service, most recently ECS Managed Instances. Let's take a look at who this feature is for and why you may (or may not) want to use it.

Who is this for?

At the highest level, ECS Managed Instances offloads the management of EC2 resources in your clusters to AWS. This might sound like the value proposition of Fargate, but Managed Instances takes a different, complementary approach.

Figure: the serverless spectrum, with serverful compute on the left starting at EC2 and moving toward serverless Lambda on the right. AWS ECS Managed Instances sits in the middle.

Fargate brings the serverless experience to container orchestration - running your workloads on compute that's entirely outside of your purview and charging you only for the compute time your containers use. As a trade-off, you have effectively no control over the compute that's provisioned behind the scenes to run your containers. This is totally fine for many general-purpose workloads that don't have particular hardware requirements and aren't sensitive to where they're running.

But what if you did need specific hardware...?

Unfortunately, not all web services are simple apps serving up memes and cat videos. Sometimes you might need to run a workload that needs access to GPUs, high-throughput networking, or really fast local storage.

Historically, the only answer to this was to manage your own fleet of EC2 instances and bring them into an ECS cluster. While this gives you all of the nice benefits of the ECS container orchestration platform, it comes with all the downsides of managing your own infrastructure: OS configuration, patching, security, etc. are all your responsibility when using EC2 directly to back ECS clusters. Capacity Providers were eventually released to make scaling clusters easier, but they still relied on you maintaining Launch Templates for the compute that they... provided.
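To make the "roll your own" approach concrete, here's a minimal CDK sketch (TypeScript) of an EC2-backed cluster with an Auto Scaling Group capacity provider. The constructs are standard aws-cdk-lib, but the instance type, AMI, and capacity numbers are placeholder assumptions rather than recommendations.

```typescript
import { Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as autoscaling from 'aws-cdk-lib/aws-autoscaling';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as ecs from 'aws-cdk-lib/aws-ecs';

export class SelfManagedEcsStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    const vpc = new ec2.Vpc(this, 'Vpc', { maxAzs: 2 });
    const cluster = new ecs.Cluster(this, 'Cluster', { vpc });

    // You own this Auto Scaling Group: the AMI, patching cadence, instance
    // sizing, and the launch template behind it are all your problem.
    const asg = new autoscaling.AutoScalingGroup(this, 'ClusterAsg', {
      vpc,
      instanceType: new ec2.InstanceType('m7i.large'), // placeholder choice
      machineImage: ecs.EcsOptimizedImage.amazonLinux2(),
      minCapacity: 1,
      maxCapacity: 6,
    });

    // The capacity provider lets ECS scale the ASG for you, but the
    // instances it launches are still yours to maintain.
    const capacityProvider = new ecs.AsgCapacityProvider(this, 'AsgCapacityProvider', {
      autoScalingGroup: asg,
    });
    cluster.addAsgCapacityProvider(capacityProvider);
  }
}
```

Everything inside that Auto Scaling Group is yours to look after, which is exactly the burden Managed Instances aims to take off your plate.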

I don't need specific hardware, should I just use Fargate?

Maybe! Fargate is excellent at providing a way to run containers without having to really think about the compute underneath, but that management doesn't come for free. Fargate tends to be more expensive per vCPU and GB of memory than the equivalent capacity on EC2, with the benefit being that you get to save yourself (or your ops/infra teams) time not worrying about management. For a small number of containers this price premium is probably a welcome trade-off, but the math doesn't always work out as you scale up. Being able to bin-pack a number of containers onto one instance can make EC2-backed ECS clusters quite cost-efficient, and Fargate generally can't keep up on this front, at least for containers that need to run 24/7.
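To see why bin-packing changes the math, here's a back-of-the-envelope sketch in TypeScript. All of the prices and task sizes are placeholders made up for illustration (check current Fargate and EC2 pricing for your region); the point is the shape of the comparison, not the exact numbers.

```typescript
// Back-of-the-envelope comparison: 8 always-on containers, each needing
// 0.5 vCPU and 1 GiB of memory, running ~730 hours a month.
// All prices below are PLACEHOLDERS - substitute current regional pricing.
const FARGATE_PER_VCPU_HOUR = 0.04;  // placeholder $/vCPU-hour
const FARGATE_PER_GB_HOUR = 0.0044;  // placeholder $/GiB-hour
const EC2_INSTANCE_PER_HOUR = 0.17;  // placeholder $/hour for a 4 vCPU / 16 GiB instance

const HOURS_PER_MONTH = 730;
const taskCount = 8;
const taskVcpu = 0.5;
const taskMemoryGb = 1;

// Fargate bills per task, so cost scales linearly with task count.
const fargateMonthly =
  taskCount * HOURS_PER_MONTH *
  (taskVcpu * FARGATE_PER_VCPU_HOUR + taskMemoryGb * FARGATE_PER_GB_HOUR);

// On EC2 you pay per instance: these 8 small tasks bin-pack onto a single
// 4 vCPU / 16 GiB instance (ignoring agent/OS overhead for simplicity).
const ec2Monthly = 1 * HOURS_PER_MONTH * EC2_INSTANCE_PER_HOUR;

console.log({ fargateMonthly, ec2Monthly });
```

With these placeholder numbers the single bin-packed instance comes out cheaper, and the gap widens as you add more always-on tasks that can share the same host.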

Fargate has a few other drawbacks as well:

  • No access to the underlying host means you can't run daemon tasks in the background. This means you need to configure things like log or metric collection per-task rather than having it run at the instance level like you can with EC2-backed clusters (see the sketch after this list).
  • Privileged containers aren't supported on Fargate.
  • Fargate compute isn't always on the latest generation hardware. A vCPU on an m5 instance has very different performance characteristics from an m7i or m7a vCPU, and Fargate can schedule your tasks to run on any sort of compute that AWS chooses.
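To illustrate the daemon point above, here's a minimal CDK sketch (TypeScript) of an instance-level agent scheduled as an ECS daemon on an EC2-backed cluster. The container image and names are placeholders; the relevant part is `daemon: true`, which has no Fargate equivalent.

```typescript
import { Construct } from 'constructs';
import * as ecs from 'aws-cdk-lib/aws-ecs';

// Run one copy of a log/metric agent on every EC2 container instance.
// `cluster` is an existing EC2-backed ecs.Cluster; the image is a placeholder.
export function addInstanceLevelAgent(scope: Construct, cluster: ecs.ICluster): ecs.Ec2Service {
  const taskDef = new ecs.Ec2TaskDefinition(scope, 'AgentTask');
  taskDef.addContainer('agent', {
    image: ecs.ContainerImage.fromRegistry('public.ecr.aws/example/log-agent:latest'), // placeholder image
    memoryReservationMiB: 128,
    logging: ecs.LogDrivers.awsLogs({ streamPrefix: 'instance-agent' }),
  });

  // daemon: true schedules exactly one task per container instance,
  // the scheduling mode that Fargate doesn't offer.
  return new ecs.Ec2Service(scope, 'AgentDaemon', {
    cluster,
    taskDefinition: taskDef,
    daemon: true,
  });
}
```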

The Best of Both Worlds - ECS Managed Instances

ECS Managed Instances slides further down the serverless spectrum and solves the problem of having to manage compute whenever your workloads call for specific hardware, by letting you offload that management to AWS. On top of that, the instances provisioned and managed on your behalf are still regular EC2 instances joined to your cluster. This means they can run as many containers as reasonably fit, dividing the cost between multiple workloads and likely saving you money compared to Fargate. Layer on the fact that managed instances take advantage of all your existing EC2 reservations and savings plans, and you can potentially be looking at significant savings.

It's important to note that the management ECS Managed Instances provides isn't free. There is an additional cost overhead on top of the EC2 sticker price to cover it, but all in all it's still cheaper than Fargate. How much cheaper? For most scenarios I checked, only about 3% at face value, but given that EC2 savings plans can offer stronger discounts than Fargate's, that difference can grow significantly depending on your particular scenario. As part of this management fee, you get security patching every 14 days, and behind the scenes ECS will still try to cost-optimize your compute by consolidating containers onto underutilized instances.

Where can I learn more?

I'd highly recommend taking a read of the official AWS launch announcement first. Thankfully this release came with CloudFormation and CDK support from day one, so you can start playing with it right away via infrastructure-as-code.
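As a hedged example of what that wiring can look like, here's a CDK sketch (TypeScript) that points a service at a capacity provider by name. It assumes a Managed Instances capacity provider called `managed-instances-cp` (a placeholder name) has already been created and attached to the cluster per the launch announcement; whether you consume it through `Ec2Service` or a newer dedicated construct is worth confirming against the docs, since the capacity provider strategy shown here is the long-standing, stable part.

```typescript
import { Construct } from 'constructs';
import * as ecs from 'aws-cdk-lib/aws-ecs';

// Point an ECS service at a Managed Instances capacity provider by name.
// 'managed-instances-cp' is a placeholder; the capacity provider is assumed
// to already exist and be attached to the cluster.
export function managedInstancesService(
  scope: Construct,
  cluster: ecs.ICluster,
  taskDefinition: ecs.TaskDefinition,
): ecs.Ec2Service {
  return new ecs.Ec2Service(scope, 'ApiService', {
    cluster,
    taskDefinition,
    desiredCount: 2,
    capacityProviderStrategies: [
      { capacityProvider: 'managed-instances-cp', weight: 1 },
    ],
  });
}
```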
