Deploy Rails in Amazon ECS: Part 4 - Create an ECS Cluster

Raphael Jambalos ・ 7 min read

This is the fourth part of the Deploy Rails in Amazon ECS post. It's part of a broader series called More than 'hello world' in Docker. The series will help you ready your app: from setting it up locally to deploying it as a production-grade workload in AWS.

9 | Create an ECS Cluster

An ECS cluster is a collection of tasks and services. When using ECS-EC2 services, a cluster is also a collection of EC2 instances. Your containers run on these EC2 instances. Check out this post for a more thorough explanation.
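As a rough mental model (illustrative Ruby only, not the AWS API), you can picture an ECS-EC2 cluster as a container for services, which in turn manage tasks running on the cluster's EC2 instances:

```ruby
# Illustrative only: a toy model of the ECS-EC2 hierarchy, not the AWS API.
Task    = Struct.new(:containers)
Service = Struct.new(:name, :desired_count, :tasks)
Cluster = Struct.new(:name, :services, :ec2_instances)

cluster = Cluster.new(
  "my-simple-ruby-app",
  [Service.new("web", 1, [Task.new(["rails"])])],
  ["i-0123456789abcdef0"] # hypothetical EC2 instance id
)

cluster.services.first.desired_count # => 1
```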

(9.0) On the services tab, search for EC2 and click it. Then, click Key Pairs on the left-hand side menu. Click "Create key pair" and enter the name "rails-docker". This downloads a rails-docker.pem file to your local machine. We are creating a key pair because we want to be able to access the EC2 instances that our cluster will create later on. Without a key pair, those EC2 instances won't be accessible via SSH.

(9.1) On the services tab, search for ECS and click it. Then, click "Create Cluster". On the next page, choose EC2 Linux + Networking.

(9.2) On the next page, add my-simple-ruby-app as the cluster name.
Change the EC2 instance type to m3.medium, and set the key pair to rails-docker (or whatever you named it in step 9.0). Do note that using m3.medium will incur fees on your AWS account. If you are following along with the tutorial and made your own simple Rails app from scratch, a t2.micro instance type would suffice; t2.micro is the only instance type under the free tier. However, if you are deploying an app with more features and gems, I suggest you try m3.medium first before experimenting with t2.micro.

(9.3) Scroll down the page. In the network section, make sure you are using the same VPC as the one you used in 6.2. In the subnets field, add all the subnets inside the VPC.

(9.4) Scroll further down. Tick Enable Container Insights so ECS will collect CPU and memory utilization data from your containers. Then, click "Create". You should see something like this:

When everything is checked, click "View Cluster".

10 | Create an ECS Service

An ECS service is responsible for keeping a set number of tasks up and running at all times. A task is a collection of containers that must run together. To learn more about services, click here.
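To make "keeping a set number of tasks running" concrete, here is a toy sketch in Ruby of the reconciliation an ECS service performs. This is not the real ECS scheduler, just the core idea:

```ruby
# Toy sketch: compare the desired count against the running tasks and
# decide how many tasks to start or stop. Not the real ECS scheduler.
def reconcile(desired_count, running_tasks)
  delta = desired_count - running_tasks.size
  return { action: :start, count: delta } if delta.positive?
  return { action: :stop, count: -delta } if delta.negative?
  { action: :none, count: 0 }
end

reconcile(2, ["task-a"])           # => { action: :start, count: 1 }
reconcile(1, ["task-a", "task-b"]) # => { action: :stop, count: 1 }
```

Whenever a task dies (or there are too many), the service nudges the cluster back toward the desired count, which is why a crashing container keeps getting restarted.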

(10.1) On the cluster page, go to the Services tab and click "Create".

(10.2) Then, choose the EC2 launch type. Choose the Task Definition Family of docker-rails-app, and its latest revision (which should be 1 if you're doing this for the first time).

(10.3) On the Configure Network section, tick "Application Load Balancer". Then, choose the load balancer we created in Section 8, and add the container_name:port of web:0:8080. The host port of 0 tells ECS to use dynamic port mapping: each task is assigned a random high-numbered host port instead of a fixed one.


(10.4) Untick "Enable service discovery integration" and click Next. On the next page, choose "Do not adjust the service's desired count", then click Next.

(10.5) Review your ECS Service. It should look something like this.

Then click, "Create Service". You should see something like this:

11 | Setting up Security Groups

We have created all the components necessary for our ECS setup. The problem is that they currently exist in isolation: there is no way for them to communicate with one another, because the security groups of our components aren't set up to allow it. A security group is a set of rules for incoming and outgoing traffic; it specifies which resources can communicate with a specific set of resources. For instance, the security group for the ECS cluster specifies which resources can communicate with the cluster.

We have to set up the security group of our ECS cluster to be able to accept SSH traffic (so we can connect to the container via SSH) and accept traffic from all ports from the load balancer (to enable dynamic port mapping). We also have to set up the security group of the database we created in Section 6 to be able to accept traffic from the ECS cluster.
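The rules we are about to add can be modeled like this. This is a simplified Ruby sketch of security-group matching; real security groups match sources by CIDR block or security-group id, and the names here are assumptions based on the steps below:

```ruby
# Simplified model of the inbound rules for the ECS cluster's security group.
Rule = Struct.new(:protocol, :port_range, :source)

ecs_inbound = [
  Rule.new(:tcp, 22..22, "0.0.0.0/0"),            # SSH from anywhere
  Rule.new(:tcp, 0..65535, "rails-docker-alb-sg") # all TCP from the ALB's SG
]

def allowed?(rules, protocol, port, source)
  rules.any? do |r|
    r.protocol == protocol && r.port_range.cover?(port) && r.source == source
  end
end

# Dynamic port mapping assigns high host ports, so the ALB rule must cover them:
allowed?(ecs_inbound, :tcp, 32768, "rails-docker-alb-sg") # => true
allowed?(ecs_inbound, :tcp, 5432, "0.0.0.0/0")            # => false
```

The wide 0-65535 range for the ALB is what makes dynamic port mapping work: we can't know in advance which host port a task will get.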

(11.1) On the services tab, search for ECS and click it. Then, click "Clusters", and then find the cluster we just created in Section 9. Then, click on the "ECS Instances" tab. This should list the EC2 instances that have been created for the cluster. Click the EC2 instance.

(11.2) This should direct you to the EC2 page. Click on the instance and on the details pane below, find the security group, and click on it. Take note of this security group. This security group defines which resources can communicate with our ECS cluster.

(11.3) This should direct you to the EC2 Security Group page. Click the "inbound" tab, and then, click "Edit". Add a rule allowing SSH traffic from anywhere.

Add another rule allowing all TCP traffic from the security group we created for the ALB. The security group's name is rails-docker-alb-sg (or whatever name you entered in Step 8.3).

(11.4) On the services tab, search for RDS and click it. On the left-hand side menu, click "Databases". Then, find and click the database you created in Step 6.

(11.5) On the Connectivity & Security tab, click the security group on the VPC Security Groups. This security group defines which resources can communicate with the database.

(11.6) This should redirect you to the EC2 Security Group page for this particular security group. Click on the "Inbound" tab and click "Edit". Add a rule allowing "PostgreSQL" traffic (port 5432) from the security group you took note of in step 11.2.

(11.7) Next, we repeat step 11.1 to go to the EC2 instance created for the cluster. Then, we click "Connect". This should tell us specific instructions on how to connect to our instance.

On your local terminal, go to the folder where the pem file was downloaded in step 9.0. Then, execute chmod 400 rails-docker.pem to secure the pem file; this restricts the file's permissions so that it is readable only by the current user. Then, run the ssh command shown on the pane, but change root to ec2-user.

For me, the command looks something like this:

ssh -i "rails-docker.pem" ec2-user@ec2-18-139-219-189.ap-southeast-1.compute.amazonaws.com

(11.8) Once you're inside the instance, run docker ps to list the containers that are currently running. Find the container with sample-docker-rails-app:v1.0.0 in its image name, and take note of that container's container_id.

Then enter the container via docker exec -it <<container_id>> /bin/bash, replacing <<container_id>> with the container_id above.

(11.9) Execute the following commands inside. You may be kicked out several times from the container. Repeat from step 11.8 until you have successfully executed the commands below.

rake db:create && rake db:migrate && rake db:seed

12 | Seeing our app for the first time

(12.1) On the services tab, search for EC2 and click it. Find the load balancer we created in Section 8 and click it. Then, copy the DNS name and paste it into another tab in your browser.

(12.2) You should be able to see the working app!

(12.3) If you click the "like" button, though, you'll see an error. That's because we haven't set up Redis and Sidekiq yet.

What's next?

In the next post, we will fix the Sidekiq issue by setting up Redis and a Sidekiq service. It's still in the works, though. It'll be up in a few days. In the meantime, you can follow me here to be updated.

Or, as a challenge, figure out how to make the like button work. Hint: the Redis service is set up as a separate EC2 instance.

Special thanks to Allen, my editor, for helping this post become more coherent. Also, special thanks to tanmmz for pointing out that only the t2.micro instance type is under the EC2 free tier.

I'm happy to take your comments/feedback on this post. Just comment below, or message me!


Hi @jamby1100 ! Thank you for this great post!

I followed it step by step and had issues in the final step! When I connected through SSH and ran docker ps, I got an "amazon/amazon-ecs-agent:latest" container instead of the image with the name I created in the ECS part.
The only difference was that, as I'm migrating from Elastic Beanstalk, I created DB restoring it from a snapshot, the app that I pushed was quite more complex than yours and I added a lot more ENV variables (I am using secrets.yml instead of MASTER_KEY).

Anyway, as I assumed I didn't need to create and migrate the database, I tried to open in the browser the DNS of the Load Balancer and got 503 error.
Any thoughts?

Again, thank you very much for all this series of posts, they are being very helpful!!


Hi Nico,

Thank you for following through the post series. I’m glad you found it helpful.

The amazon-ecs-agent container should always be there. But I think the problem is that your application’s container is not being deployed. There are a number of things you can check:

(1) Check if the instance is detected by Amazon ECS. Go to Clusters > yourcluster, and then go to the ECS Instances tab. If you see your EC2 instance there, this is not the problem.

(2) Check if there is a problem with the Amazon ECS service. Go to Clusters > yourcluster > yourservice. In the Events tab, you should see logs of what happened to the service. The most common problem there is that the CPU and memory of your EC2 instance are not enough for the CPU and memory demands of your containers. If so, you can either change the CPU and memory requirements of your containers (by editing the task definition and deploying that), or change the instance type of the EC2 instances in the fleet by going to Clusters > yourcluster and then the ECS Instances tab.

Kindly let me know if this helped you resolve the issue :D


Hi Raphael! Thank you for you quick response.

My EC2 instance is there (actually, that was the path I followed to get to it and then connect by SSH, which is where I realized I didn't have the image, just the ECS agent).

In the events tab of the service, I can see repeated logs indicating:

  • service myservice has started 1 tasks: task task_id.
  • service myservice deregistered 1 targets in target-group default-target
  • service myservice has begun draining connections on 1 tasks.

I made a new revision of the task definition with 1024 for CPU and memory and then updated the revision in the service, but nothing changed; if I connect to the instance, there's still only one image, the ecs-agent. I don't think it's the EC2 instance, as in EB I'm using a t2.small.

Any thoughts?

Hi Nico,

From your reply, it seems to me that your app fails the health check. I recommend the following:

(i) I think your application has some configuration problems that you might need to address. For this approach, we would look at your application logs.

Go to yourcluster > yourservice, and go to the "Tasks" tab. Inside the tab, find "Task Status:" and then click "Stopped". You will see a list of tasks that have been stopped. Click one of those tasks. You will be redirected to a page with information about your task. Try playing around with this page. You will see a reason why the task was killed. If you don't find anything useful, go to the "Logs" tab. You will see application logs from that specific task (assuming you did steps 7.6-7.7 perfectly).


(ii) If your app is perfectly normal, then I think the load balancer is killing your app even before it has the chance to turn on. For this approach, we would add a grace period.

Go to your service and find the option for the Health Check grace period. If it's zero, turn it to 300s. If it's more than zero, double it.


Kindly let me know if this helps!

Hi Raphael! Thank you again for the help.

So, yes, clearly the tasks are being stopped but I can't understand the reason why. The logs don't give me any information. The only message I see in the task is "Essential container in task exited" and in the logs "Switch to inspect mode".
I tried to deploy a previous version of the app, to make sure it was stable, and still didn't create the image.
I changed the grace period and the image still didn't appear in the EC2 instance.

I think everything points to my app itself malfunctioning, but locally I can run the image just fine, and same with docker-compose.

I don't want to spam so much this thread so if you prefer, we can chat directly. Thank you a lot!

Hi Nico,

Sorry for the delayed reply. It's been a long week at work. I think what you have to do is SSH into the EC2 instance directly. Then, do docker ps and find the container. If it's not there, do docker ps -a to see containers that recently died. Try docker logs <chash> first.

Then, try to revive the container via docker start <chash>, and then you'd be able to go inside the container via docker exec -it <chash> /bin/bash. Then, explore your app. Look for log files that may contain clues as to why your app failed.

If you prefer, you can also send screenshots of your task to me (via PM). [mycluster > myservice > tasks > click on one of the tasks].


Thanks for writing this @jamby1100 . The series of posts is exactly what I'm looking for! We are trying to move to deploying sidekiq with our Rails apps. The next post is the one I really need to use as a reference. Any idea when you will have it posted?


Hey Matthew! I have a rough draft but I’ll have it posted by Sunday :D

A lot of the tutorials out there don’t go all the way in terms of making a setup that is near prod ready. I’m glad you found the series helpful! Let me know if there are things I can improve on or clear up :D


Hi Matthew, publishing the Sidekiq post here as promised: dev.to/jamby1100/deploy-rails-in-a....

Let me know if you need any help, or if you find some of the parts to be unclear.

Goodluck on your deployment. :D


Hi Raphael, thank you for the great tutorial! It helps me a lot.

Just one small thing, could you add a warning message to this:

"Change the EC2 instance type to m3.medium," in part 4: Create cluster

for those who are using the Free Tier, they should be warned, because only the "t2.micro" EC2 instance type is free to use.


Hi tannmz,

Thank you for this advice! I'll add it in the post. So sorry for the late reply! :( Been incredibly busy at work the past few weeks.


Thanks Raphael, I have been struggling with getting ECS running for a few days now and these posts got me from 0 to 100. One thing I stumbled on was that POSTGRESQL_HOST is literally just the host (obvious in hindsight). It's just the host portion of the endpoint: no port included, no DB name, no protocol (which again, why would I have to dictate to Rails to use postgresql:// to talk to a Postgres DB?). Thanks so much!


Hi Daniel! I'm glad you found the article helpful. Thank you for this input. Indeed, I think this can be a common point of confusion, especially for devs coming from other programming languages where the database string is used. I added your feedback to the specific section in Part 3 to make sure people would avoid getting confused. Thanks again! :D


I accessed the container terminal with the command
docker exec -it <> /bin/bash

And as you mentioned, it recreated the container every time I ran the command:
rake db:create && rake db:migrate && rake db:seed

An alternative I tried that worked for me:
docker exec -it <> rake db:create
docker exec -it <> rake db:migrate
docker exec -it <> rake db:seed

It gave errors the first few times, as I didn't have all the configurations for the staging env in place. After fixing them, the above commands worked.