Akingbade Omosebi

🚀 How I Built & Deployed My Portfolio Site With Docker, AWS ECR, ECS Fargate, Terraform & Spacelift

Hey folks,
So as you know, I've been playing with Spacelift, and honestly, I'm starting to enjoy it even more than the traditional approach of handling and deploying infrastructure by hand. That's why I want to share a fun project of mine that I worked on today.

For this project, I decided to mess around and play both roles myself: Frontend Software Developer and DevOps Engineer. I built a portfolio site using HTML, CSS and JavaScript and turned it into a full-on AWS Cloud DevOps project: I containerized it with Docker, pushed it to AWS Elastic Container Registry (ECR), deployed it with AWS Elastic Container Service (ECS), and managed all of it with Terraform and Spacelift.

Before you jump in, here is the architecture diagram explaining the logical flow of the project. You can always refer back to it in case you get lost along the way.

Architecture Diagram

Why I Did This
Originally, I just wanted a simple portfolio site, you know, something to show off my projects and have a personal space online. But I didn't want to stop at just pushing HTML, CSS, and JavaScript to a random static host. I thought: why not make this an opportunity to show real DevOps skills too? Makes sense, right?

So, I took it up a notch — wrapped the site in a Docker container, pushed it to AWS ECR, deployed it on ECS, wrote my whole infra with Terraform, then wired it all up with Spacelift to handle the deployments automatically whenever I push changes to GitHub.

Pre-requisites

But before you dive into something similar and try to accomplish this too (which I encourage you to do), here are some basics I suggest you have ready:

  • A basic front-end project (HTML, CSS, JavaScript) β€” I got a template, and build my frontend code based on the template, then added some extra more stuffs to it.

  • At least have docker installed locally β€” you’ll need it to build your image, which you you have developed, to tag it, and to test it through containers locally at first.

  • An AWS account β€” I use my AWS free tier account, you can use yours too if you are eligible, because you’ll need it for creating ECR repos, ECS clusters, IAM roles, VPCs, all the good and essential stuff.

  • Terraform installed β€” For this project I used Terraform, you can use CloudFormation or others as you desire, but you need an IaC like Terraform to define your infrastructure as code.

  • A Spacelift account connected to your GitHub repo (or any IaC CI/CD tool you prefer), you can register for a free trial.

  • Basic git knowledge β€” After I had finished my work, everything went well, With my brain overexcited from this project, I accidentally typed "git init", rather than "terraform init" which really messed up my nearly perfect workflow between my local machine and remote repo, and that's where my Git skills came in handy.
    I promise, you’ll use git pull, rebase, and probably force push more than once.

Phase 1: Building the Portfolio Website

Building the Site

There was nothing too fancy here, but this is where my Frontend Developer skills came into play: I crafted a nice portfolio using HTML, CSS and JavaScript. I found a clean web template, stripped it down, rewrote parts, tossed in some new sections, and added my own images (yes, I even took a few quick professional portrait shots of myself just for this, talk about dedication to the craft 😌😌). I made sure it was responsive, light, and looked modern enough.

Phase 2: Dockerizing the App

Meet our friends, Docker Desktop and Docker Engine; without them this project wouldn't be possible.

I spun up Docker Desktop to get the Docker Engine running, and if you're on Windows you'll need WSL running to do that. If you don't have Docker Desktop, you can get it from the official website below:

Docker Desktop

Containerizing with Docker

Next, I wrote a simple Dockerfile. The goal was to serve the static site with NGINX inside a container. My Dockerfile was basically something like:

# I will be using the nginx:alpine image as the base image
FROM nginx:alpine

# Copy the contents of the current directory into /usr/share/nginx/html inside the container
COPY . /usr/share/nginx/html

# Expose port 80 from Nginx. This is the port that Nginx will listen on inside the container.
EXPOSE 80

If you don't know how to build Docker images, here is a guide from Docker's official website:
Build and push your first image - docker.com

I also added a .dockerignore file to make sure I didn't accidentally send unnecessary stuff to the build context (like .git folders, local configs, etc.). I had initially built my image on the nginx:latest base, and it came out bulky at 960MB, so I stepped back, switched to nginx:alpine, and listed the things to be ignored so the Docker build wasn't pulling in unnecessary files. The final image came out at 82MB.

Here is the block of code from my .dockerignore file:

# Ignore Git metadata
.git

# Ignore GitHub workflows and config
.github

# Ignore Terraform infrastructure code
terraform

# Ignore documentation and config files not needed in container
README.md
Dockerfile
.dockerignore
workflows

# Optional: ignore logs and env files, if any
*.log
*.env

Then I ran docker build and docker run locally to make sure it actually worked. Seeing my site pop up locally inside a container felt satisfying.

To build, use one of the following commands:

docker build --tag <image-name>:<tag> .

or

docker build -t <image-name>:<tag> .

or

docker build --tag my_portfolio:latest . # the dot at the end is extremely important, and make sure you're inside your project directory.

Docker Build

GitBash:> my_portfolio latest 5481289d0f89 8 hours ago 82.2MB
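To test it locally before touching AWS, I ran the image and mapped a host port to the container's port 80. A minimal run command looks like this (the 8080 host port and the container name are just examples, pick whatever suits you):

docker run --rm -d -p 8080:80 --name my_portfolio my_portfolio:latest
# then open http://localhost:8080 in your browser to see the site served by Nginx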

Docker host running locally:

Local Host Docker App

While that was cute, it wasn't the main objective or end goal. The true goal was to make it run on something more reachable, such as AWS Elastic Container Service or Elastic Kubernetes Service. So I had to press on.

Before I could run any of these services on AWS, I needed some supporting pieces in place, such as a VPC, Security Groups, IAM and ECR.

VPC (with Subnets & IGW)

# My VPC network
resource "aws_vpc" "my_vpc" {
  cidr_block       = "10.0.0.0/16"
  instance_tenancy = "default"

  tags = {
    Name        = "my-vpc"
    Project     = "my_portfolio"
    Environment = "dev"
  }
}

# My 3 subnets
resource "aws_subnet" "subnet-1" {
  vpc_id            = aws_vpc.my_vpc.id
  cidr_block        = "10.0.1.0/24"
  availability_zone = "eu-central-1a" # my zone a subnet

  tags = {
    Name        = "subnet-1a"
    Project     = "my_portfolio"
    Environment = "dev"
  }
}

resource "aws_subnet" "subnet-2" {
  vpc_id            = aws_vpc.my_vpc.id
  cidr_block        = "10.0.2.0/24"
  availability_zone = "eu-central-1b" # my zone b subnet

  tags = {
    Name        = "subnet-1b"
    Project     = "my_portfolio"
    Environment = "dev"
  }
}

resource "aws_subnet" "subnet-3" {
  vpc_id            = aws_vpc.my_vpc.id
  cidr_block        = "10.0.3.0/24"
  availability_zone = "eu-central-1c" # my zone c subnet

  tags = {
    Name        = "subnet-1c"
    Project     = "my_portfolio"
    Environment = "dev"
  }
}

# My route table, so that my subnets can at least access the internet
resource "aws_route_table" "my_route_table" {
  vpc_id = aws_vpc.my_vpc.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.my_igw.id
  }

  tags = {
    Name        = "my-route-table"
    Project     = "my_portfolio"
    Environment = "dev"
  }
}

# My subnet associations to the route table, selecting the subnets I want to have access to the internet.
resource "aws_route_table_association" "subnet_associations" {
  count          = 3
  subnet_id      = [aws_subnet.subnet-1.id, aws_subnet.subnet-2.id, aws_subnet.subnet-3.id][count.index]
  route_table_id = aws_route_table.my_route_table.id
}

# My internet gateway
resource "aws_internet_gateway" "my_igw" {
  vpc_id = aws_vpc.my_vpc.id

  tags = {
    Name        = "my-igw"
    Project     = "my_portfolio"
    Environment = "dev"
  }
}

Security Group

# My Security group mainly for my ECS tasks
resource "aws_security_group" "ecs_tasks_sg" {
  name        = "ecs-tasks-security-group"
  description = "Security group for ECS tasks"
  vpc_id      = aws_vpc.my_vpc.id

  # SSH access (just in case I need it for debugging)
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"] # Normally, restrict to your IP for security, but open for now
    description = "SSH access"
  }

  # Allow HTTP traffic (port 80)
  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
    description = "HTTP access"
  }

  # Allow HTTPS traffic (port 443)
  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
    description = "HTTPS access"
  }

  # Allow My_portfolio application port
  ingress {
    from_port   = 5000
    to_port     = 5000
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
    description = "Application port"
  }

  # Allow all outbound traffic
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
    description = "All outbound traffic"
  }

  tags = {
    Name        = "ecs-tasks-sg"
    Project     = "my_portfolio"
    Environment = "dev"
  }
}

ECR

# My ECR repository for my_portfolio project
resource "aws_ecr_repository" "my_portfolio" {
  name                 = "my_portfolio"
  image_tag_mutability = "MUTABLE" # or "IMMUTABLE" based on your requirement
  image_scanning_configuration {
    scan_on_push = true
  }

  tags = {
    Name        = "my-portfolio-ecr"
    Project     = "my_portfolio"
    Environment = "dev"
  }
}

ECR

Phase 3: Pushing to ECR

Pushing my Local Docker Image to AWS ECR

With my image ready, I created an ECR repository in AWS using Terraform and Spacelift. Then I logged in to ECR from my terminal, tagged my image, and pushed it up.

ECR Image Push

This part went pretty smoothly — but if you're new to ECR, make sure you don't forget to authenticate your Docker client using the aws ecr get-login-password command. If you skip this, your push will fail and the error messages aren't always the friendliest.
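For reference, the authenticate, tag and push steps look roughly like this (substitute your own AWS account ID and region; the repository name matches the Terraform above):

# Authenticate the Docker client against your ECR registry
aws ecr get-login-password --region eu-central-1 | docker login --username AWS --password-stdin <account-id>.dkr.ecr.eu-central-1.amazonaws.com

# Tag the local image with the full ECR repository URI
docker tag my_portfolio:latest <account-id>.dkr.ecr.eu-central-1.amazonaws.com/my_portfolio:latest

# Push it up to ECR
docker push <account-id>.dkr.ecr.eu-central-1.amazonaws.com/my_portfolio:latest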

Once it was deployed, I had 3 images in the repository when I only needed one, as you can see below:

To fix that, I added a lifecycle policy resource with two rule blocks, one for untagged images and another covering any image, ensuring there is a maximum of 1 image at a time. I then deployed it via Terraform and Spacelift. Here is the code for that:

resource "aws_ecr_lifecycle_policy" "my_portfolio_ecr_policy" {
  repository = aws_ecr_repository.my_portfolio.name

  policy = <<EOF
{
    "rules": [
         {
      "rulePriority": 1,
      "description": "Expire untagged images",
      "selection": {
        "tagStatus": "untagged",
        "countType": "imageCountMoreThan",
        "countNumber": 1
      },
      "action": {
        "type": "expire"
      }
    },
        {
            "rulePriority": 2,
            "description": "Keep only 1 image max",
            "selection": {
                "tagStatus": "any",
                "countType": "imageCountMoreThan",
                "countNumber": 1
            },
            "action": {
                "type": "expire"
            }
        }
    ]
}
EOF
}

Image Policy

Phase 4: Deploying to ECS with Terraform

After I was done with the whole ECR and Docker part, here's where the fun (and a bit of chaos) started. I wrote my Terraform files to create an ECS cluster, services and task definitions.

Here is Spacelift doing the Heavy Terraform Lifting:

Here is the Cluster Created!:

Cluster
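I haven't pasted my exact cluster file here, but the resource itself is tiny. A minimal sketch, assuming the resource name portfolio_cluster that the service further down refers to and a made-up cluster name, looks like this:

# Sketch: the ECS cluster that the task definition and service below attach to
resource "aws_ecs_cluster" "portfolio_cluster" {
  name = "portfolio-cluster" # assumed name, use whatever you prefer

  tags = {
    Name        = "portfolio-cluster"
    Project     = "my_portfolio"
    Environment = "dev"
  }
}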

I also needed to provision an IAM role and IAM Role Policy Attachment for the ECS. Here is Spacelift deploying it.

And here is its code block:

# So, i need two things, aws_iam_role and aws_iam_role_policy_attachment for my ECS task execution role.

resource "aws_iam_role" "ecs_task_execution_role" {
  name = "ecs-task-execution-role"
  assume_role_policy = jsonencode({ # Terraform's "jsonencode" function converts a Terraform expression result to valid JSON syntax. You can find more of these templates on the Terraform Registry docs, as I did here.
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Sid    = ""
        Principal = {
          Service = "ecs-tasks.amazonaws.com"       # You must specific the service type!!! This is the service that will assume this role, in my case, it is the ECS tasks. Read more on it.
        }
      },
    ]
  })

  tags = {
    name = "ecs-task-execution-role"
  }
}


And for the policy attachment:

# This is my policy attachment. You can find it in the registry docs, read more about it, then modify its attributes and apply it as I did.

resource "aws_iam_role_policy_attachment" "ecs_task_execution_role_policy" {
  role       = aws_iam_role.ecs_task_execution_role.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"
}


Then I needed to create an ECS task definition so the cluster knows what to run.

This is the code block for the task definition:

# Now it's time for the task definition. You can find this basic example on the Terraform Registry docs, as I did, then read and modify it as you need.

resource "aws_ecs_task_definition" "portfolio_task" {
  family                   = "portfolio-task"
  network_mode             = "awsvpc"
  requires_compatibilities = ["FARGATE"]
  cpu                      = "256"
  memory                   = "512"
  execution_role_arn       = aws_iam_role.ecs_task_execution_role.arn

  container_definitions = jsonencode([
    {
      name      = "portfolio-container"
      image     = "194722436853.dkr.ecr.eu-central-1.amazonaws.com/my_portfolio:latest"
      essential = true
      portMappings = [
        {
          containerPort = 80
          hostPort      = 80
        }
      ]
    }
  ])

  tags = {
    Name = "portfolio-task"
  }
}


Finally, to complete the ECS setup, I needed a service resource that pulls my image from ECR and keeps it running in the cluster, so I created it.

Spacelift Provisioning Services Resource

# Alright, so i am almost done, and here is where i add my service for my ecs and the amount i want running at all times.

resource "aws_ecs_service" "portfolio_service" {
  name            = "portfolio-service"
  cluster         = aws_ecs_cluster.portfolio_cluster.id
  task_definition = aws_ecs_task_definition.portfolio_task.arn
  launch_type     = "FARGATE"
  desired_count   = 1 # Personally I like to think of this as replicas with self-healing in Kubernetes. Set it to the minimum number of tasks you want up and running at all times!

  network_configuration {
    subnets         = [aws_subnet.subnet-1.id, aws_subnet.subnet-2.id]
    security_groups = [aws_security_group.ecs_tasks_sg.id]
    assign_public_ip = true
  }

  tags = {
    Name = "portfolio-service"
  }
}

So far, so good. Clean, neatly written code, carefully deployed.

Where Things Went Sideways

Everything was pretty good and working really well, until I noticed my cluster wasn't actually running anything: it sat at "0/1" tasks running for roughly 5 minutes, and that didn't spell anything good for me.

Phase 5: Troubleshooting: When Things Broke

Part of my duty as an engineer is to be able to repair broken things, or whatever goes wrong within my project. So I did what I needed to do, like any DevOps or technical engineer: I went into troubleshooting mode and looked into the logs to see what the issue was. Luckily, I found the log and dug my way to the root cause: a miswritten line of code, too many happy fingers. It was definitely rewarding to fix what was broken; it boosted my confidence and gave me extra morale.

Error Log
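If you prefer the CLI to clicking through the console, the reason a task stopped can be pulled with something like this (the cluster name here is the one assumed in the earlier sketch, and the task ID is whatever ECS reports):

# List recently stopped tasks in the cluster
aws ecs list-tasks --cluster portfolio-cluster --desired-status STOPPED

# Inspect one of them and read the stoppedReason / container reason fields
aws ecs describe-tasks --cluster portfolio-cluster --tasks <task-id> \
  --query 'tasks[0].{stoppedReason: stoppedReason, containerReason: containers[0].reason}'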

So, what was my error?

The offending code:

container_definitions = jsonencode([
  {
    name      = "my_portfolio"
    image     = "my_portfolio:latest"  # <-- this is wrong!
    ...
  }
])


This was caused by me carelessly writing just the image name and tag, rather than paying attention to the docs, which specify that this field should be the full image URL.

So what was the correct block of code?

container_definitions = jsonencode([
  {
    name      = "my_portfolio"
    image     = "194722436853.dkr.ecr.eu-central-1.amazonaws.com/my_portfolio:latest"
    ...
  }
])


You can also find it here:

Phase 6: Lessons I Learned

Always take your time and double-check the documentation and your code, in your Terraform ecs.tf file and every other file. Even a single mistake like this, or a port mismatch, can break the whole deployment. Thankfully, I found it > tweaked it > pushed it > Spacelift picked it up as usual and did the heavy lifting for me, and after a few seconds, boom!!! My portfolio was running from my own ECS cluster.

And it was healthy as well...

But I wanted to access my app from the internet. Normally it is advised to put an Application Load Balancer (ALB) in front via the network configuration, but for a small project such as mine I went with the public IP, with the right ports open in the security group. The public IP is automatically assigned through the network configuration block underneath the service resource:

network_configuration {
    subnets         = [aws_subnet.subnet-1.id, aws_subnet.subnet-2.id]
    security_groups = [aws_security_group.ecs_tasks_sg.id]
    assign_public_ip = true
  }

And when I clicked the public IP address, my application was live and running. No longer on localhost, but on a public IP, and anyone across the globe could access it. I shared it with a friend who was far away, and he confirmed it from his mobile as well.
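If you would rather grab that public IP from the CLI than from the console, something along these lines works (the service name comes from the Terraform above; the cluster name is the one assumed in the earlier sketch):

# Find the running task, then its network interface, then the public IP attached to it
TASK_ARN=$(aws ecs list-tasks --cluster portfolio-cluster --service-name portfolio-service --query 'taskArns[0]' --output text)
ENI_ID=$(aws ecs describe-tasks --cluster portfolio-cluster --tasks "$TASK_ARN" \
  --query "tasks[0].attachments[0].details[?name=='networkInterfaceId'].value | [0]" --output text)
aws ec2 describe-network-interfaces --network-interface-ids "$ENI_ID" \
  --query 'NetworkInterfaces[0].Association.PublicIp' --output text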

Automating with Spacelift CI/CD

Writing Terraform is nice, but I wanted it to run automatically whenever I push changes to GitHub. Enter Spacelift, which I've been using for a while now; I bet you're tired of seeing me post about it.
I connected my repo, set up a stack, wired up the permissions, and boom!! Now every commit and push runs a plan and applies the infra changes if approved.

It felt good to see my commits trigger an actual pipeline that deploys my container and updates my infra with no extra manual steps.

Screenshots or It Didn't Happen

Because I won't keep the cluster running forever (it costs money!), I took a bunch of screenshots as proof: the live site, my ECS cluster, task definitions, ECR repo, and my Spacelift runs. I added them to my repo's README for anyone curious.

Tip: Always do this for demo or learning projects. Trust me, you don't want to pay AWS bills forever just so people can "see" that your container is live.

A Few Things to Watch Out For

Be careful with your IAM permissions. ECS won't run your tasks if you don't have the right execution role.

Double-check your .dockerignore. You don't want to push your .git or local config files to your container.

Use lifecycle policies on ECR to clean up old images automatically — your storage bill will thank you.

If you run into Git messes (I did!), don't panic. Stash, reset, pull with rebase, force push, but be careful!
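For what it's worth, a typical rescue sequence looks something like this (assuming your default branch is main; adapt it to your situation, and note that --force-with-lease is the safer flavour of a force push):

git stash                        # park any uncommitted local changes
git pull --rebase origin main    # replay your commits on top of the remote branch
git stash pop                    # bring the parked changes back
git push --force-with-lease      # only if you genuinely need to rewrite remote history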

Final Thoughts

I started this as a simple portfolio site but turned it into a practical showcase of cloud and DevOps skills. I learned (and re-learned) so many small but real lessons along the way.

If you're a developer wanting to break into DevOps or cloud engineering, I highly recommend picking a small project like this and taking it through the whole pipeline. From local dev to a live container in the cloud, all automated.

I did not include everything here, as this post is already long.
However, if you want to see my repo, check it out here: GitHub
I hope this inspires someone to try the same.

Thanks for reading! 🚀✨

Top comments (6)

Sixtus_Okoro:
Fantastic work Engr 🙏🏻
Superb, fun and detailed documentation.

Akingbade Omosebi:
I'm glad you liked it. Thanks a lot for your feedback.

Akingbade Omosebi:
Sure Jamey Harris, I'm open to that. Let's do that.

Francis Akintade:
This is a well detailed project breakdown, with worthy inclusions of errors and troubleshooting steps to get it done. Good job 👏👏

Akingbade Omosebi:
Thanks a lot for your feedback, it's really encouraging.