DEV Community

Shongkor Talukdar

Addressing Single Point of Failure Concerns

The risk of a Single Point of Failure (SPOF) has become a critical concern in today's interconnected businesses and technologies. A SPOF is a part of a system that, if it fails, stops the entire system from working. It can be software, hardware, human resources, or any aspect critical to operations. For example:

FrontEnd ----> Backend (deployed on a single EC2) ----> DataBase

If the backend runs on a single server, every deployment of a new feature interrupts active end users. Think of streaming platforms like Netflix or e-commerce systems like Amazon: a single moment of failure can cost millions of dollars.

Avoiding a single point of failure (SPOF) in cloud-based systems is critical for ensuring high availability, fault tolerance, and resilience. To mitigate the risks associated with SPOF, systems should be designed with redundancy and fault tolerance in mind. Here are some best practices to minimize the risk of SPOFs:

Redundant Components and Geographic Distribution: Enhance the resilience of cloud architecture by implementing redundancy for all critical components, such as servers, databases, and load balancers.

Today, I am going to build a pipeline that mitigates the SPOF for the backend.

Here, an Application Load Balancer distributes user requests across both servers. This architecture provides:
Redundancy: if Instance A fails during an update, Instance B is still live, serving user requests.
Automatic rollback: if the deployment fails (e.g., the app fails to start on Instance A), CodeDeploy can be configured to automatically roll back to the previous version, ensuring no permanent downtime.

Implementation:

The deployment flow in this context will work as follows. GitHub Actions builds and pushes your Docker image to Docker Hub as it does today. It then notifies CodeDeploy to begin a deployment. CodeDeploy pulls your repository code (specifically the appspec.yml and deployment scripts) from an S3 bucket, then executes the deployment on each EC2 instance in a rolling fashion — one instance at a time — so your application remains available throughout.

Developer pushes code to GitHub
         ↓
GitHub Actions (Build & Test)
         ↓
Docker Image pushed to Docker Hub
         ↓
Deployment bundle uploaded to S3
         ↓
CodeDeploy triggered
         ↓
Instance A deregistered from ALB
         ↓
Scripts run on Instance A (stop old → pull new → start → validate)
         ↓
Instance A re-registered to ALB
         ↓
Same process repeats for Instance B
         ↓
Both instances running the latest code ✓

Phase 1 — Project Folder Structure

trust-estate-server/
├── .github/
│   └── workflows/
│       └── pipeline.yml
├── scripts/
│   ├── before_install.sh
│   ├── after_install.sh
│   ├── start_application.sh
│   └── validate_service.sh
├── src/
│   └── index.js
├── .env.example
├── .gitignore
├── appspec.yml
├── docker-compose.yml
├── Dockerfile
└── package.json

Phase 2 — Application Files

src/index.js

const express = require('express')
const app = express()
const port = process.env.PORT || 3000

app.use(express.json())

// Health check endpoint for ALB and CodeDeploy
app.get('/health', (req, res) => {
  res.status(200).json({
    status: 'healthy',
    app: 'trust-estate-server',
    timestamp: new Date().toISOString()
  })
})

app.get('/', (req, res) => {
  res.status(200).json({
    message: 'Trust Estate API is running',
    version: '1.0.0'
  })
})

app.listen(port, () => {
  console.log(`Trust Estate Server running on port ${port}`)
})


.env.example

PORT=3000
NODE_ENV=production

.gitignore

node_modules/
.env
.env.*
!.env.example
*.log
coverage/
dist/

Phase 3 — Docker Files

Dockerfile

FROM node:18-alpine

WORKDIR /app

COPY package*.json ./

RUN npm install --omit=dev

COPY . .

EXPOSE 3000

HEALTHCHECK --interval=30s --timeout=10s --start-period=30s --retries=3 \
  CMD wget --quiet --tries=1 --spider http://localhost:3000/health || exit 1

CMD ["node", "src/index.js"]

docker-compose.yml

services:
  node-app:
    image: ${DOCKER_USERNAME}/trust-estate-server:latest
    container_name: trust-estate-server
    restart: unless-stopped
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=production
      - PORT=3000
    healthcheck:
      test: ["CMD", "wget", "--quiet", "--tries=1", "--spider", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 30s
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"
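To try the container locally before wiring up the pipeline, you can run the compose file with your own Docker Hub username (shown here as a placeholder) once the image has been pushed:

```shell
# Local smoke test (replace "yourname" with your Docker Hub username)
export DOCKER_USERNAME=yourname
docker compose up -d
curl http://localhost:3000/health
docker compose down
```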

Phase 4 — CodeDeploy Files

appspec.yml

version: 0.0
os: linux

files:
  - source: /
    destination: /home/ubuntu/app
    overwrite: true

permissions:
  - object: /home/ubuntu/app/scripts
    pattern: "**"
    owner: ubuntu
    group: ubuntu
    mode: 755
    type:
      - file

hooks:
  BeforeInstall:
    - location: scripts/before_install.sh
      timeout: 300
      runas: root

  AfterInstall:
    - location: scripts/after_install.sh
      timeout: 300
      runas: root

  ApplicationStart:
    - location: scripts/start_application.sh
      timeout: 300
      runas: root

  ValidateService:
    - location: scripts/validate_service.sh
      timeout: 300
      runas: root

Next for the Scripts Folder

scripts/before_install.sh

#!/bin/bash
set -e

echo "========================================="
echo "BEFORE INSTALL - Cleanup old deployment"
echo "========================================="

echo "Stopping existing container..."
docker stop trust-estate-server || true

echo "Removing existing container..."
docker rm trust-estate-server || true

echo "Removing old Docker images..."
# Note: this force-removes ALL images on the host, not just this app's old versions
docker rmi $(docker images -q) -f || true

echo "Cleanup complete."

scripts/after_install.sh

#!/bin/bash
set -e

echo "========================================="
echo "AFTER INSTALL - Pulling new Docker image"
echo "========================================="

cd /home/ubuntu/app

echo "Pulling latest image..."
# DOCKER_USERNAME must be set in the instance environment (e.g. /etc/environment)
docker pull $DOCKER_USERNAME/trust-estate-server:latest

echo "Image pulled successfully."
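appspec.yml also references scripts/start_application.sh, which starts the new container. A minimal sketch, assuming docker-compose.yml was copied to /home/ubuntu/app by CodeDeploy and DOCKER_USERNAME is available in the instance environment (e.g. exported in /etc/environment):

```shell
#!/bin/bash
set -e

echo "========================================="
echo "APPLICATION START - Starting new container"
echo "========================================="

cd /home/ubuntu/app

# DOCKER_USERNAME must be set on the instance so the
# compose file can resolve the image name
docker compose up -d

echo "Container started."
```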

scripts/validate_service.sh

#!/bin/bash
set -e

echo "========================================="
echo "VALIDATE SERVICE - Health check"
echo "========================================="

MAX_RETRIES=10
RETRY_INTERVAL=10
HEALTH_URL="http://localhost:3000/health"

echo "Checking health at: $HEALTH_URL"

for i in $(seq 1 $MAX_RETRIES); do
  echo "Attempt $i of $MAX_RETRIES..."

  HTTP_STATUS=$(curl -s -o /dev/null -w "%{http_code}" $HEALTH_URL)

  if [ "$HTTP_STATUS" -eq 200 ]; then
    echo "✓ Health check passed - trust-estate-server is healthy"
    exit 0
  fi

  echo "✗ Status: $HTTP_STATUS. Retrying in ${RETRY_INTERVAL}s..."
  sleep $RETRY_INTERVAL
done

echo "✗ trust-estate-server failed health checks after $MAX_RETRIES attempts"
exit 1

Phase 5 — GitHub Actions Pipeline

.github/workflows/pipeline.yml

name: CI/CD Pipeline - Trust Estate Server

on:
  push:
    branches: ['main']

env:
  DOCKER_IMAGE: ${{ secrets.DOCKER_USERNAME }}/trust-estate-server

jobs:

  build:
    name: Build & Push Docker Image
    runs-on: ubuntu-latest

    steps:
      - name: Checkout Code
        uses: actions/checkout@v3

      - name: Login to Docker Hub
        run: |
          echo "${{ secrets.DOCKER_HUB_TOKEN }}" | \
          docker login -u "${{ secrets.DOCKER_USERNAME }}" --password-stdin

      - name: Build Docker Image
        run: |
          docker build \
            --tag ${{ env.DOCKER_IMAGE }}:latest \
            --tag ${{ env.DOCKER_IMAGE }}:${{ github.sha }} \
            .

      - name: Push Docker Image
        run: |
          docker push ${{ env.DOCKER_IMAGE }}:latest
          docker push ${{ env.DOCKER_IMAGE }}:${{ github.sha }}

  deploy:
    name: Deploy to EC2 via CodeDeploy
    runs-on: ubuntu-latest
    needs: build

    steps:
      - name: Checkout Code
        uses: actions/checkout@v3

      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v2
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-session-token: ${{ secrets.AWS_SESSION_TOKEN }}
          aws-region: ${{ secrets.AWS_REGION }}

      - name: Create Deployment Bundle
        run: |
          zip -r deployment-bundle.zip . \
            -x "*.git*" \
            -x "node_modules/*" \
            -x ".env*" \
            -x ".github/*"

      - name: Upload Bundle to S3
        run: |
          aws s3 cp deployment-bundle.zip \
            s3://${{ secrets.S3_BUCKET_NAME }}/deployments/deployment-${{ github.sha }}.zip

      - name: Trigger CodeDeploy Deployment
        run: |
          DEPLOYMENT_ID=$(aws deploy create-deployment \
            --application-name trust-estate \
            --deployment-group-name production-group \
            --s3-location bucket=${{ secrets.S3_BUCKET_NAME }},bundleType=zip,key=deployments/deployment-${{ github.sha }}.zip \
            --deployment-config-name CodeDeployDefault.OneAtATime \
            --description "Commit: ${{ github.sha }} by ${{ github.actor }}" \
            --query 'deploymentId' \
            --output text)

          echo "Deployment ID: $DEPLOYMENT_ID"
          echo "DEPLOYMENT_ID=$DEPLOYMENT_ID" >> $GITHUB_ENV

      - name: Wait for Deployment to Complete
        run: |
          echo "Waiting for deployment ${{ env.DEPLOYMENT_ID }}..."
          aws deploy wait deployment-successful \
            --deployment-id ${{ env.DEPLOYMENT_ID }}
          echo "✓ Deployment completed successfully."

Now,

Go to https://github.com and:

Click + (top right)
→ New Repository
→ Repository name: trust-estate-server
→ Visibility: Public or Private (your choice)
→ Do NOT check "Add README"
→ Click Create Repository

Push Your Code to GitHub
In your terminal, **inside the trust-estate-server folder**:

# Initialise git
git init

# Stage all files
git add .

# First commit
git commit -m "initial project setup with CI/CD pipeline"

# Connect to your GitHub repo
# Replace YOUR_USERNAME with your actual GitHub username
git remote add origin https://github.com/YOUR_USERNAME/trust-estate-server.git

# Push to main branch
git branch -M main
git push -u origin main

Go to GitHub and refresh the page. You should see all your files there.


Next Step — Add GitHub Secrets

This is where you store all sensitive values so they never appear in your code.

GitHub → Your Repository
       → Settings (top menu)
       → Secrets and Variables (left sidebar)
       → Actions
       → New Repository Secret

Add these secrets one by one. Click New Repository Secret for each one:

Secret 1:

Name:  DOCKER_USERNAME
Value: your Docker Hub username

Secret 2:

Name:  DOCKER_HUB_TOKEN
Value: the token you copied in Step A2

Secret 3:

Name:  AWS_REGION
Value: eu-west-1
       (use YOUR actual region — check your EC2 instances)

Secret 4:

Name:  S3_BUCKET_NAME
Value: leave this empty for now
       (you will come back and add this after Step D3)

The AWS credential secrets (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN) will be added after you get them from the Learner Lab in Part D.


Next PART — AWS SETUP

(This is the biggest part — take it one step at a time)


Step on AWS-1 — Get Your AWS Credentials from Learner Lab

Open your AWS Academy Learner Lab and:

Click: Start Lab (wait until the circle goes green)
Click: AWS Details
Click: Show (next to AWS CLI)

You will see three values:

aws_access_key_id     = ASIA...
aws_secret_access_key = xxxx...
aws_session_token     = xxxx... (very long string)

Copy each one and go back to GitHub → Settings → Secrets → Actions and add:

Secret 5:

Name:  AWS_ACCESS_KEY_ID
Value: paste aws_access_key_id value

Secret 6:

Name:  AWS_SECRET_ACCESS_KEY
Value: paste aws_secret_access_key value

Secret 7:

Name:  AWS_SESSION_TOKEN
Value: paste aws_session_token value

Remember: Every time you start a new Learner Lab session, you must update these 3 secrets with fresh values.
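If you have the GitHub CLI (gh) installed and authenticated, refreshing the three secrets each session is quicker from the terminal. A sketch (the values shown are placeholders):

```shell
# Run inside the cloned trust-estate-server repo
gh secret set AWS_ACCESS_KEY_ID --body "ASIA..."
gh secret set AWS_SECRET_ACCESS_KEY --body "xxxx..."
gh secret set AWS_SESSION_TOKEN --body "xxxx..."
```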


Now, open your AWS Console. The next step is to create two EC2 instances that share a single security group.

The Order We Will Follow

Step 1: Create Security Group ← Rules for what traffic is allowed
Step 2: Create EC2 Instance A ← First server
Step 3: Create EC2 Instance B ← Second server
Step 4: Install software on both ← Docker + CodeDeploy agent
Step 5: Create Target Group ← Groups both servers together
Step 6: Create Load Balancer ← Distributes traffic to both servers
Step 7: Create S3 Bucket ← Stores deployment files
Step 8: Create CodeDeploy App ← Manages deployments
Step 9: Tag EC2 Instances ← Labels for CodeDeploy to find them

STEP 1 — Create a Security Group

A Security Group is a firewall that controls what traffic can reach your EC2 instances. Think of it as a bouncer at a door: it decides who gets in and who does not.

Navigate to Security Groups:
AWS Console → EC2
           → Security Groups (left sidebar, under Network & Security)
           → Create Security Group

Security group name: trust-estate-sg (this name is referenced in later steps)

Add Inbound Rules

Inbound rules control traffic coming into your server. Click Add Rule for each of the following:
Rule 1 — SSH (for you to connect to the server):
Type: SSH
Protocol: TCP
Port: 22
Source: My IP
(AWS automatically fills your current IP address)

Rule 2 — HTTP (for web traffic):


Type: HTTP
Protocol: TCP
Port: 80
Source: Anywhere IPv4 (0.0.0.0/0)

Rule 3 — Your App Port:


Type: Custom TCP
Protocol: TCP
Port: 3000
Source: Anywhere IPv4 (0.0.0.0/0)


Outbound Rules

Leave outbound rules as the default. The default allows all outbound traffic, which your instances need to pull Docker images and communicate with AWS services. Click Create Security Group.
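If you prefer the terminal, the same group can be sketched with the AWS CLI (this assumes the default VPC; YOUR_IP is a placeholder for your public IP):

```shell
# Create the group and add the three inbound rules
aws ec2 create-security-group \
  --group-name trust-estate-sg \
  --description "Rules for trust-estate servers"

aws ec2 authorize-security-group-ingress --group-name trust-estate-sg \
  --protocol tcp --port 22 --cidr YOUR_IP/32
aws ec2 authorize-security-group-ingress --group-name trust-estate-sg \
  --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-name trust-estate-sg \
  --protocol tcp --port 3000 --cidr 0.0.0.0/0
```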

STEP 2 — Create EC2 Instance A

Navigate to EC2 Launch
AWS Console → EC2
→ Instances (left sidebar)
→ Launch Instances (orange button, top right)

Section 1 — Name and Tags
Name: trust-estate-server-A

Section 2 — Application and OS Image (AMI)
Search: Ubuntu
Select: Ubuntu Server 22.04 LTS (HVM), SSD Volume Type
(make sure it says Free tier eligible)
Architecture: 64-bit (x86)

Section 3 — Instance Type
Select: t2.micro
(Free tier eligible — enough for learning and testing)

Section 4 — Key Pair
This is how you SSH into your server. This is very important — do not skip this.
Click: Create new key pair
A dialog box appears:
Key pair name: trust-estate-key
Key pair type: RSA
Private key file format: .pem (for Mac/Linux)
.ppk (for Windows with PuTTY)
Click Create key pair. A file will automatically download to your computer.

Save this file somewhere safe. If you lose it, you can never SSH into your instances again. Put it in a folder you will remember, such as ~/aws-keys/trust-estate-key.pem.

Section 5 — Network Settings
Click Edit on the Network Settings section:
VPC: default VPC
Subnet: select any available subnet
(note which Availability Zone it is in
e.g. eu-west-1a)
Auto-assign public IP: Enable
For Firewall (Security Groups):
Select: Select existing security group
Choose: trust-estate-sg (the one you created in Step 1)

Section 6 — Configure Storage
Size: 8 GiB (default is fine)
Type: gp2

Section 7 — Advanced Details
Scroll all the way down to Advanced Details. Find the IAM instance profile field:
IAM instance profile: LabRole

This gives your EC2 instance permission to communicate with CodeDeploy and S3.

Launch Instance A
Click Launch Instance. AWS will show a success screen with your Instance ID. Click View All Instances to go back to the instances list.
Wait until Instance A shows:
Instance State: running
Status Checks: 2/2 checks passed

STEP 3 — Create EC2 Instance B

Repeat the exact same process as Step 2 with these differences:

Name: trust-estate-server-B
Subnet: select a DIFFERENT subnet from Instance A
(different Availability Zone, e.g. eu-west-1b)
Why a different subnet? Placing instances in different Availability Zones means if one AWS data centre has a problem, your other instance in a different location keeps running. This is what makes your application highly available.

STEP 4 — Install Software on Both Instances

SSH into both instances (Instance A and Instance B) and install the required software.
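Connecting to each instance looks like this, assuming the key path from Step 2 (replace the placeholder with each instance's public IP from the EC2 console):

```shell
# The key must not be world-readable or SSH will refuse it
chmod 400 ~/aws-keys/trust-estate-key.pem
ssh -i ~/aws-keys/trust-estate-key.pem ubuntu@INSTANCE_PUBLIC_IP
```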
Install CodeDeploy Agent

# Update the package list
sudo apt-get update -y

# Install required tools
sudo apt-get install ruby wget curl -y

# Move to home directory
cd /home/ubuntu

# Download the CodeDeploy installer
# IMPORTANT: Replace eu-west-1 with YOUR actual AWS region
wget https://aws-codedeploy-eu-west-1.s3.eu-west-1.amazonaws.com/latest/install

# Make it executable
chmod +x ./install

# Run the installer
sudo ./install auto

# Start the CodeDeploy agent service
sudo systemctl start codedeploy-agent

# Make it start automatically when instance reboots
sudo systemctl enable codedeploy-agent

Now verify it is running:

sudo systemctl status codedeploy-agent

You should see output similar to:
● codedeploy-agent.service - LSB: AWS CodeDeploy Host Agent
   Active: active (running) since ...

Install Docker

# Download and run the official Docker install script
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

# Add ubuntu user to docker group
# This allows running docker without sudo
# (log out and back in, or run 'newgrp docker', for the change to take effect)
sudo usermod -aG docker ubuntu

# Install Docker Compose plugin
sudo apt-get install docker-compose-plugin -y


Verify Docker works:
docker --version
docker compose version
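A quick way to confirm Docker can pull and run containers without sudo (after re-logging in so the group change applies):

```shell
docker run --rm hello-world
```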

STEP 5 — Create the Target Group

Where are we right now?

✓ Step 1: Security Group created
✓ Step 2: EC2 Instance A created
✓ Step 3: EC2 Instance B created
✓ Step 4: Software installed on both instances
→ Step 5: Create Target Group          ← YOU ARE HERE
  Step 6: Create Load Balancer
  Step 7: Create S3 Bucket
  Step 8: Create CodeDeploy Application
  Step 9: Tag EC2 Instances

A Target Group is simply a list of your servers that the Load Balancer will send traffic to.

Navigate to Target Groups
AWS Console → EC2
→ Scroll down the LEFT sidebar
→ Under "Load Balancing"
→ Click: Target Groups
→ Click: Create Target Group (top right)

Page 1 — Basic Configuration
You will see a form. Fill in each field exactly:
Choose a target type:
● Instances ← select this one
○ IP addresses
○ Lambda function
○ Application Load Balancer
Target group name:
trust-estate-tg
Protocol:
HTTP
Port:
3000

This must be 3000 because your Express app runs on port 3000 inside Docker.

IP address type:
IPv4
VPC:
Select: default VPC
(the same VPC your EC2 instances are in)
Protocol version:
HTTP1

Health Checks Section
This is very important. The Load Balancer uses health checks to decide if an instance is healthy enough to receive traffic.
Health check protocol: HTTP
Health check path: /health
Now click Advanced health check settings to expand it:
Port: Traffic port
Healthy threshold: 2
Unhealthy threshold: 2
Timeout: 5
Interval: 30
Success codes: 200

The final health check settings should look like this:

┌─────────────────────────────────────┐
│ Health check path:  /health         │
│ Healthy threshold:  2 checks        │
│ Unhealthy threshold: 2 checks       │
│ Timeout:            5 seconds       │
│ Interval:           30 seconds      │
│ Success codes:      200             │
└─────────────────────────────────────┘

Click Next.

Page 2 — Register Targets
This is where you add your two EC2 instances into the group.
You will see a table listing your available instances:

Tick the checkbox next to both instances, click Include as pending below, then click Create Target Group.

STEP 6 — Create the Application Load Balancer

Navigate to Load Balancers
AWS Console → EC2
→ Left sidebar under "Load Balancing"
→ Click: Load Balancers
→ Click: Create Load Balancer

Section 1 — Basic Configuration
Load balancer name: trust-estate-alb
Scheme: Internet-facing
← This means it accepts traffic from the internet
IP address type: IPv4

Section 2 — Network Mapping
VPC:
Select: default VPC
Availability Zones and Subnets:
You must select at least two Availability Zones. This is a requirement for a Load Balancer.
☑ eu-west-1a → select the subnet shown
☑ eu-west-1b → select the subnet shown

Match these to the Availability Zones where you launched your EC2 instances. You can check this by going to EC2 → Instances and looking at the Availability Zone column.

Section 3 — Security Groups
Remove: default (click the X next to it)
Add: trust-estate-sg ← the one you created earlier
Click in the security groups dropdown and select trust-estate-sg.

Section 4 — Listeners and Routing
This tells the Load Balancer what to do with incoming traffic:
Protocol: HTTP
Port: 80
Default action: Forward to → trust-estate-tg

Your listener should look like:

┌────────────────────────────────────────────────┐
│ HTTP : 80  →  Forward to: trust-estate-tg      │
└────────────────────────────────────────────────┘

Section 5 — Summary and Create
Leave everything else as default. Scroll to the bottom and click Create Load Balancer.
AWS will show a success screen. Click View Load Balancer.

Save Your Load Balancer DNS Name
Click on trust-estate-alb in the list. In the details panel below, find:
DNS name: trust-estate-alb-123456789.eu-west-1.elb.amazonaws.com
Copy and save this URL. This is the public address of your application. Once everything is deployed, this is what you open in the browser to see your app running.
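After your first successful deployment, you can hit the health endpoint through the ALB from your terminal (substitute your own DNS name). Running it a few times should show requests served by both instances in turn:

```shell
curl http://YOUR_ALB_DNS_NAME/health
```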

STEP 7 — Create the S3 Bucket

Navigate to S3
AWS Console → S3
→ Create Bucket (top right)

Bucket name:
Choose a globally unique name
(you will store this name in the S3_BUCKET_NAME secret)
AWS Region:
Select the SAME region your EC2 instances are in
Object Ownership:
ACLs disabled (default)
Block Public Access:
☑ Block all public access ← leave this ON
Versioning:
Disable (default)
Everything else leave as default. Click Create Bucket.

Update Your GitHub Secret
Now go back to GitHub and update the S3 secret:
GitHub → trust-estate-server repository
→ Settings
→ Secrets and Variables
→ Actions
→ Find S3_BUCKET_NAME
→ Click Edit (pencil icon)
→ Enter your bucket name
→ Click Save

STEP 8 — Create the CodeDeploy Application

Navigate to CodeDeploy
AWS Console → search "CodeDeploy" in the top search bar
→ Click: CodeDeploy
→ Click: Applications (left sidebar)
→ Click: Create Application
Fill in the Details
Application name: trust-estate
Compute platform: EC2/On-Premises
Click Create Application.
Now Create the Deployment Group
You will be taken inside the trust-estate application. Click:
Create Deployment Group

Fill in each section:
Section 1 — Name and Role:
Deployment group name: production-group
Service role: LabRole

Click the dropdown for Service role and search for LabRole.

Section 2 — Deployment Type:
● In-place ← select this
○ Blue/green
Section 3 — Environment Configuration:
● Amazon EC2 instances ← select this

Tag group 1:
Key: App
Value: trust-estate
After typing the tag, you should see:
1 unique matched instance
Wait — this shows only 1 because you have not tagged both instances yet. That is fine. We do the tagging in Step 9 and come back to verify.

Section 4 — Agent Configuration:
Install AWS CodeDeploy Agent: Never

Select Never because you already installed it manually on both instances.

Section 5 — Deployment Settings:
Deployment configuration: CodeDeployDefault.OneAtATime
Section 6 — Load Balancer:
☑ Enable load balancing ← check this box

Load balancer:
● Application Load Balancer or Network Load Balancer

Target group: trust-estate-tg
Click Create Deployment Group.

STEP 9 — Tag Both EC2 Instances
Now you attach the labels that CodeDeploy uses to find your instances.

Tag Instance A
AWS Console → EC2
→ Instances
→ Click: trust-estate-server-A
At the bottom of the screen you will see tabs:
Details | Security | Networking | Storage | Status checks | Monitoring | Tags
Click the Tags tab:
Click: Manage Tags
Click: Add Tag

Key: App
Value: trust-estate

Click: Save

Tag Instance B
Go back to Instances list
Click: trust-estate-server-B
Tags tab → Manage Tags → Add Tag

Key: App
Value: trust-estate

Click: Save
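Both instances can also be tagged in one command with the AWS CLI (the instance IDs below are placeholders; find yours in the EC2 console):

```shell
aws ec2 create-tags \
  --resources i-INSTANCE_A_ID i-INSTANCE_B_ID \
  --tags Key=App,Value=trust-estate
```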

Verify CodeDeploy Found Both Instances
CodeDeploy → Applications → trust-estate
→ production-group
→ Click Edit (top right)
→ Scroll to Environment Configuration
You should now see:
Key: App
Value: trust-estate
Matching instances: 2 ✓
If you see 2 — everything is connected correctly.

Congratulations!

You have completed your CI/CD pipeline with CodeDeploy. Now, whenever you push code to the main branch of your repository, you can watch GitHub Actions trigger the whole flow automatically in the Actions tab.
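You can also follow a deployment from the terminal with the AWS CLI:

```shell
# List recent deployments for the group, then inspect one
aws deploy list-deployments \
  --application-name trust-estate \
  --deployment-group-name production-group
aws deploy get-deployment --deployment-id YOUR_DEPLOYMENT_ID
```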

GitHub link: https://github.com/Shongkor/trust-estate-server/actions/runs/22524715828
