Selvaprakash_S

ECS vs EKS, with examples using eksctl and the Copilot CLI

Introduction:

In this blog, I'll walk through the differences between ECS and EKS, the container orchestration services in AWS, along with a hands-on example of each.

As you might already know, ECS is Amazon's native container orchestration service, while EKS is for Kubernetes (k8s) users who want to take advantage of AWS-integrated services and the scalability of the cloud.

Choosing between EKS and ECS:

Customers adopting containers at scale who are looking for powerful simplicity should start with Amazon ECS.

Teams choose Kubernetes for its vibrant ecosystem and community, consistent open-source APIs, and broad flexibility. They rely on Amazon EKS to handle the undifferentiated heavy lifting of building and operating Kubernetes at scale.

Amazon EKS provides the flexibility of Kubernetes with the security and resiliency of being an AWS-managed service that is optimized for customers building highly available services.

When it comes to cost, EKS charges $0.10 per hour for each Amazon EKS cluster that you create (roughly $73 per month per cluster), whereas an ECS cluster itself is free. In both cases, the underlying resources such as ALBs, EC2 instances, or Fargate tasks are billed on a pay-as-you-go basis.

For more details about EKS and ECS, check out the links below.

https://docs.aws.amazon.com/eks/latest/userguide/what-is-eks.html

https://docs.aws.amazon.com/AmazonECS/latest/developerguide/welcome-features.html

In this article, I'll use AWS Cloud9 to deploy microservices to both ECS and EKS.

I have created a role with admin permissions and attached it to the Cloud9 EC2 instance.

EKS Cluster Creation:

eksctl is a tool jointly developed by AWS and Weaveworks that automates much of the experience of creating EKS clusters.

In this module, we will use eksctl to launch and configure our EKS cluster and nodes.

For this module, we need to download the eksctl binary:

curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp

sudo mv -v /tmp/eksctl /usr/local/bin


Confirm the eksctl command works:

eksctl version


Enable eksctl bash-completion:

eksctl completion bash >> ~/.bash_completion
. /etc/profile.d/bash_completion.sh
. ~/.bash_completion


Run aws sts get-caller-identity and validate that the Arn contains the admin role you created and attached to the EC2 instance, along with an instance ID.

{
    "Account": "123456789012",
    "UserId": "AROA1SAMPLEAWSIAMROLE:i-01234567890abcdef",
    "Arn": "arn:aws:sts::123456789012:assumed-role/eks-admin/i-01234567890abcdef"
}


Create an eksctl deployment file (ekscluster.yaml) to use when creating your cluster, using the following syntax:
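Note that the config below references the AWS_REGION, AZS (an array of availability zones), and MASTER_ARN environment variables, which must be exported in your shell beforehand. Here is a minimal sketch of one way to set them, assuming you are on the Cloud9 instance and are fine creating a fresh KMS key for secrets encryption (the alias name is just an example):

export AWS_REGION=$(curl -s 169.254.169.254/latest/dynamic/instance-identity/document | jq -r '.region')
export AZS=($(aws ec2 describe-availability-zones --region $AWS_REGION --query 'AvailabilityZones[].ZoneName' --output text))

# Create a KMS key for envelope encryption of Kubernetes secrets and capture its ARN
export MASTER_ARN=$(aws kms create-key --query KeyMetadata.Arn --output text)
aws kms create-alias --alias-name alias/ekscluster-demo --target-key-id $MASTER_ARN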

cat << EOF > ekscluster.yaml
---
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: ekscluster-eksctl
  region: ${AWS_REGION}
  version: "1.21"

availabilityZones: ["${AZS[0]}", "${AZS[1]}", "${AZS[2]}"]

managedNodeGroups:
- name: nodegroup
  desiredCapacity: 3
  instanceType: t3.small
  ssh:
    enableSsm: true

# To enable all of the control plane logs, uncomment below:
# cloudWatch:
#  clusterLogging:
#    enableTypes: ["*"]

secretsEncryption:
  keyARN: ${MASTER_ARN}
EOF

Next, use the file you created as the input for the eksctl cluster creation.

eksctl create cluster -f ekscluster.yaml


Launching EKS and all of its dependencies will take approximately 15 minutes.

Test the cluster:

kubectl get nodes # if we see our 3 nodes, we know we have authenticated correctly


Export the worker node role name; we will need it in a later step.

STACK_NAME=$(eksctl get nodegroup --cluster ekscluster-eksctl -o json | jq -r '.[].StackName')
ROLE_NAME=$(aws cloudformation describe-stack-resources --stack-name $STACK_NAME | jq -r '.StackResources[] | select(.ResourceType=="AWS::IAM::Role") | .PhysicalResourceId')
echo "export ROLE_NAME=${ROLE_NAME}" | tee -a ~/.bash_profile


Congratulations!

You now have a fully working Amazon EKS cluster that is ready to use! Before you move on, make sure to complete the following steps to update the EKS console credentials.

This step is optional, as nearly all of the content is CLI-driven. But if you'd like full access to your cluster in the EKS console, this step is recommended.

The EKS console allows you to see not only the configuration aspects of your cluster, but also to view Kubernetes cluster objects such as Deployments, Pods, and Nodes. For this type of access, the console IAM User or Role needs to be granted permission within the cluster.

By default, the credentials used to create the cluster are automatically granted these permissions. Following along in the steps, you’ve created a cluster using temporary IAM credentials from within Cloud9. This means that you’ll need to add your AWS Console credentials to the cluster.

Import your EKS Console credentials to your new cluster:

IAM Users and Roles are bound to an EKS Kubernetes cluster via a ConfigMap named aws-auth. We can use eksctl to do this with one command.

You’ll need to determine the correct credential to add for your AWS Console access. If you know this already, you can skip ahead to the eksctl create iamidentitymapping step below.

If you’ve built your cluster from Cloud9 as part of this tutorial, invoke the following within your environment to determine your IAM Role or User ARN.

c9builder=$(aws cloud9 describe-environment-memberships --environment-id=$C9_PID | jq -r '.memberships[].userArn')
if echo ${c9builder} | grep -q user; then
    rolearn=${c9builder}
    echo Role ARN: ${rolearn}
elif echo ${c9builder} | grep -q assumed-role; then
    assumedrolename=$(echo ${c9builder} | awk -F/ '{print $(NF-1)}')
    rolearn=$(aws iam get-role --role-name ${assumedrolename} --query Role.Arn --output text)
    echo Role ARN: ${rolearn}
fi


With your ARN in hand, you can issue the command to create the identity mapping within the cluster.

eksctl create iamidentitymapping --cluster ekscluster-eksctl --arn ${rolearn} --group system:masters --username admin


Note that permissions can be restricted and granular but as this is an example cluster, you’re adding your console credentials as administrator.

Now you can verify your entry in the AWS auth map within the console.

kubectl describe configmap -n kube-system aws-auth

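For reference, the entry added by eksctl for an assumed role ends up under mapRoles in the aws-auth ConfigMap and looks roughly like this (the account ID and role name here are placeholders):

mapRoles: |
  - rolearn: arn:aws:iam::123456789012:role/eks-admin
    username: admin
    groups:
      - system:masters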

DEPLOY THE EXAMPLE MICROSERVICES:

Clone the service repos:

cd ~/environment
git clone https://github.com/aws-containers/ecsdemo-frontend.git
git clone https://github.com/aws-containers/ecsdemo-nodejs.git
git clone https://github.com/aws-containers/ecsdemo-crystal.git


Let’s bring up the NodeJS Backend API!

Copy/Paste the following commands into your Cloud9 workspace:

cd ~/environment/ecsdemo-nodejs
kubectl apply -f kubernetes/deployment.yaml
kubectl apply -f kubernetes/service.yaml


We can watch the progress by looking at the deployment status:

kubectl get deployment ecsdemo-nodejs


Let’s bring up the Crystal Backend API!

Copy/Paste the following commands into your Cloud9 workspace:

cd ~/environment/ecsdemo-crystal
kubectl apply -f kubernetes/deployment.yaml
kubectl apply -f kubernetes/service.yaml

We can watch the progress by looking at the deployment status:

kubectl get deployment ecsdemo-crystal

LET'S CHECK SERVICE TYPES

Before we bring up the frontend service, let's take a look at the service types we are using. This is kubernetes/service.yaml for our frontend service:

apiVersion: v1
kind: Service
metadata:
  name: ecsdemo-frontend
spec:
  selector:
    app: ecsdemo-frontend
  type: LoadBalancer
  ports:
   -  protocol: TCP
      port: 80
      targetPort: 3000

Notice type: LoadBalancer: This will configure an ELB to handle incoming traffic to this service.

Compare this to kubernetes/service.yaml for one of our backend services:

apiVersion: v1
kind: Service
metadata:
  name: ecsdemo-nodejs
spec:
  selector:
    app: ecsdemo-nodejs
  ports:
   -  protocol: TCP
      port: 80
      targetPort: 3000

Notice there is no service type specified. Checking the Kubernetes documentation, we find that the default type is ClusterIP, which exposes the service on a cluster-internal IP. Choosing this value makes the service reachable only from within the cluster.
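As an optional check (not part of the original walkthrough), you can confirm a ClusterIP service is reachable from inside the cluster by requesting its in-cluster DNS name from a throwaway pod; this assumes the backends were deployed to the default namespace, as above:

kubectl run dns-test --rm -it --restart=Never --image=busybox -- \
  wget -qO- http://ecsdemo-nodejs.default.svc.cluster.local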

ENSURE THE ELB SERVICE ROLE EXISTS:

In AWS accounts that have never created a load balancer before, it’s possible that the service role for ELB might not exist yet.

We can check for the role, and create it if it’s missing.

Copy/Paste the following commands into your Cloud9 workspace:

aws iam get-role --role-name "AWSServiceRoleForElasticLoadBalancing" || aws iam create-service-linked-role --aws-service-name "elasticloadbalancing.amazonaws.com"


DEPLOY FRONTEND SERVICE:

Let’s bring up the Ruby Frontend!

cd ~/environment/ecsdemo-frontend
kubectl apply -f kubernetes/deployment.yaml
kubectl apply -f kubernetes/service.yaml

We can watch the progress by looking at the deployment status:

kubectl get deployment ecsdemo-frontend

Now that we have a running service that is type: LoadBalancer, we need to find the ELB's address. We can do this by using the get services operation of kubectl:

kubectl get service ecsdemo-frontend -o wide


It will take several minutes for the ELB to become healthy and start passing traffic to the frontend pods.
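One convenient way to grab just the ELB hostname and test it from the terminal (an optional step that uses a standard kubectl jsonpath query):

export ELB=$(kubectl get service ecsdemo-frontend -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
curl -m5 -v http://$ELB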

If you want, you can scale the services:

kubectl scale deployment ecsdemo-nodejs --replicas=3
kubectl scale deployment ecsdemo-crystal --replicas=3

kubectl get deployments
kubectl scale deployment ecsdemo-frontend --replicas=3
kubectl get deployments


CLEANUP THE APPLICATIONS:

To delete the resources created by the applications, we should delete the application deployments:

Undeploy the applications:

cd ~/environment/ecsdemo-frontend
kubectl delete -f kubernetes/service.yaml
kubectl delete -f kubernetes/deployment.yaml

cd ~/environment/ecsdemo-crystal
kubectl delete -f kubernetes/service.yaml
kubectl delete -f kubernetes/deployment.yaml

cd ~/environment/ecsdemo-nodejs
kubectl delete -f kubernetes/service.yaml
kubectl delete -f kubernetes/deployment.yaml


Delete the cluster:

eksctl delete cluster --name=ekscluster-eksctl


Without the --wait flag, this will only issue a delete operation to the cluster’s CloudFormation stack and won’t wait for its deletion. The nodegroup will have to complete the deletion process before the EKS cluster can be deleted. The total process will take approximately 15 minutes, and can be monitored via the CloudFormation Console.
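If you would rather have the command block until everything is gone, you can pass the --wait flag mentioned above:

eksctl delete cluster --name=ekscluster-eksctl --wait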

ECS Demo deployment:

In the Cloud9 workspace, run the following commands.

Note: I'm going to use the copilot-cli method throughout the rest of the ECS demo.

The Copilot CLI is a tool for developers to build, release, and operate production-ready containerized applications on AWS App Runner, Amazon ECS, and AWS Fargate.
From getting started, pushing to staging, and releasing to production, Copilot can help manage the entire lifecycle of your application development.

Install and set up prerequisites:

# Install prerequisites 
sudo yum install -y jq

pip install --user --upgrade awscli

# Install copilot-cli
sudo curl -Lo /usr/local/bin/copilot https://github.com/aws/copilot-cli/releases/latest/download/copilot-linux && sudo chmod +x /usr/local/bin/copilot && copilot --help

# Setting environment variables required to communicate with AWS API's via the cli tools
echo "export AWS_DEFAULT_REGION=$(curl -s 169.254.169.254/latest/dynamic/instance-identity/document | jq -r .region)" >> ~/.bashrc
source ~/.bashrc

mkdir -p ~/.aws

cat << EOF > ~/.aws/config
[default]
region = ${AWS_DEFAULT_REGION}
output = json
role_arn = $(aws iam get-role --role-name ecsworkshop-admin | jq -r .Role.Arn)
credential_source = Ec2InstanceMetadata
EOF

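As a quick sanity check that the profile and role assumption work, you can re-run the identity call from earlier; assuming the ecsworkshop-admin role exists, the Arn in the output should now reference it:

aws sts get-caller-identity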

Clone the service repos to your workspace:

cd ~/environment
git clone https://github.com/aws-containers/ecsdemo-platform
git clone https://github.com/aws-containers/ecsdemo-frontend
git clone https://github.com/aws-containers/ecsdemo-nodejs
git clone https://github.com/aws-containers/ecsdemo-crystal

BUILD THE PLATFORM:

If breaking down a monolith into microservices is a good idea, then it stands to reason that keeping the code that manages your app platform small and simple also makes sense.

In this step, we manage the infrastructure with this repository, and then each service will be maintained in its own separate repository.

This repository will be used to build the base environment for the microservices to deploy to.

copilot-cli:

When we initialize our application, we will create our environment (which builds the platform resources). The platform resources that will be built and shared across our application are: VPC, Subnets, Security Groups, IAM Roles/Policies, ECS Cluster, Cloud Map Namespace (Service Discovery), Cloudwatch logs/metrics, and more!


In the Cloud9 workspace, ensure service linked roles exist for Load Balancers and ECS:

aws iam get-role --role-name "AWSServiceRoleForElasticLoadBalancing" || aws iam create-service-linked-role --aws-service-name "elasticloadbalancing.amazonaws.com"

aws iam get-role --role-name "AWSServiceRoleForECS" || aws iam create-service-linked-role --aws-service-name "ecs.amazonaws.com"


Deploy our application, service, and environment:

Navigate to the frontend service repo.

cd ~/environment/ecsdemo-frontend


To start, we will initialize our application, and create our first service. In the context of copilot-cli, the application is a group of related services, environments, and pipelines. Run the following command to get started:

copilot init


We will be prompted with a series of questions related to the application, and then our service. Answer the questions as follows:

  • Application name: ecsworkshop
  • Service Type: Load Balanced Web Service
  • What do you want to name this Load Balanced Web Service: ecsdemo-frontend
  • Dockerfile: ./Dockerfile

After you answer the questions, it will begin the process of creating some baseline resources for your application and service. This includes the manifest file for the frontend service, which defines the desired state of your service deployment. For more information on the Load Balanced Web Service manifest, see the copilot documentation.

Next, you will be prompted to deploy a test environment. An environment encompasses all of the resources that are required to support running your containers in ECS. This includes the networking stack (VPC, Subnets, Security Groups, etc), the ECS Cluster, Load Balancers (if required), service discovery namespace (via CloudMap), and more.

Before we proceed, we need to do a couple of things for our application to work as we expect. First, we need to define the backend services' URLs as environment variables, since this is how the frontend will communicate with them. These URLs are created when the backend services are deployed by copilot, which uses service discovery with AWS Cloud Map. The manifest file is where we can make changes to the deployment configuration for our service. Let's update the manifest file with the environment variables.

cat << EOF >> copilot/ecsdemo-frontend/manifest.yml
variables:
  CRYSTAL_URL: "http://ecsdemo-crystal.test.ecsworkshop.local:3000/crystal"
  NODEJS_URL: "http://ecsdemo-nodejs.test.ecsworkshop.local:3000"
EOF


Next, the application presents the git hash to show the version of the application that is deployed. All we need to do is run the below command to put the hash into a file for the application to read on startup.

git rev-parse --short=7 HEAD > code_hash.txt


We’re now ready to deploy our environment. Run the following command to get started:

copilot env init --name test --profile default --default-config


This part will take a few minutes because of all of the resources that are being created. This is not an action you run every time you deploy your service; it's a one-time step to get your environment up and running.

Next, we will deploy our service!

copilot svc deploy


At this point, copilot will build our Dockerfile and deploy the necessary resources for our service to run.

Ok, that’s it! By simply answering a few questions, we have our frontend service deployed to an environment!

Grab the load balancer url and paste it into your browser.

copilot svc show -n ecsdemo-frontend --json | jq -r .routes[].url

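If you prefer to check from the terminal first, you can also curl the same URL (an optional check, not part of the original flow):

curl -s "$(copilot svc show -n ecsdemo-frontend --json | jq -r '.routes[].url')" | head -n 20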

You should see the frontend service up and running. The app may look strange or like it's not working properly; this is because our service relies on being able to talk to services that it presently doesn't have access to. The app should be showing an architecture diagram with details of which Availability Zones the services are running in. We will address this later in the post.

Now that we have the frontend service deployed, how do we interact with our environment and service? Let's dive in and answer those questions.

Interacting with the application:

To interact with our application, run the following in the terminal:

copilot app

copilot app ls

copilot app show ecsworkshop

Interacting with the environment:

copilot env ls

copilot env show -n test

With this view, we’re able to see all of the services deployed to our application’s test environment. As we add more services, we will see this grow. A couple of neat things to point out here:

  • The tags associated with our environment. The default tags have the application name as well as the environment.
  • The details about the environment such as account id, region, and if the environment is considered production.

Interacting with the frontend service:

There is a lot of power in the copilot svc command; running it with --help shows that there is quite a bit we can do when interacting with our service.

Let’s look at a couple of the commands:

  • package: The copilot-cli uses CloudFormation to manage the state of the environment and services. If you want to get the CloudFormation template for the service deployment, you can simply run copilot svc package. This can be especially helpful if you decide to move to CloudFormation to manage your deployments on your own.
  • deploy: To put it simply, this will deploy your service. For local development, this enables one to locally push their service changes up to the desired environment. Of course when it comes time to deploy to production, a proper git workflow integrated with CI/CD would be the best path forward. We will deploy a pipeline later!
  • status: This command will give us a detailed view of the service. This includes health information, task information, as well as active task count with details.
  • logs: Lastly, this is an easy way to view your service logs from the command line.

Let's check the status of the frontend service:

copilot svc status -n ecsdemo-frontend

The output shows that we have one active running task, along with its details.

Scale our task count:

One thing we haven’t discussed yet is ways to manage/control our service configuration. This is done via the manifest file. The manifest is a declarative yaml template that defines the desired state of our service. It was created automatically when we ran through the setup wizard (running copilot init), and includes details such as docker image, port, load balancer requirements, environment variables/secrets, as well as resource allocation. It dynamically populates this file based off of the Dockerfile as well as opinionated, sane defaults.

Open the manifest file (./copilot/ecsdemo-frontend/manifest.yml) and change the value of the count key from 1 to 3. This declares that the desired state of our service should change from 1 task to 3. Feel free to explore the manifest file to familiarize yourself with it.

# Number of tasks that should be running in your service.
count: 3


Once you are done and save the changes, run the following:

copilot svc deploy


Copilot does the following with this command:

  • Build your image locally
  • Push to your service’s ECR repository
  • Convert your manifest file to CloudFormation
  • Package any additional infrastructure into CloudFormation
  • Deploy your updated service and resources to CloudFormation

To confirm the deploy, let’s first check our service details via the copilot-cli:

copilot svc status -n ecsdemo-frontend


You should now see three tasks running! Go back to the load balancer URL, and you should see the service showing different IP addresses depending on which frontend task responds to the request. Note that it's still not showing the full diagram; we're going to fix this shortly.

Review the service logs:

The services we deploy via copilot automatically ship logs to CloudWatch Logs by default. Rather than navigating and reviewing logs in the console, we can use the copilot cli to view those logs locally. Let's tail the logs for the frontend service.

copilot svc logs -a ecsworkshop -n ecsdemo-frontend --follow


Note that if you are in the directory of the service whose logs you want to review, you can simply type the command below. Of course, if you want to review logs for a service in a particular environment, you can pass the -e flag with the environment name.

copilot svc logs


Create a CI/CD Pipeline:

In this section, we’ll go from a local development workflow, to a fully automated CI/CD pipeline and git workflow. We will be using AWS CodeCommit to host our git repository, and the copilot cli to do the rest.

Prepare and set up the repository in AWS CodeCommit:

First, we are going to create a repository in AWS CodeCommit. Note that AWS Copilot supports GitHub and Bitbucket as well.

repo_https_url=$(aws codecommit create-repository \
  --repository-name "ecsdemo-frontend" \
  --repository-description "ECSWorkshop frontend application" | jq -r '.repositoryMetadata.cloneUrlHttp')


Next, let’s add the new repository as a remote, and push the current codebase up to our new repo.

# Configure git client to use the aws cli credential helper
git config --global credential.helper '!aws codecommit credential-helper $@'
git config --global credential.UseHttpPath true

# Add the new repo as a remote
git remote add cc $repo_https_url

# Push the changes
git push cc HEAD


Creating the pipeline

Generally, when we create CI/CD pipelines, there is quite a bit of work that goes into architecting and creating them. Copilot does all of the heavy lifting, leaving you to just answer a couple of questions in the cli, and that's it. Let's see it in action!

Run the following to begin the process of creating the pipeline:

copilot pipeline init


Once again, you will be prompted with a series of questions. Answer the questions with the following answers:

Which environment would you like to add to your pipeline? Answer: Choose “test”
Which repository would you like to use for your pipeline? Answer: Choose the repo url that matches the code commit repository. It will contain git-codecommit.

The core pipeline files will be created in the ./copilot directory.

Commit and push the new files to your repo. To get more information about the files that were created, check out the copilot-cli documentation. In short, we are pushing the deployment manifest (for copilot to use as reference to deploy the service), pipeline.yml (which defines the stages in the pipeline), and the buildspec.yml (contains the build instructions for the service).
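As a rough idea of what copilot generates, the pipeline manifest looks something like the sketch below; the exact pipeline name, branch, and repository URL will match your own setup:

# copilot/pipeline.yml (sketch)
name: pipeline-ecsworkshop-ecsdemo-frontend
version: 1
source:
  provider: CodeCommit
  properties:
    branch: main
    repository: https://git-codecommit.us-east-1.amazonaws.com/v1/repos/ecsdemo-frontend
stages:
  - name: test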

git add copilot
git commit -m "Adding copilot pipeline configuration"
git push cc


Now that our repo has the pipeline configuration, let’s build/deploy the pipeline:

copilot pipeline update


Our pipeline is now deployed (yes, it’s that simple). Let’s interact with it!

Now there are two ways that I can review the status of the pipeline.

Console: Navigate here: https://${YOUR_REGION}.console.aws.amazon.com/codesuite/codepipeline/pipelines, click your pipeline, and you can see the stages.

Command line: copilot pipeline status.

Whether you’re in the console or checking from the cli, you will see the pipeline is actively running. You can watch as the pipeline executes, and when it is complete, all updates in the Status column will show “Succeeded”.

Let’s make an update to the frontend application, and push it to git. The expected result will be the changes automatically get deployed via the pipeline.

Open the following file in your favorite text editor: app/views/application/index.html.erb. Modify line 9 by prepending [Pipeline Deployed!] to it. It should look like the line below:

[Pipeline Deployed!] Rails frontend: Hello! from <%= @az %> running <%= @code_hash %>


Now let’s commit the file and push it to our repository.

git rev-parse --short=7 HEAD > code_hash.txt
git add app/views/application/index.html.erb code_hash.txt
git commit -m "Updating code hash and updating frontend"
git push cc


At this point we have handed off the deployment to the CI/CD pipeline. In a real-world scenario, good practice would be to have code review as well as tests integrated into the pull request and pipeline. It will take a few minutes for the changes to be deployed, so we can follow along in one of two ways.

  1. Console: https://${AWS_REGION}.console.aws.amazon.com/codesuite/codepipeline/pipelines, click your pipeline, and you can see the stages.

  2. Command line: copilot pipeline status.

When the pipeline is complete, navigate to the load balancer URL. You should now see the application showing "[Pipeline Deployed!]" in front of the Rails frontend text.

The next step would be to deploy the Node.js and Crystal backends in the same way; if you want, you can set up CI/CD for them as well. The main difference is your answer to the service type question:

Which service type best represents your service's architecture? Answer: Choose "Backend Service", and hit enter.
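A rough outline of the same flow for the Node.js backend (the Crystal service is identical apart from the name); the answers shown in the comment are assumptions based on the frontend steps above:

cd ~/environment/ecsdemo-nodejs
# copilot init prompts: application ecsworkshop, type "Backend Service",
# service name ecsdemo-nodejs, Dockerfile ./Dockerfile
copilot init
copilot svc deploy --name ecsdemo-nodejs --env test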

CLEAN UP COMPUTE RESOURCES.

To clean up, delete the pipeline and then the application. Answer yes when prompted, and make sure the application you delete is called "ecsworkshop":

cd ~/environment/ecsdemo-frontend/
copilot pipeline delete

copilot app delete 

