<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: ProsperAgada</title>
    <description>The latest articles on DEV Community by ProsperAgada (@prosperagada).</description>
    <link>https://dev.to/prosperagada</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1093301%2Faa29fa32-5c0c-47e2-b6da-5c48e549a738.png</url>
      <title>DEV Community: ProsperAgada</title>
      <link>https://dev.to/prosperagada</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/prosperagada"/>
    <language>en</language>
    <item>
      <title>Containers made easy on AWS</title>
      <dc:creator>ProsperAgada</dc:creator>
      <pubDate>Thu, 31 Oct 2024 11:22:37 +0000</pubDate>
      <link>https://dev.to/prosperagada/containers-made-easy-on-aws-385c</link>
      <guid>https://dev.to/prosperagada/containers-made-easy-on-aws-385c</guid>
      <description>&lt;p&gt;Hi everyone &lt;br&gt;
And welcome back to our blog article. &lt;br&gt;
And today we are gonna be talking about container services on AWS.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What will be covered in this short article&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;What containers are&lt;/li&gt;
&lt;li&gt;Compute options on AWS&lt;/li&gt;
&lt;li&gt;What container orchestrators are&lt;/li&gt;
&lt;li&gt;How ECS works&lt;/li&gt;
&lt;li&gt;How EKS works&lt;/li&gt;
&lt;li&gt;And finally, AWS ECR&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Link to the video on YouTube: &lt;a href="https://youtu.be/PUZfZaA21fI" rel="noopener noreferrer"&gt;https://youtu.be/PUZfZaA21fI&lt;/a&gt;&lt;br&gt;
Connect with me on LinkedIn: &lt;a href="http://www.linkedin.com/in/prosper-agada-3a4016241" rel="noopener noreferrer"&gt;www.linkedin.com/in/prosper-agada-3a4016241&lt;/a&gt;&lt;br&gt;
Don't forget to drop your reactions and feedback.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftdcui23jo7tca772zhry.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftdcui23jo7tca772zhry.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;br&gt;
Containers are like tiny boxes for packaging an application together with everything it needs to run, including its dependencies and libraries.&lt;br&gt;
This makes the application lightweight and easier to deploy on platforms like AWS, Google Cloud, or Azure.&lt;br&gt;&lt;br&gt;
Several tools can be used to containerize your application; &lt;br&gt;
Docker is the most popular on the market.&lt;/p&gt;
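
&lt;p&gt;As a quick illustration, a container image is usually described with a Dockerfile. Here is a minimal sketch for a small Node.js service; the base image and file names are only examples, not part of this article's project:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Example only: base image and file names are illustrative
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;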

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkohm3ssp0si1wruf6n50.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkohm3ssp0si1wruf6n50.png" alt="Image description" width="800" height="455"&gt;&lt;/a&gt;&lt;br&gt;
AWS has two computing infrastructure options for deploying containers.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;EC2&lt;/li&gt;
&lt;li&gt;Fargate&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;EC2&lt;/strong&gt; is a self-managed compute service. &lt;br&gt;
This means you take on some responsibility for managing the infrastructure yourself, which can include installing the container runtime and deploying your application containers on it. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fargate&lt;/strong&gt;, on the other hand, is a fully managed compute service on AWS: AWS is responsible for managing the infrastructure for you. &lt;br&gt;
All you have to do is deploy your application. This is called serverless computing.&lt;/p&gt;

&lt;p&gt;Depending on your workload and demands, you can choose between these two options.&lt;br&gt;
If you want to handle installation and management yourself, then EC2 is the better option for running your workload.&lt;br&gt;&lt;br&gt;
But if you want to abstract away infrastructure management and focus more on development, then Fargate would be the better option for your workload.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fphey1a9wh50ntytbwjej.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fphey1a9wh50ntytbwjej.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;br&gt;
When your applications grow, the number of containers grows as well; some microservice applications can range from hundreds to hundreds of thousands of containers. Managing this yourself can become demanding.&lt;br&gt;
Some problems that may arise include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How do you know when a container is down?&lt;/li&gt;
&lt;li&gt;How do you schedule another task on the cluster?&lt;/li&gt;
&lt;li&gt;How do you load balance?&lt;/li&gt;
&lt;li&gt;How do you autoscale?&lt;/li&gt;
&lt;li&gt;How do you allocate the right resources in your cluster?
And so on.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Management can become hard and messy; that is why we need a container orchestrator.&lt;/p&gt;

&lt;p&gt;Kubernetes is widely known as the most popular container orchestration tool on the market. &lt;br&gt;
We also have other tools such as: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Docker Swarm&lt;/li&gt;
&lt;li&gt;Nomad&lt;/li&gt;
&lt;li&gt;Apache Mesos&lt;/li&gt;
&lt;li&gt;and AWS ECS&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg1hwcjof7gf1vvscmlmv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg1hwcjof7gf1vvscmlmv.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;br&gt;
ECS stands for Elastic Container Service.&lt;br&gt;
It is AWS's own fully managed container orchestration service.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How does it work?&lt;/strong&gt;&lt;br&gt;
ECS comes with a control plane, with the control tools installed, and a dashboard.&lt;br&gt;
With it, you can control a fleet of instances, called worker nodes.&lt;br&gt;
Each worker has the container runtime installed alongside other worker processes.&lt;br&gt;
The worker nodes can be EC2 instances, which you have access to and manage yourself, or Fargate, which is serverless and managed by AWS.&lt;br&gt;
You can also choose a blend of both.&lt;/p&gt;
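
&lt;p&gt;To make this concrete, ECS describes what to run in a task definition. Here is a minimal sketch of one targeting Fargate; the family name, image address, and CPU/memory sizes are illustrative placeholders, not values from this article:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "family": "my-web-app",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "ACCOUNT_ID.dkr.ecr.REGION.amazonaws.com/my-web-app:latest",
      "portMappings": [{ "containerPort": 8080, "protocol": "tcp" }],
      "essential": true
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;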

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff1eu0f3rgtc4zam28u0s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff1eu0f3rgtc4zam28u0s.png" alt="Image description" width="800" height="438"&gt;&lt;/a&gt;&lt;br&gt;
EKS stands for Elastic Kubernetes Service.&lt;br&gt;
If you love using Kubernetes, EKS is the best option for you: EKS is AWS's managed Kubernetes service.&lt;br&gt;
It is used for container orchestration.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How does EKS work?&lt;/strong&gt;&lt;br&gt;
It comes with a control plane with all the control tools pre-installed, and with that &lt;br&gt;
you can create a cluster of instances&lt;br&gt;&lt;br&gt;
and deploy your application using kubectl. &lt;br&gt;
The instances can be EC2, which is self-managed, or Fargate, which is serverless. &lt;/p&gt;
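
&lt;p&gt;On EKS you typically describe your application in a Kubernetes manifest and apply it with kubectl. A minimal sketch of a Deployment, where the names and image address are illustrative placeholders:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-web-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-web-app
  template:
    metadata:
      labels:
        app: my-web-app
    spec:
      containers:
        - name: web
          image: ACCOUNT_ID.dkr.ecr.REGION.amazonaws.com/my-web-app:latest
          ports:
            - containerPort: 8080
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;You would then deploy it with &lt;code&gt;kubectl apply -f deployment.yaml&lt;/code&gt;.&lt;/p&gt;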

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzppu6fhu2qh2mzfwu438.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzppu6fhu2qh2mzfwu438.png" alt="Image description" width="800" height="438"&gt;&lt;/a&gt;                                                                                      ECR means elastic container registry&lt;br&gt;
this is AWS managed private container registry..  similar to what we have on the docker hub.&lt;br&gt;
You can push images to ECR and pull them into your ECS or EKS clusters seamlessly.&lt;br&gt;
&lt;strong&gt;One of the advantages includes..&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Seamless integration:
since all of these services live in the AWS ecosystem, ECR integrates seamlessly with other services such as:&lt;/li&gt;
&lt;li&gt;VPC&lt;/li&gt;
&lt;li&gt;Elastic Load Balancer&lt;/li&gt;
&lt;li&gt;IAM
and so on.&lt;/li&gt;
&lt;/ul&gt;
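
&lt;p&gt;A typical ECR workflow looks like the following sketch; ACCOUNT_ID, REGION, and the repository name are placeholders for your own values:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Authenticate Docker against your private ECR registry
aws ecr get-login-password --region REGION | docker login --username AWS --password-stdin ACCOUNT_ID.dkr.ecr.REGION.amazonaws.com

# Tag the local image with the registry address, then push it
docker tag my-web-app:latest ACCOUNT_ID.dkr.ecr.REGION.amazonaws.com/my-web-app:latest
docker push ACCOUNT_ID.dkr.ecr.REGION.amazonaws.com/my-web-app:latest
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;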

&lt;p&gt;&lt;strong&gt;Some key takeaways..&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A container is like an envelope for packaging our application with everything it needs to run, including its dependencies and libraries.&lt;/li&gt;
&lt;li&gt;EC2 is a self-managed compute option offered by AWS.&lt;/li&gt;
&lt;li&gt;Fargate is a serverless compute option offered by AWS.&lt;/li&gt;
&lt;li&gt;Container orchestrators handle the management
of multiple containers: load balancing, scheduling tasks, 
resource allocation, autoscaling, and so on.&lt;/li&gt;
&lt;li&gt;ECS is AWS's own managed container orchestration service.&lt;/li&gt;
&lt;li&gt;EKS is AWS's managed Kubernetes service.&lt;/li&gt;
&lt;li&gt;Finally, ECR is a fully managed container registry.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Ready to Take Your DevOps Skills to the Next Level? 🚀&lt;/strong&gt;&lt;br&gt;
If you're excited about mastering AWS container services and beyond, join our 6-month DevOps Bootcamp! Perfect for beginners and pros alike, this bootcamp covers everything from fundamental DevOps principles to hands-on experience with CI/CD, AWS, Docker, Kubernetes, automation, monitoring, and more.&lt;br&gt;
👉 Sign up now and start your journey towards becoming a certified DevOps professional!&lt;br&gt;
&lt;a href="mailto:prosperagada01@gmail.com"&gt;prosperagada01@gmail.com&lt;/a&gt;&lt;br&gt;
agasglobaltech.com&lt;/p&gt;

&lt;p&gt;Thank you, and see you next time.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>aws</category>
      <category>docker</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Building Scalable Java Applications on AWS ECS with Fargate - A Step-by-Step Guide</title>
      <dc:creator>ProsperAgada</dc:creator>
      <pubDate>Mon, 30 Sep 2024 22:21:41 +0000</pubDate>
      <link>https://dev.to/prosperagada/building-scalable-java-applications-on-aws-ecs-with-fargate-a-step-by-step-guide-4e53</link>
      <guid>https://dev.to/prosperagada/building-scalable-java-applications-on-aws-ecs-with-fargate-a-step-by-step-guide-4e53</guid>
      <description>&lt;p&gt;As a passionate advocate for AWS cloud technologies, I constantly explore ways to optimize and automate the deployment of applications. In this article, I will share a recent project where I deployed a containerized Java application on Amazon Elastic Container Service (ECS) using Fargate, Elastic Container Registry (ECR), and an Application Load Balancer (ALB). This hands-on experience is a reflection of my drive to share best practices and empower others in the community to leverage AWS services effectively.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Project?
&lt;/h2&gt;

&lt;p&gt;My goal is to contribute to the community by sharing practical, real-world examples of how AWS services can simplify application deployment. In this project, I demonstrate how to deploy a Java application with PostgreSQL using ECS and Fargate, a serverless compute engine that eliminates the need to manage servers. This approach reflects AWS's philosophy of automation and scalability, making it accessible for developers and businesses alike.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Project Architecture
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpoyw05oyg1fnz6j6u3qh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpoyw05oyg1fnz6j6u3qh.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
The key services used in this project are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Amazon ECS (Elastic Container Service): Simplifies the orchestration of containerized applications.&lt;/li&gt;
&lt;li&gt;Amazon ECR (Elastic Container Registry): Secures and manages Docker images.&lt;/li&gt;
&lt;li&gt;AWS Fargate: Provides a serverless environment to run containers without managing infrastructure.&lt;/li&gt;
&lt;li&gt;Application Load Balancer (ALB): Distributes incoming traffic across multiple containers for high availability.&lt;/li&gt;
&lt;li&gt;Security Groups: Enforces security and access control at various layers.
This combination of services showcases how AWS cloud-native solutions allow for seamless, scalable, and secure deployments.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step 1:&lt;/strong&gt; Cloning the Repository and Building the Java Application&lt;br&gt;
To begin, I cloned the repository containing the Java and PostgreSQL application from GitHub:&lt;/p&gt;

&lt;p&gt;
&lt;code&gt;git clone https://github.com/ProsperAgada/java-springboot-app.git&lt;/code&gt;&lt;br&gt;
Next, I used Maven to build the application into an artifact (.jar file), which would later be containerized:&lt;/p&gt;

&lt;p&gt;
&lt;code&gt;mvn clean package&lt;/code&gt;&lt;br&gt;
This process generates the Java application's .jar file, which is the core component that will be deployed using AWS services.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2:&lt;/strong&gt; Containerizing the Application with Docker&lt;br&gt;
To prepare the application for deployment, I created a Docker image using the following Dockerfile:&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM openjdk:11
WORKDIR /app
COPY target/*.jar app.jar
EXPOSE 8080
CMD ["java", "-jar", "app.jar"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Building the Docker image ensures that the application can run consistently across different environments. Here’s how I built the image:&lt;/p&gt;

&lt;p&gt;
&lt;code&gt;docker build -t simple-java-app .&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3:&lt;/strong&gt; Pushing the Docker Image to Amazon ECR&lt;br&gt;
With the Docker image ready, the next step was to push it to Amazon Elastic Container Registry (ECR):&lt;br&gt;
&lt;strong&gt;1. Create ECR repository&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8bpxgvb2ctxd6l9x0kr2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8bpxgvb2ctxd6l9x0kr2.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;2. Login to ECR:  &lt;/strong&gt;&lt;br&gt;
&lt;code&gt;aws ecr get-login-password --region &amp;lt;region&amp;gt; | docker login --username AWS --password-stdin &amp;lt;aws_account_id&amp;gt;.dkr.ecr.&amp;lt;region&amp;gt;.amazonaws.com&lt;/code&gt;&lt;br&gt;
   &lt;br&gt;
&lt;strong&gt;3. Tag and Push the Docker Image:&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;docker tag simple-java-app:latest &amp;lt;aws_account_id&amp;gt;.dkr.ecr.&amp;lt;region&amp;gt;.amazonaws.com/simple-java-app&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker push &amp;lt;aws_account_id&amp;gt;.dkr.ecr.&amp;lt;region&amp;gt;.amazonaws.com/simple-java-app&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4:&lt;/strong&gt; Deploying the Application on ECS with Fargate&lt;br&gt;
AWS ECS and Fargate make it easy to manage and scale containerized applications without worrying about infrastructure. To deploy the Java application, I followed these steps:&lt;br&gt;
&lt;strong&gt;1. Create a Fargate Cluster:&lt;/strong&gt; Using the ECS console, I created a Fargate cluster that supports our containers.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flpakd0ybi0q2q6i5w9yr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flpakd0ybi0q2q6i5w9yr.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Select Fargate (serverless) for the infrastructure&lt;/li&gt;
&lt;li&gt;Click Create&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;2. Define a Task:&lt;/strong&gt; The task defines how the containers run, specifying the image pulled from ECR and necessary configurations like port 8080.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqwec2menmxvd6if1a13w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqwec2menmxvd6if1a13w.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;select the &lt;strong&gt;ecsTaskExecutionRole&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;configure container 1&lt;/strong&gt;
&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbpxjukq1bafzm1e7sdtj.png" alt="Image description"&gt;
Container name: postgres
Container image: postgres
Container port: 5432&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Add environment variables for the Postgres database:&lt;br&gt;
POSTGRES_PASSWORD&lt;br&gt;
POSTGRES_USER = postgres&lt;br&gt;
POSTGRES_DB&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;configure container 2&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv7qhzq2xjauw5iwjme1o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv7qhzq2xjauw5iwjme1o.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;container port: 8080&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;protocol: http  &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Add environment variables for the Java API:&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fham7v52ib2ae0tvi3g5u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fham7v52ib2ae0tvi3g5u.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
DB_HOST = localhost&lt;br&gt;
DB_PORT = 5432&lt;br&gt;
POSTGRES_PASSWORD = &lt;br&gt;
POSTGRES_USER = postgres&lt;br&gt;
POSTGRES_DB  &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click create&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
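
&lt;p&gt;For reference, a Spring Boot application would typically consume these variables through its datasource configuration. Here is a sketch of an application.properties that assumes the standard Spring datasource keys (check your own project's configuration for the exact property names):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;spring.datasource.url=jdbc:postgresql://${DB_HOST}:${DB_PORT}/${POSTGRES_DB}
spring.datasource.username=${POSTGRES_USER}
spring.datasource.password=${POSTGRES_PASSWORD}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;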

&lt;p&gt;&lt;strong&gt;3. Set Up Security Groups:&lt;/strong&gt; I configured security groups to allow traffic on HTTP (port 80), ensuring that the application is publicly accessible but remains secure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Create a New Service&lt;/strong&gt;&lt;br&gt;
In the ECS dashboard, click on your cluster and navigate to the Services tab. Here, you will create the service.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Configure Basic Service Settings&lt;/strong&gt;
&lt;strong&gt;Launch Type:&lt;/strong&gt; Choose Fargate as the launch type since we’re using serverless container management.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Task Definition:&lt;/strong&gt; Select the task definition that you created earlier. This task definition will pull your Docker image from ECR.&lt;br&gt;
Service Name: Enter a name for your service (e.g., java-app-service).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Number of Tasks:&lt;/strong&gt; Specify the desired number of tasks (replicas). For instance, if you want two instances of your container running, enter 2.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Select the capacity provider strategy&lt;/li&gt;
&lt;li&gt;For the application type, select Service&lt;/li&gt;
&lt;li&gt;Name your service
&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl233n5qdmpihr52nf2rk.png" alt="Image description"&gt;
Click on Create to start the service creation wizard.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Under the Networking dropdown:&lt;br&gt;
&lt;strong&gt;Configure VPC and Subnets&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cluster VPC:&lt;/strong&gt; Select the VPC (Virtual Private Cloud) in which your ECS cluster is running. If you're using the default VPC, select it here.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Subnets:&lt;/strong&gt; Choose the subnets where your tasks will be deployed. These subnets need to have internet access if you're deploying a web application.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security Groups:&lt;/strong&gt; Assign the security groups that control access to your service. You can create a new security group or use an existing one. Ensure port 80 is open for HTTP access if you want your application to be publicly accessible.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwkan0b51g3hc0o9ou6j4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwkan0b51g3hc0o9ou6j4.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5:&lt;/strong&gt; Using an Application Load Balancer for High Availability&lt;br&gt;
To ensure high availability, I configured an Application Load Balancer (ALB):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ALB distributes incoming traffic across the running containers in the ECS cluster.&lt;/li&gt;
&lt;li&gt;This setup helps the application handle more requests and improves its fault tolerance.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step 6:&lt;/strong&gt; Testing and Verifying the Deployment&lt;br&gt;
Once everything was set up, I deployed the ECS service, and the application was accessible through the ALB’s public DNS. The load balancer successfully routed traffic between the running containers, ensuring the Java app was reachable by users through a single endpoint.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Deploying a Node.js App on ECS and Fargate 3Tier architecture</title>
      <dc:creator>ProsperAgada</dc:creator>
      <pubDate>Sun, 28 Jan 2024 13:37:29 +0000</pubDate>
      <link>https://dev.to/prosperagada/deploying-a-nodejs-app-on-ecs-and-fargate-3tier-architecture-5721</link>
      <guid>https://dev.to/prosperagada/deploying-a-nodejs-app-on-ecs-and-fargate-3tier-architecture-5721</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa9zdeoymts4j349t2un7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa9zdeoymts4j349t2un7.png" alt="Image description"&gt;&lt;/a&gt;After completing an exciting workshop (deploying a 3tier application ) frontend written in react, backend written in node.js and RDS MySQL instance as database.. So I decided to take a step further to make it more exciting and challenging by containerizing the frontend and backend with docker and running it on ECS cluster and Fargate.&lt;br&gt;
In this article i will be demonstrating how i deployed the application on ECS 3tier architecture with Fargate.&lt;br&gt;
If you are new to docker and containerization i will like to refer you to this vivid video by nana janashia.&lt;/p&gt;

&lt;p&gt;These are the steps to follow:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Push the code to GitHub&lt;/li&gt;
&lt;li&gt;Set up Cloud9 or the AWS CLI&lt;/li&gt;
&lt;li&gt;Set up a security group&lt;/li&gt;
&lt;li&gt;Set up the MySQL database&lt;/li&gt;
&lt;li&gt;Set up ECR&lt;/li&gt;
&lt;li&gt;Set up ECS&lt;/li&gt;
&lt;/ol&gt;
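
&lt;p&gt;Step 1 (pushing the code to GitHub) typically looks like the following sketch; the repository URL is a placeholder for your own:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git init
git add .
git commit -m "Initial commit"
git remote add origin https://github.com/YOUR_USER/YOUR_REPO.git
git push -u origin main
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;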

&lt;p&gt;Step 1:&lt;br&gt;
Push the code to GitHub&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdd9v99w8jx6xf59cl6rc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdd9v99w8jx6xf59cl6rc.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
Step 2:&lt;br&gt;
Log in to the AWS Management Console and set up an AWS Cloud9 environment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsxyiz6zqw7z3eq8dfhyl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsxyiz6zqw7z3eq8dfhyl.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsmumj4wpzen58aae937i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsmumj4wpzen58aae937i.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9sq4ya2dwex45d8yzq5l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9sq4ya2dwex45d8yzq5l.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F660azvus1584z8n57eec.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F660azvus1584z8n57eec.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fly6d7ch9ituljmi5k7ic.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fly6d7ch9ituljmi5k7ic.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl1iantpfudsesc5h3qtt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl1iantpfudsesc5h3qtt.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkvboej2795p8msz61leo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkvboej2795p8msz61leo.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqqevn6zk1ljkweawn81t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqqevn6zk1ljkweawn81t.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8zk09gvhwqwa04h160eg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8zk09gvhwqwa04h160eg.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foxos3q40xo4go523hoxs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foxos3q40xo4go523hoxs.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxibkyo7iei3hvkfaoetj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxibkyo7iei3hvkfaoetj.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd2g8wsgb39pqj0tx25vs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd2g8wsgb39pqj0tx25vs.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fprlnikjiqq60cy3mikj7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fprlnikjiqq60cy3mikj7.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Understanding AWS SAM (Serverless Application Model)</title>
      <dc:creator>ProsperAgada</dc:creator>
      <pubDate>Mon, 10 Jul 2023 23:37:26 +0000</pubDate>
      <link>https://dev.to/prosperagada/understanding-aws-samserverless-application-model-ljo</link>
      <guid>https://dev.to/prosperagada/understanding-aws-samserverless-application-model-ljo</guid>
      <description>&lt;p&gt;The AWS Serverless Application Model (SAM) is an open-source framework designed to facilitate the development of serverless applications. It offers a concise syntax for expressing functions, APIs, databases, and event source mappings. By using just a few lines of code for each resource, you can define your desired application and represent it using YAML. During the deployment process, SAM transforms and expands the SAM syntax into AWS CloudFormation syntax, enabling you to build serverless applications more efficiently.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kYJ1ZfxT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y5atraqvhi7f387szf3j.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kYJ1ZfxT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y5atraqvhi7f387szf3j.jpg" alt="Image description" width="707" height="253"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To begin building SAM-based applications, you can utilize the AWS SAM CLI. This CLI provides an environment similar to Lambda, allowing you to locally build, test, and debug applications defined using SAM templates or the AWS Cloud Development Kit (CDK). Additionally, the SAM CLI offers the capability to deploy your applications to AWS. It also supports the creation of secure continuous integration and deployment (CI/CD) pipelines that adhere to best practices and integrate seamlessly with both AWS' native tools and third-party CI/CD systems.&lt;/p&gt;
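&lt;p&gt;To make this concrete, here is a minimal sketch of a SAM template that defines a single Lambda function behind an API endpoint. The resource name, handler, runtime, and path are illustrative placeholders, not values from a specific project:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  HelloFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler        # module.function in your code package
      Runtime: python3.12
      Events:
        HelloApi:
          Type: Api               # SAM expands this into an API Gateway route
          Properties:
            Path: /hello
            Method: get
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;During deployment, SAM expands this short definition into the full set of CloudFormation resources (function, API, roles, and permissions).&lt;/p&gt;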

&lt;p&gt;&lt;strong&gt;Benefits of SAM:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Single Deployment Configuration&lt;/strong&gt;&lt;br&gt;
Use SAM to organize related components, share configuration such as memory and timeouts between resources, and deploy all related resources together as a single, versioned entity.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Built on AWS CloudFormation&lt;/strong&gt;&lt;br&gt;
AWS SAM is an extension of AWS CloudFormation, so you get the reliable deployment capabilities of CloudFormation. You can also define resources using CloudFormation in your SAM template and use the full suite of resources, intrinsic functions, and other template features that are available in AWS CloudFormation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Local Testing and Debugging&lt;/strong&gt;&lt;br&gt;
Use the SAM CLI to step through and debug your code. It provides a Lambda-like execution environment locally and helps you catch issues upfront.&lt;/p&gt;
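&lt;p&gt;As a sketch of that local workflow, assuming the SAM CLI is installed and &lt;code&gt;HelloFunction&lt;/code&gt; is the logical ID of a function in your template:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sam build                       # build/package the functions in template.yaml
sam local invoke HelloFunction  # run one function in a Lambda-like container
sam local start-api             # serve the API locally for testing
sam deploy --guided             # package and deploy via CloudFormation
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;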

&lt;p&gt;&lt;strong&gt;Built-In Best Practices&lt;/strong&gt;&lt;br&gt;
Deploy your infrastructure as config to leverage best practices such as code reviews. Enable gradual deployments through AWS CodeDeploy and tracing using AWS X-Ray with just a few lines of SAM config.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Integration with Development Tools&lt;/strong&gt;&lt;br&gt;
SAM integrates with a suite of AWS serverless tools. Find new applications in the AWS Serverless Application Repository, use AWS Cloud9 IDE to author, test, and debug SAM-based serverless applications, and AWS CodeBuild, AWS CodeDeploy, and AWS CodePipeline to build a deployment pipeline. To start with a project structure, code repository, and CI/CD pipeline configured for you, try AWS CodeStar.&lt;/p&gt;

&lt;p&gt;If you found this article helpful, don't forget to drop your reactions and feedback.&lt;br&gt;
Thanks.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>serverless</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Deploying a Static Website using CodePipeline, GitHub, and CloudFront</title>
      <dc:creator>ProsperAgada</dc:creator>
      <pubDate>Thu, 29 Jun 2023 06:55:49 +0000</pubDate>
      <link>https://dev.to/prosperagada/creating-a-static-website-using-codepipelinegithub-and-cloudfront-46ep</link>
      <guid>https://dev.to/prosperagada/creating-a-static-website-using-codepipelinegithub-and-cloudfront-46ep</guid>
      <description>&lt;p&gt;&lt;strong&gt;Overview:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For this project, we want to create a website that doesn't change much and store it on a service called S3. Instead of manually updating the website whenever we make changes, we will use a tool called CodePipeline to make it automatic. CodePipeline will watch our code stored on GitHub, where we keep our website's main file called index.html. Whenever we make changes to that file, CodePipeline will automatically update the website for us.&lt;/p&gt;

&lt;p&gt;To make the website faster and more secure, we will use another tool called CloudFront. It acts like a special server that stores copies of our website in different locations. When someone tries to access our website using the insecure HTTP protocol, CloudFront will quickly redirect them to the secure HTTPS protocol, which is safer. This way, our website will be both faster and more secure for visitors.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Project Requirements:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Your team has asked you to create an easier way to deploy a website without manual work. Currently, the developers have to manually test each new update they make to the code. Your job is to make it simpler by giving the developers a website address they can use to see their changes right away. You also need to make a small change in the code stored on GitHub to check if everything is working correctly. This will save time and make it easier for the developers to update the website.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Steps&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a new repository in GitHub and load the static website content.&lt;/li&gt;
&lt;li&gt;Create and configure a S3 bucket to host your static website.&lt;/li&gt;
&lt;li&gt;Create a CloudFront distribution and restrict access to S3 bucket content with OAI&lt;/li&gt;
&lt;li&gt;Create a CI/CD pipeline using the &lt;em&gt;AWS CodePipeline&lt;/em&gt; service.&lt;/li&gt;
&lt;li&gt;Set your repo as the Source stage of the pipeline, so that the pipeline is triggered whenever an update is made to the GitHub repo.&lt;/li&gt;
&lt;li&gt;For the Deploy stage, select your S3 bucket.&lt;/li&gt;
&lt;li&gt;Deploy the pipeline and verify that you can reach the static website.&lt;/li&gt;
&lt;li&gt;Make an update to the code in your GitHub repo to verify that the pipeline is triggered. This can be as simple as a change to the README file, because any change to the files should trigger the workflow.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Step 1:&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Create a new repository in GitHub and load the content of the static website.&lt;/strong&gt; &lt;br&gt;
&lt;a href="https://github.com/ProsperAgada/deploy-static-website-using-AWS-Code-Pipeline-S3-and-GitHub-new.git"&gt;https://github.com/ProsperAgada/deploy-static-website-using-AWS-Code-Pipeline-S3-and-GitHub-new.git&lt;/a&gt; you can clone this repo.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--16T4YCl8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4xzs36ukuj3mxg6mmp4g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--16T4YCl8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4xzs36ukuj3mxg6mmp4g.png" alt="Image description" width="800" height="319"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Step 2:&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Create S3 Bucket&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;a. Navigate to S3 -&amp;gt; &lt;strong&gt;Create Bucket.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;b. &lt;strong&gt;Uncheck&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Block all Public Access&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;and acknowledge the warning -&amp;gt; &lt;strong&gt;Create&lt;/strong&gt;&lt;br&gt;
c. Upload the static website content to the bucket you created.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Du3OukV9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bwlc0hdg6y0iwy4aiayl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Du3OukV9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bwlc0hdg6y0iwy4aiayl.png" alt="Image description" width="800" height="357"&gt;&lt;/a&gt;&lt;br&gt;
d. Navigate to your bucket -&amp;gt; Properties -&amp;gt; Edit Static website hosting&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--W4qYRmBo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lxwpumj437ewfld218bq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--W4qYRmBo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lxwpumj437ewfld218bq.png" alt="Image description" width="800" height="352"&gt;&lt;/a&gt;&lt;br&gt;
e. Enable static website hosting and set your index document.&lt;/p&gt;
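&lt;p&gt;If you prefer the command line, the same bucket setup can be sketched with the AWS CLI. The bucket name and local folder below are placeholders:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws s3 mb s3://my-static-site-bucket              # create the bucket
aws s3 sync ./site s3://my-static-site-bucket     # upload the website files
aws s3 website s3://my-static-site-bucket --index-document index.html
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;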

&lt;p&gt;&lt;strong&gt;Step 3:&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Create a CloudFront distribution and restrict access to S3 bucket content with OAI&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;a. Navigate to CloudFront -&amp;gt; &lt;strong&gt;Create&lt;/strong&gt;&lt;br&gt;
b. Add the origin details for the distribution: select your S3 bucket as the origin.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LwskeWsi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bvmhnkekb23qc6p9g82p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LwskeWsi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bvmhnkekb23qc6p9g82p.png" alt="Image description" width="800" height="358"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Fi1Ksbbf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/47fp1fpkq7hi0ggsz9yh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Fi1Ksbbf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/47fp1fpkq7hi0ggsz9yh.png" alt="Image description" width="800" height="354"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--p9oJgcVI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ctq2o6a4ussmo6ufvmuo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--p9oJgcVI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ctq2o6a4ussmo6ufvmuo.png" alt="Image description" width="800" height="354"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--P9reUf57--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qzb7lxpx77rzpkdqgixe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--P9reUf57--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qzb7lxpx77rzpkdqgixe.png" alt="Image description" width="800" height="360"&gt;&lt;/a&gt;&lt;br&gt;
c. Include index.html as your default root object&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4Ce1k_WQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sq84hw4qqpbwp2gtowm7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4Ce1k_WQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sq84hw4qqpbwp2gtowm7.png" alt="Image description" width="800" height="356"&gt;&lt;/a&gt;&lt;br&gt;
Your CloudFront distribution is ready with this quick configuration guide.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GWai0JI6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a4u2qebromnuvbhh580e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GWai0JI6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a4u2qebromnuvbhh580e.png" alt="Image description" width="800" height="142"&gt;&lt;/a&gt;&lt;br&gt;
d. Copy and paste the CloudFront distribution domain name into a browser to access the website.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s----3ASMuC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u3ovbmr5dazj6up7kblv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s----3ASMuC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u3ovbmr5dazj6up7kblv.png" alt="Image description" width="800" height="384"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--iwueunIA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bibl1jcfk0ylljgx0mow.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--iwueunIA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bibl1jcfk0ylljgx0mow.png" alt="Image description" width="800" height="375"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TJjkz80y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dwifdf3wc5putbkedlti.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TJjkz80y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dwifdf3wc5putbkedlti.png" alt="Image description" width="800" height="352"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Step 4:&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Implementing CI/CD through AWS CodePipeline&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--g5IzCHfU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4rjam0omy6xn939t9f6j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--g5IzCHfU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4rjam0omy6xn939t9f6j.png" alt="Image description" width="800" height="353"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Step 5:&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Connect GitHub Account to CodePipeline&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--V5haxowN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2hg6gycf3xur9ty2eo2e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--V5haxowN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2hg6gycf3xur9ty2eo2e.png" alt="Image description" width="800" height="349"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--38INArAW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xjsq34tekvn1i8jvlug9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--38INArAW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xjsq34tekvn1i8jvlug9.png" alt="Image description" width="800" height="351"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Step 6:&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Configure CodePipeline and deploy CI/CD pipeline&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--V2bvFurW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qy84ib5mpmbi38t2ezgs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--V2bvFurW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qy84ib5mpmbi38t2ezgs.png" alt="Image description" width="800" height="351"&gt;&lt;/a&gt;&lt;br&gt;
a. &lt;strong&gt;You can skip the build stage -&amp;gt; create&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PSX0KkcW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f2ji7bcvrherggeu6w1c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PSX0KkcW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f2ji7bcvrherggeu6w1c.png" alt="Image description" width="800" height="351"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PoJVwc8K--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vadqryqymbw3e5m4bdvf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PoJVwc8K--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vadqryqymbw3e5m4bdvf.png" alt="Image description" width="800" height="352"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Step 7:&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Amazing!&lt;/strong&gt; The pipeline has been created; next we need to verify it.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jcm4W1Hw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ehjaykthgg61lwt41z9q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jcm4W1Hw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ehjaykthgg61lwt41z9q.png" alt="Image description" width="800" height="353"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Step 8:&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Verify functionality of CI/CD Pipeline&lt;/strong&gt;&lt;br&gt;
Go to the GitHub repo and edit any file; in my case, I will edit the README file.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Q2JC7cjG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/70h4at5t5icz6k8stzza.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Q2JC7cjG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/70h4at5t5icz6k8stzza.png" alt="Image description" width="800" height="369"&gt;&lt;/a&gt;&lt;br&gt;
Once that is done, you will notice that the pipeline is triggered.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XvRihDLi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9rcfx39t2h7j1inqb46t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XvRihDLi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9rcfx39t2h7j1inqb46t.png" alt="Image description" width="800" height="349"&gt;&lt;/a&gt;&lt;br&gt;
Finally, paste the domain name of the CloudFront distribution into a browser to access the website.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Pw2t8g5C--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vjt8cqlq7iq0ph8xg44h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Pw2t8g5C--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vjt8cqlq7iq0ph8xg44h.png" alt="Image description" width="800" height="410"&gt;&lt;/a&gt;&lt;br&gt;
Yes, the website can still be accessed. Click the "Tell me more" button to see more.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lKi70Nt4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e34nr8u03oex5pfmlfz5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lKi70Nt4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e34nr8u03oex5pfmlfz5.png" alt="Image description" width="800" height="425"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
In conclusion, if you follow the steps above, you will succeed in deploying any static website with AWS CodePipeline, GitHub, and an S3 bucket.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>community</category>
      <category>devops</category>
      <category>development</category>
    </item>
    <item>
      <title>Understanding AWS EC2 Instance Connect Endpoint</title>
      <dc:creator>ProsperAgada</dc:creator>
      <pubDate>Mon, 26 Jun 2023 15:53:41 +0000</pubDate>
      <link>https://dev.to/prosperagada/understanding-aws-instant-connect-end-point-30g5</link>
      <guid>https://dev.to/prosperagada/understanding-aws-instant-connect-end-point-30g5</guid>
      <description>&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Bye to Bastion hosts!!
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;AWS launches EC2 Instance Connect Endpoint, a feature that allows you to securely connect to your Amazon EC2 instances without additional components like a bastion host or public IP addresses. It provides a simple and secure way to establish connections to your EC2 instances within your Amazon VPC. Let's explore the features and benefits of EC2 Instance Connect Endpoint.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Benefits and Use Cases&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frk9mbuxy0vxayz789c52.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frk9mbuxy0vxayz789c52.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. No need for a bastion host:&lt;/strong&gt; With EC2 Instance Connect Endpoint, you don't need a separate bastion host to establish a secure connection to your EC2 instances. This simplifies the setup and reduces management overhead.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Enhanced security and isolation:&lt;/strong&gt; EC2 Instance Connect Endpoint leverages IAM-based authentication and authorization, along with security groups, to ensure that only authorized users can access your EC2 instances. This provides granular access control and protects your private resources.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Simplified administration:&lt;/strong&gt; By eliminating the need for a bastion host, EC2 Instance Connect Endpoint reduces the complexity of managing connectivity to your EC2 instances. You don't have to worry about maintaining and patching additional infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Compatibility with existing tools:&lt;/strong&gt; You can continue using your preferred client tools like PuTTY and OpenSSH to connect to your EC2 instances through EC2 Instance Connect Endpoint. This means you don't have to learn new tools or workflows.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security Controls and Capabilities&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fowugd52otsyump8v85e7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fowugd52otsyump8v85e7.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
EC2 Instance Connect Endpoint incorporates robust security controls to ensure the integrity and confidentiality of the connection process:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;a. Identity-based access controls:&lt;/strong&gt; Access to EC2 Instance Connect Endpoint is governed by IAM policies, which define who can create and access the endpoint. This ensures proper authentication and authorization.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;b. Network-perimeter controls:&lt;/strong&gt; Security groups associated with your VPC resources can be used to allow or deny access through EC2 Instance Connect Endpoint. This adds an extra layer of control over network access.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;c. Separation of privileges:&lt;/strong&gt; EC2 Instance Connect Endpoint separates control-plane and data-plane privileges. This means that administrators and users have distinct privileges for creating and using the endpoint, providing better security.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;d. Auditability and logging:&lt;/strong&gt; API calls related to EC2 Instance Connect Endpoint are logged in AWS CloudTrail, allowing you to monitor and audit endpoint activity. This helps in identifying any potential security issues.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Getting Started with EC2 Instance Connect Endpoint&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To get started with EC2 Instance Connect Endpoint, you need to follow these steps:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Creating an EIC Endpoint:&lt;/strong&gt; As an administrator with the necessary IAM permissions, you can create an EC2 Instance Connect Endpoint using the AWS CLI or Console. You'll need to specify the subnet and security group IDs.&lt;/p&gt;
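&lt;p&gt;As a sketch, the endpoint can be created with a single CLI call; the subnet and security group IDs below are placeholders:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ec2 create-instance-connect-endpoint \
    --subnet-id subnet-0123456789abcdef0 \
    --security-group-ids sg-0123456789abcdef0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;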

&lt;p&gt;&lt;strong&gt;2. Connecting to Linux instances using SSH:&lt;/strong&gt; For Linux instances, you can establish a connection using the AWS CLI. There are two methods available:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;a. One-click command:&lt;/strong&gt; The AWS CLI provides a command to generate ephemeral SSH keys and establish a connection with enhanced security. You need appropriate IAM permissions to use this command.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;b. Open-tunnel command:&lt;/strong&gt; Alternatively, you can establish a private tunnel to the instance using SSH with standard tooling or the proxy command. This method offers flexibility for existing workflows and requires the AWS CLI.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Connecting to Windows instances using RDP:&lt;/strong&gt; If you have Windows instances, you can use RDP (Remote Desktop Protocol) to securely access them within your Amazon VPC. RDP client applications ensure a seamless and secure experience for connecting to Windows instances.&lt;/p&gt;
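&lt;p&gt;The Linux connection methods described above can be sketched with the AWS CLI; the instance ID, local port, key file, and user are placeholders:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# one-click connection with ephemeral SSH keys
aws ec2-instance-connect ssh --instance-id i-0123456789abcdef0 --connection-type eice

# or open a private tunnel and use your standard SSH tooling
aws ec2-instance-connect open-tunnel --instance-id i-0123456789abcdef0 --local-port 2222
ssh -i my-key.pem -p 2222 ec2-user@localhost
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;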

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;EC2 Instance Connect Endpoint simplifies and enhances secure connectivity to your private EC2 instances within Amazon VPCs. It eliminates the need for additional components like bastion hosts and complex network configurations. By leveraging IAM-based authentication, network-perimeter controls, and auditability, EC2 Instance Connect Endpoint ensures secure remote access to your private resources. Adopting EC2 Instance Connect Endpoint provides a streamlined and secure connectivity solution in your AWS environment.&lt;/p&gt;

</description>
      <category>tutorial</category>
      <category>aws</category>
      <category>news</category>
      <category>architecture</category>
    </item>
    <item>
      <title>Understanding Serverless on AWS</title>
      <dc:creator>ProsperAgada</dc:creator>
      <pubDate>Thu, 22 Jun 2023 14:35:26 +0000</pubDate>
      <link>https://dev.to/prosperagada/aws-serverless-for-computing-2f69</link>
      <guid>https://dev.to/prosperagada/aws-serverless-for-computing-2f69</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Build and run applications without thinking about servers.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In recent years, serverless computing has emerged as a transformative technology, revolutionizing the way developers build and deploy applications. &lt;br&gt;
Amazon Web Services (AWS), a pioneer in the cloud computing industry, offers a robust serverless platform that allows developers to focus on writing code without the burden of managing infrastructure. In this article, we will delve into the world of serverless computing on AWS, exploring its benefits, architecture, and key services that empower developers to build scalable and efficient applications.&lt;/p&gt;

&lt;h2&gt;What is serverless computing?&lt;/h2&gt;

&lt;p&gt;Serverless computing is a cloud computing model where developers can write and run applications without the need to manage servers or infrastructure. In this model, the cloud provider takes care of all the underlying server management, allowing developers to focus solely on writing application logic.&lt;/p&gt;

&lt;p&gt;What are the benefits of serverless computing?&lt;br&gt;
The benefits of serverless computing for developers are numerous:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Increased developer productivity: With serverless, developers can focus on writing code and building features rather than managing servers or worrying about infrastructure scalability. This enables faster development cycles and enhances overall productivity.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Automatic scaling: Serverless platforms, such as AWS Lambda, automatically scale applications based on incoming requests. This eliminates the need for capacity planning and ensures that applications can handle varying workloads, providing excellent scalability without manual intervention.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Cost optimization: Serverless computing follows a pay-as-you-go pricing model, where developers are only billed for the actual compute time used by their applications. This eliminates the need to pay for idle server resources, resulting in cost savings, especially for applications with sporadic or unpredictable traffic patterns.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;High availability and fault tolerance: Serverless platforms handle the underlying infrastructure and automatically replicate applications across multiple availability zones. This ensures high availability and fault tolerance, reducing the risk of application downtime and improving reliability.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Simplified deployment and management: Serverless platforms abstract away the complexities of managing infrastructure, allowing developers to focus on deploying and managing their applications more efficiently. Services like AWS Lambda provide easy deployment mechanisms and integrated monitoring and logging capabilities.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scalable and event-driven architecture: Serverless computing is well-suited for event-driven architectures, where applications respond to events such as data changes, API calls, or scheduled triggers. This architectural approach enables the building of highly responsive and scalable applications.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Ecosystem and integrations: Serverless platforms, such as AWS Lambda, offer integrations with a wide range of services and APIs, enabling seamless integration with various AWS services, third-party APIs, and software-as-a-service (SaaS) applications. This expands the ecosystem and allows developers to leverage existing services to build more powerful applications.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;What serverless services does AWS offer?&lt;/h2&gt;

&lt;p&gt;AWS offers a variety of serverless services, ranging from compute and application integration to databases. &lt;/p&gt;

&lt;p&gt;Let's take a look at AWS serverless compute services.&lt;br&gt;
AWS offers two major serverless compute services:&lt;br&gt;
Lambda and Fargate.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How AWS Lambda works.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcva68l0unbtqzv6efkad.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcva68l0unbtqzv6efkad.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
AWS Lambda is like a magical compute engine that runs your code in response to events or triggers. You write your code, upload it to Lambda, and AWS takes care of running it whenever the specified events occur. Lambda automatically scales your code, so it can handle any number of requests without worrying about server capacity. You only pay for the actual compute time consumed by your code, making it cost-efficient. Lambda supports multiple programming languages and integrates seamlessly with other AWS services, allowing you to build highly scalable and event-driven applications.&lt;/p&gt;
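&lt;p&gt;To make this concrete, here is a minimal sketch of what a Lambda function looks like: a single handler that takes an event and returns a response. The greeting logic is a made-up example, not from any particular application.&lt;/p&gt;

```python
import json

def handler(event, context):
    """A minimal Lambda handler: receives an event dict, returns a response.

    The greeting logic here is purely illustrative.
    """
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Lambda invokes handler(event, context) for each trigger; locally you can
# call it directly with a sample event to test it:
print(handler({"name": "serverless"}, None))
```

&lt;p&gt;You would upload this file to Lambda and wire a trigger (an API call, an S3 upload, a schedule) to it; Lambda then runs the handler once per event.&lt;/p&gt;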

&lt;p&gt;&lt;strong&gt;How Fargate works&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk0ffr8lgpmeiiae9vrj1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk0ffr8lgpmeiiae9vrj1.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;AWS Fargate is a service provided by Amazon that allows developers to run their containerized applications without having to worry about managing the underlying infrastructure. It takes care of tasks like server provisioning, scaling, and patching, so developers can focus on deploying and scaling their containers easily. Fargate integrates with popular container orchestration platforms and provides features like granular resource allocation, high availability, and automatic scaling. It also ensures security and compliance and follows a pay-as-you-go pricing model, where you only pay for the resources you use.&lt;/p&gt;
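&lt;p&gt;With Fargate you describe your container in an ECS task definition and let AWS pick the machines. Below is a sketch of such a definition expressed as a Python dict (the same JSON you would register with the ECS API); the family name, image, and sizes are placeholders.&lt;/p&gt;

```python
# A sketch of an ECS task definition for Fargate. Field values here
# (family, image, cpu/memory) are illustrative placeholders.
task_definition = {
    "family": "web-app",                     # hypothetical task family name
    "requiresCompatibilities": ["FARGATE"],  # run on Fargate, not EC2 hosts
    "networkMode": "awsvpc",                 # required network mode for Fargate
    "cpu": "256",                            # 0.25 vCPU
    "memory": "512",                         # 512 MiB
    "containerDefinitions": [
        {
            "name": "web",
            "image": "public.ecr.aws/nginx/nginx:latest",
            "portMappings": [{"containerPort": 80}],
        }
    ],
}
```

&lt;p&gt;Notice there is no instance type anywhere: you state CPU and memory per task, and Fargate finds the capacity.&lt;/p&gt;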

&lt;p&gt;These are the major compute services AWS provides for its customers. Now let's take a look at application integration services.&lt;br&gt;
AWS provides several serverless integration services that enable seamless communication and coordination between different components of a serverless application. These services enhance the functionality and flexibility of your serverless architecture. Let's explore some of the key serverless integration services offered by AWS:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How Amazon EventBridge works&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjx2adowy64sat6rkxbxm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjx2adowy64sat6rkxbxm.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
AWS EventBridge is a serverless event bus that enables different components of your application to communicate through events. It acts as a central hub for events, allowing you to send events from sources to targets. You can filter and transform events, integrate with various AWS services and SaaS applications, and create custom event buses for better organization. EventBridge simplifies event-driven architectures and enables scalable and flexible communication between components in a beginner-friendly manner.&lt;br&gt;
EventBridge fits use cases such as image processing pipelines, system monitoring and alerts, workflow orchestration, order processing systems, and real-time analytics.&lt;br&gt;
By leveraging EventBridge, you can build robust and scalable systems that respond to events and streamline various processes.&lt;/p&gt;
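&lt;p&gt;The core of EventBridge is the rule's event pattern: an event is delivered to a target only if it matches. The pure-Python sketch below mimics that matching for flat, exact-value patterns (real patterns also support prefix, numeric, and other operators); the S3 example pattern is illustrative.&lt;/p&gt;

```python
# A toy version of EventBridge pattern matching: each pattern field lists
# the allowed values, and an event matches if every field checks out.
def matches(pattern: dict, event: dict) -> bool:
    """Return True if the event satisfies every field of the pattern."""
    return all(event.get(field) in allowed for field, allowed in pattern.items())

# A pattern like a rule that fires on S3 "Object Created" events:
pattern = {"source": ["aws.s3"], "detail-type": ["Object Created"]}
event = {"source": "aws.s3", "detail-type": "Object Created", "detail": {}}
print(matches(pattern, event))  # this event would reach the rule's targets
```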

&lt;p&gt;&lt;strong&gt;How AWS Step Functions works&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft7z9g7wpnzvt2idvdwox.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft7z9g7wpnzvt2idvdwox.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
AWS Step Functions simplifies the management and coordination of tasks in your application, helping you build scalable and resilient workflows. With its visual representation, built-in error handling, support for human interaction, parallel execution, and integration with AWS services, Step Functions provides a powerful tool for organizing and executing tasks within your serverless applications.&lt;/p&gt;
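&lt;p&gt;A Step Functions workflow is written in Amazon States Language (ASL), a JSON dialect. Here is a sketch of a two-step workflow as a Python dict; the state names and Lambda ARNs are made-up placeholders.&lt;/p&gt;

```python
# A minimal state machine in Amazon States Language: validate an order,
# retry on failure, then charge it. Resource ARNs are placeholders.
state_machine = {
    "Comment": "Validate an order, then charge it",
    "StartAt": "ValidateOrder",
    "States": {
        "ValidateOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:validate",
            "Retry": [{"ErrorEquals": ["States.TaskFailed"], "MaxAttempts": 2}],
            "Next": "ChargeCard",
        },
        "ChargeCard": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:charge",
            "End": True,
        },
    },
}
```

&lt;p&gt;The retry policy lives in the workflow definition, not in your code, which is a large part of what makes Step Functions workflows resilient.&lt;/p&gt;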

&lt;p&gt;&lt;strong&gt;How Amazon SQS (Simple Queue Service) works&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fex18ggcqk9xo4ooy4nq9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fex18ggcqk9xo4ooy4nq9.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
SQS (Simple Queue Service) is an Amazon Web Services (AWS) messaging service that allows you to send, store, and receive messages between different parts of your software systems. It uses queues to temporarily hold messages, enabling decoupling and scalability. Producers send messages to the queues, while consumers retrieve and process them. SQS ensures reliable message delivery, visibility, and retention. It provides automatic scaling to handle varying message loads and offers integration with other AWS services for building flexible and robust applications.&lt;/p&gt;
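&lt;p&gt;The decoupling idea can be shown with a local stand-in: a producer enqueues messages and a consumer pulls them later, so neither side waits on the other. With real SQS the same two calls would be send_message and receive_message against a queue URL.&lt;/p&gt;

```python
from queue import Queue

# A local sketch of the SQS pattern using an in-process queue.
messages: Queue = Queue()

def produce(order_id: str) -> None:
    messages.put({"order_id": order_id})   # like SQS send_message

def consume() -> dict:
    return messages.get()                  # like receive_message + delete_message

produce("order-1")
produce("order-2")
print(consume())  # messages come back in arrival order
```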

&lt;p&gt;&lt;strong&gt;How Amazon SNS (Simple Notification Service) works&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpd1pmpq5kse284j5zkuf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpd1pmpq5kse284j5zkuf.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
SNS (Simple Notification Service) is an Amazon Web Services (AWS) messaging service that enables sending notifications to a large number of subscribers. It follows a "publisher-subscriber" model, where publishers send messages to topics, and subscribers receive those messages based on their subscriptions. With SNS, you can send notifications through different protocols like email, SMS, HTTP/S, or mobile push notifications.&lt;/p&gt;

&lt;p&gt;In simpler terms, SNS is a way to send messages or notifications to many people or applications at once. Imagine a broadcaster sending news to a group of listeners. The broadcaster (publisher) sends messages to a topic, and anyone who wants to receive those messages (subscribers) can subscribe to the topic. Subscribers can choose to receive notifications through email, text messages, or other methods. SNS makes it easy to send messages to a large audience, ensuring they get the information they need quickly and efficiently.&lt;/p&gt;
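&lt;p&gt;The broadcaster analogy maps directly onto code. This pure-Python sketch of the fan-out model shows publishers sending to a topic and every subscriber receiving its own copy; with real SNS you would call publish with a topic ARN instead.&lt;/p&gt;

```python
# A toy SNS topic: subscribers register a callback, and every publish
# delivers the message to all of them.
class Topic:
    def __init__(self):
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, message: str) -> None:
        for deliver in self.subscribers:   # each subscriber gets its own copy
            deliver(message)

news = Topic()
inbox_a, inbox_b = [], []
news.subscribe(inbox_a.append)             # e.g. an email endpoint
news.subscribe(inbox_b.append)             # e.g. an SMS endpoint
news.publish("deploy finished")
print(inbox_a, inbox_b)  # both inboxes received the same message
```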

&lt;p&gt;&lt;strong&gt;How Amazon API Gateway works&lt;/strong&gt;&lt;br&gt;
AWS API Gateway is a service provided by Amazon Web Services that allows you to create, publish, and manage APIs (Application Programming Interfaces) for your applications. An API is like a bridge that enables different software systems to communicate and interact with each other.&lt;br&gt;
In simpler terms, think of API Gateway as a receptionist at a building entrance. It manages the flow of visitors (API requests) coming into the building (your application). It checks their credentials, ensures they have the right access permissions, and monitors the number of requests they make. API Gateway takes care of all the technical details, making it easier for your applications to communicate with each other and providing security and control over how they interact.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How Amazon AppSync works&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp9bmlhcfd1hknqq7zt46.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp9bmlhcfd1hknqq7zt46.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
AWS AppSync is a managed service by Amazon Web Services that simplifies the process of building and deploying scalable and real-time applications with GraphQL. GraphQL is a query language for APIs that allows you to request specific data from your backend systems.&lt;br&gt;
Think of AppSync as a translator that helps your applications talk to your data sources using a powerful language called GraphQL. It simplifies the process of building and deploying applications by automatically generating APIs and providing real-time and offline capabilities. AppSync ensures that your applications receive the data they need efficiently and stay up-to-date with real-time updates.&lt;/p&gt;
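&lt;p&gt;A flavor of what GraphQL gives you: the client names exactly the fields it wants, and the resolver returns only those. The schema, post data, and resolver below are made up to show the idea; AppSync wires real resolvers to your data sources for you.&lt;/p&gt;

```python
# The kind of request an AppSync client sends: a GraphQL query asking for
# specific fields of a post.
query = """
query GetPost {
  post(id: "1") { title author }
}
"""

# A toy resolver backed by an in-memory store (names are hypothetical).
posts = {"1": {"title": "Hello", "author": "prosper", "body": "..."}}

def resolve_post(post_id: str, fields: list) -> dict:
    """Return only the fields the query asked for."""
    post = posts[post_id]
    return {f: post[f] for f in fields}

print(resolve_post("1", ["title", "author"]))  # "body" is never sent: not requested
```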

&lt;p&gt;&lt;strong&gt;How Amazon S3 works&lt;/strong&gt;&lt;br&gt;
S3 (Simple Storage Service) is a popular storage service provided by Amazon Web Services (AWS). It is like a virtual hard drive in the cloud that allows you to store and retrieve any amount of data at any time.&lt;/p&gt;

&lt;p&gt;In simple terms, think of S3 as a giant storage room where you can keep your files and objects. It provides a secure and reliable way to store your data, whether it's documents, images, videos, or backups. S3 is highly scalable, meaning it can handle small or large amounts of data without any trouble. You can access your stored data from anywhere in the world using a unique web address. S3 is also designed to be durable, meaning your data is protected and highly available, even in the event of hardware failures or natural disasters.&lt;/p&gt;
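&lt;p&gt;The storage-room picture boils down to objects stored under keys in a bucket. This in-memory sketch mirrors the two calls you would make with an AWS SDK (put_object and get_object); the bucket contents and key names are placeholders.&lt;/p&gt;

```python
# A toy model of one S3 bucket: keys map to object bodies.
bucket: dict = {}

def put_object(key: str, body: bytes) -> None:
    bucket[key] = body       # like put_object(Bucket=..., Key=key, Body=body)

def get_object(key: str) -> bytes:
    return bucket[key]       # like get_object(...)["Body"].read()

# Keys can contain slashes, which S3 tooling displays like folders:
put_object("backups/2023/db.sql", b"-- dump --")
print(get_object("backups/2023/db.sql"))
```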

&lt;p&gt;&lt;strong&gt;How Amazon EFS works&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;EFS (Elastic File System) is a scalable and managed file storage service provided by Amazon Web Services (AWS). It offers a simple way to store and access files from multiple instances or servers simultaneously.&lt;/p&gt;

&lt;p&gt;In simple terms, EFS is like a shared network drive in the cloud. It allows multiple users or systems to access and modify the same files simultaneously, making it ideal for collaborative work or applications that require shared file storage. EFS is highly scalable, meaning it can grow or shrink as your storage needs change, without any disruptions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How Amazon DynamoDB works&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;DynamoDB is a fully managed NoSQL database service provided by Amazon Web Services (AWS). It offers a simple and scalable way to store and retrieve structured data, making it suitable for a wide range of applications.&lt;/p&gt;

&lt;p&gt;In simpler terms, think of DynamoDB as a digital filing cabinet where you can store structured data, such as user information, product details, or sensor readings. It eliminates the need for traditional database management tasks, as AWS takes care of scaling, backups, and maintenance. DynamoDB provides fast and reliable performance, even with large amounts of data, and can automatically scale up or down to meet your application's needs.&lt;/p&gt;
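&lt;p&gt;The filing-cabinet picture corresponds to items retrieved by their primary key. This sketch mimics a table with a partition key called user_id; the attribute names are illustrative, and real code would use the SDK's put_item and get_item on a table.&lt;/p&gt;

```python
# A toy DynamoDB table: the partition key ("user_id") maps to the full item.
table: dict = {}

def put_item(item: dict) -> None:
    table[item["user_id"]] = item

def get_item(user_id: str) -> dict:
    return table[user_id]

# Items are schemaless dicts of attributes, keyed by their primary key:
put_item({"user_id": "u1", "name": "Ada", "plan": "pro"})
print(get_item("u1")["name"])
```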

&lt;p&gt;&lt;strong&gt;How Amazon Aurora works&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Amazon Aurora is a fully managed relational database service offered by Amazon Web Services (AWS). It combines the best features of traditional databases, like MySQL and PostgreSQL, with the scalability and availability of the cloud.&lt;/p&gt;

&lt;p&gt;In simple terms, think of Aurora as a powerful and efficient database engine that helps you store and retrieve structured data for your applications. It is designed to handle large workloads with low latency and high performance. Aurora automatically replicates your data across multiple availability zones for increased durability and fault tolerance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How Amazon OpenSearch works&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;OpenSearch is an open-source search and analytics engine that enables you to index, search, and analyze large volumes of data. It is derived from the popular Elasticsearch project and provides powerful search capabilities for various applications and use cases.&lt;/p&gt;

&lt;p&gt;In simpler terms, think of OpenSearch as a smart search engine that helps you find information quickly from vast amounts of data. It can be used to build search functionality in applications, websites, or even analyze logs and metrics. OpenSearch supports full-text search, allowing you to search for specific words or phrases within your data. It also offers advanced search features like filtering, sorting, and aggregations to extract valuable insights from your data.&lt;/p&gt;

&lt;p&gt;Other serverless database services include &lt;strong&gt;RDS Proxy, Redshift Serverless, and Neptune&lt;/strong&gt;.&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>webdev</category>
      <category>beginners</category>
      <category>programming</category>
    </item>
  </channel>
</rss>
