<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Madhesh Waran</title>
    <description>The latest articles on DEV Community by Madhesh Waran (@madhesh_waran_63).</description>
    <link>https://dev.to/madhesh_waran_63</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1523455%2F6a989d6e-1aa5-42d1-b89d-f61ed7af618a.jpg</url>
      <title>DEV Community: Madhesh Waran</title>
      <link>https://dev.to/madhesh_waran_63</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/madhesh_waran_63"/>
    <language>en</language>
    <item>
      <title>Internship Notes</title>
      <dc:creator>Madhesh Waran</dc:creator>
      <pubDate>Mon, 16 Sep 2024 08:11:45 +0000</pubDate>
      <link>https://dev.to/madhesh_waran_63/internship-notes-468p</link>
      <guid>https://dev.to/madhesh_waran_63/internship-notes-468p</guid>
      <description>&lt;h2&gt;
  
  
  🌟 CareerByteCode Cloud DevOps Challenge🌟
&lt;/h2&gt;

&lt;h2&gt;
  
  
  1. Deploy Virtual Machines in AWS &amp;amp; Google Cloud:
&lt;/h2&gt;

&lt;p&gt;Using Terraform, deploy virtual machines (VMs) in both AWS and Google Cloud and ensure they are properly configured.&lt;/p&gt;

&lt;p&gt;AWS:&lt;/p&gt;

&lt;p&gt;Step 1: Download and install terraform. Also install AWS CLI and configure it with your credentials using 'aws configure'.&lt;/p&gt;

&lt;p&gt;Step 2: Write a main.tf configuration file that tells terraform to deploy an EC2 instance.&lt;/p&gt;

&lt;p&gt;This documentation is very helpful:&lt;br&gt;
&lt;a href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs" rel="noopener noreferrer"&gt;https://registry.terraform.io/providers/hashicorp/aws/latest/docs&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Step 3: Initialize terraform using 'terraform init', preview the changes using 'terraform plan' and create the resources using 'terraform apply'.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fotz6mun9w0qbjawegjbg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fotz6mun9w0qbjawegjbg.png" alt="Image description" width="800" height="873"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdwudewademy42zqvh9p6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdwudewademy42zqvh9p6.png" alt="Image description" width="800" height="441"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Step 4: Use 'terraform destroy' to destroy the resources that you created using terraform.&lt;/p&gt;

&lt;p&gt;GCP:&lt;br&gt;
(should create a GCP account and do this ASAP after finishing AWS stuff.)&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Kubernetes Microservices Deployment
&lt;/h2&gt;

&lt;p&gt;Deploy a microservices-based application using Kubernetes and ensure it scales automatically.&lt;/p&gt;

&lt;p&gt;Step 1: First you need to containerize the application using Docker. To do this, write a Dockerfile that builds a container image of your application. I wrote a simple Node.js app that displays 'Hello World' and referenced it in the Dockerfile.&lt;/p&gt;

&lt;p&gt;Simple Dockerfile:&lt;br&gt;
FROM node:14&lt;br&gt;
WORKDIR /app&lt;br&gt;
COPY . .&lt;br&gt;
RUN npm install&lt;br&gt;
EXPOSE 3000&lt;br&gt;
CMD ["node", "app.js"]&lt;/p&gt;

&lt;p&gt;Step 2: After saving the Dockerfile and the simple 'app.js', type 'docker build -t nodeapp .' to build the Docker image. Using the image, create the container and check it by using 'docker run -p 3000:3000 nodeapp'.&lt;br&gt;
If you did everything right, typing localhost:3000 into your browser will show the page below. Push the image to Docker Hub.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzqoyzzdubgfz74vtrqbf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzqoyzzdubgfz74vtrqbf.png" alt="Image description" width="800" height="386"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Step 3: Deploy the image from your Docker Hub repository on Kubernetes using a deployment manifest file, which specifies the number of replicas, the container image to use, etc. Type 'kubectl apply -f deployment.yaml' in the command line to do it. Before doing this, you should download and install minikube on your local PC.&lt;/p&gt;

&lt;p&gt;Simple deployment.yaml file:&lt;br&gt;
apiVersion: apps/v1&lt;br&gt;
kind: Deployment&lt;br&gt;
metadata:&lt;br&gt;
   name: deployment&lt;br&gt;
   labels:&lt;br&gt;
      app: nodeapp&lt;br&gt;
spec:&lt;br&gt;
   selector:&lt;br&gt;
      matchLabels:&lt;br&gt;
         app: nodeapp&lt;br&gt;
   replicas: 2&lt;br&gt;
   template:&lt;br&gt;
      metadata:&lt;br&gt;
         labels:&lt;br&gt;
            app:  nodeapp&lt;br&gt;
      spec:&lt;br&gt;
         containers:&lt;br&gt;
         - name: nodeapp-container01&lt;br&gt;
           image: [yourdockerhubrepo]/nodeapp:latest&lt;br&gt;
           ports:&lt;br&gt;
           - containerPort: 3000&lt;/p&gt;

&lt;p&gt;Simple Service Manifest file:&lt;br&gt;
apiVersion: v1&lt;br&gt;
kind: Service&lt;br&gt;
metadata:&lt;br&gt;
  name: nodeapp-service&lt;br&gt;
spec:&lt;br&gt;
  selector:&lt;br&gt;
    app: nodeapp&lt;br&gt;
  ports:&lt;br&gt;
    - protocol: TCP&lt;br&gt;
      port: 80&lt;br&gt;
      targetPort: 3000&lt;br&gt;
  type: ClusterIP&lt;/p&gt;

&lt;p&gt;Simple Ingress manifest file:&lt;br&gt;
apiVersion: networking.k8s.io/v1&lt;br&gt;
kind: Ingress&lt;br&gt;
metadata:&lt;br&gt;
  name: nodeapp-ingress&lt;br&gt;
  annotations:&lt;br&gt;
    nginx.ingress.kubernetes.io/ssl-redirect: "true"&lt;br&gt;
    nginx.ingress.kubernetes.io/secure-backends: "true"&lt;br&gt;
spec:&lt;br&gt;
  rules:&lt;br&gt;
    - host: localhost&lt;br&gt;
      http:&lt;br&gt;
        paths:&lt;br&gt;
          - path: /&lt;br&gt;
            pathType: Prefix&lt;br&gt;
            backend:&lt;br&gt;
              service: &lt;br&gt;
                name: nodeapp-service&lt;br&gt;
                port:&lt;br&gt;
                  number: 80&lt;/p&gt;

&lt;p&gt;Step 4: The above service manifest file exposes the app inside the Kubernetes cluster using ClusterIP. To expose the app externally, we use an Ingress, which is deployed using the Ingress manifest file.&lt;/p&gt;

&lt;p&gt;Step 5: To automatically scale the containers based on the load, we can use a Horizontal Pod Autoscaler, which is deployed using the following HPA manifest file. To get metrics, we need to enable metrics-server on minikube.&lt;/p&gt;

&lt;p&gt;Simple HPA manifest file:&lt;br&gt;
apiVersion: autoscaling/v1&lt;br&gt;
kind: HorizontalPodAutoscaler&lt;br&gt;
metadata:&lt;br&gt;
  name: hpa&lt;br&gt;
spec:&lt;br&gt;
  scaleTargetRef:&lt;br&gt;
    apiVersion: apps/v1&lt;br&gt;
    kind: Deployment&lt;br&gt;
    name: deployment&lt;br&gt;
  minReplicas: 2&lt;br&gt;
  maxReplicas: 5&lt;br&gt;
  targetCPUUtilizationPercentage: 70&lt;/p&gt;

&lt;p&gt;Step 6: This ensures that the pods get scaled based on the load.&lt;/p&gt;
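&lt;p&gt;On minikube, the metrics prerequisite and the HPA can also be handled from the command line; a sketch (the deployment name matches the manifest above):&lt;/p&gt;

```shell
# Enable metrics-server so the HPA can read CPU utilization.
minikube addons enable metrics-server

# Imperative equivalent of the HPA manifest.
kubectl autoscale deployment deployment --cpu-percent=70 --min=2 --max=5

# Check the autoscaler as load changes.
kubectl get hpa
```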

&lt;h2&gt;
  
  
  3. Azure Infrastructure Automation
&lt;/h2&gt;

&lt;p&gt;Use Terraform to automate the provisioning of an infrastructure setup in Azure.&lt;/p&gt;

&lt;p&gt;(Do after GCP)&lt;/p&gt;

&lt;h2&gt;
  
  
  4. AWS Lambda with EC2 Tag Management
&lt;/h2&gt;

&lt;p&gt;Using AWS Lambda, write a Python program to start and stop EC2 instances based on tags.&lt;/p&gt;

&lt;p&gt;Step 1: Go to lambda in the AWS console and create a new function with a role that gives EC2 access to the lambda (mandatory permissions: ec2:DescribeInstances, ec2:StartInstances, ec2:StopInstances). &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm1uzar87vwhgeoghljo1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm1uzar87vwhgeoghljo1.png" alt="Image description" width="800" height="532"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Step 2: After creating the lambda function, populate the code section with the script to automate the starting or stopping of ec2 instances based on their tags.&lt;/p&gt;

&lt;p&gt;This document is very helpful in writing the Python code:&lt;br&gt;
&lt;a href="https://boto3.amazonaws.com/v1/documentation/api/latest/guide/ec2-example-managing-instances.html" rel="noopener noreferrer"&gt;https://boto3.amazonaws.com/v1/documentation/api/latest/guide/ec2-example-managing-instances.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;My Code:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import json
import boto3

ec2 = boto3.client('ec2')

def lambda_handler(event, context):
    response = ec2.describe_instances()

    TagName = 'Environment'
    TagValue = 'Test'
    action_performed = 'no action taken on'

    # Toggle instances that carry the matching tag:
    # start the stopped ones, stop the running ones.
    for reservation in response['Reservations']:
        for instance in reservation['Instances']:
            for tag in instance.get('Tags', []):
                if tag['Key'] == TagName and tag['Value'] == TagValue:
                    if instance['State']['Name'] == 'stopped':
                        print(ec2.start_instances(InstanceIds=[instance['InstanceId']]))
                        action_performed = 'starting'
                    else:
                        print(ec2.stop_instances(InstanceIds=[instance['InstanceId']]))
                        action_performed = 'stopping'

    return {
        'statusCode': 200,
        'body': json.dumps(f'{action_performed} the instances with the tag {TagName}:{TagValue}')
    }
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;My Result:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5kny1wpks311p61fxziz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5kny1wpks311p61fxziz.png" alt="Image description" width="800" height="348"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4qzhkuczz7tm1xfyqhnz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4qzhkuczz7tm1xfyqhnz.png" alt="Image description" width="800" height="341"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Logic:&lt;br&gt;
If the instances with the tag are stopped, running this script will start them again. If they are already running, the script will stop them.&lt;br&gt;
We can combine this script with API Gateway or a CloudWatch schedule to run it automatically in response to an event.&lt;/p&gt;
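&lt;p&gt;The start/stop decision can be tested locally without AWS by running the same logic against a canned describe_instances-style response (a sketch; the sample data below is invented for illustration):&lt;/p&gt;

```python
# Pure-logic version of the start/stop decision, run against canned data.
def decide_actions(response, tag_name, tag_value):
    actions = {}
    for reservation in response['Reservations']:
        for instance in reservation['Instances']:
            for tag in instance.get('Tags', []):
                if tag['Key'] == tag_name and tag['Value'] == tag_value:
                    state = instance['State']['Name']
                    actions[instance['InstanceId']] = (
                        'start' if state == 'stopped' else 'stop'
                    )
    return actions

# Canned data imitating the shape returned by ec2.describe_instances().
sample = {'Reservations': [
    {'Instances': [{'InstanceId': 'i-aaa',
                    'State': {'Name': 'stopped'},
                    'Tags': [{'Key': 'Environment', 'Value': 'Test'}]}]},
    {'Instances': [{'InstanceId': 'i-bbb',
                    'State': {'Name': 'running'},
                    'Tags': [{'Key': 'Environment', 'Value': 'Prod'}]}]},
]}

print(decide_actions(sample, 'Environment', 'Test'))  # {'i-aaa': 'start'}
```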

&lt;h2&gt;
  
  
  7. Monitoring &amp;amp; Logging Setup
&lt;/h2&gt;

&lt;p&gt;Set up monitoring and logging for a cloud infrastructure using Prometheus and Grafana.&lt;/p&gt;

&lt;p&gt;curl &lt;a href="https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3" rel="noopener noreferrer"&gt;https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3&lt;/a&gt; | bash&lt;br&gt;
helm repo add prometheus-community &lt;a href="https://prometheus-community.github.io/helm-charts" rel="noopener noreferrer"&gt;https://prometheus-community.github.io/helm-charts&lt;/a&gt;&lt;br&gt;
helm repo update&lt;br&gt;
helm install prometheus prometheus-community/kube-prometheus-stack --namespace monitoring --create-namespace&lt;/p&gt;
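&lt;p&gt;Once the chart is installed, the dashboards can be reached with port-forwarding; a sketch (the service names below are the kube-prometheus-stack defaults for a release named 'prometheus'):&lt;/p&gt;

```shell
# Grafana admin password (the user is 'admin' by default).
kubectl get secret prometheus-grafana -n monitoring -o jsonpath="{.data.admin-password}" | base64 -d

# Forward Grafana to localhost:3000.
kubectl port-forward svc/prometheus-grafana 3000:80 -n monitoring

# Forward Prometheus to localhost:9090.
kubectl port-forward svc/prometheus-kube-prometheus-prometheus 9090:9090 -n monitoring
```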

&lt;p&gt;(need to add deleted screenshots soon)&lt;/p&gt;

&lt;h2&gt;
  
  
  8. Secure Cloud Network Configuration
&lt;/h2&gt;

&lt;p&gt;Configure a VPC in AWS with proper security groups, NACLs, and public/private subnets.&lt;/p&gt;

&lt;p&gt;Step 1: Click 'Create VPC'. Click 'VPC Only' and provide the 'NameTag' and a private IPv4 CIDR block (I chose 10.0.0.0/16, which has a range of 65,536 IP addresses from 10.0.0.0 to 10.0.255.255). &lt;/p&gt;
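&lt;p&gt;The CIDR arithmetic can be checked with Python's ipaddress module:&lt;/p&gt;

```python
import ipaddress

# A /16 leaves 16 host bits, so 2**16 = 65,536 addresses.
vpc = ipaddress.ip_network('10.0.0.0/16')
print(vpc.num_addresses)       # 65536
print(vpc[0], '-', vpc[-1])    # 10.0.0.0 - 10.0.255.255

# Each /24 subnet carved from it holds 256 addresses.
subnet = ipaddress.ip_network('10.0.1.0/24')
print(subnet.num_addresses)    # 256
print(subnet.subnet_of(vpc))   # True
```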

&lt;p&gt;Step 2: Click 'Create Subnet' in the Subnets dashboard. When it asks for VPC details, select the VPC you just created. For the public subnet, choose the IPv4 CIDR block 10.0.1.0/24 and choose any availability zone (e.g., us-east-1a). Create another subnet for the private subnet with the same availability zone as the public subnet and the IPv4 CIDR block 10.0.2.0/24.&lt;/p&gt;

&lt;p&gt;Step 3: Create an Internet Gateway and attach it to the VPC. Create a NAT Gateway, choosing the public subnet as its subnet, and allocate an Elastic IP. (It incurs some costs, so be careful and delete it as soon as possible.) &lt;/p&gt;

&lt;p&gt;Step 4: Create a route table with the 'Name' as Public Route Table and the VPC as the one you just created. In the route table, edit the routes and add the following route: Destination: 0.0.0.0/0, Target: the Internet Gateway you just created. In the Subnet Associations tab, add the public subnet to the table.&lt;/p&gt;

&lt;p&gt;Step 5: Create a route table with the 'Name' as Private Route Table and the VPC as the one you just created. In the route table, edit the routes and add the following route: Destination: 0.0.0.0/0, Target: the NAT Gateway you just created. In the Subnet Associations tab, add the private subnet to the table.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0e0lvsdbzzgveeh1x6pv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0e0lvsdbzzgveeh1x6pv.png" alt="Image description" width="800" height="172"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Step 6: Click on Security Groups in the VPC Dashboard. Create a security group and select the VPC you just created. Click 'Edit inbound rules' and allow SSH (port 22), HTTP (port 80), and HTTPS (port 443) access from Source: 0.0.0.0/0 (anywhere). Create another security group for the private instances and edit its inbound rules to allow all traffic from the CIDR range 10.0.0.0/16.&lt;/p&gt;

&lt;p&gt;Step 7: Select Network ACLs in the VPC Dashboard. Create a public Network ACL and edit the inbound rules as:&lt;br&gt;
Rule 100: Allow HTTP (80) from 0.0.0.0/0. Rule 110: Allow HTTPS (443) from 0.0.0.0/0. Rule 120: Allow SSH (22) from 0.0.0.0/0.&lt;br&gt;
(Since NACLs are stateless, you may also need an inbound rule allowing ephemeral ports 1024-65535 so return traffic for outbound connections is not blocked.)&lt;br&gt;
Then edit the outbound rules as:&lt;br&gt;
Rule 100: Allow All Traffic to 0.0.0.0/0, and associate this NACL with the public subnet.&lt;br&gt;
Create another, private NACL and edit the inbound rules as:&lt;br&gt;
Rule 100: Allow All Traffic from the VPC CIDR block (10.0.0.0/16).&lt;br&gt;
Then edit the outbound rules as:&lt;br&gt;
Rule 100: Allow All Traffic to 0.0.0.0/0, and associate it with the private subnet.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8nt826e0rlpbmfqv49xx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8nt826e0rlpbmfqv49xx.png" alt="Image description" width="800" height="233"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkm0i2ciytahrxubr19ca.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkm0i2ciytahrxubr19ca.png" alt="Image description" width="800" height="178"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpnvzsoo63k4hut40nx0t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpnvzsoo63k4hut40nx0t.png" alt="Image description" width="800" height="178"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you have any other doubts, refer to this article of mine: &lt;a href="https://dev.to/madhesh_waran_63/deploying-wordpress-on-a-private-subnet-in-aws-ec2-using-a-linux-server-4a65"&gt;https://dev.to/madhesh_waran_63/deploying-wordpress-on-a-private-subnet-in-aws-ec2-using-a-linux-server-4a65&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  9. Database Backup and Restore
&lt;/h2&gt;

&lt;p&gt;Automate the backup and restore of a MySQL database in AWS RDS.&lt;/p&gt;

&lt;p&gt;Step 1: Create a MySQL database in RDS or use an existing one for this workshop. Make sure automated backups are enabled on the database under the Maintenance and Backups tab. Click on 'Modify' to change the retention days, backup window, or duration, and save the changes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fptobbb6camqndiy2i5pd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fptobbb6camqndiy2i5pd.png" alt="Image description" width="800" height="699"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Step 2: Click on 'Automated backups' in the left dashboard. Select the database backup that you would like to restore and use the Actions button to choose 'Restore to point in time'. Configure the database as you like and launch it to create a restored database from the backup.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwjbo5q7av8u6pyk48wa8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwjbo5q7av8u6pyk48wa8.png" alt="Image description" width="800" height="413"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  10. 3-Node Kubernetes Cluster Setup
&lt;/h2&gt;

&lt;p&gt;Build a Kubernetes cluster with 1 master node and 2 worker nodes using Ubuntu OS VMs.&lt;br&gt;
Step 1: Create 3 EC2 instances with Ubuntu as their AMI, with one of them being the master node and the other 2 being worker nodes. Create security groups that let the nodes reach each other (including the API server on port 6443) and allow SSH access.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ihxrn0vxi7to5imsvdv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ihxrn0vxi7to5imsvdv.png" alt="Image description" width="800" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Step 2: SSH into the Master Node and use the following code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Install docker&lt;/strong&gt;&lt;br&gt;
sudo apt update&lt;br&gt;
sudo apt install docker.io -y&lt;br&gt;
sudo systemctl start docker&lt;br&gt;
sudo systemctl enable docker&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Install Kubernetes components&lt;/strong&gt;&lt;br&gt;
sudo mkdir -p /etc/apt/keyrings&lt;/p&gt;

&lt;p&gt;curl -fsSL &lt;a href="https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key" rel="noopener noreferrer"&gt;https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key&lt;/a&gt; | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg&lt;/p&gt;

&lt;p&gt;echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] &lt;a href="https://pkgs.k8s.io/core:/stable:/v1.28/deb/" rel="noopener noreferrer"&gt;https://pkgs.k8s.io/core:/stable:/v1.28/deb/&lt;/a&gt; /" | sudo tee /etc/apt/sources.list.d/kubernetes.list&lt;/p&gt;

&lt;p&gt;sudo apt update&lt;br&gt;
sudo apt install -y kubelet kubeadm kubectl socat&lt;br&gt;
sudo apt-mark hold kubeadm kubelet kubectl&lt;/p&gt;

&lt;p&gt;Step 3: Initialize the master node and install a pod network using the following code.&lt;br&gt;
&lt;strong&gt;Initializing master node and setting up kubeconfig&lt;/strong&gt;&lt;br&gt;
sudo kubeadm init --pod-network-cidr=10.244.0.0/16&lt;/p&gt;

&lt;p&gt;mkdir -p $HOME/.kube&lt;br&gt;
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config&lt;br&gt;
sudo chown $(id -u):$(id -g) $HOME/.kube/config&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Setting up Pod network&lt;/strong&gt;&lt;br&gt;
kubectl apply -f &lt;a href="https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml" rel="noopener noreferrer"&gt;https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdg6obflvphxvh6fveeb3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdg6obflvphxvh6fveeb3.png" alt="Image description" width="800" height="303"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Step 4: After you set up the master node, you will get the join command with a token for the worker nodes. Use it to join the worker nodes to the master node. Now, if you check the status of the nodes, you will see that the 2 worker nodes and the master node form the Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqhszfq7knkdnil97plyn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqhszfq7knkdnil97plyn.png" alt="Image description" width="800" height="372"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Focqipuqjfxokkhbl85gk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Focqipuqjfxokkhbl85gk.png" alt="Image description" width="800" height="425"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  11. Elastic Load Balancing
&lt;/h2&gt;

&lt;p&gt;Configure AWS Elastic Load Balancer (ELB) for an auto-scaling web application.&lt;/p&gt;

&lt;p&gt;Step 1: Launch EC2 instances and make sure the instances' security group allows HTTP (port 80).&lt;br&gt;
Step 2: Create an Auto Scaling launch template by clicking on Launch Templates in the EC2 console. Configure it based on your needs. Here, we will deploy a web app using the Apache server and simple user data.&lt;/p&gt;

&lt;p&gt;Simple User Data:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash
yum update -y
yum install -y httpd
systemctl start httpd
systemctl enable httpd
echo "&amp;lt;h1&amp;gt;Hello World from $(hostname -f)&amp;lt;/h1&amp;gt;" &amp;gt; /var/www/html/index.html
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Step 3: Click on 'Create Auto Scaling group' in the ASG console and select the launch template that you just created. Set Min: 0, Max: 3, Desired: 2. Set scaling policies to auto-scale based on CPU utilization or memory usage.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhm5ricoqjbvhzv4n1s12.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhm5ricoqjbvhzv4n1s12.png" alt="Image description" width="800" height="471"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Step 4: Click 'Create Load Balancer' in the EC2 console and choose Application Load Balancer (ALB).&lt;br&gt;
Select the VPC and subnets. Configure a listener (port 80 for HTTP).&lt;/p&gt;

&lt;p&gt;Step 5: Under Target Groups, create a new target group to register your instances. Choose the EC2 Instances target type and select HTTP/HTTPS as the protocol.&lt;br&gt;
Choose the health check for your application (e.g., HTTP with a path /healthcheck). Add the EC2 instances as targets in your target group.&lt;/p&gt;

&lt;p&gt;Step 6: Edit your Auto Scaling Group and, under the Load Balancing section, select 'Attach to an existing load balancer', attach the ELB you just created, and select the target group created for the ELB.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe212kk8d5n8o0a5gwgw8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe212kk8d5n8o0a5gwgw8.png" alt="Image description" width="800" height="347"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Step 7: If you type the DNS name of your ELB into your browser, it will switch between the two instances deployed by the ASG. This shows that our load is balanced across multiple EC2 instances.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvsbyhitej5fxa5z2e55e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvsbyhitej5fxa5z2e55e.png" alt="Image description" width="800" height="509"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faomxns23zcqq49gfusye.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faomxns23zcqq49gfusye.png" alt="Image description" width="800" height="515"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  13. AWS IAM Role Setup
&lt;/h2&gt;

&lt;p&gt;Create custom IAM roles for secure access control to specific AWS resources.&lt;/p&gt;

&lt;p&gt;Step 1: Go to IAM and click the 'Create Role' button.&lt;br&gt;
Choose the AWS service that will assume this role (Lambda). We will create the role that I used for Python automation with Lambda in 'AWS Lambda with EC2 Tag Management'.&lt;/p&gt;

&lt;p&gt;Step 2: Select the existing AmazonEC2FullAccess policy or create a custom policy. To create a custom policy, go to the Policies section, choose 'Create policy', and write the JSON document containing all the permissions we need (ec2:DescribeInstances, ec2:StartInstances, ec2:StopInstances). Give the name, tags, and other optional details and create the role.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhapz5mj5g5k9ro0osblo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhapz5mj5g5k9ro0osblo.png" alt="Image description" width="800" height="493"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Step 3: We can now use this role to control our EC2 instances using Lambda.&lt;/p&gt;

&lt;h2&gt;
  
  
  14. DNS Setup with Route 53
&lt;/h2&gt;

&lt;p&gt;Configure Amazon Route 53 to route traffic to multiple endpoints based on geolocation.&lt;/p&gt;

&lt;p&gt;Step 1: Buy a domain or use one you already own. Create a hosted zone for your domain: click 'Create Hosted Zone' and enter the name of your domain. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe4lqovwuu49t5vsfzn7b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe4lqovwuu49t5vsfzn7b.png" alt="Image description" width="800" height="355"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2uk7kk0wa303r1bcwd5y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2uk7kk0wa303r1bcwd5y.png" alt="Image description" width="800" height="393"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Step 2: In the hosted zone, create records that will route traffic based on geolocation.&lt;br&gt;
Click Create Record and Select 'A' Record and enter a valid IP address/AWS resource for the record (e.g., 172.217.12.100).&lt;/p&gt;

&lt;p&gt;Step 3: In the Routing Policy section, select Geolocation. Select the country to which this record will apply. Do this for different locations like North America, Europe, etc. Also create a default record for DNS queries from locations not specified in the hosted zone records.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9jzhxnliyetq3uartqrc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9jzhxnliyetq3uartqrc.png" alt="Image description" width="800" height="437"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Step 4: Queries for this domain from different regions will now resolve to different endpoints.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnpzz77x4bhmn20eui8dt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnpzz77x4bhmn20eui8dt.png" alt="Image description" width="800" height="111"&gt;&lt;/a&gt;&lt;/p&gt;
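&lt;p&gt;The record created through the console above can also be expressed as a Route 53 change batch. The sketch below is illustrative only: the domain name, IP address, and hosted-zone ID are placeholders for your own values.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Changes": [{
    "Action": "CREATE",
    "ResourceRecordSet": {
      "Name": "example.com",
      "Type": "A",
      "SetIdentifier": "north-america",
      "GeoLocation": { "ContinentCode": "NA" },
      "TTL": 300,
      "ResourceRecords": [{ "Value": "172.217.12.100" }]
    }
  }]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Saved as record.json, this can be applied with &lt;code&gt;aws route53 change-resource-record-sets --hosted-zone-id YOUR_ZONE_ID --change-batch file://record.json&lt;/code&gt;. Note that geolocation records require a SetIdentifier to distinguish the records for each location.&lt;/p&gt;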

&lt;h2&gt;
  
  
  15. Cloud Migration Plan
&lt;/h2&gt;

&lt;p&gt;Create a detailed migration plan to move an on-premise application to AWS. Include architecture diagrams, tools, and risk mitigation strategies.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fljpmxua02yi8fx8tle3g.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fljpmxua02yi8fx8tle3g.jpg" alt="Image description" width="783" height="584"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Highly Available, Fault-tolerant E-Commerce Web Application Deployment on AWS using Terraform.</title>
      <dc:creator>Madhesh Waran</dc:creator>
      <pubDate>Tue, 20 Aug 2024 11:09:15 +0000</pubDate>
      <link>https://dev.to/madhesh_waran_63/highly-available-fault-tolerant-e-commerce-web-application-deployment-on-aws-using-terraform-pic</link>
      <guid>https://dev.to/madhesh_waran_63/highly-available-fault-tolerant-e-commerce-web-application-deployment-on-aws-using-terraform-pic</guid>
      <description>&lt;p&gt;In today’s fast-paced digital world, ensuring that your application is always available and can handle failures gracefully is crucial. AWS provides robust infrastructure options, and with Terraform, you can manage this infrastructure as code, ensuring consistency, repeatability, and scalability.&lt;/p&gt;

&lt;p&gt;In this blog, I’ll walk you through deploying a highly available, fault-tolerant e-commerce web application on AWS using Terraform.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before we start, ensure you have the following:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS Account&lt;/strong&gt;: You'll need an active AWS account with sufficient permissions.&lt;br&gt;
&lt;strong&gt;Terraform Installed&lt;/strong&gt;: Install Terraform on your local machine. You can download it from the official Terraform website.&lt;br&gt;
&lt;strong&gt;AWS CLI&lt;/strong&gt;: Install and configure the AWS CLI with your credentials.&lt;br&gt;
&lt;strong&gt;Basic Knowledge&lt;/strong&gt;: Familiarity with AWS services (like EC2, RDS, and S3) and Terraform basics.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Overview of the Architecture
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Key Components:&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;VPC (Virtual Private Cloud)&lt;/strong&gt;: Isolated network for the application.&lt;br&gt;
&lt;strong&gt;Subnets&lt;/strong&gt;: Public and private subnets across multiple Availability Zones (AZs).&lt;br&gt;
&lt;strong&gt;Internet Gateway&lt;/strong&gt;: To allow access to the public subnet.&lt;br&gt;
&lt;strong&gt;NAT Gateway&lt;/strong&gt;: For outbound traffic from the private subnets.&lt;br&gt;
&lt;strong&gt;EC2 Instances&lt;/strong&gt;: To host the web application.&lt;br&gt;
&lt;strong&gt;Auto Scaling Group (ASG)&lt;/strong&gt;: For scaling EC2 instances based on demand.&lt;br&gt;
&lt;strong&gt;Elastic Load Balancer (ELB)&lt;/strong&gt;: For distributing traffic across multiple instances.&lt;br&gt;
&lt;strong&gt;RDS (Relational Database Service)&lt;/strong&gt;: For a highly available database.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Will update with the full code soon.&lt;/strong&gt;&lt;/p&gt;
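&lt;p&gt;In the meantime, a minimal Terraform skeleton for the components above might look like this. It is a sketch, not the final code: the CIDR blocks, AMI ID, names, and sizes are all placeholders, and the NAT gateway, RDS, and private subnets are omitted for brevity.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provider "aws" {
  region = "us-east-1"
}

# Isolated network for the application
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

# An ALB needs subnets in at least two Availability Zones
resource "aws_subnet" "public_a" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.1.0/24"
  availability_zone = "us-east-1a"
}

resource "aws_subnet" "public_b" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.2.0/24"
  availability_zone = "us-east-1b"
}

resource "aws_lb" "web" {
  name               = "ecommerce-alb"
  load_balancer_type = "application"
  subnets            = [aws_subnet.public_a.id, aws_subnet.public_b.id]
}

resource "aws_launch_template" "web" {
  name_prefix   = "web-"
  image_id      = "ami-12345678" # placeholder AMI
  instance_type = "t2.micro"
}

# Scales EC2 instances across both AZs based on demand
resource "aws_autoscaling_group" "web" {
  min_size            = 2
  max_size            = 4
  desired_capacity    = 2
  vpc_zone_identifier = [aws_subnet.public_a.id, aws_subnet.public_b.id]

  launch_template {
    id      = aws_launch_template.web.id
    version = "$Latest"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;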

</description>
      <category>aws</category>
      <category>terraform</category>
    </item>
    <item>
      <title>Dockerizing 3-Tier E-Commerce Web Application</title>
      <dc:creator>Madhesh Waran</dc:creator>
      <pubDate>Tue, 20 Aug 2024 10:05:01 +0000</pubDate>
      <link>https://dev.to/madhesh_waran_63/dockerizing-3-tier-e-commerce-web-application-1898</link>
      <guid>https://dev.to/madhesh_waran_63/dockerizing-3-tier-e-commerce-web-application-1898</guid>
      <description>&lt;p&gt;We are going to deploy a simple 3 Tier Architecture containing a frontend, a backend and a database on Docker.&lt;br&gt;
Docker can be used to create simple containers. But you need &lt;strong&gt;Docker Compose&lt;/strong&gt; to deploy applications that contain multiple containers.&lt;/p&gt;
&lt;h2&gt;
  
  
  Frontend Container:
&lt;/h2&gt;

&lt;p&gt;Create a frontend folder that has the required HTML, CSS, and JavaScript files.&lt;br&gt;
Next, create a file called &lt;strong&gt;docker-compose.yml&lt;/strong&gt; with the following contents.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: "3"
services:
  frontend:
    image: httpd:latest
    volumes:
      - "./frontend:/usr/local/apache2/htdocs"
    ports:
      - 3000:80

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this file, we tell Docker to create a container from the httpd image with the latest tag and mount the local files in ./frontend to the /usr/local/apache2/htdocs directory inside the container. We also map port 3000 on the host to port 80 in the container.&lt;/p&gt;

&lt;p&gt;Use &lt;code&gt;docker compose up&lt;/code&gt; to deploy the application and access the frontend application on '&lt;a href="http://localhost:3000" rel="noopener noreferrer"&gt;http://localhost:3000&lt;/a&gt;' in your browser. Use Ctrl + C to stop the container.&lt;/p&gt;

&lt;h2&gt;
  
  
  Backend Container:
&lt;/h2&gt;

&lt;p&gt;Create a backend folder that contains the required PHP file. Make sure that the PHP file contains,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;header('Access-Control-Allow-Origin: http://localhost:3000');
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;so that we can call the backend API from the frontend.&lt;br&gt;
Instead of using a pre-existing image, we are going to build an image that contains all of our required dependencies.&lt;br&gt;
Create a new file called &lt;code&gt;Dockerfile&lt;/code&gt; with the following contents.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM ubuntu:20.04

LABEL maintainer="testexample@gmail.com"
LABEL description="Apache / PHP development environment"

ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update &amp;amp;&amp;amp; apt-get install -y lsb-release &amp;amp;&amp;amp; apt-get clean all
RUN apt install ca-certificates apt-transport-https software-properties-common -y
RUN add-apt-repository ppa:ondrej/php

RUN apt-get -y update &amp;amp;&amp;amp; apt-get install -y \
apache2 \
php8.0 \
libapache2-mod-php8.0 \
php8.0-bcmath \
php8.0-gd \
php8.0-sqlite3 \
php8.0-mysql \
php8.0-curl \
php8.0-xml \
php8.0-mbstring \
php8.0-zip \
mcrypt \
nano

RUN apt-get install -y locales
RUN locale-gen fr_FR.UTF-8
RUN locale-gen en_US.UTF-8
RUN locale-gen de_DE.UTF-8

# config PHP
# we want a dev server which shows PHP errors
RUN sed -i -e 's/^error_reporting\s*=.*/error_reporting = E_ALL/' /etc/php/8.0/apache2/php.ini
RUN sed -i -e 's/^display_errors\s*=.*/display_errors = On/' /etc/php/8.0/apache2/php.ini
RUN sed -i -e 's/^zlib.output_compression\s*=.*/zlib.output_compression = Off/' /etc/php/8.0/apache2/php.ini

# to be able to use "nano" with shell on "docker exec -it [CONTAINER ID] bash"
ENV TERM xterm

# Apache conf
# allow .htaccess with RewriteEngine
RUN a2enmod rewrite
# to see live logs we do : docker logs -f [CONTAINER ID]
# without the following line we get "AH00558: apache2: Could not reliably determine the server's fully qualified domain name"
RUN echo "ServerName localhost" &amp;gt;&amp;gt; /etc/apache2/apache2.conf
# autorise .htaccess files
RUN sed -i '/&amp;lt;Directory \/var\/www\/&amp;gt;/,/&amp;lt;\/Directory&amp;gt;/ s/AllowOverride None/AllowOverride All/' /etc/apache2/apache2.conf

RUN chgrp -R www-data /var/www
RUN find /var/www -type d -exec chmod 775 {} +
RUN find /var/www -type f -exec chmod 664 {} +

EXPOSE 80

# start Apache2 on image start
CMD ["/usr/sbin/apache2ctl","-DFOREGROUND"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above Dockerfile builds an Apache server image and installs everything needed to run PHP on it.&lt;/p&gt;

&lt;p&gt;Now let us update the &lt;code&gt;docker-compose.yml&lt;/code&gt; file to use the above image to create the backend.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  backend:
    container_name: simple-backend
    build:
      context: ./
      dockerfile: Dockerfile
    volumes:
      - "./backend:/var/www/html/"
    ports:
      - 5000:80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This tells Docker to build the backend container from the Dockerfile image and mount the local files in ./backend to the /var/www/html directory inside the container. We also map port 5000 on the host to port 80 in the container.&lt;/p&gt;

&lt;p&gt;Rerun &lt;code&gt;docker compose up&lt;/code&gt; to create the frontend and backend containers and access the backend application on '&lt;a href="http://localhost:5000" rel="noopener noreferrer"&gt;http://localhost:5000&lt;/a&gt;' in your browser.&lt;/p&gt;

&lt;p&gt;Make sure to update the JavaScript and PHP files so that the frontend can reach the backend API.&lt;/p&gt;

&lt;h2&gt;
  
  
  Database
&lt;/h2&gt;

&lt;p&gt;Create a folder named db and write a dump.sql file that creates a table and dumps values into that table.&lt;/p&gt;
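&lt;p&gt;A minimal dump.sql might look like the sketch below. The columns and rows are just examples; the only assumption carried over from the rest of this post is that the table is named products, since the backend later selects from it.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Runs automatically on first startup against the MYSQL_DATABASE database
CREATE TABLE products (
  id INT AUTO_INCREMENT PRIMARY KEY,
  name VARCHAR(100) NOT NULL,
  price DECIMAL(10, 2) NOT NULL
);

INSERT INTO products (name, price) VALUES
  ('T-shirt', 19.99),
  ('Coffee mug', 8.50);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Any .sql file placed in /docker-entrypoint-initdb.d is executed by the MySQL image the first time the container starts with an empty data directory.&lt;/p&gt;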

&lt;p&gt;To make Docker create our database, let us append the following code to our &lt;code&gt;docker-compose.yml&lt;/code&gt; file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    database:
    image: mysql:latest
    environment:
      MYSQL_DATABASE: web_commerce
      MYSQL_USER: testuser
      MYSQL_PASSWORD: password
      MYSQL_ALLOW_EMPTY_PASSWORD: 1
    volumes:
      - "./db:/docker-entrypoint-initdb.d"
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    ports:
      - 8080:80
    environment:
      - PMA_HOST=database
      - PMA_PORT=3306
    depends_on:
      - database
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This creates a MySQL server and populates our table with the required values. To browse the databases, we add a phpmyadmin container and log in with the username and password configured above.&lt;/p&gt;
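&lt;p&gt;Putting the fragments from the previous sections together, the complete &lt;code&gt;docker-compose.yml&lt;/code&gt; looks like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: "3"
services:
  frontend:
    image: httpd:latest
    volumes:
      - "./frontend:/usr/local/apache2/htdocs"
    ports:
      - 3000:80
  backend:
    container_name: simple-backend
    build:
      context: ./
      dockerfile: Dockerfile
    volumes:
      - "./backend:/var/www/html/"
    ports:
      - 5000:80
  database:
    image: mysql:latest
    environment:
      MYSQL_DATABASE: web_commerce
      MYSQL_USER: testuser
      MYSQL_PASSWORD: password
      MYSQL_ALLOW_EMPTY_PASSWORD: 1
    volumes:
      - "./db:/docker-entrypoint-initdb.d"
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    ports:
      - 8080:80
    environment:
      - PMA_HOST=database
      - PMA_PORT=3306
    depends_on:
      - database
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;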

&lt;p&gt;Typing &lt;code&gt;docker compose up&lt;/code&gt; in your command line will now create all three containers: frontend, backend, and database. Use '&lt;a href="http://localhost:8080" rel="noopener noreferrer"&gt;http://localhost:8080&lt;/a&gt;' to access the database through phpMyAdmin. &lt;/p&gt;

&lt;h2&gt;
  
  
  Configuring backend container to access the database
&lt;/h2&gt;

&lt;p&gt;Create a folder called app inside the backend folder.&lt;br&gt;
To let the backend read data from the database, we first create a &lt;code&gt;config.php&lt;/code&gt; file that holds the connection details.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;config.php&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;?php 
define("DB_HOST", "database");
define("DB_USERNAME", "testuser");
define("DB_PASSWORD", "password");
define("DB_NAME", "web_commerce");
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We then create a &lt;code&gt;database.php&lt;/code&gt; file that contains the script that maintains our connection to the database.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;database.php&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;?php
class Database
{
    protected $connection = null;

    public function __construct()
    {
        try {
            $this-&amp;gt;connection = new mysqli(DB_HOST, DB_USERNAME, DB_PASSWORD, DB_NAME);
            if (mysqli_connect_errno()) {
                throw new Exception("Database connection failed!");
            }
        } catch (Exception $e) {
            throw new Exception($e-&amp;gt;getMessage());
        }
    }

    private function executeStatement($query = "", $params = [])
    {
        try {
            $stmt = $this-&amp;gt;connection-&amp;gt;prepare($query);
            if ($stmt === false) {
                throw new Exception("Statement preparation failure: " . $query);
            }
            if ($params) {
                $stmt-&amp;gt;bind_param($params[0], $params[1]);
            }
            $stmt-&amp;gt;execute();
            return $stmt;
        } catch (Exception $e) {
            throw new Exception($e-&amp;gt;getMessage());
        }
    }

    public function select($query = "", $params = [])
    {
        try {
            $stmt = $this-&amp;gt;executeStatement($query, $params);
            $result = $stmt-&amp;gt;get_result()-&amp;gt;fetch_all(MYSQLI_ASSOC);
            $stmt-&amp;gt;close();
            return $result;
        } catch (Exception $e) {
            throw new Exception($e-&amp;gt;getMessage());
        }
        return false;
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We create a new &lt;code&gt;web_commerce.php&lt;/code&gt; file that contains the data-access code used by the &lt;code&gt;index.php&lt;/code&gt; file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;?php 
require_once "./app/database.php";

class Products extends Database
{
    public function getProducts($limit)
    {
        return $this-&amp;gt;select("SELECT * FROM products");
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This was my index.php file after final updates. Use this as a reference to make your own.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;?php
header("Content-Type: application/json");
header('Access-Control-Allow-Origin: http://localhost:3000');
header('Access-Control-Allow-Methods: GET, POST, OPTIONS');

session_start();
file_put_contents('php://stderr', print_r($_GET, TRUE)); // Log to the error log

require "./app/config.php";
require_once "./app/web_commerce.php";

$productModel = new Products();
$products = $productModel-&amp;gt;getProducts(10);

$cart = [];

// Determine the API action based on the 'action' parameter in the query string
$action = isset($_GET['action']) ? $_GET['action'] : null;

switch ($action) {
    case 'getProducts':
        getProducts();
        break;
    case 'getProductDetails':
        getProductDetails();
        break;
    case 'addToCart':
        addToCart();
        break;
    case 'getCart':
        getCart();
        break;
    default:
        echo json_encode(["error" =&amp;gt; "Invalid action"]);
        break;
}

// Function to get all products
function getProducts() {
    global $products;
    echo json_encode($products);
}

// Function to get details of a single product by ID
function getProductDetails() {
    global $products;
    $id = isset($_GET['id']) ? intval($_GET['id']) : null;

    if ($id === null) {
        echo json_encode(["error" =&amp;gt; "Product ID is required"]);
        return;
    }

    foreach ($products as $product) {
        if ($product['id'] === $id) {
            echo json_encode($product);
            return;
        }
    }

    echo json_encode(["error" =&amp;gt; "Product not found"]);
}

// Function to add a product to the cart
function addToCart() {
    global $cart, $products;
    $id = isset($_GET['id']) ? intval($_GET['id']) : null;

    if ($id === null) {
        echo json_encode(["error" =&amp;gt; "Product ID is required"]);
        return;
    }

    foreach ($products as $product) {
        if ($product['id'] === $id) {
            $cart[] = $product;
            echo json_encode(["message" =&amp;gt; "Product added to cart", "cart" =&amp;gt; $cart]);
            return;
        }
    }

    echo json_encode(["error" =&amp;gt; "Product not found"]);
}

// Function to get the current cart contents
function getCart() {
    global $cart;
    echo json_encode($cart);
}

?&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will ensure that your backend can effortlessly fetch the values from your database.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Final folder structure:
C:.
│   docker-compose.yml
│   Dockerfile
│
├───backend
│   │   index.php
│   │
│   └───app
│           config.php
│           database.php
│           web_commerce.php
│
├───db
│       dump.sql
│
└───frontend
        index.html
        script.js
        styles.css
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>docker</category>
    </item>
    <item>
      <title>Deploying WordPress on a Private Subnet in AWS EC2 Using a Linux Server</title>
      <dc:creator>Madhesh Waran</dc:creator>
      <pubDate>Thu, 06 Jun 2024 07:21:22 +0000</pubDate>
      <link>https://dev.to/madhesh_waran_63/deploying-wordpress-on-a-private-subnet-in-aws-ec2-using-a-linux-server-4a65</link>
      <guid>https://dev.to/madhesh_waran_63/deploying-wordpress-on-a-private-subnet-in-aws-ec2-using-a-linux-server-4a65</guid>
      <description>&lt;p&gt;After conquering the AWS Cloud Resume Challenge, I decided to build another project that would broaden my cloud skills. This time I wanted to deploy the WordPress application using a secure LAMP stack on a private subnet.&lt;br&gt;
The LAMP stack consists of Linux, Apache, MySQL, and PHP and provides an efficient server environment for application development and web hosting.&lt;br&gt;
Here, I will take you through the process of setting up a LAMP stack and deploying WordPress on a private subnet to host secure websites.&lt;/p&gt;

&lt;p&gt;The first thing you should do is create a VPC with private and public subnets. Let's see how it's done.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi28qia2twx0e2dvmvdhb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi28qia2twx0e2dvmvdhb.png" alt="Creating a VPC"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating a VPC:
&lt;/h2&gt;

&lt;p&gt;Log in to your AWS console and select Create VPC in the Your VPCs section. There are two ways to go: choose VPC only and configure the rest later, or choose VPC and more and configure everything in one step. I wanted to configure everything myself, so I chose the first option; pick the second if you want to skip some steps. Either way, when asked to specify an IPv4 CIDR block for your VPC, use a private range. I used 10.0.0.0/16, which covers 65,536 IP addresses from 10.0.0.0 to 10.0.255.255. After creating the VPC, we can start creating the public and private subnets for this project.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating our required subnets:
&lt;/h2&gt;

&lt;p&gt;For my purpose, I needed three public subnets and three private subnets spread across Availability Zones. To create them, go to the Subnets section of your VPC and select Create Subnet. Select the VPC ID of the VPC you just created and fill in the details of the first subnet. Click Add New Subnet for as many subnets as you need and fill in your desired details.&lt;br&gt;
For my project, I filled them in as below:&lt;/p&gt;

&lt;p&gt;Subnet name - public-subnet-1&lt;br&gt;
Availability Zone - us-east-1a&lt;br&gt;
CIDR block for the subnet: 10.0.0.0/24&lt;/p&gt;

&lt;p&gt;Subnet name - private-subnet-1&lt;br&gt;
Availability Zone - us-east-1a&lt;br&gt;
CIDR block for the subnet: 10.0.1.0/24&lt;/p&gt;

&lt;p&gt;Subnet name - public-subnet-2&lt;br&gt;
Availability Zone - us-east-1b&lt;br&gt;
CIDR block for the subnet: 10.0.2.0/24&lt;/p&gt;

&lt;p&gt;Subnet name - private-subnet-2&lt;br&gt;
Availability Zone - us-east-1b&lt;br&gt;
CIDR block for the subnet: 10.0.3.0/24&lt;/p&gt;

&lt;p&gt;Subnet name - public-subnet-3&lt;br&gt;
Availability Zone - us-east-1c&lt;br&gt;
CIDR block for the subnet: 10.0.4.0/24&lt;/p&gt;

&lt;p&gt;Subnet name - private-subnet-3&lt;br&gt;
Availability Zone - us-east-1c&lt;br&gt;
CIDR block for the subnet: 10.0.5.0/24&lt;/p&gt;

&lt;p&gt;After creating the subnets, we need to give them connectivity beyond the VPC using gateways.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating Internet and NAT gateways:
&lt;/h2&gt;

&lt;p&gt;The internet gateway is what distinguishes public subnets from private ones. A subnet that routes traffic to an IGW is a public subnet; one that doesn't is, by definition, a private subnet. Create an Internet Gateway from the VPC section and attach it to the VPC you created.&lt;/p&gt;

&lt;p&gt;Private subnets still need a way to reach the outside world, and that is the job of a NAT gateway. A NAT gateway is placed in a public subnet and acts as a proxy for traffic from the VPC's private subnets to the rest of the internet. Create a NAT gateway, choosing a public subnet as its subnet, and click Allocate Elastic IP. After creating the gateways, we configure their traffic using route tables.&lt;/p&gt;

&lt;h2&gt;
  
  
  Configure Route Tables:
&lt;/h2&gt;

&lt;p&gt;The main route table created with the VPC is used as the default for subnets that have no explicit route table association. We make those subnets private by pointing this main table at the NAT gateway: click Edit Routes, then Add Route, enter 0.0.0.0/0 as the Destination, select NAT Gateway as the Target, and pick the NAT gateway you created earlier.&lt;br&gt;
For the public subnets, we create a new route table that targets the IGW. Create a route table and associate it with the VPC you just deployed. Click Edit Routes, then Add Route, and configure the table by entering 0.0.0.0/0 as the Destination and selecting Internet Gateway as the Target, then the internet gateway you created above. Now navigate back to the route tables list and select the public route table you just created. Select the Subnet Associations tab, click Edit subnet associations, pick the public subnets you created, and save to make them public subnets.&lt;/p&gt;

&lt;p&gt;Doing this will ensure that you have an AWS VPC (Virtual Private Cloud) with public and private subnets with NAT gateway access for private subnets.&lt;br&gt;
For more details refer to &lt;a href="https://docs.aws.amazon.com/vpc/latest/userguide/create-vpc.html" rel="noopener noreferrer"&gt;this document.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now that the networking part of the project is over, let's create the instance that will host and deploy our WordPress dependencies.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating an Amazon Linux 2023 Instance:
&lt;/h2&gt;

&lt;p&gt;Almost all of the tutorials and guides for setting up a LAMP stack out there are based on the Amazon Linux 2 AMI or Ubuntu AMI, so to keep things fresh, let's deploy the LAMP stack on an Amazon Linux 2023 EC2 instance.&lt;/p&gt;

&lt;p&gt;Navigate to the EC2 dashboard and click on “Launch Instance”. Choose the most recent “Amazon Linux 2023 AMI”. Choose the t2.micro instance that qualifies for the AWS free tier. You won't be using key pairs to SSH into that instance but if you wish you can Use an Existing Key Pair or Create a New Key Pair. Click the edit button next to network settings and enter the following details to deploy the instance in our private subnet.&lt;br&gt;
VPC - (Choose the VPC you deployed)&lt;br&gt;
Subnet - (Choose one of the private subnets, e.g., private-subnet-1)&lt;br&gt;
Create a new security group with the following configuration:&lt;br&gt;
SSH: Port 22 Protocol: TCP (Source: Anywhere)&lt;br&gt;
HTTP: Port 80 Protocol: TCP (Source: Anywhere).&lt;br&gt;
HTTPS: Port 443 Protocol: TCP (Source: Anywhere).&lt;br&gt;
Click on “Launch Instance” once everything’s set.&lt;/p&gt;

&lt;p&gt;Now since the instance is completely locked down in the private subnet, you cannot access it using normal ways. If you require additional access via SSH or Remote Desktop, you can try one of the following ways.&lt;br&gt;
1) Use a Bastion host in a public subnet that can SSH into our private secure instance in the private subnet. (Old way of doing things, incurs costs due to the provision of EC2 resources)&lt;br&gt;
2) Connect to the private EC2 Instances using SSM.&lt;br&gt;
3) Connect using EC2 Instance Connect Endpoint.&lt;/p&gt;

&lt;p&gt;There are loads of resources available for the first method. You can use this &lt;a href="https://towardsdatascience.com/going-bastion-less-accessing-private-ec2-instance-with-session-manager-c958cbf8489f" rel="noopener noreferrer"&gt;link&lt;/a&gt; for a detailed tutorial on using the second method. This article will guide you through the third method.&lt;/p&gt;

&lt;p&gt;In the instance that you created, click on the connect button at the top. Select EC2 instance connect and click on Connect using EC2 Instance Connect Endpoint. The EC2 Instance Connect Endpoint sublist will be empty so we need to create one. Click on Create New Endpoint and you will be redirected to the VPC section. Select Endpoint and click on Create New Endpoint. Name the Endpoint and choose EC2 Instance Connect Endpoint as the Service Category. Select your VPC and create a new SG with no inbound rules and an Outbound rule allowing all traffic. Choose this SG for your endpoint. Select the same subnet that your instance is running on (eg: private_subnet_1). Now create the endpoint and choose it as the endpoint for EC2 Instance Connect Endpoint. Now this will allow you to SSH into your instance on the private server.&lt;br&gt;
Now that we can access our instance, we can SSH into it and set up and deploy our WordPress package. Let's leave the EC2 Instance Connect page alone for now.&lt;/p&gt;

&lt;p&gt;Before starting this let us create the RDS database that will store all our data.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating RDS Database:
&lt;/h2&gt;

&lt;p&gt;Instead of creating a host database, we are creating a new RDS database in another subnet and connecting it to our instance. Create a Database with the following configurations:&lt;br&gt;
Engine: MySQL&lt;br&gt;
Template: Free tier&lt;br&gt;
DB Instance Identifier: wp-database (RDS identifiers may only contain letters, digits, and hyphens)&lt;br&gt;
Master Username: admin&lt;br&gt;
Autogenerate or type in your password (remember this if you typed it in)&lt;br&gt;
DB Instance Class: db.t3.micro&lt;br&gt;
Connectivity: Don't Connect to an EC2 Compute Resource&lt;br&gt;
VPC: (Choose your VPC)&lt;br&gt;
Subnet Group: Default&lt;br&gt;
Public Access: No&lt;br&gt;
Choose the default option for rest and create the database.&lt;/p&gt;

&lt;p&gt;After creating the database, edit the inbound rules of its security group: add a rule of type MYSQL/Aurora with Source set to Custom and the ID of your instance's security group (this step is very important). &lt;br&gt;
&lt;u&gt;Now write down the username and password and get the DB endpoint address&lt;/u&gt; (it will look like wp-database.msifunffjxhn.us-east-1.rds.amazonaws.com).&lt;br&gt;
This ensures that your instance can reach the RDS database.&lt;/p&gt;

&lt;p&gt;Now let's SSH into the instance using EC2 Instance Connect Endpoint and set up and deploy our LAMP stack and WordPress application.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting up LAMP Stack:
&lt;/h2&gt;

&lt;p&gt;Run the following commands in the instance shell to install and start the LAMP stack. &lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo dnf update -y&lt;br&gt;
&lt;/code&gt;&lt;br&gt;
&lt;code&gt;sudo dnf install -y httpd wget php-fpm php-mysqli php-json php php-devel&lt;br&gt;
&lt;/code&gt;&lt;br&gt;
&lt;code&gt;sudo dnf install -y mariadb105-server&lt;br&gt;
&lt;/code&gt;&lt;br&gt;
&lt;code&gt;sudo systemctl start httpd&lt;br&gt;
&lt;/code&gt;&lt;br&gt;
&lt;code&gt;sudo systemctl enable httpd&lt;br&gt;
&lt;/code&gt;&lt;br&gt;
&lt;code&gt;sudo systemctl enable mariadb&lt;br&gt;
sudo systemctl start mariadb&lt;br&gt;
&lt;/code&gt;&lt;br&gt;
To connect to our RDS database, run&lt;br&gt;
&lt;code&gt;&lt;br&gt;
mysql -h wp-database.xxxxx.us-east-1.rds.amazonaws.com -u admin -p&lt;/code&gt;&lt;br&gt;
(Note: do not copy-paste this hostname; get the endpoint address from your own RDS database.)&lt;br&gt;
When prompted, enter the password you configured while creating the RDS database. (The characters you type will not be displayed.)&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploying WordPress:
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;cd /tmp&lt;br&gt;
sudo wget https://wordpress.org/latest.tar.gz&lt;br&gt;
sudo tar xzvf latest.tar.gz&lt;br&gt;
&lt;/code&gt;&lt;br&gt;
&lt;code&gt;cd wordpress&lt;br&gt;
sudo mv * /var/www/html/&lt;br&gt;
cd /var/www/html/&lt;br&gt;
sudo mv wp-config-sample.php wp-config.php&lt;br&gt;
sudo nano wp-config.php&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw7ccbx9cnmg80u8ob22d.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw7ccbx9cnmg80u8ob22d.jpg" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Replace the database name, user, and password with the values you just created, and replace localhost with your RDS endpoint (wp-database.xxxxx.us-east-1.rds.amazonaws.com).&lt;/p&gt;
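&lt;p&gt;After the edit, the database section of wp-config.php ends up looking roughly like this. Every value below is a placeholder: substitute the database name, user, and password you set up, and your own RDS endpoint.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Placeholder values - use your own database name, user, and password
define( 'DB_NAME', 'wordpress' );
define( 'DB_USER', 'admin' );
define( 'DB_PASSWORD', 'your-password' );
// Replace localhost with your RDS endpoint
define( 'DB_HOST', 'wp-database.xxxxx.us-east-1.rds.amazonaws.com' );
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;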

&lt;p&gt;Now your instance will have all the necessary dependencies to host WordPress. But you have no way of accessing it since it is in the private subnet. To access it, you need to use an Application Load Balancer to route all the HTTP traffic to the instance.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploying Load Balancer:
&lt;/h2&gt;

&lt;p&gt;Create a security group for the load balancer and add an inbound rule allowing HTTP/80 from Anywhere.&lt;br&gt;
Create a target group so that the load balancer routes traffic to the registered targets. Provide the target group name, protocol, port, VPC, and health checks, then register the two private EC2 instances in the target group.&lt;br&gt;
It is now time to create the Application Load Balancer.&lt;br&gt;
Give the ALB a name, select the VPC you created, and attach it to the public subnets (only public subnets can receive HTTP requests from the internet). Add the security group you created and, under Listeners and Routing, choose the target group created above. Wait about five minutes for the ALB to become active.&lt;br&gt;
Copy the DNS name from the load balancer and paste it into a browser.&lt;/p&gt;
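&lt;p&gt;Since this challenge deploys everything else with Terraform, the same load balancer setup can also be expressed in code. A minimal sketch, assuming the VPC, two public subnets, and the ALB security group are defined elsewhere in the same configuration (all resource names here are illustrative):&lt;/p&gt;

```hcl
# Illustrative sketch -- referenced VPC, subnet, and security group names are assumptions.
resource "aws_lb_target_group" "wordpress" {
  name     = "wordpress-tg"
  port     = 80
  protocol = "HTTP"
  vpc_id   = aws_vpc.main.id

  health_check {
    path = "/"
  }
}

resource "aws_lb" "wordpress" {
  name               = "wordpress-alb"
  load_balancer_type = "application"
  security_groups    = [aws_security_group.alb.id]
  subnets            = [aws_subnet.public_a.id, aws_subnet.public_b.id]
}

# Listener on port 80 that forwards all HTTP traffic to the target group.
resource "aws_lb_listener" "http" {
  load_balancer_arn = aws_lb.wordpress.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.wordpress.arn
  }
}
```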

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0hz3vn7i3923ncjw5weu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0hz3vn7i3923ncjw5weu.png" alt="Final Result"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You should be able to see the WordPress Configuration page.&lt;br&gt;
Enjoy.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>aws</category>
      <category>cloud</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Conquering AWS Cloud Resume Challenge</title>
      <dc:creator>Madhesh Waran</dc:creator>
      <pubDate>Fri, 24 May 2024 09:47:47 +0000</pubDate>
      <link>https://dev.to/madhesh_waran_63/conquering-aws-cloud-challenge-1799</link>
      <guid>https://dev.to/madhesh_waran_63/conquering-aws-cloud-challenge-1799</guid>
      <description>&lt;p&gt;The Cloud Resume Challenge was created by Forrest Brazeal to upgrade and showcase your cloud skills. Completing it requires you to follow a strict set of requirements that will test and challenge your understanding of the cloud.&lt;br&gt;
I did the AWS Cloud Resume Challenge and this is how it went:&lt;/p&gt;

&lt;h2&gt;
  
  
  Certification
&lt;/h2&gt;

&lt;p&gt;First, your resume needs to have an AWS certification on it. I got my AWS Cloud Practitioner certificate using Stephane Maarek's course on Udemy. I think the course and the accompanying practice test are enough if you score above 80% on your first try. But if you score below 70%, I would advise you to sit through some more practice tests on Udemy and not attempt the exam until you consistently score above 80% on most of them.&lt;/p&gt;

&lt;h2&gt;
  
  
  HTML
&lt;/h2&gt;

&lt;p&gt;Your resume needs to be written in HTML. Not a Word doc, not a PDF. I previously knew no HTML except the little I learned in fifth grade, but it is an easy language to pick up. I learned HTML using the w3schools website and The Odin Project and made a simple HTML page for my resume.&lt;/p&gt;

&lt;h2&gt;
  
  
  CSS
&lt;/h2&gt;

&lt;p&gt;Your resume needs to be styled with CSS. I already had a good grasp of CSS since HTML and CSS come as a package deal in most tutorials. I didn’t want to think too much about designing my website, so I just watched a YouTube video on building a resume website and styled my page to look exactly like that. I decided I would redesign the site with my own ideas later, when I had free time. But this would do for now.&lt;/p&gt;

&lt;h2&gt;
  
  
  Static Website
&lt;/h2&gt;

&lt;p&gt;Your HTML resume should be deployed online as an Amazon S3 static website. This was the easiest part, with lots of tutorials and a very extensive AWS document providing a comprehensive guide. So I easily whipped up an S3 bucket, turned on static website hosting, and uploaded my website files to it. The website endpoint worked fine, and this required no troubleshooting beyond checking AWS documentation and Stack Overflow to make sure I had not granted any unnecessary permissions that could threaten my account security.&lt;/p&gt;

&lt;h2&gt;
  
  
  HTTPS
&lt;/h2&gt;

&lt;p&gt;The S3 website URL should use HTTPS for security, which requires Amazon CloudFront. This was where I encountered my very first hiccup. I bought a custom domain name from &lt;a href="https://www.namecheap.com/"&gt;Namecheap&lt;/a&gt; and wanted it to point to my CloudFront distribution. I was very excited that my domain name only cost a dollar, but I fear the service merits that cheap price. I wanted that lock sign next to my website, but I learned that validating an SSL certificate from ACM for my cheap domain would take far more effort than if I had purchased the domain from Route 53. The process should be easy, but since I did not purchase the premium DNS package that lets me manipulate host records, I had to find a sneaky way to validate it, which I did. This stunted my progress for quite a while, but I persevered and created a custom DNS setup for it using Route 53, which is explained in detail in the next section. Aside from getting my SSL certificate from ACM, everything else was a breeze. I quickly set up a CloudFront distribution using my S3 bucket website endpoint as the origin domain. This gave me that sweet https:// lock sign for my site, which I wanted very much.&lt;br&gt;
The resource that was very helpful during my troubleshooting process is this doc:&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/using-https-alternate-domain-names.htmlument."&gt;Alternate Domain Developer Guide&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  DNS
&lt;/h2&gt;

&lt;p&gt;Point a custom DNS domain name to the CloudFront distribution so your resume can be accessed at that domain. You can use Amazon Route 53 or any other DNS provider for this. I first created a Route 53 hosted zone, which gave me four nameservers. I replaced my Namecheap nameservers with these four, and they were very helpful for creating the records to get my certificate validated without falling into Namecheap's premium DNS trap.&lt;/p&gt;

&lt;h2&gt;
  
  
  Javascript
&lt;/h2&gt;

&lt;p&gt;Your resume webpage should include a visitor counter that displays how many people have accessed the site. You will need to write a bit of Javascript to make this happen. Once again, I used the w3schools website to get a feel for the language. I wrote a simple script that calls the API as soon as the page has loaded and then displays the data from the response. My code was a bit archaic since I used XMLHttpRequest instead of the newer fetch() API that is made specifically for this, but it worked as it was, so I did not change it.&lt;/p&gt;
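&lt;p&gt;For reference, the same counter can be written with fetch(). This is only a sketch: the API URL, the response shape ({"count": ...}), and the "counter" element id are all assumptions to adapt to your own setup:&lt;/p&gt;

```javascript
// Hypothetical API Gateway invoke URL -- replace with your own endpoint.
const API_URL = "https://example.execute-api.us-east-1.amazonaws.com/prod/count";

// Pull the numeric count out of the API's JSON body.
// Assumes the Lambda returns something like {"count": 42}.
function parseCount(body) {
  const data = typeof body === "string" ? JSON.parse(body) : body;
  return Number(data.count);
}

// In the browser, fetch the count on page load and display it.
if (typeof window !== "undefined") {
  window.addEventListener("load", async () => {
    const resp = await fetch(API_URL);
    const count = parseCount(await resp.json());
    document.getElementById("counter").textContent = count;
  });
}
```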

&lt;h2&gt;
  
  
  Database
&lt;/h2&gt;

&lt;p&gt;The visitor counter will need to retrieve and update its count in a database somewhere. I was advised to use Amazon’s DynamoDB for this purpose. Creating the table was straightforward and was finished in minutes.&lt;/p&gt;

&lt;h2&gt;
  
  
  API
&lt;/h2&gt;

&lt;p&gt;You should not communicate directly with DynamoDB from your Javascript code. Instead, you will need to create an API that accepts requests from your web app and communicates with the database. I used AWS’s API Gateway and Lambda services for this. This gave me a bit of a struggle, as at first I had no idea what I was supposed to do, so I read many documents and watched many videos until I understood the task. Once I felt confident enough in my knowledge of API Gateway, I decided to stumble around and make it work somehow, since the official AWS documentation was confusing to me and I chose not to use it. I first experimented with an HTTP API, which I felt must be cheaper, and got it working. I later switched to a REST API, as it was easier to deploy with CI/CD integration. Integrating with the Lambda and deploying gave me an API endpoint, which I inserted into the Javascript code.&lt;/p&gt;

&lt;h2&gt;
  
  
  Lambda
&lt;/h2&gt;

&lt;p&gt;I created a Lambda function to integrate with the API and the DynamoDB database. You will need to write a bit of code in the Lambda function to access and update the database. I decided to explore Python – a common language for back-end programs and scripts – and its boto3 library for AWS. There were many resources available for writing the Python code, and since I had not used Python before, I leaned on them as guidance. Those guides were very extensive and helpful: &lt;a href="https://hands-on.cloud/boto3/dynamodb/"&gt;Boto3 for Dynamodb&lt;/a&gt;&lt;/p&gt;
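&lt;p&gt;The counter logic itself fits in a few lines of boto3. This is a sketch, not the exact function from the challenge: the table name and the "id"/"hits" attribute names are assumptions, and the handler is built around an injected Table object so the database logic can be exercised without touching AWS:&lt;/p&gt;

```python
import json


def make_handler(table):
    """Build the Lambda handler around a DynamoDB Table object.

    In production you would pass boto3.resource("dynamodb").Table("visitors");
    the table name and the "id"/"hits" attributes are illustrative.
    """
    def handler(event, context):
        # ADD atomically increments the counter; ReturnValues hands back
        # the new value in the same round trip.
        resp = table.update_item(
            Key={"id": "resume"},
            UpdateExpression="ADD hits :one",
            ExpressionAttributeValues={":one": 1},
            ReturnValues="UPDATED_NEW",
        )
        count = int(resp["Attributes"]["hits"])
        return {
            "statusCode": 200,
            # Allow the S3/CloudFront site to call the API from the browser.
            "headers": {"Access-Control-Allow-Origin": "*"},
            "body": json.dumps({"count": count}),
        }
    return handler
```

&lt;p&gt;The ADD update expression is what keeps the increment safe under concurrent visits, unlike a separate get-then-put pair.&lt;/p&gt;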

&lt;h2&gt;
  
  
  Infrastructure as Code
&lt;/h2&gt;

&lt;p&gt;You should not be configuring your API resources – the DynamoDB table, the API Gateway, the Lambda function – manually by clicking around in the AWS console. Instead, define them and deploy them using Terraform. This is called “infrastructure as code” or IaC, and it saves you time in the long run. I had no previous experience with Terraform and went in fresh with only the official documentation as my guide. Even though I had to rewrite my code through various rounds of trial and error, every time the code worked it felt like Christmas. It was simple, and the official guide is the only thing you need to deploy the entire infrastructure. The error logs were very specific, which saved me from wasting time on unnecessary code changes. It took me three days to write the code that automatically provisions all the AWS resources, with only the official guide and no other resources.&lt;/p&gt;
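&lt;p&gt;As a taste of what that Terraform looks like, here is a sketch of just the DynamoDB table (names are illustrative; the real configuration also declares the Lambda function and the API Gateway resources):&lt;/p&gt;

```hcl
# Illustrative only -- table and attribute names are assumptions.
resource "aws_dynamodb_table" "visitors" {
  name         = "visitors"
  billing_mode = "PAY_PER_REQUEST" # on-demand: no capacity planning needed
  hash_key     = "id"

  attribute {
    name = "id"
    type = "S"
  }
}
```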

&lt;h2&gt;
  
  
  Source Control
&lt;/h2&gt;

&lt;p&gt;You do not want to be updating either your back-end API or your front-end website by making calls from your laptop, though. You want them to update automatically whenever you make a change to the code. This is called continuous integration and deployment, or CI/CD. I achieved this by creating a GitHub repository for my backend code.&lt;/p&gt;

&lt;h2&gt;
  
  
  CI/CD (Back end)
&lt;/h2&gt;

&lt;p&gt;I set up GitHub Actions so that when I push an update to my Terraform template or Python code, they automatically get packaged and deployed to AWS. This was achieved with the GitHub Action by &lt;a href="https://github.com/appleboy/lambda-action"&gt;appleboy.&lt;/a&gt;&lt;/p&gt;
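&lt;p&gt;A back-end workflow along these lines does the job. The function name, file names, and region below are placeholders, and the action's inputs are abbreviated; check the appleboy/lambda-action README for the full list:&lt;/p&gt;

```yaml
# Sketch of .github/workflows/deploy-lambda.yml -- names and paths are placeholders.
name: deploy-backend
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Package the function
        run: zip function.zip lambda_function.py
      - name: Deploy to AWS Lambda
        uses: appleboy/lambda-action@master
        with:
          aws_access_key_id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws_secret_access_key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws_region: us-east-1
          function_name: visitor-counter
          zip_file: function.zip
```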

&lt;h2&gt;
  
  
  CI/CD (Front end)
&lt;/h2&gt;

&lt;p&gt;Create GitHub Actions so that when you push new website code, the S3 bucket automatically gets updated. I used the S3 sync action made by &lt;a href="https://github.com/jakejarvis/s3-sync-action"&gt;jakejarvis.&lt;/a&gt;&lt;/p&gt;
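&lt;p&gt;A front-end workflow with that action looks roughly like this. The bucket secret name and source directory are placeholders; see the jakejarvis/s3-sync-action README for the supported environment variables:&lt;/p&gt;

```yaml
# Sketch of .github/workflows/deploy-site.yml -- bucket and paths are placeholders.
name: deploy-frontend
on:
  push:
    branches: [main]

jobs:
  sync:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Sync site files to S3
        uses: jakejarvis/s3-sync-action@master
        with:
          args: --delete # remove files from the bucket that were deleted locally
        env:
          AWS_S3_BUCKET: ${{ secrets.AWS_S3_BUCKET }}
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          SOURCE_DIR: site # folder containing index.html and the other site files
```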

&lt;h2&gt;
  
  
  Blog post
&lt;/h2&gt;

&lt;p&gt;The final goal of the challenge was to write a blog post about the experience. I was deciding between Dev.to, Hashnode, and Medium as my blog site. I still have goals to create my own blog, but I chose dev.to since I felt I would rather be part of a close-knit, dedicated community than part of a large site writing pointless articles to get more and more traffic.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>cicd</category>
      <category>cloud</category>
    </item>
  </channel>
</rss>
