<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: san dev</title>
    <description>The latest articles on DEV Community by san dev (@san_dev_65a0346580173629d).</description>
    <link>https://dev.to/san_dev_65a0346580173629d</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2640159%2Fa068130c-138c-46ff-a7b3-2bf96c4aa300.jpg</url>
      <title>DEV Community: san dev</title>
      <link>https://dev.to/san_dev_65a0346580173629d</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/san_dev_65a0346580173629d"/>
    <language>en</language>
    <item>
      <title>Deploy AWS VPC with ALB, NAT Gateway, and Send Apache Logs to CloudWatch Using CloudWatch Agent</title>
      <dc:creator>san dev</dc:creator>
      <pubDate>Sun, 18 May 2025 20:34:55 +0000</pubDate>
      <link>https://dev.to/san_dev_65a0346580173629d/deploy-aws-vpc-with-alb-nat-gateway-and-send-apache-logs-to-cloudwatch-using-cloudwatch-agent-1non</link>
      <guid>https://dev.to/san_dev_65a0346580173629d/deploy-aws-vpc-with-alb-nat-gateway-and-send-apache-logs-to-cloudwatch-using-cloudwatch-agent-1non</guid>
      <description>&lt;p&gt;In this blog, we'll walk through the process of setting up a scalable and secure AWS infrastructure. The setup includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A Virtual Private Cloud (VPC)&lt;/li&gt;
&lt;li&gt;Public and private subnets&lt;/li&gt;
&lt;li&gt;An Internet Gateway&lt;/li&gt;
&lt;li&gt;A NAT Gateway&lt;/li&gt;
&lt;li&gt;An Application Load Balancer (ALB)&lt;/li&gt;
&lt;li&gt;EC2 instances in private subnets&lt;/li&gt;
&lt;li&gt;Apache HTTP server installed on EC2&lt;/li&gt;
&lt;li&gt;CloudWatch Agent to send Apache logs to CloudWatch&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;This guide is ideal for beginners and intermediate users aiming to understand foundational AWS networking and logging.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Step 1: VPC and Subnet Design&lt;/em&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a VPC using the "VPC and more" option for visual configuration.&lt;/li&gt;
&lt;li&gt;Select two Availability Zones (e.g., ap-south-1a and ap-south-1b).&lt;/li&gt;
&lt;li&gt;Create four subnets:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Two public subnets (one per AZ)&lt;/li&gt;
&lt;li&gt;Two private subnets (one per AZ)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Use a CIDR block such as 11.0.0.0/16 for your VPC.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F81bi6fr4t1bk08bqat95.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F81bi6fr4t1bk08bqat95.png" alt="Image description" width="800" height="213"&gt;&lt;/a&gt;&lt;/p&gt;
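The console steps above can also be sketched with the AWS CLI. This is a dry-run version: `echo` prints each call instead of executing it, and the VPC ID, subnet CIDRs, and AZs are illustrative assumptions, not values from the walkthrough.

```shell
#!/bin/sh
# Dry-run sketch of Step 1: 'echo' prints each AWS CLI call instead of
# executing it. vpc-EXAMPLE and the subnet CIDRs are placeholders.
AWS="echo aws"

$AWS ec2 create-vpc --cidr-block 11.0.0.0/16

# Two public subnets, one per Availability Zone
$AWS ec2 create-subnet --vpc-id vpc-EXAMPLE --cidr-block 11.0.0.0/20 --availability-zone ap-south-1a
$AWS ec2 create-subnet --vpc-id vpc-EXAMPLE --cidr-block 11.0.16.0/20 --availability-zone ap-south-1b

# Two private subnets, one per Availability Zone
$AWS ec2 create-subnet --vpc-id vpc-EXAMPLE --cidr-block 11.0.128.0/20 --availability-zone ap-south-1a
$AWS ec2 create-subnet --vpc-id vpc-EXAMPLE --cidr-block 11.0.144.0/20 --availability-zone ap-south-1b
```

Remove the `echo` (set `AWS="aws"`) to run the calls for real.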

&lt;p&gt;&lt;em&gt;Step 2: Configure Routing and Internet Access&lt;/em&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create Route Tables:

&lt;ul&gt;
&lt;li&gt; One for public subnets (routes to Internet Gateway)&lt;/li&gt;
&lt;li&gt; One or two for private subnets (routes to NAT Gateway)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Create an Internet Gateway and attach it to the VPC.&lt;/li&gt;
&lt;li&gt;Create a NAT Gateway in one of the public subnets.&lt;/li&gt;
&lt;li&gt;Update routing tables:

&lt;ul&gt;
&lt;li&gt; Public subnet: 0.0.0.0/0 to Internet Gateway&lt;/li&gt;
&lt;li&gt; Private subnet: 0.0.0.0/0 to NAT Gateway&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1u4eglq6dx3z2ieqj4lq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1u4eglq6dx3z2ieqj4lq.png" alt="Image description" width="800" height="286"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftrleddj72vxgi466q3vy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftrleddj72vxgi466q3vy.png" alt="Image description" width="800" height="266"&gt;&lt;/a&gt;&lt;/p&gt;
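For reference, the same routing setup can be sketched as dry-run AWS CLI calls; all IDs are placeholders and `echo` prints each call instead of executing it.

```shell
#!/bin/sh
# Dry-run sketch of Step 2: internet gateway, NAT gateway, and routes.
# All IDs (vpc-, igw-, subnet-, eipalloc-, nat-, rtb-EXAMPLE) are placeholders.
AWS="echo aws"

$AWS ec2 create-internet-gateway
$AWS ec2 attach-internet-gateway --internet-gateway-id igw-EXAMPLE --vpc-id vpc-EXAMPLE

# The NAT gateway lives in a public subnet and needs an Elastic IP
$AWS ec2 allocate-address --domain vpc
$AWS ec2 create-nat-gateway --subnet-id subnet-PUBLIC1 --allocation-id eipalloc-EXAMPLE

# Public route table: 0.0.0.0/0 -> Internet Gateway
$AWS ec2 create-route --route-table-id rtb-PUBLIC --destination-cidr-block 0.0.0.0/0 --gateway-id igw-EXAMPLE

# Private route table: 0.0.0.0/0 -> NAT Gateway
$AWS ec2 create-route --route-table-id rtb-PRIVATE --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-EXAMPLE
```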

&lt;p&gt;&lt;em&gt;Step 3: Launch EC2 Instances&lt;/em&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Launch two Ubuntu Linux EC2 instances in private subnets.&lt;/li&gt;
&lt;li&gt;Choose instance type t2.micro.&lt;/li&gt;
&lt;li&gt;Disable public IP assignment.&lt;/li&gt;
&lt;li&gt;In user data, provide a script to install Apache and a simple HTML response with the hostname:&lt;/li&gt;
&lt;li&gt;The script is shown in the screenshot below.&lt;/li&gt;
&lt;/ol&gt;
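Since the screenshot is not copy-pasteable, here is a sketch of what such a user-data script typically looks like; the package name and docroot assume an Ubuntu AMI, and this snippet simply writes the script to a local file.

```shell
#!/bin/sh
# Writes a user-data script like the one in the screenshot: install Apache
# and serve a page that shows the instance's hostname. The package name and
# docroot (/var/www/html) are assumptions for an Ubuntu AMI.
cat > userdata.sh <<'EOF'
#!/bin/bash
apt-get update -y
apt-get install -y apache2
echo "<h1>Hello from $(hostname)</h1>" > /var/www/html/index.html
systemctl enable --now apache2
EOF
```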

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fylg40oj8dx988xbzgs7w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fylg40oj8dx988xbzgs7w.png" alt="Image description" width="800" height="299"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Step 4: Configure the Application Load Balancer (ALB)&lt;/em&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create an Application Load Balancer:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Internet-facing&lt;/li&gt;
&lt;li&gt;IPv4&lt;/li&gt;
&lt;li&gt;Attach to the public subnets&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1recllgm8ic5c05wak5q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1recllgm8ic5c05wak5q.png" alt="Image description" width="800" height="381"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a Security Group for ALB:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Allow inbound HTTP traffic from the internet&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Select the previously created VPC, choose both Availability Zones, and within them select the two public subnets created earlier.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmhlszzgqjnkax8tgipci.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmhlszzgqjnkax8tgipci.png" alt="Image description" width="800" height="304"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Next, create a target group.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy02l1v17ax6dnt62cpuz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy02l1v17ax6dnt62cpuz.png" alt="Image description" width="800" height="337"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Set the target type to "instance", the port to 80, and the same VPC as before; leave the other settings unchanged.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzo379jq1eqf2a0qadkjy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzo379jq1eqf2a0qadkjy.png" alt="Image description" width="800" height="377"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Select both instances for the target group and click "Include as pending below". When you add EC2 instances to an Application Load Balancer (ALB) target group, they are listed as "pending" until the health check passes.&lt;/li&gt;
&lt;li&gt;This is the summary you can review before creating the ALB.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fydibz7vz07v4xcal7wyy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fydibz7vz07v4xcal7wyy.png" alt="Image description" width="800" height="233"&gt;&lt;/a&gt;&lt;/p&gt;
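The console clicks in Step 4 roughly correspond to these AWS CLI calls. This is a dry-run sketch: `echo` prints the calls, and the ARNs and resource IDs are placeholders for values the real calls would return.

```shell
#!/bin/sh
# Dry-run sketch of the ALB setup: target group, target registration,
# load balancer, and listener. TG_ARN, ALB_ARN, and the sg-/subnet-/i- IDs
# are placeholders.
AWS="echo aws"

$AWS elbv2 create-target-group --name web-tg --protocol HTTP --port 80 --target-type instance --vpc-id vpc-EXAMPLE
$AWS elbv2 register-targets --target-group-arn TG_ARN --targets Id=i-EXAMPLE1 Id=i-EXAMPLE2
$AWS elbv2 create-load-balancer --name web-alb --scheme internet-facing --subnets subnet-PUBLIC1 subnet-PUBLIC2 --security-groups sg-ALB
$AWS elbv2 create-listener --load-balancer-arn ALB_ARN --protocol HTTP --port 80 --default-actions Type=forward,TargetGroupArn=TG_ARN
```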

&lt;ol&gt;
&lt;li&gt;Modify the EC2 Security Group:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzb7a4mgpm5q403vcxr2u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzb7a4mgpm5q403vcxr2u.png" alt="Image description" width="800" height="285"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the picture above, in the security group shared by the two private instances, we delete the previous inbound rule and instead allow traffic from the ALB's security group.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8g44dkzkd9hwhrjcdrqa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8g44dkzkd9hwhrjcdrqa.png" alt="Image description" width="800" height="389"&gt;&lt;/a&gt;&lt;br&gt;
The diagram above shows that when a user accesses the private EC2 instances, the request can only travel through the ALB.&lt;/p&gt;
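The security-group change can be sketched as a single CLI call (dry run: `echo` prints the call; both security-group IDs are placeholders): allow HTTP into the instances' group only when the source is the ALB's group.

```shell
#!/bin/sh
# Dry-run sketch: HTTP (port 80) into the private instances' SG is allowed
# only from the ALB's SG. sg-INSTANCES and sg-ALB are placeholders.
AWS="echo aws"

$AWS ec2 authorize-security-group-ingress --group-id sg-INSTANCES --protocol tcp --port 80 --source-group sg-ALB
```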

&lt;p&gt;&lt;em&gt;Step 5: Install and Configure CloudWatch Agent&lt;/em&gt;&lt;br&gt;
  After completing the steps so far, the ALB wasn't working. To debug the issue, I needed to send the Apache logs from the Ubuntu EC2 instances (in the private subnets) to CloudWatch.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;For that, we first create a jump server (bastion host).&lt;/li&gt;
&lt;li&gt;This is a public instance in the same VPC, placed in a public subnet, with a security group that accepts SSH traffic (port 22) from anywhere.&lt;/li&gt;
&lt;li&gt;On the private instances, we then add an inbound rule that accepts SSH traffic from the bastion host's security group.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Firgdbz80o24rwazby8up.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Firgdbz80o24rwazby8up.png" alt="Image description" width="800" height="178"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the picture above, we can see the private instances accept SSH inbound traffic only from the security group created for the jump server.&lt;/p&gt;
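With the bastion in place, one convenient way to reach a private instance is SSH's ProxyJump. A sketch of `~/.ssh/config` (the host aliases, IPs, and key path are placeholders):

```
Host bastion
    HostName BASTION_PUBLIC_IP
    User ubuntu
    IdentityFile ~/.ssh/my-key.pem

Host private-web
    HostName PRIVATE_INSTANCE_IP
    User ubuntu
    IdentityFile ~/.ssh/my-key.pem
    ProxyJump bastion
```

After that, `ssh private-web` hops through the bastion automatically.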

&lt;ul&gt;
&lt;li&gt;Now we will install the CloudWatch Agent on the Ubuntu (private) instances:&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Go to a temporary directory
cd /tmp

# Download the CloudWatch agent .deb package from AWS
wget https://s3.amazonaws.com/amazoncloudwatch-agent/ubuntu/amd64/latest/amazon-cloudwatch-agent.deb

# Install the downloaded .deb package
sudo dpkg -i -E ./amazon-cloudwatch-agent.deb

# Verify the installation
ls /opt/aws/amazon-cloudwatch-agent/bin/
/opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent --version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;Create the CloudWatch Agent configuration file at &lt;code&gt;/opt/aws/amazon-cloudwatch-agent/bin/config.json&lt;/code&gt; (for example with &lt;code&gt;sudo vi&lt;/code&gt;):&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "/var/log/apache2/access.log",
            "log_group_name": "ApacheAccessLogs",
            "log_stream_name": "{instance_id}/access_log",
            "timestamp_format": "%d/%b/%Y:%H:%M:%S"
          },
          {
            "file_path": "/var/log/apache2/error.log",
            "log_group_name": "ApacheErrorLogs",
            "log_stream_name": "{instance_id}/error_log",
            "timestamp_format": "%d/%b/%Y:%H:%M:%S"
          }
        ]
      }
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Start the CloudWatch Agent&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl \
  -a fetch-config \
  -m ec2 \
  -c file:/opt/aws/amazon-cloudwatch-agent/bin/config.json \
  -s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;To verify it is now running:&lt;br&gt;
&lt;code&gt;sudo systemctl status amazon-cloudwatch-agent&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Below is the expected output:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr54gmakdxwalbpcmgqcx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr54gmakdxwalbpcmgqcx.png" alt="Image description" width="800" height="322"&gt;&lt;/a&gt;&lt;/p&gt;
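The same check can be done from the CLI. Another dry-run sketch: `echo` prints the calls, and the region is an assumption based on the AZs used earlier.

```shell
#!/bin/sh
# Dry-run sketch: confirm the log groups exist and tail the access logs.
AWS="echo aws"

$AWS logs describe-log-groups --log-group-name-prefix Apache --region ap-south-1
$AWS logs tail ApacheAccessLogs --follow --region ap-south-1
```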

&lt;p&gt;&lt;em&gt;Now head to the AWS Console:&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Go to CloudWatch → Log Groups&lt;/li&gt;
&lt;li&gt;Look for:

&lt;ul&gt;
&lt;li&gt;ApacheAccessLogs&lt;/li&gt;
&lt;li&gt;ApacheErrorLogs (&lt;em&gt;repeat for both private instances&lt;/em&gt;). After a few checks, we can see the message from both instances: &lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnh2umw4upvq7n9ra1q3h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnh2umw4upvq7n9ra1q3h.png" alt="Image description" width="800" height="179"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd2a5y1m9wj7lsk4jhpvf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd2a5y1m9wj7lsk4jhpvf.png" alt="Image description" width="800" height="181"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This is the message returned from both instances, each showing its private IP address. Every time I refresh, the IP changes, which shows the ALB is distributing traffic across both instances.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;And finally, in the CloudWatch access_log stream we can see this message:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;11.0.17.14 - - [17/May/2025:17:11:12 +0000] "GET / HTTP/1.1" 200 289 "-" "ELB-HealthChecker/2.0"&lt;/p&gt;
&lt;p&gt;This ELB-HealthChecker entry means the load balancer's health checks are reaching Apache, and that the agent can ship the logs to CloudWatch.&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>devops</category>
      <category>aws</category>
      <category>vpc</category>
      <category>loadbalancer</category>
    </item>
    <item>
      <title>Automating Static Website Deployment to AWS S3 with Terraform and GitHub Actions</title>
      <dc:creator>san dev</dc:creator>
      <pubDate>Wed, 07 May 2025 14:43:12 +0000</pubDate>
      <link>https://dev.to/san_dev_65a0346580173629d/automating-static-website-deployment-to-aws-s3-with-terraform-and-github-actions-24a4</link>
      <guid>https://dev.to/san_dev_65a0346580173629d/automating-static-website-deployment-to-aws-s3-with-terraform-and-github-actions-24a4</guid>
      <description>&lt;p&gt;&lt;strong&gt;Overview&lt;/strong&gt;&lt;br&gt;
This guide demonstrates how to automate the deployment of a static website to Amazon S3 using Terraform for infrastructure provisioning and GitHub Actions for continuous integration and deployment (CI/CD). By the end, you'll have a streamlined process that updates your website upon each code push.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Setting Up AWS Credentials in Terraform Cloud&lt;/strong&gt;&lt;br&gt;
To enable Terraform Cloud to provision resources in your AWS account, you need to securely store your AWS access keys within the platform.&lt;br&gt;
First, create an organization:&lt;br&gt;
go to &lt;a href="https://app.terraform.io/app/organizations" rel="noopener noreferrer"&gt;app.terraform.io/app/organizations&lt;/a&gt;&lt;br&gt;
and create a new organization.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0moy0khodhe59dsrbara.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0moy0khodhe59dsrbara.png" alt="Image description" width="800" height="503"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Create a new workspace.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feq8mksgbd5jr17839cwd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feq8mksgbd5jr17839cwd.png" alt="Image description" width="800" height="478"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now go to the Variables section and create two variables, one for the AWS access key and one for the secret access key (mark both as sensitive).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fng5n2o9l7mnetlor66e4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fng5n2o9l7mnetlor66e4.png" alt="Image description" width="800" height="470"&gt;&lt;/a&gt;&lt;br&gt;
Now navigate to Terraform Cloud &amp;gt; Organization Settings &amp;gt; API Tokens, create an API token, give it a name, generate the token, and save it somewhere safe.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F854pfc4kc0y4w1ph16fb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F854pfc4kc0y4w1ph16fb.png" alt="Image description" width="800" height="386"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now come to &lt;strong&gt;Github&lt;/strong&gt;&lt;br&gt;
you can fork this repo &lt;a href="https://github.com/sankha-ghosh/s3-code-pipeline-game.git" rel="noopener noreferrer"&gt;code-pipeline-game&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now come to git-repo -&amp;gt; settings -&amp;gt; secrets &amp;amp; variables -&amp;gt; action&lt;br&gt;
here you can save the &lt;strong&gt;API token&lt;/strong&gt; you have generated from terraform cloud &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe8s6tc8u7cj6btndh5pq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe8s6tc8u7cj6btndh5pq.png" alt="Image description" width="800" height="425"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We will reference this token in the GitHub Actions workflow YAML file:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Deploy website to AWS S3
on:
  push:
    branches:
      - main
jobs:
  Terraform:
    name: 'Terraform'
    runs-on: ubuntu-latest

    steps:
    - name: Checkout
      uses: actions/checkout@v4

    - name: Setup Terraform
      uses: hashicorp/setup-terraform@v1
      with:
        cli_config_credentials_token: ${{ secrets.tfc_team_token }}

    - name: Terraform Init
      run: terraform init

    - name: Terraform Validate
      run: terraform validate

    - name: Terraform Plan
      run: terraform plan

    - name: Terraform Apply
      run: terraform apply -auto-approve

    - name: Terraform Destroy
      run: terraform destroy -auto-approve
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You can find all the other Terraform files in the GitHub repo,&lt;br&gt;
so we can run the GitHub Actions workflow and check each stage.&lt;/p&gt;
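For the workflow's `terraform init` to target the Terraform Cloud workspace created earlier, the repo's Terraform configuration needs a block along these lines (the organization and workspace names are placeholders; check the repo's files for the actual values):

```hcl
terraform {
  cloud {
    organization = "your-org-name"

    workspaces {
      name = "your-workspace-name"
    }
  }
}
```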

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fix9trpa9iq9ytl2rbkuh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fix9trpa9iq9ytl2rbkuh.png" alt="Image description" width="800" height="422"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;We can see in the image above that our job completed. It's time to verify from the AWS console.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj1ksw38b9c3m0y56928r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj1ksw38b9c3m0y56928r.png" alt="Image description" width="800" height="284"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can now confirm that the S3 bucket has been created and the website files uploaded to it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It's time to connect to the website.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We will copy the website URL from the outputs in the workflow.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9wsfer82s5zli4dio77e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9wsfer82s5zli4dio77e.png" alt="Image description" width="800" height="376"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;It's time to visit the website. Paste the website URL into your browser.&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fftdyctm39bf8fdi565i9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fftdyctm39bf8fdi565i9.png" alt="Image description" width="800" height="382"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>devops</category>
      <category>aws</category>
      <category>git</category>
    </item>
    <item>
      <title>Using NFS in Kubernetes – A Simple Guide to Shared Storage</title>
      <dc:creator>san dev</dc:creator>
      <pubDate>Tue, 15 Apr 2025 13:03:03 +0000</pubDate>
      <link>https://dev.to/san_dev_65a0346580173629d/using-nfs-in-kubernetes-a-simple-guide-to-shared-storage-2n62</link>
      <guid>https://dev.to/san_dev_65a0346580173629d/using-nfs-in-kubernetes-a-simple-guide-to-shared-storage-2n62</guid>
      <description>&lt;p&gt;Kubernetes is powerful when it comes to managing containerized workloads, but things get tricky when your applications need shared storage.&lt;br&gt;
That’s where NFS (Network File System) comes in. It's a tried-and-tested way to share files across multiple pods and services in a cluster.&lt;br&gt;
In this blog, we’ll walk through why and when to use NFS in Kubernetes, how to set it up using Persistent Volumes (PV) and Persistent Volume Claims (PVC), and key things to watch out for.&lt;/p&gt;

&lt;p&gt;We will set up a simple NFS server on Ubuntu and use it from an nginx container running on a Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;NFS is pretty straightforward to set up: you install the server, export a share, and away you go. However, there tend to be a lot of struggles around permissions, and I have had some issues myself, so I thought I'd create a short write-up, mostly for my own reference.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Launch an EC2 Instance for Your Lab&lt;/strong&gt;&lt;br&gt;
To begin, we need a Linux machine that will act as our NFS server and possibly host a lightweight Kubernetes cluster (like Minikube or K3s).&lt;/p&gt;

&lt;p&gt;For this setup, I'm using an Amazon EC2 instance with the following specs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;AMI: Ubuntu 22.04 LTS&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Instance Type: t2.medium (2 vCPUs, 4 GB RAM)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Storage: 20 GB (or more, depending on your needs)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Security Group: Allow SSH (port 22) and NFS (ports 2049, 111)&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Check whether the existing file system is mounted&lt;/strong&gt; &lt;br&gt;
For that, use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;fdisk -l 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F51auuuxp89tnxoq82s41.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F51auuuxp89tnxoq82s41.png" alt="Image description" width="751" height="205"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here you can see the root partition is xvda1.&lt;br&gt;
Now we will check whether it is mounted, using:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mount | grep xvda1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz59r2jei5rnz4y7ufz0e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz59r2jei5rnz4y7ufz0e.png" alt="Image description" width="800" height="145"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Install the NFS server&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;apt-get install nfs-kernel-server&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4: Configure exports&lt;/strong&gt;&lt;br&gt;
We'll add our directory to the server's exports in the /etc/exports file. &lt;/p&gt;
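The screenshot shows the export being added; a typical `/etc/exports` line looks like this (the share path is an assumption, and the client network matches the one used later in the firewall rule):

```
# /etc/exports: share the directory read-write with the client network
/mnt/nfs_share 34.229.82.109/24(rw,sync,no_subtree_check)
```

`sync` flushes writes before replying and `no_subtree_check` avoids the overhead of subtree checking on each request.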

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ac0tw9cw870tn5winbg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ac0tw9cw870tn5winbg.png" alt="Image description" width="800" height="226"&gt;&lt;/a&gt;&lt;br&gt;
Now we need to actually tell the server to export the directory&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;exportfs -ar&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;We can verify that the directory has been shared with the -v parameter&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;exportfs -v&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F16yfyax5i98insn8z8c6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F16yfyax5i98insn8z8c6.png" alt="Image description" width="800" height="55"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Now let's verify which directories are shared (exported) by the NFS server.&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;showmount -e&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fblpg49kdy2dxa54zhwvh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fblpg49kdy2dxa54zhwvh.png" alt="Image description" width="479" height="88"&gt;&lt;/a&gt;&lt;br&gt;
Configure the firewall&lt;br&gt;
If the firewall is active, we need to allow NFS traffic through it:&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ufw allow from 34.229.82.109/24 to any port nfs&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgkc6y8m960o2r1vj5fwc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgkc6y8m960o2r1vj5fwc.png" alt="Image description" width="800" height="94"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5: Test the mount from a container&lt;/strong&gt;&lt;br&gt;
To test mounting this directory from a container, let's first create a file in it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxymav0ix9rqruwvuqwhv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxymav0ix9rqruwvuqwhv.png" alt="Image description" width="726" height="169"&gt;&lt;/a&gt;&lt;br&gt;
Here I created a file called &lt;strong&gt;myfile.txt&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 6: Now let's try to mount the NFS share from a container/pod.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We'll install a Kubernetes cluster using kubeadm. You can use the script below; just run it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
#!/bin/bash
sudo swapoff -a
cat &amp;lt;&amp;lt;EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

# sysctl params required by setup, params persist across reboot

cat &amp;lt;&amp;lt;EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# Apply sysctl params without reboot

sudo sysctl --system
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl

sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

sudo apt-get update
sudo apt-get install -y kubelet=1.28.1-1.1 kubeadm=1.28.1-1.1 kubectl=1.28.1-1.1 docker.io
sudo apt-mark hold kubelet kubeadm kubectl docker.io

sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
sudo sed -i 's/            SystemdCgroup = false/            SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd
sudo systemctl restart kubelet

sudo kubeadm config images pull
sudo kubeadm init --pod-network-cidr=192.168.0.0/16

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/tigera-operator.yaml
curl https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/custom-resources.yaml -O
kubectl create -f custom-resources.yaml 
kubectl taint nodes --all node-role.kubernetes.io/control-plane-

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 7: Now we will deploy an nginx pod&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;root@ip-172-31-16-213:/nfs# cat nginx.yaml 
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
    volumeMounts:
    - name: nfs-vol
      mountPath: /var/nfs # The mountpoint inside the container
  volumes:
  - name: nfs-vol
    nfs:
      server: 34.224.65.228 # IP to our NFS server
      path: /nfs # The exported directory
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 8: Now we will check the mount directory, which was set to /var/nfs in the YAML file&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frjv9jb5a7hoodia22opd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frjv9jb5a7hoodia22opd.png" alt="Image description" width="787" height="100"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;We can see that my.txt and the other files inside the /nfs directory are mounted into the container's /var/nfs directory, so it's a success.&lt;/strong&gt;&lt;br&gt;
&lt;em&gt;A big caveat with NFS, and what commonly creates issues for users, is permission and user mapping. Normally an NFS share will not be shared with root access (root_squash), and the user that needs access to the share will need to exist on the NFS server.&lt;/em&gt;&lt;br&gt;
Refer to this NFS &lt;a href="https://nfs.sourceforge.net/nfs-howto/ar01s07.html#pemission_issues" rel="noopener noreferrer"&gt;how-to&lt;/a&gt; for more information.&lt;/p&gt;
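&lt;p&gt;As a sketch of how those mappings can be relaxed for a lab setup (not recommended for production), the export options could be extended like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# no_root_squash lets root on the client remain root on the share;
# anonuid/anongid instead map squashed users to a specific local account
/nfs 34.229.82.109/24(rw,sync,no_subtree_check,no_root_squash)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;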

&lt;h3&gt;
  
  
  Use NFS for the static content
&lt;/h3&gt;

&lt;p&gt;Now that we know we can mount an NFS share in the container, let's see if we can use it to host our static HTML files.&lt;br&gt;
&lt;em&gt;By default, nginx serves files from the /usr/share/nginx/html directory, which we can verify from our running container.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F17g24dmo8bhk7v5vgsv4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F17g24dmo8bhk7v5vgsv4.png" alt="Image description" width="798" height="589"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;With this knowledge, let's create a new directory for NFS to export, add a static HTML file to it, and mount it to that directory inside the container.&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7kl9qromldamj67mgo24.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7kl9qromldamj67mgo24.png" alt="Image description" width="585" height="92"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 9: Recreate the pod with the mountpoint set to the correct directory&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnmhhrweysaioov48tmks.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnmhhrweysaioov48tmks.png" alt="Image description" width="800" height="577"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6pvcaxj34dhytqek321e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6pvcaxj34dhytqek321e.png" alt="Image description" width="636" height="67"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;As we can see, the container sees the HTML file. Now let's try to get it to work over HTTP as well.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;First we'll expose our pod so that we can access it from outside the cluster, and then open it up in a browser.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;We will create a NodePort service and expose it to the web.&lt;/em&gt;&lt;/p&gt;
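&lt;p&gt;A minimal NodePort manifest for this (a sketch; the nodePort value is an assumption and must fall in the default 30000-32767 range) could look like:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    run: nginx      # matches the label on our nginx pod
  ports:
  - port: 80        # service port
    targetPort: 80  # container port
    nodePort: 30080 # assumed value for illustration
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The page should then be reachable at http://&amp;lt;node-ip&amp;gt;:30080.&lt;/p&gt;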

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2cgfvekbh8zs58dnzups.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2cgfvekbh8zs58dnzups.png" alt="Image description" width="800" height="132"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyh0sk7cu88d60a37jpyt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyh0sk7cu88d60a37jpyt.png" alt="Image description" width="800" height="84"&gt;&lt;/a&gt;&lt;br&gt;
Whatever changes we make to the /nfs/index.html file will be reflected inside the pod.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpb65ojd382unkbc29udj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpb65ojd382unkbc29udj.png" alt="Image description" width="774" height="114"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2ym945bnjpkfwmifr0xr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2ym945bnjpkfwmifr0xr.png" alt="Image description" width="655" height="282"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>cloudnative</category>
      <category>aws</category>
    </item>
  </channel>
</rss>
