<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Bhanvendra Singh Gaur</title>
    <description>The latest articles on DEV Community by Bhanvendra Singh Gaur (@bhanvendrasingh).</description>
    <link>https://dev.to/bhanvendrasingh</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F663413%2F14b31673-679d-492b-9b8e-5385ff5febd4.jpeg</url>
      <title>DEV Community: Bhanvendra Singh Gaur</title>
      <link>https://dev.to/bhanvendrasingh</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/bhanvendrasingh"/>
    <language>en</language>
    <item>
      <title>What’s New in AWS Free Tier (2025)</title>
      <dc:creator>Bhanvendra Singh Gaur</dc:creator>
      <pubDate>Tue, 15 Jul 2025 12:13:17 +0000</pubDate>
      <link>https://dev.to/aws-builders/whats-new-in-aws-free-tier-2025-2ba5</link>
      <guid>https://dev.to/aws-builders/whats-new-in-aws-free-tier-2025-2ba5</guid>
      <description>&lt;p&gt;&lt;strong&gt;Credit-based Free Level:&lt;/strong&gt; On July 15, 2025, AWS introduced a credit-based “Free Plan” in place of the previous 12-month free-trial model for new accounts. $100 in AWS credits are given to new users automatically upon signup also they can earn an additional $100 by completing onboarding tasks.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fob8gynglqzd9feg2q65v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fob8gynglqzd9feg2q65v.png" alt=" " width="800" height="494"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Free vs. Paid Plans:&lt;/strong&gt; When creating an account, users must select either a Paid Plan (for production use) or a Free Plan (for exploration and POCs). Both plans retain access to Always Free offers and up to $200 in credits, but Free Plan accounts are restricted from using some expensive services.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Always Free Services:&lt;/strong&gt; AWS still provides more than thirty always-free services with monthly usage caps. AWS Lambda (1M invocations/month, 400K GB‑seconds), Amazon DynamoDB (25 GB storage + provisioned RCU/WCU), Amazon S3 (5 GB Standard storage), Amazon CloudFront (1 TB data out + 10M requests), and Amazon SNS (1M publishes) are among the core always-free services. Although the Free Plan duration is capped (see below), these always-free limits apply indefinitely, not just for the first year.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;New Free Services/Offers:&lt;/strong&gt; In early 2024, AWS expanded the EC2 Free Tier to include 750 free IPv4 address-hours per month, covering an instance’s associated public IP. AWS also introduced free credits for educational activities (e.g. $X credit for launching an EC2 instance, using RDS, etc.).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Increased Coverage:&lt;/strong&gt; The Free Tier now covers all AWS commercial regions (global regions). Note: AWS GovCloud (US) is generally excluded from free-tier offers, with Lambda as the primary exception.&lt;/p&gt;

&lt;p&gt;What’s Removed or Deprecated 😒&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2vhqjiocp5hpqg87sfja.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2vhqjiocp5hpqg87sfja.png" alt=" " width="800" height="618"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;End of 12-Month Free Tier:&lt;/strong&gt; The customary 12-month free trial has been discontinued for accounts created on or after July 15, 2025. New accounts instead receive the credit-based 6-month Free Plan. Accounts opened before July 2025 are not required to switch; they continue to receive the same 12-month benefits.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Expiration of the Free Plan:&lt;/strong&gt; Under the new model, the Free Plan ends when credits run out or after six months, whichever comes first. Unless the user switches to a Paid Plan, the account is then closed and its resources erased after a 90-day grace period. Under the previous system, accounts did not automatically close; they rolled over to paid usage after a year.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Service Restrictions:&lt;/strong&gt; Some services that would “immediately consume the entire Free Tier credit” are not available to Free Plan accounts. For instance, purchases from the AWS Marketplace, hardware appliance services, and large dedicated infrastructure offerings are prohibited. (Accounts on the Paid Plan are not subject to these limitations.)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS CodeCommit:&lt;/strong&gt; As of mid-2024, CodeCommit is “no longer available to new customers.” Current users can keep using it, but new users cannot access this free Git service.&lt;br&gt;
&lt;strong&gt;S3 RRS and Others:&lt;/strong&gt; The Free Tier still covers only standard services; for instance, Amazon S3 Reduced Redundancy Storage (RRS) is not included, only Standard storage up to 5 GB is free. (In general, the Free Tier does not include legacy or deprecated services.)&lt;/p&gt;

&lt;p&gt;Free Tier: Before vs. After July 2025&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ca4gercr3mjkxyl4zva.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ca4gercr3mjkxyl4zva.png" alt=" " width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS Free Tier changes from July 15, 2025:&lt;/strong&gt; Old model offered 12-month trials and always-free services. New model splits into Free Plan (6-month credit trial with $100+ credit, limited access) and Paid Plan (full access). Always-free offers remain unchanged. Free accounts auto-close after expiry (90-day data retention); paid ones don’t. Region coverage remains, GovCloud mostly excluded.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What Happens If I Still Have Credits After 6 Months?&lt;/strong&gt;&lt;br&gt;
Unless you upgrade, your Free Plan will expire after six months, even if you still have AWS credits.&lt;/p&gt;

&lt;p&gt;🚫 Free Plan Expiry Rules:&lt;br&gt;
After six months or when the credits run out, whichever comes first, the Free Plan ends.&lt;br&gt;
Once expired:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your AWS account is automatically closed.&lt;/li&gt;
&lt;li&gt;Any resources still running are shut down.&lt;/li&gt;
&lt;li&gt;Data is erased after a 90-day recovery grace period.&lt;/li&gt;
&lt;li&gt;Unused credits are lost.&lt;/li&gt;
&lt;/ul&gt;
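&lt;p&gt;As a worked example of that timeline: given a signup date, the plan-expiry and data-deletion dates can be sketched as below. The 6-month and 90-day figures come from the rules above; the helper itself is illustrative:&lt;/p&gt;

```python
import calendar
from datetime import date, timedelta

def free_plan_deadlines(signup: date) -> tuple[date, date]:
    """Illustrative sketch: the Free Plan ends 6 calendar months after
    signup (or earlier if credits run out), and data is erased after a
    further 90-day grace period."""
    month0 = signup.month - 1 + 6          # add six calendar months
    year = signup.year + month0 // 12
    month = month0 % 12 + 1
    # clamp the day for shorter target months (e.g. Aug 31 -> Feb 28)
    day = min(signup.day, calendar.monthrange(year, month)[1])
    expiry = date(year, month, day)
    return expiry, expiry + timedelta(days=90)

expiry, deletion = free_plan_deadlines(date(2025, 7, 15))
print(expiry, deletion)  # 2026-01-15 2026-04-15
```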

&lt;p&gt;✅ How to Keep Your Credits&lt;br&gt;
To continue using your credits beyond six months:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Log in and open the AWS Billing Console.&lt;/li&gt;
&lt;li&gt;Go to Account Settings.&lt;/li&gt;
&lt;li&gt;Select “Upgrade to Paid Plan.”&lt;/li&gt;
&lt;li&gt;Confirm your choice.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once upgraded:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your account no longer expires automatically.&lt;/li&gt;
&lt;li&gt;Any remaining AWS credits are yours to keep.&lt;/li&gt;
&lt;li&gt;You have full access to every AWS service.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Reminder: Upgrading simply removes restrictions and expiry — you only start paying if your usage goes beyond the free limits or your credits.&lt;/p&gt;

&lt;p&gt;In conclusion, AWS’s 2025 Free Tier redesign makes cloud exploration more flexible (through credits) but shorter-lived. It encourages active learning and simplifies usage tracking at the cost of a stricter 6-month limit and some service restrictions. Students should note the updated schedule and save important work before expiry, while developers and start-ups should use the larger initial credits for extensive testing and switch to the Paid Plan as needed.&lt;/p&gt;

&lt;p&gt;Sources: Official AWS announcements and documentation on Free Tier changes and service pricing.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Setting Up Grafana, Prometheus, and ELK for Monitoring and Logging in AWS EKS</title>
      <dc:creator>Bhanvendra Singh Gaur</dc:creator>
      <pubDate>Tue, 08 Oct 2024 11:10:44 +0000</pubDate>
      <link>https://dev.to/aws-builders/setting-up-grafana-prometheus-and-elk-for-monitoring-and-logging-in-aws-eks-1gn5</link>
      <guid>https://dev.to/aws-builders/setting-up-grafana-prometheus-and-elk-for-monitoring-and-logging-in-aws-eks-1gn5</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Monitoring and logging are essential for keeping cloud-native apps healthy and performing well, particularly in Kubernetes environments such as AWS EKS. You can build a strong observability system by combining tools like Grafana for visualization, Prometheus for metrics, and the ELK stack (Elasticsearch, Logstash, and Kibana) for logging. In this article, I'll walk you through setting up monitoring and logging for apps deployed in Amazon EKS.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0sjqex0y7pcefts01nj9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0sjqex0y7pcefts01nj9.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Significance of Observability in EKS Applications
&lt;/h2&gt;

&lt;p&gt;Monitoring system- and application-level metrics and logging events becomes more crucial as microservices and containerized applications in Kubernetes grow more complex. Metrics help identify patterns and anomalies, while logs offer the specific information required for debugging and root-cause analysis. With a strong observability stack, you can learn about:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Resource utilization (CPU, memory &amp;amp; I/O)&lt;/li&gt;
&lt;li&gt;Performance of applications deployed in EKS&lt;/li&gt;
&lt;li&gt;EKS cluster health&lt;/li&gt;
&lt;li&gt;Notifications &amp;amp; alerts&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Before getting started, make sure you have the following in place:&lt;br&gt;
A running EKS cluster&lt;br&gt;
Helm and kubectl installed and configured on your local machine to interact with your EKS cluster.&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 1: Install Prometheus to collect metrics
&lt;/h2&gt;

&lt;p&gt;Prometheus is a powerful open-source toolkit for monitoring and alerting. It scrapes metrics from many endpoints, stores them in a time-series database, and makes them queryable using PromQL.&lt;/p&gt;
&lt;h3&gt;
  
  
  1.1 Install &amp;amp; Configure Prometheus
&lt;/h3&gt;

&lt;p&gt;First, add the Prometheus Helm repository:&lt;br&gt;
&lt;code&gt;helm repo add prometheus-community https://prometheus-community.github.io/helm-charts&lt;br&gt;
helm repo update&lt;/code&gt;&lt;br&gt;
Then, install Prometheus in the desired namespace in your Kubernetes cluster:&lt;br&gt;
&lt;code&gt;helm install prometheus prometheus-community/prometheus --namespace monitoring --create-namespace&lt;/code&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  1.2 Configuring Prometheus
&lt;/h3&gt;

&lt;p&gt;Next, we need to configure it. Here is a sample prometheus.yml configuration to scrape metrics from your Kubernetes nodes, pods, and application:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;global:
  scrape_interval: 15s
scrape_configs:
  - job_name: 'kubernetes-cadvisor'
    kubernetes_sd_configs:
      - role: node
  - job_name: 'kubernetes-pods'
    kubernetes_sd_configs:
      - role: pod
  - job_name: 'application'
    metrics_path: '/metrics'
    static_configs:
      - targets: ['&amp;lt;application-service&amp;gt;:&amp;lt;port&amp;gt;'] # service endpoint exposing your application metrics

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
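&lt;p&gt;For the 'application' job above to have something to scrape, your app must expose a /metrics endpoint in the Prometheus text format. In a real service you would use a client library such as prometheus_client; this standard-library-only sketch (the metric names are made up) just shows the shape of what Prometheus expects:&lt;/p&gt;

```python
# Minimal /metrics endpoint in the Prometheus text exposition format.
# Illustrative only: metric names and values are invented for the sketch.
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading
import urllib.request

METRICS = "app_requests_total 42\napp_up 1\n"

class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/metrics":
            body = METRICS.encode()
            self.send_response(200)
            self.send_header("Content-Type", "text/plain; version=0.0.4")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # silence per-request logging
        pass

# Serve on an ephemeral port and scrape it once, as Prometheus would.
server = HTTPServer(("127.0.0.1", 0), MetricsHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]
text = urllib.request.urlopen(f"http://127.0.0.1:{port}/metrics").read().decode()
print(text)
server.shutdown()
```

&lt;p&gt;In the scrape config above, the target would then be this service's name and port.&lt;/p&gt;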



&lt;h3&gt;
  
  
  1.3 Access &amp;amp; verify Prometheus UI
&lt;/h3&gt;

&lt;p&gt;Use port-forwarding to check whether the scrape targets are working properly:&lt;br&gt;
&lt;code&gt;kubectl port-forward service/prometheus-server 9090:80 -n monitoring&lt;/code&gt;&lt;br&gt;
Then open &lt;a href="http://localhost:9090" rel="noopener noreferrer"&gt;http://localhost:9090&lt;/a&gt; in your browser and explore the collected metrics.&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 2: Get Grafana Set up for Visualizing Metrics
&lt;/h2&gt;

&lt;p&gt;Grafana is a popular tool for visualizing time-series data. It integrates with Prometheus and offers dynamic dashboards for tracking metrics.&lt;br&gt;
In the previous step we set up Prometheus to collect metrics; now we will connect Grafana to it as a data source for graphical visualization.&lt;/p&gt;
&lt;h3&gt;
  
  
  2.1 Installing Grafana
&lt;/h3&gt;

&lt;p&gt;Install Grafana using Helm:&lt;br&gt;
&lt;code&gt;helm repo add grafana https://grafana.github.io/helm-charts&lt;br&gt;
helm install grafana grafana/grafana --namespace monitoring # change to your preferred namespace&lt;/code&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  2.2 Accessing Grafana
&lt;/h3&gt;

&lt;p&gt;With Grafana installed, access the UI via port-forwarding:&lt;br&gt;
&lt;code&gt;kubectl port-forward service/grafana 3000:80 -n monitoring&lt;/code&gt;&lt;br&gt;
You can access Grafana by visiting &lt;a href="http://localhost:3000" rel="noopener noreferrer"&gt;http://localhost:3000&lt;/a&gt;. The username is admin; the Helm chart generates the admin password and stores it in a Kubernetes secret (retrieve it with &lt;code&gt;kubectl get secret grafana -n monitoring -o jsonpath="{.data.admin-password}" | base64 --decode&lt;/code&gt;).&lt;/p&gt;
&lt;h3&gt;
  
  
  2.3 Configure Grafana to connect with Prometheus
&lt;/h3&gt;

&lt;p&gt;Once you have Grafana running, follow these steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Navigate to Configuration &amp;gt; Data Sources.&lt;/li&gt;
&lt;li&gt;Click Add data source.&lt;/li&gt;
&lt;li&gt;Select Prometheus and set the URL to &lt;a href="http://prometheus-server.monitoring.svc.cluster.local:80" rel="noopener noreferrer"&gt;http://prometheus-server.monitoring.svc.cluster.local:80&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;You can then create dashboards for visualizing EKS cluster and application metrics.&lt;/li&gt;
&lt;/ul&gt;
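&lt;p&gt;Behind that data source, Grafana issues queries against Prometheus' HTTP API (/api/v1/query) and reads back JSON. A quick sketch of that response shape, parsed with only the standard library (the payload below is a hand-written sample in the documented format, not live data):&lt;/p&gt;

```python
import json

# Sample payload in the shape returned by GET /api/v1/query?query=up
# (values here are invented for illustration)
payload = '''
{
  "status": "success",
  "data": {
    "resultType": "vector",
    "result": [
      {"metric": {"job": "application", "instance": "10.0.1.12:8080"},
       "value": [1728382800.0, "1"]}
    ]
  }
}
'''

resp = json.loads(payload)
# Each vector element carries its label set and a [timestamp, value] pair;
# note the sample value is returned as a string.
for series in resp["data"]["result"]:
    ts, value = series["value"]
    print(series["metric"].get("instance"), "=>", value)
```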
&lt;h2&gt;
  
  
  Step 3: Configuring the Logging ELK Stack
&lt;/h2&gt;

&lt;p&gt;We'll set up Kibana for log viewing, Logstash for log aggregation, and Elasticsearch for storage and indexing.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu1u59z2k4sxhsts8q5et.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu1u59z2k4sxhsts8q5et.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  3.1 Installing Elasticsearch
&lt;/h3&gt;

&lt;p&gt;To install Elasticsearch via Helm:&lt;br&gt;
&lt;code&gt;helm repo add elastic https://helm.elastic.co&lt;br&gt;
helm install elasticsearch elastic/elasticsearch --namespace logging --create-namespace&lt;/code&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  3.2 Installing Logstash
&lt;/h3&gt;

&lt;p&gt;Logstash ingests logs from shippers such as Filebeat, Fluentd, or Fluent Bit, processes them, and forwards them to Elasticsearch. Install Logstash with:&lt;br&gt;
&lt;code&gt;helm install logstash elastic/logstash --namespace logging&lt;/code&gt;&lt;br&gt;
Next, create a ConfigMap to configure Logstash to talk to Elasticsearch:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-config
  namespace: logging
data:
  logstash.conf: |
    input {
      beats {
        port =&amp;gt; 5044
      }
    }
    output {
      elasticsearch {
        hosts =&amp;gt; ["http://elasticsearch:9200"]
        index =&amp;gt; "logs-%{+YYYY.MM.dd}"
      }
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
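&lt;p&gt;The index =&amp;gt; "logs-%{+YYYY.MM.dd}" line above produces one Elasticsearch index per day, named from each event's timestamp. A small illustration of what that pattern expands to:&lt;/p&gt;

```python
from datetime import datetime, timezone

def daily_index(ts: datetime) -> str:
    # Equivalent of Logstash's "logs-%{+YYYY.MM.dd}" sprintf pattern
    # (Logstash uses the event's @timestamp, normally in UTC).
    return ts.strftime("logs-%Y.%m.%d")

print(daily_index(datetime(2024, 10, 8, tzinfo=timezone.utc)))  # logs-2024.10.08
```

&lt;p&gt;Daily indices keep retention simple: old days can be removed by deleting whole indices.&lt;/p&gt;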



&lt;p&gt;Apply the ConfigMap:&lt;br&gt;
&lt;code&gt;kubectl apply -f logstash-config.yaml&lt;/code&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  3.3 Installing Kibana
&lt;/h3&gt;

&lt;p&gt;To visualize logs in Elasticsearch, install Kibana using Helm:&lt;br&gt;
&lt;code&gt;helm install kibana elastic/kibana --namespace logging&lt;/code&gt;&lt;br&gt;
Access Kibana using port-forwarding:&lt;br&gt;
&lt;code&gt;kubectl port-forward service/kibana 5601:5601 -n logging&lt;/code&gt;&lt;br&gt;
You can now access Kibana at &lt;a href="http://localhost:5601" rel="noopener noreferrer"&gt;http://localhost:5601&lt;/a&gt; and start exploring your logs.&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 4: Using Alertmanager and Prometheus to Set Up Alerts
&lt;/h2&gt;

&lt;p&gt;You should be notified of critical issues in your application or cluster. To achieve this, we'll configure Prometheus alerting rules and set up Alertmanager to manage notifications.&lt;/p&gt;
&lt;h3&gt;
  
  
  4.1 Define Alerting Rules
&lt;/h3&gt;

&lt;p&gt;Create and deploy an alerting rules file (alert.rules.yml) for Prometheus:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;groups:
groups:
- name: example
  rules:
  - alert: HighCpuUsage
    expr: sum(rate(container_cpu_usage_seconds_total[1m])) by (container) &amp;gt; 0.9
    for: 5m
    labels:
      severity: critical
    annotations:
      summary: "High CPU usage detected"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then update the Prometheus configuration (its rule_files section) to load this rules file.&lt;/p&gt;
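&lt;p&gt;To see what that alert expression does: the per-pod CPU usage rates (in cores) are summed per container label value, and the alert fires where the sum stays above 0.9 for 5 minutes. A toy evaluation of a single scrape (the numbers are invented, and chosen to be exactly representable floats):&lt;/p&gt;

```python
# Toy, in-memory version of: sum CPU rates grouped by container, compare to 0.9
cpu_rates = {
    "api":    [0.5, 0.625],  # two pods running the same container
    "worker": [0.25],
}
THRESHOLD = 0.9
firing = {c: sum(r) for c, r in cpu_rates.items() if sum(r) > THRESHOLD}
print(firing)  # {'api': 1.125}
```

&lt;p&gt;In real Prometheus the "for: 5m" clause additionally requires the condition to hold across five minutes of scrapes before the alert transitions from pending to firing.&lt;/p&gt;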

&lt;h3&gt;
  
  
  4.2 Set Up Alertmanager
&lt;/h3&gt;

&lt;p&gt;Install Alertmanager using Helm:&lt;br&gt;
&lt;code&gt;helm install alertmanager prometheus-community/alertmanager --namespace monitoring&lt;/code&gt;&lt;br&gt;
Configure Alertmanager to send notifications (e.g., via email):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: ConfigMap
metadata:
  name: alertmanager-config
  namespace: monitoring
data:
  alertmanager.yml: |
    global:
      smtp_smarthost: 'smtp.example.com:587'
      smtp_from: 'alertmanager@example.com'
      smtp_auth_username: 'username'
      smtp_auth_password: 'password'
    route:
      receiver: 'email-config'
    receivers:
    - name: 'email-config'
      email_configs:
      - to: 'you@example.com'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Additional Considerations
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Security: For Prometheus, Grafana, and ELK, use TLS encryption and Role-Based Access Control (RBAC).&lt;/li&gt;
&lt;li&gt;Data Persistence: Set up Prometheus and Elasticsearch to use persistent storage.&lt;/li&gt;
&lt;li&gt;Cost management: Because log data accumulates quickly, be aware of the costs, particularly when using Elasticsearch.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;With this configuration, your AWS EKS apps now have a fully functional monitoring and logging system. Grafana gives you insightful visualizations of your metrics, while the ELK stack captures and analyzes your logs. Prometheus and Alertmanager keep you informed of critical events, helping ensure your apps run smoothly and efficiently.&lt;br&gt;
By following these steps, you can make your Kubernetes apps highly observable, robust, and easy to debug. Feel free to adapt this configuration to your needs.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Case Study: How To Deploy Web App From S3 Bucket To EC2 Instance on AWS Using CodePipeline</title>
      <dc:creator>Bhanvendra Singh Gaur</dc:creator>
      <pubDate>Wed, 10 Jan 2024 13:09:10 +0000</pubDate>
      <link>https://dev.to/bhanvendrasingh/case-study-how-to-deploy-web-app-from-s3-bucket-to-ec2-instance-on-aws-using-codepipeline-2d3n</link>
      <guid>https://dev.to/bhanvendrasingh/case-study-how-to-deploy-web-app-from-s3-bucket-to-ec2-instance-on-aws-using-codepipeline-2d3n</guid>
      <description>&lt;p&gt;In this post, we are going to cover the case study to deploy a web application from an S3 bucket to an EC2 instance using AWS CodePipeline which is a part of CI/CD DevOps practices.&lt;/p&gt;

&lt;p&gt;Topics we’ll cover:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Overview of CI/CD&lt;/li&gt;
&lt;li&gt;CI/CD tools offered by AWS&lt;/li&gt;
&lt;li&gt;Steps to deploy a web application on AWS&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Before deploying a web application, we should understand the basic concepts of Continuous Integration (CI) and Continuous Deployment (CD): what they are and what tools AWS offers for DevOps CI/CD practices.&lt;/p&gt;

&lt;p&gt;Overview Of CI/CD&lt;br&gt;
Continuous Integration (CI) and Continuous Deployment (CD) get rid of the traditional manual gate and implement fully automated verification of the acceptance environment to determine whether the pipeline can continue to production.&lt;/p&gt;

&lt;p&gt;Continuous Integration focuses on the software development life cycle (SDLC) of the individual developer working against the code repository. This cycle can run many times a day, with the primary goal of detecting integration bugs and errors early.&lt;/p&gt;

&lt;p&gt;Continuous Delivery focuses on automated deployment to testing or production environments, gating updates on approval to achieve an automated software release process and discover deployment issues pre-emptively.&lt;/p&gt;

&lt;p&gt;CI/CD Tools Offered By AWS Used In This Case Study&lt;br&gt;
AWS offers an end-to-end CI/CD stack comprising the following four services:&lt;/p&gt;

&lt;p&gt;AWS CodeCommit – A fully managed source control service that hosts secure Git-based repositories. CodeCommit makes it easy for teams to collaborate on code in a secure and highly scalable ecosystem.&lt;/p&gt;

&lt;p&gt;AWS CodeBuild – A fully managed continuous integration service that compiles source code, runs tests, and produces ready-to-deploy software packages on dynamically created build servers.&lt;br&gt;
Check out: our AWS Storage overview covering the types of storage options offered and what they are intended for.&lt;/p&gt;

&lt;p&gt;AWS CodeDeploy – A fully managed deployment service that automates software deployments to a variety of computing services such as Amazon EC2, AWS Fargate, AWS Lambda, and your on-premises servers.&lt;/p&gt;

&lt;p&gt;AWS CodePipeline – A fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates.&lt;br&gt;
Typically, organizations use many different tools for code repositories, but here we are using AWS S3 as the code repository.&lt;/p&gt;

&lt;p&gt;Check out: Amazon Elastic File System (EFS): what it is, its features, and how it can help.&lt;/p&gt;

&lt;p&gt;Steps To Deploy Web Application Using AWS CodePipeline&lt;br&gt;
We will deploy the web application in four steps.&lt;/p&gt;

&lt;p&gt;Step 1: Create an S3 bucket for your application&lt;br&gt;
Note: If you don’t have an AWS account check our blog on how to create AWS Free Tier Account.&lt;/p&gt;

&lt;p&gt;1) Open the Amazon S3 console, choose Create bucket, enter a name in Bucket name, and don’t forget to enable Versioning.&lt;/p&gt;

&lt;p&gt;2) Next, download the sample code and save it into a folder or directory on your local computer. Choose SampleApp_Windows.zip if you want to follow the steps in this tutorial for Windows Server instances. (Do not unzip the file when uploading.)&lt;br&gt;
–&amp;gt; To deploy to Amazon Linux instances using CodeDeploy, download the sample application: SampleApp_Linux.zip.&lt;br&gt;
–&amp;gt; To deploy to Windows Server instances using CodeDeploy, download the sample application: SampleApp_Windows.zip.&lt;/p&gt;

&lt;p&gt;3) In the S3 console, upload the zip file to the bucket you created.&lt;/p&gt;
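&lt;p&gt;For CodeDeploy to know what to copy and which lifecycle scripts to run, the bundle must contain an appspec.yml at its root. The sample zips above already include one; a minimal sketch for a Windows in-place deployment (file names here are illustrative) looks roughly like this:&lt;/p&gt;

```yaml
version: 0.0
os: windows
files:
  - source: \index.html             # file inside the revision bundle
    destination: c:\inetpub\wwwroot
hooks:
  BeforeInstall:
    - location: \before-install.bat # e.g. installs and starts IIS
      timeout: 900
```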

&lt;p&gt;AWS S3 Bucket&lt;/p&gt;

&lt;p&gt;Step 2: Create Amazon EC2 Windows instances and install the CodeDeploy agent&lt;br&gt;
1) Create an IAM role to grant the EC2 instance the required permissions; select the policy named AmazonEC2RoleforAWSCodeDeploy when creating it.&lt;/p&gt;

&lt;p&gt;Instance Role For EC2&lt;br&gt;
2) Launch instance on which our code will be deployed.&lt;/p&gt;

&lt;p&gt;3) Remember to attach the IAM role we created, and in Auto-assign Public IP choose Enable. Expand Advanced Details and, with As text selected in User data, enter the following (the &amp;lt;powershell&amp;gt; tags tell EC2 to run the block as PowerShell):&lt;br&gt;
&amp;lt;powershell&amp;gt;&lt;br&gt;
New-Item -Path c:\temp -ItemType "directory" -Force&lt;br&gt;
powershell.exe -Command Read-S3Object -BucketName bucket-name/latest -Key codedeploy-agent.msi -File c:\temp\codedeploy-agent.msi&lt;br&gt;
Start-Process -Wait -FilePath c:\temp\codedeploy-agent.msi -WindowStyle Hidden&lt;br&gt;
&amp;lt;/powershell&amp;gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;Note: bucket-name is the name of the S3 bucket that contains the CodeDeploy Resource Kit files for your region. For example, for the US West (Oregon) Region, replace bucket-name with aws-codedeploy-us-west-2. For a list of bucket names, see Resource Kit Bucket Names by Region.&lt;/p&gt;

&lt;p&gt;4) On the Configure Security Group page, allow port 80 so you can access the public instance endpoint. Then keep the default configuration and launch the instance.&lt;/p&gt;

&lt;p&gt;Created EC2 instance &lt;/p&gt;

&lt;p&gt;Step 3: Create an application in CodeDeploy&lt;br&gt;
1) First, create an application in CodeDeploy; in Compute platform, choose EC2/On-premises, then choose Create application.&lt;/p&gt;

&lt;p&gt;2) On the page that displays your application, choose Create deployment group. For Service role, create an IAM role under the CodeDeploy category. Under Deployment type, choose In-place.&lt;/p&gt;

&lt;p&gt;3) Under Environment configuration, choose Amazon EC2 Instances.&lt;/p&gt;

&lt;p&gt;4) Under Deployment configuration, choose CodeDeployDefault.OneAtATime.&lt;/p&gt;

&lt;p&gt;5) Under Load balancer, clear Enable load balancing, leave the remaining defaults, and choose Create deployment group.&lt;/p&gt;

&lt;p&gt;Also check: our blog post on AWS Certified DevOps Engineer Professional.&lt;/p&gt;

&lt;p&gt;Step 4: Create your first pipeline in CodePipeline&lt;br&gt;
1) Open the CodePipeline console and choose Create pipeline. In the pipeline settings, enter your desired name, and for Service role choose New service role to allow CodePipeline to create a new service role in IAM. To know more about AWS IAM, refer to our blog on AWS Identity And Access Management (IAM).&lt;/p&gt;

&lt;p&gt;2) In the Add source stage, for Source provider choose Amazon S3. Under S3 object key, enter the object key (with or without a file path), and remember to include the file extension.&lt;/p&gt;

&lt;p&gt;3) In the Add build stage, choose Skip build stage, then accept the warning message by choosing Skip again. Choose Next.&lt;/p&gt;

&lt;p&gt;4) In the Add deploy stage, for Deploy provider choose AWS CodeDeploy. Then enter your application name or choose it from the list. In Deployment group, enter MyDemoDeploymentGroup or choose it from the list, and then choose Next.&lt;/p&gt;

&lt;p&gt;AWS Pipeline&lt;br&gt;
Congratulations! You just created a simple pipeline in CodePipeline. You can verify the deployment by copying the EC2 instance’s Public DNS address and pasting it into your web browser’s address bar.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Storage Services Offered By AWS</title>
      <dc:creator>Bhanvendra Singh Gaur</dc:creator>
      <pubDate>Thu, 22 Jul 2021 11:54:24 +0000</pubDate>
      <link>https://dev.to/bhanvendrasingh/storage-services-offered-by-aws-1dhd</link>
      <guid>https://dev.to/bhanvendrasingh/storage-services-offered-by-aws-1dhd</guid>
      <description>&lt;p&gt;AWS is currently at the top cloud service providers in the world right now. Currently, AWS provides eight types of storage services. In this article, I will help you to understand the storage services offered by AWS.&lt;/p&gt;

&lt;p&gt;Furthermore, after this article, I will also cover each AWS storage service in detail along with its best practices. (So, stay tuned!)&lt;/p&gt;

&lt;h3&gt;
  
  
  Overview
&lt;/h3&gt;

&lt;p&gt;Over the past decade, data storage has diversified according to needs and requirements. From a single person to a multinational company, data storage has become a must-have for everyone. Nowadays it doesn't matter so much where you store your data; what really matters is how securely it is stored.&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcl074vmgmhxpjxjcsose.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcl074vmgmhxpjxjcsose.jpg" alt="S3 1"&gt;&lt;/a&gt;&lt;br&gt;
Amazon Web Services (AWS) dominates the digital market among cloud service providers for a few important reasons: it is a flexible, cost-effective, and easy-to-use cloud computing platform.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is Cloud Storage?
&lt;/h3&gt;

&lt;p&gt;Let's try to understand cloud storage in &lt;strong&gt;layman's terms&lt;/strong&gt;. From your laptop to your smartphone to your tablet, any files you create or download are typically saved on your device. However, if your device fails (I hope it will not) or can’t be accessed, getting your files back can be difficult, if not impossible. This is where cloud storage comes into the picture!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cloud storage&lt;/strong&gt; provides an advanced alternative: you can access your files from anywhere, irrespective of your device. Cloud storage takes your files off a device’s hard drive and backs them up securely, storing them remotely in a cloud system. These files are protected, so you don’t need to worry; only you can access them, through your cloud storage account or service. And no matter what happens, you’ll always have a backup of your important data.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcuid6tipcuskkizzkld5.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcuid6tipcuskkizzkld5.jpg" alt="S3 2"&gt;&lt;/a&gt;&lt;br&gt;
Now, before deep diving into storage, let’s take a few minutes to understand some important terms:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS Region&lt;/strong&gt; – A physical location around the world where AWS clusters its data centers. Each AWS Region consists of multiple, isolated, and physically separate Availability Zones (AZs) within a geographic area.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS Availability Zone&lt;/strong&gt; – One or more highly available data centers within an AWS Region. Each Availability Zone has independent power, cooling, and networking. When an entire Availability Zone goes down, AWS can fail over workloads to one of the other zones in the same Region, a capability known as “Multi-AZ” redundancy.&lt;/p&gt;

&lt;p&gt;You can check the full list of available Regions and AZs on the &lt;a href="https://aws.amazon.com/about-aws/global-infrastructure/?p=ngi&amp;amp;loc=0" rel="noopener noreferrer"&gt;AWS global infrastructure&lt;/a&gt; page; since AWS is constantly introducing new Regions and AZs, it is worth checking there for up-to-date information.&lt;/p&gt;
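&lt;p&gt;As a quick illustration, the AWS CLI can list the Regions and AZs available to your account. This is a minimal sketch, assuming you have the AWS CLI installed and configured with valid credentials:&lt;/p&gt;

```shell
# List all Regions enabled for your account
aws ec2 describe-regions --query "Regions[].RegionName" --output text

# List the Availability Zones in one Region (us-east-1 chosen as an example)
aws ec2 describe-availability-zones --region us-east-1 \
  --query "AvailabilityZones[].ZoneName" --output text
```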

&lt;h3&gt;
  
  
  Storage Services Offered By AWS
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkhhorxpng09so8wbhpas.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkhhorxpng09so8wbhpas.jpg" alt="S3 3"&gt;&lt;/a&gt;&lt;br&gt;
AWS provides low-cost data storage with high durability and high availability, and offers a variety of storage choices for backup, archiving, and disaster recovery.&lt;/p&gt;

&lt;p&gt;We have compiled a list of the main storage services available on the AWS Cloud, as follows:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Amazon Simple Storage Service (Amazon S3)&lt;/li&gt;
&lt;li&gt;Amazon Glacier&lt;/li&gt;
&lt;li&gt;Amazon Elastic File System (Amazon EFS)&lt;/li&gt;
&lt;li&gt;Amazon Elastic Block Store (Amazon EBS)&lt;/li&gt;
&lt;li&gt;Amazon EC2 Instance Storage&lt;/li&gt;
&lt;li&gt;AWS Storage Gateway&lt;/li&gt;
&lt;li&gt;AWS Snowball&lt;/li&gt;
&lt;li&gt;Amazon CloudFront&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  30,000-Foot View Of AWS Storage Services
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F35bkondhxv1e4a3l3ng5.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F35bkondhxv1e4a3l3ng5.jpg" alt="S3 4"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Amazon Simple Storage Service (Amazon S3)&lt;/strong&gt;: Storage for the internet, designed for large-capacity, low-cost storage across multiple geographic Regions. Amazon S3 provides developers and IT teams with secure, durable, and highly scalable object storage.&lt;/p&gt;
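&lt;p&gt;To get a feel for how simple object storage is in practice, here is a minimal AWS CLI sketch; the bucket name is a hypothetical placeholder, and a configured AWS CLI with S3 permissions is assumed:&lt;/p&gt;

```shell
# Create a bucket (bucket names are globally unique; this one is a placeholder)
aws s3 mb s3://my-example-bucket-12345

# Upload a file as an object, then list the objects under that prefix
aws s3 cp ./backup.zip s3://my-example-bucket-12345/backups/backup.zip
aws s3 ls s3://my-example-bucket-12345/backups/
```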

&lt;p&gt;&lt;strong&gt;Amazon Glacier&lt;/strong&gt;: An online storage web service for data archiving and long-term backup. It is a secure, durable, and extremely low-cost Amazon S3 storage class.&lt;/p&gt;
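&lt;p&gt;Since Glacier is exposed as an S3 storage class, archiving can be a single upload command. A hedged sketch (placeholder bucket name, configured AWS CLI assumed):&lt;/p&gt;

```shell
# Upload an object directly into the Glacier storage class for long-term archival
aws s3 cp ./archive.tar.gz s3://my-example-bucket-12345/archives/ \
  --storage-class GLACIER
```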

&lt;p&gt;&lt;strong&gt;Amazon Elastic File System (Amazon EFS)&lt;/strong&gt;: A cloud storage service designed to provide scalable, elastic, encrypted file storage, with support for concurrent access (with some restrictions), for use with AWS cloud services and on-premises resources.&lt;/p&gt;
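&lt;p&gt;To make the "shared file system" idea concrete, the sketch below creates an EFS file system and mounts it over NFS from an EC2 instance. The file system ID and Region shown are hypothetical, and a mount target must already exist in the instance's subnet:&lt;/p&gt;

```shell
# Create an EFS file system (returns a FileSystemId such as fs-0123456789abcdef0)
aws efs create-file-system --performance-mode generalPurpose

# On an EC2 instance in the same VPC, mount the file system via NFSv4.1
sudo mkdir -p /mnt/efs
sudo mount -t nfs4 -o nfsvers=4.1 \
  fs-0123456789abcdef0.efs.us-east-1.amazonaws.com:/ /mnt/efs
```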

&lt;p&gt;&lt;strong&gt;Amazon Elastic Block Store (Amazon EBS)&lt;/strong&gt;: Provides raw block-level storage that can be attached to Amazon EC2 instances and is also used by Amazon Relational Database Service (RDS). Amazon EBS offers a range of options for storage performance and cost.&lt;/p&gt;
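&lt;p&gt;Block storage becomes clearer with an example: create a volume in the same Availability Zone as an instance, then attach it. The volume and instance IDs below are hypothetical placeholders, and a configured AWS CLI is assumed:&lt;/p&gt;

```shell
# Create a 10 GiB gp3 volume in a specific Availability Zone
aws ec2 create-volume --availability-zone us-east-1a --size 10 --volume-type gp3

# Attach it to a running EC2 instance as a new block device
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
  --instance-id i-0123456789abcdef0 --device /dev/sdf
```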

&lt;p&gt;&lt;strong&gt;Amazon EC2 Instance Storage&lt;/strong&gt;: Also called ephemeral storage, it provides temporary block-level storage for many EC2 instance types. This storage consists of preconfigured, pre-attached disk storage on the same physical server that hosts the EC2 instance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS Storage Gateway&lt;/strong&gt;: Provides seamless integration, with data security features, between your on-premises software appliance and the AWS Cloud.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS Snowball&lt;/strong&gt;: Offers physical transfer of data between a user’s location and AWS data centers; the device used to transfer the data is called a Snowball.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Amazon CloudFront&lt;/strong&gt;: A content delivery network (CDN) that caches data at edge locations, reducing latency for end users.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Cloud storage is an important component of cloud computing because it holds the information used by applications. Big data analytics, data warehouses, Internet of Things (IoT) workloads, databases, and backup and archive applications all rely on some form of data storage architecture.&lt;/p&gt;

&lt;p&gt;Cloud storage is more reliable, scalable, and secure than traditional on-premises storage systems, and AWS offers a complete range of cloud storage services to support both application and archival compliance requirements. While this article gives you a better understanding of the features and characteristics of these services, it is crucial to understand your workloads and requirements, and then decide which storage service is best suited to your needs.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;In the upcoming articles, we will be deep-diving into each service individually, so stay tuned!&lt;/em&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>react</category>
      <category>discuss</category>
      <category>beginners</category>
    </item>
  </channel>
</rss>
