<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Sourav kumar</title>
    <description>The latest articles on DEV Community by Sourav kumar (@sourav3366).</description>
    <link>https://dev.to/sourav3366</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2870497%2F8ac9c887-357c-416f-9c4e-b9225a4d48d5.jpeg</url>
      <title>DEV Community: Sourav kumar</title>
      <link>https://dev.to/sourav3366</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/sourav3366"/>
    <language>en</language>
    <item>
      <title>How Your Microservice Actually Gets Deployed on Kubernetes (Model 1 vs Model 2)</title>
      <dc:creator>Sourav kumar</dc:creator>
      <pubDate>Mon, 16 Mar 2026 11:13:06 +0000</pubDate>
      <link>https://dev.to/sourav3366/real-world-kuberneteseks-deployment-architectures-end-to-end-guide-b4k</link>
      <guid>https://dev.to/sourav3366/real-world-kuberneteseks-deployment-architectures-end-to-end-guide-b4k</guid>
      <description>&lt;p&gt;If you’re building or operating microservices on Kubernetes (especially EKS), one of the biggest architectural decisions is:&lt;/p&gt;

&lt;p&gt;Where do application code, Helm charts, and deployment configurations live — and how do they flow to production?&lt;/p&gt;

&lt;p&gt;In real production environments, two models dominate:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Model 1 — Same Repo (Service-Owned Deployments)&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;2. Model 2 — Hybrid GitOps (Platform-Driven)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This guide explains both end-to-end: repositories, ECR usage, CI/CD, versioning, folder structures, deployment flow, and when to use each.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Model 1 — App + Helm in the SAME Repository&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Each microservice owns everything needed to run it.&lt;br&gt;
One repo per service contains:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Application source code&lt;/li&gt;
&lt;li&gt;Dockerfile&lt;/li&gt;
&lt;li&gt;Helm chart&lt;/li&gt;
&lt;li&gt;Environment values&lt;/li&gt;
&lt;li&gt;CI/CD pipeline&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Deployment is usually CI-driven (not pure GitOps).&lt;/p&gt;

&lt;p&gt;Typical Repository Structure&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;payment-service/
├── src/                          # Application source code
│   ├── main.py / index.js / App.java
│   ├── config/
│   └── modules/
│
├── tests/                        # Unit &amp;amp; integration tests
│   ├── unit/
│   └── integration/
│
├── Dockerfile                    # Container build instructions
├── .dockerignore                 # Files excluded from image build
│
├── helm/                         # Helm chart for this service
│   ├── Chart.yaml                # Chart metadata
│   ├── values.yaml               # Default configuration
│   ├── values-dev.yaml           # Dev environment overrides
│   ├── values-staging.yaml       # Staging overrides
│   ├── values-prod.yaml          # Production overrides
│   ├── .helmignore               # Files Helm should ignore
│   │
│   ├── templates/                # Kubernetes manifest templates
│   │   ├── _helpers.tpl          # Template helper functions ⭐
│   │   ├── deployment.yaml       # Deployment resource
│   │   ├── service.yaml          # Service resource
│   │   ├── ingress.yaml          # Ingress resource
│   │   ├── serviceaccount.yaml   # ServiceAccount (if needed)
│   │   ├── hpa.yaml              # Horizontal Pod Autoscaler
│   │   ├── configmap.yaml        # ConfigMap
│   │   ├── secret.yaml           # Secret (optional)
│   │   ├── poddisruptionbudget.yaml
│   │   ├── networkpolicy.yaml    # Network security rules
│   │   ├── NOTES.txt             # Post-install info
│   │
│   └── charts/                   # Subcharts (dependencies)
│       └── (empty or dependencies)
│
├── scripts/                      # Optional helper scripts
│   └── deploy.sh
│
├── README.md                     # Service documentation
├── .gitignore
│
└── .github/
    └── workflows/
        └── ci-cd.yaml            # GitHub Actions pipeline
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Deep Explanation of Important Helm Files&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Chart.yaml — Chart Metadata&lt;/strong&gt;: Defines the Helm chart identity.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v2&lt;/span&gt;
&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;payment-service&lt;/span&gt;
&lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Helm chart for Payment Service&lt;/span&gt;
&lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;application&lt;/span&gt;
&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;1.2.0&lt;/span&gt;          &lt;span class="c1"&gt;# Chart version&lt;/span&gt;
&lt;span class="na"&gt;appVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1.2.0"&lt;/span&gt;     &lt;span class="c1"&gt;# Application version&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;👉 version = chart version&lt;br&gt;
👉 appVersion = container app version&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;values.yaml — Default Configuration&lt;/strong&gt;: Base config used unless overridden.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;replicaCount&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;

&lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;repository&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;ecr-url&amp;gt;/payment-service&lt;/span&gt;
  &lt;span class="na"&gt;tag&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;latest"&lt;/span&gt;
  &lt;span class="na"&gt;pullPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;IfNotPresent&lt;/span&gt;

&lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterIP&lt;/span&gt;
  &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Environment-Specific Values Files&lt;/strong&gt;: Override defaults for each environment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;values-dev.yaml&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;replicaCount&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;

&lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;tag&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;dev"&lt;/span&gt;

&lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;100m&lt;/span&gt;
    &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;128Mi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;values-prod.yaml&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;replicaCount&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;4&lt;/span&gt;

&lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;tag&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1.2.0"&lt;/span&gt;

&lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;500m&lt;/span&gt;
    &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;512Mi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;templates/_helpers.tpl (VERY IMPORTANT)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Reusable template functions.&lt;/p&gt;

&lt;p&gt;Used for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Naming conventions&lt;/li&gt;
&lt;li&gt;Labels&lt;/li&gt;
&lt;li&gt;Chart metadata&lt;/li&gt;
&lt;li&gt;DRY templates&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt;- define "payment-service.fullname" -&lt;/span&gt;&lt;span class="pi"&gt;}}&lt;/span&gt;
&lt;span class="pi"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;.Release.Name&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;&lt;span class="s"&gt;-payment-service&lt;/span&gt;
&lt;span class="pi"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt;- end&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
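&lt;p&gt;Templates then reference the helper instead of hardcoding names. A minimal illustrative fragment of how &lt;code&gt;deployment.yaml&lt;/code&gt; might use it:&lt;/p&gt;

```yaml
# templates/deployment.yaml (illustrative fragment)
# Every rendered name stays consistent because it comes from one helper.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "payment-service.fullname" . }}
  labels:
    app: {{ include "payment-service.fullname" . }}
```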



&lt;p&gt;&lt;strong&gt;Container Image Storage (ECR)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Each service has its own ECR repository:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;123456789.dkr.ecr.ap-south-1.amazonaws.com/payment-service
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;ECR stores Docker images only — not Helm charts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Complete Deployment Flow (Model-1, CI-Driven Helm on EKS)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;1️⃣ Developer Pushes Code Change&lt;/p&gt;

&lt;p&gt;A change is pushed to the service repository:&lt;br&gt;
Examples:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Business logic update&lt;/li&gt;
&lt;li&gt;New environment variable&lt;/li&gt;
&lt;li&gt;Resource changes&lt;/li&gt;
&lt;li&gt;Bug fix&lt;/li&gt;
&lt;li&gt;Dependency update&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Git event triggers CI.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Developer → Git push → main branch
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;2️⃣ CI Pipeline Starts&lt;/p&gt;

&lt;p&gt;Triggered by a CI system (GitHub Actions in this example). The pipeline checks out the code.&lt;/p&gt;

&lt;p&gt;3️⃣ Build Container Image&lt;/p&gt;

&lt;p&gt;Docker image is created from the Dockerfile.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker build &lt;span class="nt"&gt;-t&lt;/span&gt; payment-service:1.5.0 &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Tagging usually includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Semantic version&lt;/li&gt;
&lt;li&gt;Git SHA&lt;/li&gt;
&lt;li&gt;Build number&lt;/li&gt;
&lt;li&gt;Timestamp&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;payment-service:1.5.0
payment-service:sha-7f3a9c
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
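&lt;p&gt;A sketch of how a CI job might assemble these tags (the variable names and values here are illustrative, not from a real pipeline):&lt;/p&gt;

```shell
# Illustrative tag construction; in a real pipeline these would come from
# git (e.g. git rev-parse --short HEAD) and the CI build metadata.
VERSION="1.5.0"
GIT_SHA="7f3a9c"
BUILD_NUMBER="42"

IMAGE="payment-service"
# One image, multiple tags pointing at the same digest.
TAGS="${IMAGE}:${VERSION} ${IMAGE}:sha-${GIT_SHA} ${IMAGE}:build-${BUILD_NUMBER}"
echo "$TAGS"
```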



&lt;p&gt;4️⃣ Authenticate to AWS ECR&lt;/p&gt;

&lt;p&gt;Pipeline logs into ECR:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws ecr get-login-password &lt;span class="se"&gt;\&lt;/span&gt;
 | docker login &lt;span class="nt"&gt;--username&lt;/span&gt; AWS &lt;span class="nt"&gt;--password-stdin&lt;/span&gt; &amp;lt;ecr-url&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;5️⃣ Push Image to ECR&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker push &amp;lt;ecr-url&amp;gt;/payment-service:1.5.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now ECR stores the new image.&lt;br&gt;
👉 This is the artifact Kubernetes will run.&lt;/p&gt;

&lt;p&gt;6️⃣ Pipeline Prepares Deployment&lt;/p&gt;

&lt;p&gt;Helm chart references the new image tag.&lt;br&gt;
This may happen in two ways:&lt;/p&gt;

&lt;p&gt;🔹 Method A — Dynamic Override (most common)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm upgrade payment-service ./helm &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--set&lt;/span&gt; image.tag&lt;span class="o"&gt;=&lt;/span&gt;1.5.0 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-f&lt;/span&gt; values-prod.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;🔹 Method B — Update values file&lt;/p&gt;



&lt;p&gt;Pipeline edits values file with new tag.&lt;/p&gt;
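&lt;p&gt;A minimal sketch of Method B, assuming the pipeline rewrites the tag with &lt;code&gt;sed&lt;/code&gt; (tools like yq are also common; the file path and tag values are illustrative):&lt;/p&gt;

```shell
# Simulate the values file, then rewrite its image tag in place,
# as a pipeline step would before committing the change.
printf 'image:\n  tag: "1.4.0"\n' > /tmp/values-prod.yaml
NEW_TAG="1.5.0"
sed -i "s/tag: \".*\"/tag: \"${NEW_TAG}\"/" /tmp/values-prod.yaml
cat /tmp/values-prod.yaml
```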

&lt;p&gt;7️⃣ Helm Upgrade / Install Executes&lt;/p&gt;

&lt;p&gt;Pipeline runs Helm:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm upgrade &lt;span class="nt"&gt;--install&lt;/span&gt; payment-service ./helm &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-f&lt;/span&gt; values-prod.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Helm does NOT run containers.&lt;br&gt;
👉 It generates Kubernetes manifests and sends them to the cluster.&lt;/p&gt;

&lt;p&gt;8️⃣ Kubernetes API Server Receives New Spec&lt;/p&gt;

&lt;p&gt;Deployment resource updated:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;ecr-url&amp;gt;/payment-service:1.5.0&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Changing image tag updates the Pod template hash.&lt;/p&gt;

&lt;p&gt;9️⃣ Deployment Controller Triggers Rolling Update&lt;/p&gt;

&lt;p&gt;Kubernetes detects a change in Pod template.&lt;/p&gt;

&lt;p&gt;It creates a new ReplicaSet.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Old ReplicaSet → payment-service-abc123
New ReplicaSet → payment-service-def456
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;🔟 Scheduler Assigns Pods to Nodes&lt;/p&gt;

&lt;p&gt;New Pods created but initially Pending.&lt;/p&gt;

&lt;p&gt;Scheduler selects nodes based on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Available CPU/memory&lt;/li&gt;
&lt;li&gt;Node selectors&lt;/li&gt;
&lt;li&gt;Taints/tolerations&lt;/li&gt;
&lt;li&gt;Affinity rules&lt;/li&gt;
&lt;/ul&gt;
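&lt;p&gt;These constraints live in the Pod template. An illustrative fragment (the label names and values are assumptions, not from the article):&lt;/p&gt;

```yaml
# Illustrative Pod-template scheduling constraints
spec:
  nodeSelector:
    workload-type: general          # hypothetical node label
  tolerations:
    - key: "dedicated"
      operator: "Equal"
      value: "payments"
      effect: "NoSchedule"
  affinity:
    podAntiAffinity:                # prefer spreading replicas across nodes
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchLabels:
                app: payment-service
            topologyKey: kubernetes.io/hostname
```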

&lt;p&gt;1️⃣1️⃣ Kubelet Starts Pod on Selected Node&lt;/p&gt;

&lt;p&gt;Kubelet (agent running on node) receives instruction:&lt;/p&gt;

&lt;p&gt;👉 “Run this container image”&lt;/p&gt;

&lt;p&gt;⭐ 🔥 &lt;strong&gt;The Critical Part — How the Cluster Pulls the Image from ECR&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;1️⃣2️⃣ Node Needs Image Locally&lt;/p&gt;

&lt;p&gt;Container runtime (containerd/Docker) checks:&lt;/p&gt;

&lt;p&gt;👉 Is this image already cached on the node?&lt;/p&gt;

&lt;p&gt;✔️ If YES → use cached image&lt;br&gt;
❌ If NO → pull from registry&lt;/p&gt;

&lt;p&gt;1️⃣3️⃣ Authentication to ECR&lt;/p&gt;

&lt;p&gt;In EKS, nodes are allowed to pull from ECR via IAM.&lt;/p&gt;

&lt;p&gt;How authentication happens:&lt;/p&gt;

&lt;p&gt;👉 Node IAM Role (Instance Profile) has permission:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ecr:GetAuthorizationToken
ecr:BatchCheckLayerAvailability
ecr:GetDownloadUrlForLayer
ecr:BatchGetImage
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;1️⃣4️⃣ Kubelet Requests Auth Token&lt;/p&gt;

&lt;p&gt;AWS provides temporary registry credentials.&lt;/p&gt;

&lt;p&gt;1️⃣5️⃣ Container Runtime Pulls Image Layers&lt;/p&gt;

&lt;p&gt;Image downloaded layer by layer from ECR.&lt;/p&gt;

&lt;p&gt;ECR → Internet/VPC Endpoint → Node&lt;/p&gt;

&lt;p&gt;Large images take longer.&lt;/p&gt;

&lt;p&gt;🔐 Important: Private Cluster Scenario&lt;/p&gt;

&lt;p&gt;If nodes have no internet:&lt;/p&gt;

&lt;p&gt;👉 Use VPC Interface Endpoint for ECR.&lt;/p&gt;

&lt;p&gt;1️⃣6️⃣ Image Stored Locally on Node&lt;/p&gt;

&lt;p&gt;After download:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;/var/lib/containerd/&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Future Pods may reuse it.&lt;/p&gt;

&lt;p&gt;1️⃣7️⃣ Container Created &amp;amp; Started&lt;/p&gt;

&lt;p&gt;Runtime launches container process.&lt;/p&gt;

&lt;p&gt;Now Pod enters:&lt;/p&gt;

&lt;p&gt;ContainerCreating → Running&lt;/p&gt;

&lt;p&gt;1️⃣8️⃣ Health Checks Begin&lt;/p&gt;

&lt;p&gt;Kubernetes executes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Readiness probe&lt;/li&gt;
&lt;li&gt;Liveness probe&lt;/li&gt;
&lt;li&gt;Startup probe&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If the liveness probe fails → the container is restarted. If the readiness probe fails → the Pod is removed from Service endpoints until it recovers.&lt;/p&gt;
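&lt;p&gt;An illustrative probe configuration for the Deployment template (the port and HTTP paths are assumptions):&lt;/p&gt;

```yaml
# Illustrative container probes; /healthz and /ready are assumed endpoints
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /ready
    port: 8080
  periodSeconds: 5
startupProbe:          # gives slow-starting apps up to 60s before liveness kicks in
  httpGet:
    path: /healthz
    port: 8080
  failureThreshold: 30
  periodSeconds: 2
```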

&lt;p&gt;1️⃣9️⃣ Service Starts Routing Traffic&lt;/p&gt;

&lt;p&gt;Once Pod is Ready:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Added to the Service’s Endpoints list&lt;/li&gt;
&lt;li&gt;Load balancer can send traffic to it&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;2️⃣0️⃣ Rolling Update Completes&lt;/p&gt;

&lt;p&gt;Old Pods gradually terminated.&lt;/p&gt;

&lt;p&gt;Zero-downtime deployment achieved.&lt;/p&gt;
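&lt;p&gt;The pace of this rollout is governed by the Deployment’s update strategy. An illustrative setting for zero-downtime updates:&lt;/p&gt;

```yaml
# Illustrative Deployment update strategy
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 1          # at most one extra Pod during the rollout
    maxUnavailable: 0    # never drop below the desired replica count
```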

&lt;p&gt;🏆 Complete Visual Flow&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Code change pushed
        ↓
CI pipeline triggered
        ↓
Build Docker image
        ↓
Push image to ECR
        ↓
Helm upgrade executed
        ↓
Kubernetes Deployment updated
        ↓
New ReplicaSet created
        ↓
Pods scheduled to nodes
        ↓
Node authenticates to ECR
        ↓
Image pulled to node
        ↓
Container started
        ↓
Health checks pass
        ↓
Service routes traffic
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Team Collaboration&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Developers can modify:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Environment variables&lt;/li&gt;
&lt;li&gt;Resource limits&lt;/li&gt;
&lt;li&gt;Feature configs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;DevOps/Platform team enforces:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;PR reviews&lt;/li&gt;
&lt;li&gt;Branch protection&lt;/li&gt;
&lt;li&gt;Deployment policies&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;✅ &lt;strong&gt;Advantages of Model 1&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;✔ Simple to implement&lt;br&gt;
✔ Fast delivery&lt;br&gt;
✔ Single source of truth&lt;br&gt;
✔ Easy debugging&lt;br&gt;
✔ Ideal for small–mid teams&lt;/p&gt;

&lt;p&gt;❌ &lt;strong&gt;Limitations&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;⚠ Harder governance at scale&lt;br&gt;
⚠ Risk of inconsistent configs&lt;br&gt;
⚠ Limited multi-cluster control&lt;br&gt;
⚠ Not ideal for hundreds of services&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Model 2 — Hybrid GitOps (Enterprise Standard)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Separates application ownership from runtime control.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Developers own code&lt;/li&gt;
&lt;li&gt;Platform team owns deployments&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Deployments are GitOps-driven (Argo CD / Flux).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Architecture Components&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;📦 Application Repository (Service Team)&lt;/p&gt;

&lt;p&gt;Contains:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Source code&lt;/li&gt;
&lt;li&gt;Dockerfile&lt;/li&gt;
&lt;li&gt;Sometimes Helm chart templates&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;🏗️ Platform / GitOps Repository&lt;/p&gt;

&lt;p&gt;Contains:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Environment configs&lt;/li&gt;
&lt;li&gt;Helm values per environment&lt;/li&gt;
&lt;li&gt;ArgoCD applications&lt;/li&gt;
&lt;li&gt;Cluster definitions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;App Repository Structure&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;payment-service/
├── src/
├── Dockerfile
├── tests/
└── helm/ (optional)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Deployment (GitOps) Repository Structure&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;k8s-platform-config/
├── dev/
│   └── payment-service/
│       └── values.yaml
├── staging/
│   └── payment-service/
│       └── values.yaml
├── prod/
│   └── payment-service/
│       └── values.yaml
└── argocd-apps/
    └── payment-service.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;ECR Usage&lt;/strong&gt; (same as Model 1):&lt;/p&gt;

&lt;p&gt;👉 ECR stores container images only.&lt;/p&gt;

&lt;p&gt;Some enterprises also store Helm charts in ECR as OCI artifacts, but this is not required.&lt;/p&gt;
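&lt;p&gt;For illustration, an &lt;code&gt;argocd-apps/payment-service.yaml&lt;/code&gt; entry might look like this (the repo URL, project, and namespaces are placeholders, not from the article):&lt;/p&gt;

```yaml
# Illustrative Argo CD Application pointing at the GitOps repo's prod config
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: payment-service-prod
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/k8s-platform-config.git   # placeholder
    targetRevision: main
    path: prod/payment-service
  destination:
    server: https://kubernetes.default.svc
    namespace: payments
  syncPolicy:
    automated:
      prune: true       # delete resources removed from Git
      selfHeal: true    # revert manual drift back to Git state
```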

&lt;p&gt;&lt;strong&gt;Deployment Flow — End-to-End&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;1️⃣ &lt;strong&gt;Developer Pushes Code&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To app repository.&lt;/p&gt;

&lt;p&gt;2️⃣ &lt;strong&gt;CI Builds and Pushes Image to ECR&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;docker build -t payment-service:2.1.0 .
docker push &amp;lt;ecr-url&amp;gt;/payment-service:2.1.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;3️⃣ &lt;strong&gt;Deployment Repo Updated&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Image tag updated in environment config:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;repository&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;ecr-url&amp;gt;/payment-service&lt;/span&gt;
  &lt;span class="na"&gt;tag&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;2.1.0&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This update may be:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Manual PR&lt;/li&gt;
&lt;li&gt;Automated bot&lt;/li&gt;
&lt;li&gt;Promotion pipeline&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;4️⃣ &lt;strong&gt;GitOps Tool Detects Change&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Argo CD / Flux watches the repo.&lt;/p&gt;

&lt;p&gt;5️⃣ &lt;strong&gt;Automatic Deployment&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;GitOps controller syncs cluster state.&lt;/p&gt;

&lt;p&gt;👉 No direct kubectl or helm from CI.&lt;/p&gt;

&lt;p&gt;🔄 &lt;strong&gt;Versioning in Model 2&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Application Version: Container image tag.&lt;/li&gt;
&lt;li&gt;Deployment Version: Tracked by Git commits in deployment repo.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Environment Promotion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Controlled separately:&lt;br&gt;
Dev → Staging → Prod&lt;br&gt;
Often via PR approvals.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Governance Benefits&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Platform team controls:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Resource policies&lt;/li&gt;
&lt;li&gt;Security standards&lt;/li&gt;
&lt;li&gt;Network rules&lt;/li&gt;
&lt;li&gt;Secrets integration&lt;/li&gt;
&lt;li&gt;Multi-cluster routing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Developers cannot accidentally break production infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Advantages of Model 2&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;✔ Scales to hundreds of services&lt;br&gt;
✔ Strong governance&lt;br&gt;
✔ Full audit trail&lt;br&gt;
✔ Safer production changes&lt;br&gt;
✔ Multi-environment consistency&lt;br&gt;
✔ True GitOps workflows&lt;/p&gt;

&lt;p&gt;❌ &lt;strong&gt;Trade-offs&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;⚠ More complex setup&lt;br&gt;
⚠ Requires platform team&lt;br&gt;
⚠ Slower initial development&lt;br&gt;
⚠ Additional repositories to manage&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Model 1 vs Model 2 — Side-by-Side&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyt40gq04zkh65oscf3ls.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyt40gq04zkh65oscf3ls.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;When to Use Each Model&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Model 1 — Best For&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Startups&lt;/li&gt;
&lt;li&gt;Small–mid teams&lt;/li&gt;
&lt;li&gt;Rapid development&lt;/li&gt;
&lt;li&gt;Few services&lt;/li&gt;
&lt;li&gt;CI-based deployments&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Model 2 — Best For&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Large organizations&lt;/li&gt;
&lt;li&gt;Platform engineering culture&lt;/li&gt;
&lt;li&gt;Many microservices&lt;/li&gt;
&lt;li&gt;Multi-cluster setups&lt;/li&gt;
&lt;li&gt;Compliance requirements&lt;/li&gt;
&lt;li&gt;Production at scale&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Final Takeaway&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;There is no universally “best” model.&lt;/p&gt;

&lt;p&gt;👉 Organizations typically evolve:&lt;/p&gt;

&lt;p&gt;Startup → Model 1&lt;br&gt;
Growing company → Hybrid approaches&lt;br&gt;
Enterprise → Model 2 (GitOps)&lt;/p&gt;

&lt;p&gt;Understanding both gives you the ability to design systems for any scale.&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>aws</category>
      <category>devops</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>The Ultimate Guide to Kubernetes Architecture — A Complete CKA-Level Deep Dive</title>
      <dc:creator>Sourav kumar</dc:creator>
      <pubDate>Mon, 13 Oct 2025 11:21:39 +0000</pubDate>
      <link>https://dev.to/sourav3366/the-ultimate-guide-to-kubernetes-architecture-a-complete-cka-level-deep-dive-151j</link>
      <guid>https://dev.to/sourav3366/the-ultimate-guide-to-kubernetes-architecture-a-complete-cka-level-deep-dive-151j</guid>
      <description>&lt;p&gt;If you’re preparing for the Certified Kubernetes Administrator (CKA) exam or aiming to truly master Kubernetes, understanding the architecture is non-negotiable.&lt;br&gt;
Kubernetes isn’t just a collection of containers; it’s a distributed control system — one of the most elegant and resilient designs in modern cloud computing.&lt;/p&gt;

&lt;p&gt;This blog is your complete study reference — explaining every component, process, call flow, and real-world scenario — so you’ll never have to read another Kubernetes architecture guide again.&lt;/p&gt;
&lt;h2&gt;
  
  
  The Big Picture: Kubernetes Cluster Overview
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F35eltaes0vy123k8a94p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F35eltaes0vy123k8a94p.png" alt="Architecture" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A Kubernetes Cluster is made up of two planes:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Control Plane (Master Node)&lt;/strong&gt;: Responsible for making global decisions — scheduling, monitoring, scaling, and maintaining the desired state.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Worker Nodes&lt;/strong&gt;: Where the actual applications (Pods) run.&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;
  
  
  Core Concept
&lt;/h3&gt;

&lt;p&gt;Kubernetes operates on a Declarative Model — you tell it what you want, and it constantly works to make the current state match that desired state.&lt;/p&gt;
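&lt;p&gt;For example, a Deployment manifest declares only the desired state ("three replicas of this image should always be running"); the control plane continuously reconciles the cluster toward it:&lt;/p&gt;

```yaml
# Desired state: three replicas of this image, always.
# If a Pod dies, the Deployment controller replaces it automatically.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.27
```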

&lt;p&gt;&lt;strong&gt;CONTROL PLANE COMPONENTS&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Kube-API Server — The Front Door of Kubernetes&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Everything in Kubernetes goes through the Kube-API Server — it’s the central communication hub for both internal and external components.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;What It Does&lt;/u&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Exposes the Kubernetes API (via RESTful interface).&lt;/li&gt;
&lt;li&gt;Validates and processes incoming requests.&lt;/li&gt;
&lt;li&gt;Persists the cluster state in etcd.&lt;/li&gt;
&lt;li&gt;Acts as a message bus for all controllers and nodes.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;u&gt;Internal Flow: When You Run kubectl apply -f pod.yaml&lt;br&gt;
&lt;/u&gt;&lt;br&gt;
Let’s trace what really happens:&lt;/p&gt;

&lt;p&gt;1️⃣ Request Sent:&lt;/p&gt;

&lt;p&gt;kubectl sends an HTTPS REST call to the Kube-API Server (default port 6443).&lt;/p&gt;

&lt;p&gt;2️⃣ Authentication:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;API Server checks who you are.&lt;/li&gt;
&lt;li&gt;Common methods:

&lt;ul&gt;
&lt;li&gt;Certificates (client certs signed by cluster CA)&lt;/li&gt;
&lt;li&gt;Bearer tokens (ServiceAccount tokens, OIDC, etc.)&lt;/li&gt;
&lt;li&gt;Static token files (passed to the API server via --token-auth-file)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Example file for basic authentication (not recommended in production):
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/etc/kubernetes/pki/users.csv
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;3️⃣ Authorization:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Once authenticated, the request goes through RBAC (Role-Based Access Control) or ABAC.&lt;/li&gt;
&lt;li&gt;Example check: “Does this user have permission to create Pods in this namespace?”&lt;/li&gt;
&lt;li&gt;RBAC data is stored in Roles and ClusterRoles:
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get clusterrolebinding
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
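&lt;p&gt;For example, a namespaced Role granting the permission checked above might look like this (the namespace and Role name are illustrative):&lt;/p&gt;

```yaml
# Illustrative Role allowing Pod creation in the "dev" namespace;
# a RoleBinding would attach it to a user or ServiceAccount.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: pod-creator
rules:
  - apiGroups: [""]          # "" = core API group
    resources: ["pods"]
    verbs: ["create", "get", "list"]
```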


&lt;p&gt;4️⃣ Admission Control:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Final checkpoint before persistence.&lt;/li&gt;
&lt;li&gt;Admission Controllers can mutate (modify) or validate requests.&lt;/li&gt;
&lt;li&gt;Examples:

&lt;ul&gt;
&lt;li&gt;NamespaceLifecycle: Prevents use of deleted namespaces.&lt;/li&gt;
&lt;li&gt;LimitRanger: Ensures Pod requests don’t exceed limits.&lt;/li&gt;
&lt;li&gt;DefaultStorageClass: Assigns default storage class to PVCs.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
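&lt;p&gt;For instance, LimitRanger enforces a &lt;code&gt;LimitRange&lt;/code&gt; object such as this illustrative one, filling in defaults for containers that omit resource settings:&lt;/p&gt;

```yaml
# Illustrative LimitRange enforced by the LimitRanger admission controller
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
  namespace: dev
spec:
  limits:
    - type: Container
      default:             # applied when a container sets no limits
        cpu: 500m
        memory: 512Mi
      defaultRequest:      # applied when a container sets no requests
        cpu: 100m
        memory: 128Mi
```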

&lt;p&gt;Check which admission plugins are active:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat /etc/kubernetes/manifests/kube-apiserver.yaml | grep admission
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;5️⃣ Validation and Persistence:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;API Server validates your Pod spec, then stores it in etcd.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;6️⃣ Notification to Controllers/Scheduler:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The API Server uses watchers to notify other control plane components (like Scheduler and Controller Manager) that a new Pod exists.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;CKA Scenario:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If the API Server fails, the cluster becomes unmanageable: existing Pods keep running, but you can’t view, create, or modify resources through kubectl.&lt;br&gt;
Check its health (note: componentstatuses is deprecated since v1.19 but still works on many clusters):&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get componentstatuses
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;etcd — The Source of Truth&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;etcd is a distributed key-value store that stores all cluster data — think of it as the “brain” of Kubernetes.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;What It Stores&lt;/u&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Everything!&lt;/li&gt;
&lt;li&gt;Cluster state&lt;/li&gt;
&lt;li&gt;Secrets, ConfigMaps&lt;/li&gt;
&lt;li&gt;Node and Pod objects&lt;/li&gt;
&lt;li&gt;RBAC policies&lt;/li&gt;
&lt;li&gt;CRDs (Custom Resources)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It uses the Raft consensus algorithm to ensure strong consistency across all master nodes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Common Commands&lt;/strong&gt;&lt;br&gt;
Backup etcd before upgrades or maintenance:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ETCDCTL_API=3 etcdctl snapshot save /tmp/etcd_backup.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Restore from snapshot:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ETCDCTL_API=3 etcdctl snapshot restore /tmp/etcd_backup.db \
  --data-dir /var/lib/etcd-from-backup
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;CKA Scenario:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If your cluster becomes unhealthy or components fail to start after an upgrade, restoring etcd from a snapshot is often the safest recovery path.&lt;/p&gt;
&lt;/blockquote&gt;
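
&lt;p&gt;On a kubeadm cluster, the restored data directory then has to be wired into the etcd static Pod manifest — a sketch of the relevant lines (paths assume kubeadm defaults):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# /etc/kubernetes/manifests/etcd.yaml (excerpt)
spec:
  containers:
  - command:
    - etcd
    - --data-dir=/var/lib/etcd-from-backup   # point at the restored dir
    # ...other flags unchanged...
  volumes:
  - hostPath:
      path: /var/lib/etcd-from-backup        # was /var/lib/etcd
      type: DirectoryOrCreate
    name: etcd-data
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Saving the manifest causes the kubelet to restart the etcd static Pod against the restored data.&lt;/p&gt;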

&lt;p&gt;&lt;strong&gt;Kube-Controller-Manager — The Enforcer of Desired State&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This component runs multiple controllers that continuously monitor and reconcile the cluster’s actual state against the desired state (stored in etcd).&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Key Controllers&lt;/u&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Node Controller: Detects node failures.&lt;/li&gt;
&lt;li&gt;Replication Controller / ReplicaSet Controller: Ensures Pod count.&lt;/li&gt;
&lt;li&gt;Endpoint Controller: Connects Services to Pods.&lt;/li&gt;
&lt;li&gt;Job Controller: Manages batch job completion.&lt;/li&gt;
&lt;li&gt;Namespace Controller: Cleans up resources on namespace deletion.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;u&gt;Reconciliation Loop&lt;/u&gt;&lt;/p&gt;

&lt;p&gt;1️⃣ Controller queries the API Server for the desired object state.&lt;br&gt;
2️⃣ It compares it with the current state.&lt;br&gt;
3️⃣ If there’s a mismatch, it acts — e.g., spawns or deletes Pods.&lt;/p&gt;

&lt;p&gt;CKA Tip:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Understanding this loop helps in troubleshooting “CrashLoopBackOff” or “Pending” issues — it’s usually a reconciliation failure somewhere.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Kube-Scheduler — The Intelligent Matchmaker&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Assigns Pods to Nodes.&lt;/p&gt;

&lt;p&gt;Process:&lt;/p&gt;

&lt;p&gt;1️⃣ Watches the API Server for new Pods (with no Node assigned).&lt;br&gt;
2️⃣ Filters nodes based on feasibility (CPU, memory, taints, affinity).&lt;br&gt;
3️⃣ Scores feasible nodes based on priorities (spread, image locality, etc.).&lt;br&gt;
4️⃣ Assigns the highest scoring Node to the Pod.&lt;/p&gt;
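
&lt;p&gt;The filtering and scoring inputs come from the Pod spec itself — for example, resource requests and a nodeSelector (image, label, and values are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  nodeSelector:
    disktype: ssd            # filter: only nodes with this label qualify
  containers:
  - name: app
    image: nginx:1.25
    resources:
      requests:
        cpu: 250m            # filter: node must have this much allocatable CPU
        memory: 128Mi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;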

&lt;p&gt;CKA Tip:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Check why a Pod didn’t schedule:&lt;br&gt;
&lt;/p&gt;


&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl describe pod &amp;lt;pod-name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Look for messages like:&lt;/p&gt;

&lt;p&gt;“0/3 nodes are available: 1 node(s) had insufficient memory, 2 node(s) had taints.”&lt;/p&gt;

&lt;p&gt;*&lt;em&gt;Cloud-Controller-Manager — The Cloud Integrator&lt;br&gt;
*&lt;/em&gt;&lt;br&gt;
Handles tasks that interact with cloud providers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Creates load balancers for Services of type LoadBalancer.&lt;/li&gt;
&lt;li&gt;Attaches persistent volumes (like AWS EBS, GCP PD).&lt;/li&gt;
&lt;li&gt;Manages node lifecycle in cloud environments.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  WORKER NODE COMPONENTS
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Kubelet — The Node’s Caretaker&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The Kubelet runs on every node and ensures containers are up and healthy.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Responsibilities:&lt;/u&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Watches API Server for Pods assigned to its Node.&lt;/li&gt;
&lt;li&gt;Works with the Container Runtime (e.g., containerd or CRI-O).&lt;/li&gt;
&lt;li&gt;Monitors containers using liveness and readiness probes.&lt;/li&gt;
&lt;li&gt;Reports status to the API Server.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;u&gt;Pod Lifecycle Example:&lt;/u&gt;&lt;/p&gt;

&lt;p&gt;1️⃣ Scheduler assigns a Pod → Kubelet receives instruction.&lt;br&gt;
2️⃣ Kubelet pulls images → starts containers.&lt;br&gt;
3️⃣ Runs probes defined in the Pod spec.&lt;br&gt;
4️⃣ Reports health &amp;amp; status.&lt;/p&gt;
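
&lt;p&gt;Step 3 refers to probes like these, defined per container in the Pod spec (paths, port, and timings are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;containers:
- name: app
  image: nginx:1.25
  livenessProbe:             # kubelet restarts the container if this fails
    httpGet:
      path: /healthz
      port: 80
    initialDelaySeconds: 5
    periodSeconds: 10
  readinessProbe:            # Pod is removed from Service endpoints if this fails
    httpGet:
      path: /ready
      port: 80
    periodSeconds: 5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;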

&lt;p&gt;&lt;u&gt;Kubelet Logs Useful for troubleshooting:&lt;/u&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;journalctl -u kubelet -f
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Kube-Proxy — The Network Router&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Manages network rules (via iptables or IPVS) for service-to-pod and pod-to-pod communication.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Functionality:&lt;/u&gt;&lt;/p&gt;

&lt;p&gt;Maintains rules to route traffic from Services to backend Pods.&lt;/p&gt;

&lt;p&gt;Supports TCP, UDP, and SCTP.&lt;/p&gt;

&lt;p&gt;Works closely with the Service abstraction.&lt;/p&gt;

&lt;p&gt;CKA Scenario:&lt;br&gt;
If a Service isn’t reachable, check:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods -o wide
kubectl get svc
sudo iptables -t nat -L -n | grep &amp;lt;service-name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Container Runtime — The Executor&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Responsible for running containers.&lt;/li&gt;
&lt;li&gt;Supported runtimes include:

&lt;ul&gt;
&lt;li&gt;containerd&lt;/li&gt;
&lt;li&gt;CRI-O&lt;/li&gt;
&lt;li&gt;Docker (deprecated)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;The Kubelet uses CRI (Container Runtime Interface) to talk to the runtime.&lt;/p&gt;

&lt;p&gt;🔄 How a Request Flows Through the Cluster&lt;/p&gt;

&lt;p&gt;Let’s visualize a full lifecycle:&lt;/p&gt;

&lt;p&gt;1️⃣ You run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f pod.yam
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;2️⃣ API Server authenticates → authorizes → validates → stores Pod spec in etcd.&lt;br&gt;
3️⃣ Scheduler assigns Pod → updates spec.nodeName.&lt;br&gt;
4️⃣ Kubelet on that Node sees new assignment → pulls image → runs container.&lt;br&gt;
5️⃣ Kube-Proxy updates routing for network access.&lt;br&gt;
6️⃣ Controller Manager ensures Pod stays running and replicas match.&lt;/p&gt;

&lt;p&gt;CKA-Focused Scenarios and Commands&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Far80esrha4pbf2bs0w27.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Far80esrha4pbf2bs0w27.png" alt="Scenarios and commands" width="800" height="331"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Final Words for CKA Candidates&lt;/p&gt;

&lt;p&gt;To truly master Kubernetes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Understand how components communicate.&lt;/li&gt;
&lt;li&gt;Know where configurations live (/etc/kubernetes/manifests/).&lt;/li&gt;
&lt;li&gt;Be comfortable with etcdctl, kubectl describe, and journalctl.&lt;/li&gt;
&lt;li&gt;Always trace the flow of a Pod — from kubectl apply to container runtime.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you grasp this architecture deeply, every CKA troubleshooting or architecture question becomes intuitive.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>containers</category>
      <category>cka</category>
      <category>architecture</category>
    </item>
    <item>
      <title>Day-09/100 - Understanding the Difference Between du and df in Linux</title>
      <dc:creator>Sourav kumar</dc:creator>
      <pubDate>Sun, 02 Mar 2025 14:02:57 +0000</pubDate>
      <link>https://dev.to/sourav3366/day-09100-understanding-the-difference-between-du-and-df-in-linux-1m6k</link>
      <guid>https://dev.to/sourav3366/day-09100-understanding-the-difference-between-du-and-df-in-linux-1m6k</guid>
      <description>&lt;p&gt;When working with Linux, managing disk space is crucial. Two commonly used commands for checking disk usage are du (disk usage) and df (disk free). While they seem similar, they serve different purposes. Let's explore their differences with examples and real-world use cases.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. What is du?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The du (disk usage) command shows how much space files and directories are consuming on the disk. It helps find large files and directories.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo du -sh /*
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Breaking Down the Command&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;du (Disk Usage) → Reports the amount of space used by files and directories.&lt;/li&gt;
&lt;li&gt;-s (Summarize) → Displays only the total size of each directory, instead of listing sizes of all subdirectories and files.&lt;/li&gt;
&lt;li&gt;-h (Human-readable) → Converts sizes into readable formats like KB, MB, or GB instead of raw bytes.&lt;/li&gt;
&lt;li&gt;/* (All Top-Level Directories) → It calculates the size of each top-level directory under /.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Output&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;4.0K    /bin
16K     /boot
0       /dev
12M     /etc
120G    /home
4.0G    /lib
0       /proc
50G     /usr
20G     /var
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Explanation:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Each line represents the disk usage of a top-level directory.&lt;/li&gt;
&lt;li&gt;The size is in human-readable format (-h flag).&lt;/li&gt;
&lt;li&gt;/home, /usr, and /var typically consume the most space.&lt;/li&gt;
&lt;li&gt;/dev and /proc show 0 because they contain virtual files that don’t take up disk space.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;More Example:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;du -sh /home/user/Documents
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Output:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;2.5G    /home/user/Documents
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This means the /home/user/Documents directory is using 2.5 GB of disk space. In the same way, you can find the disk usage of any specific folder by giving its path.&lt;/p&gt;
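
You can try this yourself without touching real data. A small sketch that creates a scratch directory with a file of known size and measures it (paths are temporary and illustrative):

```shell
# Create a scratch directory containing a ~5 MB file, then measure it with du.
scratch=$(mktemp -d)
dd if=/dev/urandom of="$scratch/sample.bin" bs=1024 count=5120 2>/dev/null

du -sh "$scratch"            # human-readable total for the directory
du -sk "$scratch" | cut -f1  # the same total in kilobytes

rm -rf "$scratch"            # clean up
```

The kilobyte figure will be at least 5120 (the file) plus a little directory overhead.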

&lt;p&gt;&lt;strong&gt;Useful Options available with du :&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;-h (human-readable): Shows sizes in KB, MB, GB.&lt;/li&gt;
&lt;li&gt;-s (summary): Shows only the total size of a directory.&lt;/li&gt;
&lt;li&gt;-a (all files): Displays sizes of both files and directories.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Use Cases of du:
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Find the largest directories on your system:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;du -ah / | sort -rh | head -10
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command sorts directories by size and lists the top 10 largest ones.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Analyze space usage of a specific user:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;du -sh /home/username
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This helps system administrators track user disk usage.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Monitor growing log files:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;du -sh /var/log
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Helps find large log files that may need rotation or deletion.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;2. What is df?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The df (disk free) command shows the available and used space on file systems, helping monitor overall disk usage.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example 1: running on a typical Linux machine&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;df -h
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Filesystem      Size  Used  Avail Use% Mounted on
/dev/sda1       100G   75G   25G  75% /
tmpfs           3.9G     0  3.9G   0% /dev/shm
tmpfs           3.9G  1.1M  3.9G   1% /run
tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/sda2       200G  150G   50G  75% /home
/dev/sdb1        50G   10G   40G  20% /mnt/external_drive
tmpfs           796M   16K  796M   1% /run/user/1000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Example 2: running on an AWS EC2 Linux machine&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;df -h
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Filesystem      Size  Used  Avail Use% Mounted on
/dev/root       8.0G  3.2G  4.8G  40% /
devtmpfs        481M     0  481M   0% /dev
tmpfs           495M     0  495M   0% /dev/shm
tmpfs           495M  436K  495M   1% /run
tmpfs           495M     0  495M   0% /sys/fs/cgroup
/dev/loop0       69M   69M     0 100% /snap/core18/2785
/dev/loop1       45M   45M     0 100% /snap/snapd/19457
tmpfs            99M     0   99M   0% /run/user/1000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Difference :&lt;/strong&gt;&lt;br&gt;
When you run df -h on an AWS EC2 instance, you often see an entry like /dev/root instead of /dev/sda1. This happens because AWS uses virtualized storage, and the root filesystem is typically mounted from an Elastic Block Store (EBS) volume.&lt;/p&gt;

&lt;p&gt;Useful Options available with df:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;-h (human-readable): Displays sizes in KB, MB, GB.&lt;/li&gt;
&lt;li&gt;-T (filesystem type): Shows filesystem types like ext4, xfs, etc.&lt;/li&gt;
&lt;li&gt;-i (inodes): Displays inode usage instead of disk usage.&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;
  
  
  Use Cases of df:
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Check available space on all mounted filesystems:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;df -hT
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This helps in monitoring different storage partitions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Find which partition is nearly full:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;df -h | grep -E 'Filesystem|%'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Useful for proactively cleaning up space before a system runs out of storage.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Check inode usage (for small file-heavy servers):&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;df -i
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If inodes are exhausted, no new files can be created even if space is available.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;3. Key Differences Between du and df&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ebiomcgvdl818l2vz9d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ebiomcgvdl818l2vz9d.png" alt="Image description" width="800" height="227"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3zy73mtpfdm47re4rob8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3zy73mtpfdm47re4rob8.png" alt="Image description" width="800" height="172"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Why Do du and df Show Different Values?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Sometimes, du and df may show different results because:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Deleted but open files: If a process is using a deleted file, df still counts its space.&lt;/li&gt;
&lt;li&gt;Different mount points: df reports disk-wide usage, while du only shows a selected directory.&lt;/li&gt;
&lt;li&gt;Files stored in different places: Disk compression, symlinks, and snapshots can cause variations.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Example of Difference:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo "Hello, world!" &amp;gt; testfile.txt
rm testfile.txt

du -sh .
df -h .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Even though the file is deleted, df might still show the space as used until the process holding it is closed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. When to Use du and df?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use du to find large files and analyze specific directory usage.&lt;/li&gt;
&lt;li&gt;Use df to check overall disk space and monitor filesystem health.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Conclusion&lt;/p&gt;

&lt;p&gt;Understanding du and df helps efficiently manage disk space. Use du for detailed file usage and df for a high-level disk overview. Next time you see "No space left on device," use these commands to diagnose the issue!&lt;/p&gt;

&lt;p&gt;I hope this blog makes it easier to understand du and df. Let me know if you want more examples or explanations!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Day- 08 /100 - Deploying Multi-Environment Infrastructure with Terraform</title>
      <dc:creator>Sourav kumar</dc:creator>
      <pubDate>Wed, 26 Feb 2025 14:05:35 +0000</pubDate>
      <link>https://dev.to/sourav3366/day-08-100-deploying-multi-environment-infrastructure-with-terraform-46ai</link>
      <guid>https://dev.to/sourav3366/day-08-100-deploying-multi-environment-infrastructure-with-terraform-46ai</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In real-world DevOps practices, managing multiple environments (development, staging, and production) efficiently is crucial. Terraform, an Infrastructure as Code (IaC) tool, allows us to automate cloud infrastructure provisioning. In this blog, we will build a multi-environment setup (Dev, Stage, and Prod) using Terraform modules, while also implementing best practices like state management with S3 and DynamoDB.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Project Overview&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We will create three environments (Dev, Stage, and Prod) with different configurations for:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;EC2 Instances (varying counts per environment)&lt;/li&gt;
&lt;li&gt;S3 Buckets (varying counts per environment)&lt;/li&gt;
&lt;li&gt;VPC with Public &amp;amp; Private Subnets&lt;/li&gt;
&lt;li&gt;Security Groups&lt;/li&gt;
&lt;li&gt;RDS Database&lt;/li&gt;
&lt;li&gt;DynamoDB for state locking&lt;/li&gt;
&lt;li&gt;S3 for Terraform state storage&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;📁 Folder Structure&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform-project/
│── modules/
│   ├── ec2/
│   │   ├── main.tf
│   │   ├── variables.tf
│   │   ├── outputs.tf
│   ├── s3/
│   │   ├── main.tf
│   │   ├── variables.tf
│   │   ├── outputs.tf
│   ├── vpc/
│   │   ├── main.tf
│   │   ├── variables.tf
│   │   ├── outputs.tf
│   ├── security-groups/
│   │   ├── main.tf
│   │   ├── variables.tf
│   │   ├── outputs.tf
│   ├── rds/
│   │   ├── main.tf
│   │   ├── variables.tf
│   │   ├── outputs.tf
│   ├── dynamodb/
│   │   ├── main.tf
│   │   ├── variables.tf
│   │   ├── outputs.tf
│── environments/
│   ├── dev/
│   │   ├── main.tf
│   │   ├── backend.tf
│   │   ├── variables.tf
│   │   ├── terraform.tfvars
│   ├── stage/
│   │   ├── main.tf
│   │   ├── backend.tf
│   │   ├── variables.tf
│   │   ├── terraform.tfvars
│   ├── prod/
│   │   ├── main.tf
│   │   ├── backend.tf
│   │   ├── variables.tf
│   │   ├── terraform.tfvars
│── global/
│   ├── s3-backend/
│   │   ├── main.tf
│   │   ├── variables.tf
│   │   ├── outputs.tf
│── provider.tf
│── variables.tf
│── terraform.tfvars
│── outputs.tf
│── README.md
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;1️⃣ Configure Terraform Backend (State Management)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;File: global/s3-backend/main.tf&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_s3_bucket" "terraform_state" {
  bucket = "mycompany-terraform-state"
  acl    = "private"
  versioning {
    enabled = true
  }
  lifecycle {
    prevent_destroy = true
  }
}

resource "aws_dynamodb_table" "terraform_lock" {
  name         = "terraform-lock"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
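
&lt;p&gt;Note: with AWS provider v4 and later, the inline acl and versioning arguments on aws_s3_bucket are deprecated. The same setup can be sketched with separate resources:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_s3_bucket" "terraform_state" {
  bucket = "mycompany-terraform-state"

  lifecycle {
    prevent_destroy = true
  }
}

resource "aws_s3_bucket_versioning" "terraform_state" {
  bucket = aws_s3_bucket.terraform_state.id

  versioning_configuration {
    status = "Enabled"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;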



&lt;p&gt;&lt;strong&gt;2️⃣ Backend Configuration for Each Environment&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;File: environments/dev/backend.tf&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  backend "s3" {
    bucket         = "mycompany-terraform-state"
    key            = "dev/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-lock"
    encrypt        = true
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;3️⃣ VPC Module&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;File: modules/vpc/main.tf&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_vpc" "main" {
  cidr_block = var.vpc_cidr
}

resource "aws_subnet" "public" {
  count = length(var.public_subnets)

  vpc_id                  = aws_vpc.main.id
  cidr_block              = var.public_subnets[count.index]
  map_public_ip_on_launch = true
}

resource "aws_subnet" "private" {
  count = length(var.private_subnets)

  vpc_id     = aws_vpc.main.id
  cidr_block = var.private_subnets[count.index]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
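
&lt;p&gt;The inputs referenced above would be declared in modules/vpc/variables.tf — a sketch:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "vpc_cidr" {
  description = "CIDR block for the VPC"
  type        = string
}

variable "public_subnets" {
  description = "CIDR blocks for public subnets"
  type        = list(string)
}

variable "private_subnets" {
  description = "CIDR blocks for private subnets"
  type        = list(string)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;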



&lt;p&gt;&lt;strong&gt;4️⃣ EC2 Module&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;File: modules/ec2/main.tf&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_instance" "app" {
  count = var.instance_count

  ami           = var.ami_id
  instance_type = var.instance_type
  subnet_id     = var.subnet_id

  tags = {
    Name = "EC2-${var.environment}"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;5️⃣ RDS Module&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;File: modules/rds/main.tf&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_db_instance" "database" {
  allocated_storage    = var.db_storage
  engine              = "mysql"
  instance_class      = var.db_instance_class
  db_name             = var.db_name
  username           = var.db_username
  password           = var.db_password
  publicly_accessible = false
  skip_final_snapshot = true
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
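
&lt;p&gt;Since the password flows in as a variable, it is worth marking it sensitive in modules/rds/variables.tf so it never appears in plan output — a sketch:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "db_password" {
  description = "Master password for the RDS instance"
  type        = string
  sensitive   = true
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;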



&lt;p&gt;&lt;strong&gt;6️⃣ Dev Environment Implementation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Each environment (dev, stage, prod) will have its own terraform.tfvars file inside its respective folder.&lt;br&gt;
File: environments/dev/main.tf&lt;/p&gt;

&lt;p&gt;environments/dev/terraform.tfvars&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# VPC
vpc_cidr        = "10.0.0.0/16"
public_subnets  = ["10.0.1.0/24"]
private_subnets = ["10.0.2.0/24"]

# EC2
instance_count  = 2
ami_id          = "ami-12345678"
instance_type   = "t2.micro"

# RDS
db_storage         = 20
db_instance_class  = "db.t3.micro"
db_name            = "devdb"
db_username        = "admin"
db_password        = "password123"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;environments/stage/terraform.tfvars&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# VPC
vpc_cidr        = "10.1.0.0/16"
public_subnets  = ["10.1.1.0/24"]
private_subnets = ["10.1.2.0/24"]

# EC2
instance_count  = 3
ami_id          = "ami-87654321"
instance_type   = "t3.medium"

# RDS
db_storage         = 50
db_instance_class  = "db.t3.medium"
db_name            = "stagedb"
db_username        = "admin"
db_password        = "securepassword456"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;environments/prod/terraform.tfvars&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# VPC
vpc_cidr        = "10.2.0.0/16"
public_subnets  = ["10.2.1.0/24"]
private_subnets = ["10.2.2.0/24"]

# EC2
instance_count  = 5
ami_id          = "ami-11223344"
instance_type   = "t3.large"

# RDS
db_storage         = 100
db_instance_class  = "db.t3.large"
db_name            = "proddb"
db_username        = "admin"
db_password        = "supersecurepassword789"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, environments/dev/main.tf (or stage/main.tf and prod/main.tf) will only reference variables, making it more reusable.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "vpc" {
  source          = "../../modules/vpc"
  vpc_cidr        = var.vpc_cidr
  public_subnets  = var.public_subnets
  private_subnets = var.private_subnets
}

module "ec2" {
  source         = "../../modules/ec2"
  instance_count = var.instance_count
  ami_id         = var.ami_id
  instance_type  = var.instance_type
  subnet_id      = module.vpc.public_subnets[0]
  environment    = "dev"
}

module "rds" {
  source           = "../../modules/rds"
  db_storage       = var.db_storage
  db_instance_class = var.db_instance_class
  db_name         = var.db_name
  db_username     = var.db_username
  db_password     = var.db_password
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;7️⃣ Apply Terraform&lt;br&gt;
Step 1: Initialize Backend&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Step 2: Plan&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform plan -var-file="terraform.tfvars"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Step 3: Apply&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform apply -var-file="terraform.tfvars" -auto-approve
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Steps to Run the Terraform Project in All 3 Environments (Dev, Stage, Prod)
&lt;/h2&gt;

&lt;p&gt;Here we will understand how to set up, initialize, and deploy the Terraform project for each environment: Dev, Stage, and Prod.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1️⃣ Prerequisites&lt;/strong&gt;&lt;br&gt;
Before running Terraform, ensure you have the following installed:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Terraform CLI (≥ v1.0) → Install from Terraform Download&lt;/li&gt;
&lt;li&gt;AWS CLI (≥ v2.0) → Install from AWS CLI&lt;/li&gt;
&lt;li&gt;AWS Credentials Configured (~/.aws/credentials)
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws configure
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;S3 Bucket for Remote Backend (Created in global/s3-backend/)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2️⃣ Initialize Backend (S3 &amp;amp; DynamoDB)&lt;/strong&gt;&lt;br&gt;
Since we are using a remote backend for Terraform state, we first create the S3 bucket and DynamoDB table.&lt;/p&gt;

&lt;p&gt;🌍 Navigate to global/s3-backend/ and Apply&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd global/s3-backend/
terraform init
terraform apply -auto-approve
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This creates an S3 bucket for storing Terraform state and a DynamoDB table for state locking.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3️⃣ Deploy a Specific Environment (Dev, Stage, Prod)&lt;/strong&gt;&lt;br&gt;
Each environment (dev, stage, prod) has its own directory. You must navigate to the specific environment before running Terraform commands.&lt;/p&gt;

&lt;p&gt;▶️ Deploy Dev Environment&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd environments/dev/
terraform init
terraform plan -var-file="terraform.tfvars"
terraform apply -var-file="terraform.tfvars" -auto-approve
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;✅ This provisions the Dev environment.&lt;/p&gt;

&lt;p&gt;▶️ Deploy Stage Environment&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd environments/stage/
terraform init
terraform plan -var-file="terraform.tfvars"
terraform apply -var-file="terraform.tfvars" -auto-approve
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;✅ This provisions the Stage environment.&lt;/p&gt;

&lt;p&gt;▶️ Deploy Prod Environment&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd environments/prod/
terraform init
terraform plan -var-file="terraform.tfvars"
terraform apply -var-file="terraform.tfvars" -auto-approve
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;✅ This provisions the Prod environment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4️⃣ Verify Deployments&lt;/strong&gt;&lt;br&gt;
After running Terraform, verify that the resources were actually created, either through the AWS Console or with AWS CLI commands.&lt;/p&gt;
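&lt;p&gt;For example, a few spot checks with the AWS CLI (these assume configured credentials; the exact resources depend on what your modules create):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws s3 ls                                      # the state bucket should be listed
aws dynamodb list-tables                       # the lock table should appear
aws ec2 describe-vpcs --query "Vpcs[].VpcId"   # any VPCs the environment created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;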

&lt;p&gt;&lt;strong&gt;5️⃣ Destroy Resources (When Needed)&lt;/strong&gt;&lt;br&gt;
To destroy an environment, navigate to its directory and run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform destroy -var-file="terraform.tfvars" -auto-approve
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd environments/dev/
terraform destroy -var-file="terraform.tfvars" -auto-approve
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;🔴 ⚠️ WARNING: This permanently deletes all resources in the environment!&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1plgb7tota8s071l7oao.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1plgb7tota8s071l7oao.png" alt="Summary" width="800" height="353"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Day-07/100 - User Management in Linux</title>
      <dc:creator>Sourav kumar</dc:creator>
      <pubDate>Mon, 24 Feb 2025 18:02:28 +0000</pubDate>
      <link>https://dev.to/sourav3366/user-management-in-linux-4ga6</link>
      <guid>https://dev.to/sourav3366/user-management-in-linux-4ga6</guid>
      <description>&lt;p&gt;User management is a crucial aspect of Linux administration, allowing system owners to control access, permissions, and security settings for different users. In this guide, we’ll cover user management fundamentals, starting with understanding ‘sudo’, followed by essential system commands, and finally diving into user management commands.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Understanding ‘sudo’&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;sudo — short for Superuser Do is a command in Linux that allows a permitted user to execute a command as the superuser (root) or another specified user. It is commonly used to run administrative tasks without switching to the root user entirely.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How sudo Works&lt;/strong&gt;&lt;br&gt;
When you use sudo, the system temporarily grants elevated privileges for that specific command. The user must be in the sudoers file (/etc/sudoers) to execute commands with sudo. By default, sudo asks for the user’s password before executing the command.&lt;/p&gt;

&lt;p&gt;Example command:-&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Key Features of sudo&lt;/strong&gt;&lt;br&gt;
Security &amp;amp; Control – Users don’t need to log in as root, reducing security risks.&lt;br&gt;
Logging &amp;amp; Auditing – Commands run with sudo are logged in /var/log/auth.log.&lt;br&gt;
Time-Limited Authentication – Once authenticated, sudo allows repeated use for a short period (default: 5 minutes).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Add a user to the sudoers list:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo usermod -aG sudo username
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;usermod → Modifies the user account.&lt;br&gt;
-aG → A combination of two options:&lt;br&gt;
-a (Append): Adds the user to a group without removing existing group memberships.&lt;br&gt;
-G (Groups): Specifies the groups to which the user should be added, here it is added to the sudo group.&lt;/p&gt;

&lt;p&gt;We will learn about ‘groups’ later in this blog.&lt;/p&gt;

&lt;p&gt;Running &lt;code&gt;shutdown&lt;/code&gt; as a normal user won't work; it requires root privileges or sudo:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;shutdown
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the user is a sudoer, use the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo shutdown
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To restart the system:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo reboot
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Essential System Commands for User Information&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Before managing users, it’s helpful to gather system information using these commands:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;who&lt;/strong&gt; — shows a list of logged-in users, their terminals, and login times.&lt;br&gt;
&lt;strong&gt;whoami&lt;/strong&gt; — Displays the current logged-in user's username.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg7d957tnl0f8w5i3yroq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg7d957tnl0f8w5i3yroq.png" alt="Image description" width="602" height="123"&gt;&lt;/a&gt;&lt;br&gt;
There is only one user logged in right now, so &lt;strong&gt;who&lt;/strong&gt; shows a single entry; otherwise it lists all logged-in users.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;id&lt;/strong&gt; — displays the user ID (UID), group ID (GID), and group memberships of the current user.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fis1sixo56v58govpzvza.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fis1sixo56v58govpzvza.png" alt="Image description" width="800" height="55"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;check for another user —  &lt;code&gt;id username&lt;/code&gt;&lt;br&gt;
show only UID —  &lt;code&gt;id -u&lt;/code&gt;&lt;br&gt;
show only GID —  &lt;code&gt;id -g&lt;/code&gt;&lt;br&gt;
show only groups —  &lt;code&gt;id -G&lt;/code&gt;&lt;/p&gt;
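&lt;p&gt;These commands are handy in scripts as well; a small sketch (the variable names here are just for illustration):&lt;br&gt;
&lt;/p&gt;

```shell
# Capture identity details in variables for use in a script
uid=$(id -u)
gid=$(id -g)
name=$(id -un)
echo "User ${name} has UID ${uid} and primary GID ${gid}"

# UID 0 is always root; regular users usually start at 1000
if [ "${uid}" -eq 0 ]; then
    echo "Running as root"
else
    echo "Running as a regular user"
fi
```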

&lt;p&gt;&lt;strong&gt;User Management Commands&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;useradd&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# add a new user
sudo useradd -m user1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;-m → Creates a home directory (/home/username) for the user.&lt;/p&gt;

&lt;p&gt;What Happens without -m?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A new user is created.&lt;/li&gt;
&lt;li&gt;No home directory is created (unlike with -m).&lt;/li&gt;
&lt;li&gt;The user will not have a default working directory under /home/username.&lt;/li&gt;
&lt;li&gt;The user may not have a personal environment setup (e.g., .bashrc, .profile)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To see all users, check the /etc/passwd file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat /etc/passwd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The newly created users (user1 and user2) are visible at the end of the file.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgr38d3fiytv8i36ytobo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgr38d3fiytv8i36ytobo.png" alt="Image description" width="800" height="785"&gt;&lt;/a&gt;&lt;/p&gt;
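&lt;p&gt;Each line of /etc/passwd holds seven colon-separated fields (name, password placeholder, UID, GID, comment, home directory, shell). A quick way to list just usernames and shells, sketched with awk:&lt;br&gt;
&lt;/p&gt;

```shell
# Print username and login shell (fields 1 and 7) for the last few accounts
awk -F: '{ print $1 " uses " $7 }' /etc/passwd | tail -n 3
```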

&lt;p&gt;&lt;strong&gt;passwd&lt;/strong&gt; – set password for user:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo passwd user1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;su&lt;/strong&gt; – switch user&lt;/p&gt;

&lt;p&gt;It will prompt for the password and then switch to that user:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;su user1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuu60h3b397tymhw0r6zz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuu60h3b397tymhw0r6zz.png" alt="Image description" width="617" height="231"&gt;&lt;/a&gt;&lt;br&gt;
Notice the username changed from ‘ubuntu’ to ‘user1’.&lt;br&gt;
Use &lt;strong&gt;exit&lt;/strong&gt; to return to the primary user.&lt;/p&gt;

&lt;p&gt;There are two ways to switch user: &lt;br&gt;
&lt;code&gt;su username&lt;/code&gt; vs &lt;code&gt;su - username&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;su john
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What Happens?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Switches to john, but keeps the current shell environment (variables, paths, etc.).&lt;/li&gt;
&lt;li&gt;Does not load john's profile settings (~/.bashrc, ~/.profile).&lt;/li&gt;
&lt;li&gt;Current directory remains unchanged.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;su - john
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What Happens?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Completely switches to john's environment, just like a fresh login.&lt;/li&gt;
&lt;li&gt;Loads john's shell profile (~/.bashrc, ~/.profile).&lt;/li&gt;
&lt;li&gt;Current directory changes to john's home (/home/john).&lt;/li&gt;
&lt;li&gt;Sets PATH, HOME, and other variables specific to john.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;userdel&lt;/strong&gt; — delete user&lt;/p&gt;

&lt;p&gt;Delete ‘user1’:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo userdel user1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Delete ‘user1’ and its home directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo userdel -r user1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Force delete ‘user1’ even if the user is logged in:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo userdel -f user1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you deleted the user with the first command and the home directory was not removed, delete it manually:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo rm -rf /home/user1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;rm → The remove (delete) command in Linux.&lt;br&gt;
-r → Stands for recursive, meaning it deletes directories and all their contents.&lt;br&gt;
-f → Stands for force, meaning it bypasses confirmation prompts and deletes files without asking.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;groupadd&lt;/strong&gt; – command is used to create a new group in Linux&lt;/p&gt;

&lt;p&gt;Create a group named devops:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo groupadd devops
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create a group with a specific GID (Group ID):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo groupadd -g 5001 testers
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run this command to see all the groups:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat /etc/group
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqtijq7xm07206vjae8q5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqtijq7xm07206vjae8q5.png" alt="Image description" width="291" height="276"&gt;&lt;/a&gt;&lt;br&gt;
Displayed output is cropped.&lt;/p&gt;

&lt;p&gt;There is also a group for each user: when we create a user, a group with the same name is created automatically.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;usermod&lt;/strong&gt; → Recommended for adding a user to multiple groups&lt;/p&gt;

&lt;p&gt;Adding 'user1' to 'devops' group:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo usermod -aG devops user1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Adding 'john' to multiple groups at the same time:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo usermod -aG developers,testers,QA john
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add a user to sudo group:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo usermod -aG sudo username
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;p&gt;Change the default/primary group using &lt;strong&gt;-g&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo usermod -g QA john
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Assign primary and secondary groups in one line:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo usermod -g developers -aG testers,QA john
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Options in &lt;strong&gt;usermod&lt;/strong&gt; command:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ww5rlrolqfy75apkvdv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ww5rlrolqfy75apkvdv.png" alt="Image description" width="800" height="224"&gt;&lt;/a&gt;&lt;br&gt;
-d /new/home/directory → Changes the user's home directory to a new location.&lt;br&gt;
-m → Moves all existing files from the old home directory to the new one.&lt;br&gt;
Example: Change john's home directory to /home/devuser and move files&lt;br&gt;
&lt;code&gt;sudo usermod -d /home/devuser -m john&lt;/code&gt;  &lt;/p&gt;

&lt;p&gt;The -p option is not recommended for setting a user's password; use &lt;strong&gt;passwd&lt;/strong&gt; instead:&lt;br&gt;
&lt;code&gt;sudo passwd user1&lt;/code&gt; → it will prompt you to set the password&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;gpasswd&lt;/strong&gt; → Recommended for Single Group Changes&lt;/p&gt;

&lt;p&gt;Add user1 to devops group:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo gpasswd -a user1 devops
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;-a → appends user1 to the devops group without removing its other memberships&lt;/p&gt;

&lt;p&gt;Add multiple users to testers group:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo gpasswd -M user1,user2 testers
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa2vzf07n59o6cluiceoa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa2vzf07n59o6cluiceoa.png" alt="Image description" width="533" height="406"&gt;&lt;/a&gt;&lt;br&gt;
See the users in the devops and testers groups.&lt;/p&gt;

&lt;p&gt;To check the groups in which user1 is present:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;groups user1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;groupdel&lt;/strong&gt; – delete a group:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# delete the testers group
sudo groupdel testers
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2qvi1cz3g8drowolksb7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2qvi1cz3g8drowolksb7.png" alt="Image description" width="431" height="413"&gt;&lt;/a&gt;&lt;br&gt;
The testers group no longer shows up.&lt;/p&gt;

&lt;p&gt;It deletes only the group, not the users inside it; you can see that user1 and user2 are still there.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
Proper user and group management in Linux is vital for maintaining security and control over system access. By understanding and utilizing these commands, administrators can efficiently manage users and permissions, ensuring smooth and secure operations. In the next blog, we will cover user permissions and file access management to further enhance security and control.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Day-06/100 - Automating Backups with Rotation using ZIP method | Comparison with rsync method</title>
      <dc:creator>Sourav kumar</dc:creator>
      <pubDate>Thu, 20 Feb 2025 17:55:17 +0000</pubDate>
      <link>https://dev.to/sourav3366/day-06100-automating-backups-with-rotation-using-zip-method-comparision-with-rsync-method-5d2f</link>
      <guid>https://dev.to/sourav3366/day-06100-automating-backups-with-rotation-using-zip-method-comparision-with-rsync-method-5d2f</guid>
      <description>&lt;p&gt;Backups are essential for protecting data from accidental deletion, corruption, or hardware failures. However, managing backups efficiently requires automating both the backup creation process and the rotation (deletion of old backups). In this blog, we'll explore a simple Bash script that performs these tasks seamlessly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Overview of the Backup Script&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This Bash script automates the process of creating compressed backups of a specified directory and retains only the latest five backups. Older backups are automatically deleted to save storage space.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Features:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Creates a zip backup of a specified directory.&lt;/li&gt;
&lt;li&gt;Uses a timestamp to uniquely name each backup.&lt;/li&gt;
&lt;li&gt;Retains only the latest five backups and removes older ones.&lt;/li&gt;
&lt;li&gt;Runs efficiently with minimal user intervention.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Script Breakdown&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Here's the script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash

&amp;lt;&amp;lt; readme
This is a script for backup with 5-day rotation

Usage:
./backup_and_rotation.sh &amp;lt;path to your source&amp;gt; &amp;lt;path to backup folder&amp;gt;
readme

function display_usage {
    echo "Usage: ./backup_and_rotation.sh &amp;lt;path to your source&amp;gt; &amp;lt;path to backup folder&amp;gt;"
}

if [ $# -eq 0 ]; then
    display_usage
    exit 1
fi

source_dir=$1
timestamp=$(date '+%Y-%m-%d-%H-%M-%S')
backup_dir=$2

function create_backup() {
    zip -r "${backup_dir}/backup_${timestamp}.zip" "${source_dir}" &amp;gt; /dev/null

    if [ $? -eq 0 ]; then
        echo "Backup generated successfully for ${timestamp}"
    else
        echo "Backup failed!" &amp;gt;&amp;amp;2
    fi
}

function perform_rotation() {
    # Using glob instead of `ls` to avoid errors if no backups exist
    backups=("${backup_dir}"/backup_*.zip)
    backups=($(ls -t "${backups[@]}" 2&amp;gt;/dev/null))  # Sort backups by timestamp

    echo "Existing backups: ${backups[@]}"

    if [ "${#backups[@]}" -gt 5 ]; then
        echo "Performing rotation for 5 days"

        backups_to_remove=("${backups[@]:5}")  # Keep latest 5 backups
        echo "Removing backups: ${backups_to_remove[@]}"

        for backup in "${backups_to_remove[@]}"; do
            rm -f "${backup}"
        done
    fi
}

create_backup
perform_rotation
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Understanding the Script&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Usage Instructions&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The script includes a usage guide in a commented block to inform users how to execute it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Usage: ./backup_and_rotation.sh &amp;lt;path to your source&amp;gt; &amp;lt;path to backup folder&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This ensures that users provide the required arguments: the source directory and the backup folder.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Checking for User Input&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Before proceeding, the script verifies that the required arguments are provided. If not, it displays a usage message and exits.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;if [ $# -eq 0 ]; then
    display_usage
    exit 1
fi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;3. Backup Creation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The create_backup function compresses the specified directory into a .zip file, using a timestamp in the filename for uniqueness.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;zip -r "${backup_dir}/backup_${timestamp}.zip" "${source_dir}" &amp;gt; /dev/null
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This ensures backups do not overwrite each other and makes it easier to track backup dates.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Backup Rotation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The script retains only the latest five backups. It sorts the existing backups by timestamp and removes older ones.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;if [ "${#backups[@]}" -gt 5 ]; then
    backups_to_remove=("${backups[@]:5}")
    for backup in "${backups_to_remove[@]}"; do
        rm -f "${backup}"
    done
fi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This ensures that old backups do not consume unnecessary storage space.&lt;/p&gt;
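&lt;p&gt;You can sanity-check the rotation logic without creating real zip archives; this sketch uses empty placeholder files with staggered timestamps (all directory and file names here are illustrative):&lt;br&gt;
&lt;/p&gt;

```shell
# Demo: create 7 fake backups, rotate, and expect only the 5 newest to survive
demo_dir=$(mktemp -d)
for i in 01 02 03 04 05 06 07; do
    # give each file a distinct, increasing modification time
    touch -d "2025-02-${i} 12:00" "${demo_dir}/backup_${i}.zip"
done

backups=($(ls -t "${demo_dir}"/backup_*.zip))   # newest first
if [ "${#backups[@]}" -gt 5 ]; then
    for old in "${backups[@]:5}"; do            # everything beyond the 5 newest
        rm -f "${old}"
    done
fi
ls "${demo_dir}"   # backup_03.zip through backup_07.zip remain
```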

&lt;p&gt;&lt;strong&gt;Running the Script&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;To execute the script, use:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;chmod +x backup_and_rotation.sh
./backup_and_rotation.sh /path/to/source /path/to/backup
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;For example:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
./backup_and_rotation.sh /home/user/Documents /home/user/Backups
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;&lt;strong&gt;Comparison&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffax04a0k3r4p7oknzb3f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffax04a0k3r4p7oknzb3f.png" alt="Image description" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;When to Use Zip-based Backup&lt;/strong&gt;&lt;br&gt;
Best for: Situations where you need complete, standalone backups that can be easily transferred, archived, or restored as a whole.&lt;/p&gt;

&lt;p&gt;✅ Use Cases:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Periodic Full Backups&lt;/strong&gt; – If you need a full copy of the data every time (e.g., weekly full backups).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cloud Storage or Remote Archival&lt;/strong&gt;– When backing up data to cloud storage (Google Drive, S3, etc.) where a single compressed file is easier to manage.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Disaster Recovery&lt;/strong&gt;– If you need to store long-term snapshots of your system that can be restored independently.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;File Versioning and Portability&lt;/strong&gt; – When you need to download or send backups via email, FTP, or portable storage.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Backup Before System Updates&lt;/strong&gt; – Creating a complete archive before running major software updates or upgrades.&lt;/p&gt;

&lt;p&gt;⚠ &lt;strong&gt;Downside&lt;/strong&gt;: Requires more storage space and can be slower due to compression overhead.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;When to Use Rsync-based Backup&lt;/strong&gt;&lt;br&gt;
Best for: Situations where efficiency, speed, and incremental backups are needed.&lt;/p&gt;

&lt;p&gt;✅ Use Cases:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Daily or Frequent Backups&lt;/strong&gt; – Since rsync only copies changed files, it's perfect for daily or hourly backups.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Server or Web App Backups&lt;/strong&gt; – For Linux servers, web applications, and databases, where only the latest changes matter.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Large File Systems&lt;/strong&gt; – If you're backing up large directories, rsync saves time and disk space.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Network or Remote Backups&lt;/strong&gt; – When syncing files between machines (e.g., from a local machine to a remote server).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Local System Synchronization&lt;/strong&gt; – If you need to keep two directories synchronized without unnecessary duplication.&lt;/p&gt;

&lt;p&gt;⚠ &lt;strong&gt;Downside&lt;/strong&gt;: Does not create a single compressed file; you need to manually zip it later if needed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ui1prabwfrwr1xaoswx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ui1prabwfrwr1xaoswx.png" alt="Image description" width="800" height="358"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Day 05/100 - Bash special commands (part-1)</title>
      <dc:creator>Sourav kumar</dc:creator>
      <pubDate>Thu, 20 Feb 2025 05:41:36 +0000</pubDate>
      <link>https://dev.to/sourav3366/day-05100-bash-special-features-2hl5</link>
      <guid>https://dev.to/sourav3366/day-05100-bash-special-features-2hl5</guid>
      <description>&lt;p&gt;Let's create a Bash script that demonstrates these special features step by step. The script will explain and showcase:&lt;/p&gt;

&lt;p&gt;✅ /dev/null (The "black hole" of Linux)&lt;br&gt;
✅ File descriptors: 0 (stdin), 1 (stdout), 2 (stderr)&lt;br&gt;
✅ Redirections (&amp;gt;, &amp;gt;&amp;gt;, 2&amp;gt;, 2&amp;gt;&amp;amp;1, &amp;amp;&amp;gt; and |)&lt;br&gt;
✅ command -v&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Script: bash_special_features.sh&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash

&amp;lt;&amp;lt; 'Information'

# Bash Special Features Demo
# Author: Sourav Kumar
# Date: $(date +"%Y-%m-%d")
# Description: Demonstrates advanced Bash concepts

Information

# Set strict mode for better error handling
set -euo pipefail  # Prevents script from continuing on errors

LOG_FILE="script.log"  # Log file for debugging

# -------------------------------
# Logging Function
# -------------------------------
log() {
    local timestamp
    timestamp=$(date +"%Y-%m-%d %H:%M:%S")
    echo "[$timestamp] $*" | tee -a "$LOG_FILE"
}

# -------------------------------
# Function to Check Command Existence
# -------------------------------
check_command() {
    if command -v "$1" &amp;gt; /dev/null 2&amp;gt;&amp;amp;1; then
        log "✅ $1 is installed"
    else
        log "❌ $1 is NOT installed. Please install it."
        return 1  # Exit with error status
    fi
}

# -------------------------------
# Start of Script Execution
# -------------------------------
log "🚀 Starting Bash Special Features Demo..."
log "🔹 File Descriptors, Redirections, and /dev/null"
echo "---------------------------------------------------------------"

# 1️⃣ Understanding File Descriptors
log "   - 0 = Standard Input (stdin)"
log "   - 1 = Standard Output (stdout)"
log "   - 2 = Standard Error (stderr)"

# 2️⃣ Demonstrating stdout (1) and stderr (2)
log "Running: ls /home &amp;gt; stdout.txt 2&amp;gt; stderr.txt"
ls /home &amp;gt; stdout.txt 2&amp;gt; stderr.txt
log "   - Standard Output saved in stdout.txt"
log "   - Standard Error saved in stderr.txt"

# 3️⃣ Redirecting both stdout and stderr to the same file
log "Running: ls /nonexistent_path &amp;gt; output.txt 2&amp;gt;&amp;amp;1"
ls /nonexistent_path &amp;gt; output.txt 2&amp;gt;&amp;amp;1 || log "⚠️ Error logged in output.txt"

# 4️⃣ Redirecting both stdout and stderr using &amp;amp;&amp;gt;
log "Running: ls /root &amp;amp;&amp;gt; combined_output.txt"
ls /root &amp;amp;&amp;gt; combined_output.txt || log "⚠️ Error logged in combined_output.txt"

# 5️⃣ Suppressing output using /dev/null
log "Running: ls / &amp;gt; /dev/null"
ls / &amp;gt; /dev/null
log "   - Standard Output is discarded"

log "Running: ls /nonexistent_path 2&amp;gt; /dev/null"
ls /nonexistent_path 2&amp;gt; /dev/null || log "⚠️ Error suppressed"

# 6️⃣ Using /dev/null to test command existence
check_command rsync

# 7️⃣ Using Pipe (|) to send stdout to another command
log "Running: ls / | grep etc"
ls / | grep etc || log "⚠️ 'etc' not found in /"

# -------------------------------
log "✅ Script Execution Completed ✅"
log "Check the output files: stdout.txt, stderr.txt, output.txt, combined_output.txt, and $LOG_FILE"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What does /dev/null do?&lt;br&gt;
/dev/null is a special file in Linux that acts as a "black hole."&lt;br&gt;
Anything sent to /dev/null is discarded (i.e., it disappears and does not show up in the terminal).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example 1: Redirecting Output to /dev/null&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ls &amp;gt; /dev/null
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The ls command normally lists files in the current directory.&lt;br&gt;
"&amp;gt; /dev/null" sends the output (file list) into the "black hole," so nothing appears on the screen.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example 2: Suppressing Command Output&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo "Hello World" &amp;gt; /dev/null
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Normally, echo "Hello World" prints "Hello World" to the terminal.&lt;br&gt;
With "&amp;gt; /dev/null", the output is discarded, so you see nothing.&lt;/p&gt;

&lt;p&gt;Understanding File Descriptors and Redirection in Bash&lt;br&gt;
In Bash, file descriptors and redirection help control where input and output go.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;File Descriptors (FD)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;FD  Name    Description&lt;br&gt;
0   stdin   Standard Input (Keyboard/Input File)&lt;br&gt;
1   stdout  Standard Output (Terminal/Log File)&lt;br&gt;
2   stderr  Standard Error (Error Messages)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Redirection Operators (&amp;gt;, &amp;gt;&amp;gt;)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Operator    Description&lt;br&gt;
"&amp;gt;"      Redirects output to a file (overwrites existing content)&lt;br&gt;
"&amp;gt;&amp;gt;"         Appends output to a file (does not overwrite)&lt;br&gt;
&lt;/p&gt;
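The difference between the two operators is easy to verify with a throwaway temp file:

```shell
# Overwrite vs append, demonstrated on a temporary file
tmpfile=$(mktemp)

echo "first"  > "$tmpfile"    # '>' creates/overwrites: file contains "first"
echo "second" > "$tmpfile"    # '>' overwrites again:  file contains only "second"
echo "third" >> "$tmpfile"    # '>>' appends:          file contains "second" then "third"

cat "$tmpfile"                # prints: second, then third
rm -f "$tmpfile"
```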

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ls /home &amp;gt; stdout.txt 2&amp;gt; stderr.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Redirect stdout and stderr separately&lt;br&gt;
"&amp;gt;" stdout.txt saves normal output.&lt;br&gt;
"2&amp;gt;" stderr.txt saves errors separately.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Summary Table&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvrhl3src8adj7borwlpe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvrhl3src8adj7borwlpe.png" alt="Image description" width="800" height="225"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;&lt;strong&gt;Understanding command -v in Bash&lt;/strong&gt;&lt;br&gt;
The command -v command is used to check if a command exists and determine its path. It is useful in scripting to ensure that required tools are available before executing commands.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Basic Usage of command -v&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;command -v ls
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/bin/ls
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Explanation:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This checks whether ls exists in the system and returns its full path.&lt;br&gt;
If the command is not found, it returns nothing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Checking If a Command Exists in a Script&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;if command -v git &amp;gt; /dev/null 2&amp;gt;&amp;amp;1; then
    echo "Git is installed!"
else
    echo "Git is not installed. Please install it."
fi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Explanation:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;command -v git checks if git is installed.&lt;/li&gt;
&lt;li&gt;&amp;gt; /dev/null 2&amp;gt;&amp;amp;1 suppresses output.&lt;/li&gt;
&lt;li&gt;If found, it prints "Git is installed!", otherwise, it prints an error message.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;3. Using command -v to Get a Command Path&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo "Python is located at: $(command -v python3)"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output (if installed):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Python is located at: /usr/bin/python3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Explanation:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This prints the full path of python3.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Assigning the Path to a Variable&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;PYTHON_PATH=$(command -v python3)
echo "Python is installed at: $PYTHON_PATH"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Explanation:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Stores the path of python3 in PYTHON_PATH.&lt;/li&gt;
&lt;li&gt;Prints the location of Python.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;5. Using command -v with Multiple Commands&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;for cmd in docker kubectl terraform; do
    if command -v "$cmd" &amp;gt; /dev/null 2&amp;gt;&amp;amp;1; then
        echo "$cmd is installed"
    else
        echo "$cmd is not installed"
    fi
done
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Explanation:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This checks if docker, kubectl, and terraform are installed.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Difference Between command -v, which, and type&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyn9z5haj11cpasctsknu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyn9z5haj11cpasctsknu.png" alt="Image description" width="800" height="161"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;command -v echo  # Output: /bin/echo
which echo       # Output: /bin/echo
type echo        # Output: echo is a shell builtin
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>devops</category>
    </item>
    <item>
      <title>Day-04/100 - Automating Backups with Rsync: A Function-Based Bash Script with Rotation</title>
      <dc:creator>Sourav kumar</dc:creator>
      <pubDate>Thu, 20 Feb 2025 05:11:44 +0000</pubDate>
      <link>https://dev.to/sourav3366/automate-your-backups-with-a-bash-script-17jl</link>
      <guid>https://dev.to/sourav3366/automate-your-backups-with-a-bash-script-17jl</guid>
      <description>&lt;p&gt;Backups are crucial for data security and disaster recovery. Whether you're safeguarding personal files or managing production environments, automating backups reduces risk and ensures data integrity.&lt;/p&gt;

&lt;p&gt;In this blog, we’ll explore an industry-standard Bash script for automated, incremental backups using rsync, following best practices such as:&lt;br&gt;
✔ Function-based scripting for readability and reusability&lt;br&gt;
✔ Efficient incremental backups using rsync&lt;br&gt;
✔ Automated backup rotation (retaining only the last 3 backups)&lt;br&gt;
✔ Error handling and logging for reliability&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why Use Rsync for Backups?&lt;/strong&gt;&lt;br&gt;
Unlike full backups (e.g., ZIP-based archives), rsync efficiently synchronizes only changed files, making it:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Faster – Transfers only modified data&lt;/li&gt;
&lt;li&gt;Storage-efficient – Keeps incremental backups, saving disk space&lt;/li&gt;
&lt;li&gt;Versatile – Supports remote backups over SSH&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;The Rsync Backup Script&lt;/strong&gt;&lt;br&gt;
Below is a function-based Bash script that automates incremental backups and retains only the three most recent backups.&lt;/p&gt;

&lt;p&gt;🔹 Script: backup_and_rotation_rsync.sh&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#! /bin/bash

: &amp;lt;&amp;lt; 'readme'
This script performs incremental backups using rsync and keeps only the three most recent backups.

Usage:
./backup_and_rotation_rsync.sh &amp;lt;path to your source&amp;gt; &amp;lt;path to backup folder&amp;gt;
readme

# Function to display usage information
function display_usage {
    echo "Usage: ./backup_and_rotation_rsync.sh &amp;lt;path to your source&amp;gt; &amp;lt;path to backup folder&amp;gt;"
}

# Check if the correct number of arguments is provided
if [ $# -ne 2 ]; then
    display_usage
    exit 1
fi

# Assign arguments to variables
source_dir=$1
backup_dir=$2
timestamp=$(date '+%Y-%m-%d-%H-%M-%S')
backup_target="${backup_dir}/backup_${timestamp}"
log_file="${backup_dir}/backup_${timestamp}.log"

# Function to check if rsync is installed
function check_rsync_installed {
    if ! command -v rsync &amp;gt; /dev/null 2&amp;gt;&amp;amp;1; then
        echo "Error: rsync is not installed. Please install it and try again." &amp;gt;&amp;amp;2
        exit 2
    fi
}

# Function to create a backup using rsync
function create_backup {
    mkdir -p "$backup_target"  # Ensure backup directory exists

    rsync -av --delete "$source_dir/" "$backup_target/" &amp;gt; "$log_file" 2&amp;gt;&amp;amp;1

    if [ $? -eq 0 ]; then
        echo "Backup completed successfully at $backup_target"
    else
        echo "Backup failed! Check log: $log_file" &amp;gt;&amp;amp;2
        exit 3
    fi
}

# Function to perform backup rotation (retain only the last 3 backups)
function perform_rotation {
    backups=("${backup_dir}"/backup_*)  # Find existing backups
    backups=($(ls -td "${backups[@]}" 2&amp;gt;/dev/null))  # Sort backups by timestamp (newest first)

    echo "Existing backups: ${backups[@]}"

    if [ "${#backups[@]}" -gt 3 ]; then
        echo "Performing rotation: Keeping latest 3 backups"

        backups_to_remove=("${backups[@]:3}")  # Keep latest 3 backups
        echo "Removing backups: ${backups_to_remove[@]}"

        for backup in "${backups_to_remove[@]}"; do
            rm -rf "${backup}"
        done
    fi
}

# Run the functions in order
check_rsync_installed
create_backup
perform_rotation
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step-by-Step Explanation&lt;/strong&gt;&lt;br&gt;
1️⃣ Argument Validation&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;if [ $# -ne 2 ]; then
    display_usage
    exit 1
fi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The script requires two arguments:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Source directory – Folder to back up&lt;/li&gt;
&lt;li&gt;Backup directory – Where backups will be stored&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If either argument is missing, the script displays the correct usage and exits.&lt;/p&gt;

&lt;p&gt;2️⃣ Checking Rsync Installation&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function check_rsync_installed {
    if ! command -v rsync &amp;gt; /dev/null 2&amp;gt;&amp;amp;1; then
        echo "Error: rsync is not installed. Please install it and try again." &amp;gt;&amp;amp;2
        exit 2
    fi
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Since rsync is essential for this script, we check if it's installed. If not, an error message prompts the user to install it.&lt;/p&gt;

&lt;p&gt;3️⃣ Creating the Backup&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function create_backup {
    mkdir -p "$backup_target"  # Ensure backup directory exists

    rsync -av --delete "$source_dir/" "$backup_target/" &amp;gt; "$log_file" 2&amp;gt;&amp;amp;1
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;rsync -av preserves permissions, timestamps, and symbolic links.&lt;/li&gt;
&lt;li&gt;--delete removes files in the backup that no longer exist in the source.&lt;/li&gt;
&lt;li&gt;Logs are saved for later review.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;4️⃣ Backup Rotation (Keeping Only Last 3 Backups)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function perform_rotation {
    backups=("${backup_dir}"/backup_*)  
    backups=($(ls -td "${backups[@]}" 2&amp;gt;/dev/null))  # Sort backups by timestamp (newest first)

    if [ "${#backups[@]}" -gt 3 ]; then
        backups_to_remove=("${backups[@]:3}")  # Keep latest 3 backups
        for backup in "${backups_to_remove[@]}"; do
            rm -rf "${backup}"
        done
    fi
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This ensures only the latest 3 backups are retained by:&lt;br&gt;
✅ Sorting backups by timestamp&lt;br&gt;
✅ Keeping the latest 3&lt;br&gt;
✅ Deleting older backups automatically&lt;/p&gt;
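The slicing logic that splits backups into "keep" and "remove" sets can be sketched in isolation. The backup names below are illustrative, already sorted newest first (as ls -td would return them):

```shell
# Illustrative backup names, newest first
backups=(backup_2025-02-20 backup_2025-02-19 backup_2025-02-18 backup_2025-02-17 backup_2025-02-16)

keep=("${backups[@]:0:3}")   # the three newest backups
remove=("${backups[@]:3}")   # everything older than the newest three

echo "Keeping:  ${keep[*]}"
echo "Removing: ${remove[*]}"
```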

&lt;p&gt;⏳ How to Use This Script&lt;/p&gt;

&lt;p&gt;1️⃣ Make the script executable&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;chmod +x backup_and_rotation_rsync.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;2️⃣ Run the script with the source and backup folder paths&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./backup_and_rotation_rsync.sh /home/user/documents /backups
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;3️⃣ Check backup logs&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat /backups/backup_YYYY-MM-DD-HH-MM-SS.log
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>devops</category>
    </item>
    <item>
      <title>Day -03 /100 - Configuring mail server using Postfix, Dovecot and Mutt</title>
      <dc:creator>Sourav kumar</dc:creator>
      <pubDate>Tue, 18 Feb 2025 18:56:00 +0000</pubDate>
      <link>https://dev.to/sourav3366/day-03-100-configuring-mail-server-using-postfix-dovecot-and-mutt-2hln</link>
      <guid>https://dev.to/sourav3366/day-03-100-configuring-mail-server-using-postfix-dovecot-and-mutt-2hln</guid>
      <description>&lt;p&gt;We will set up three things on an Ubuntu machine:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Postfix (provides the SMTP implementation and acts as the mail transfer agent, or MTA)&lt;/li&gt;
&lt;li&gt;Dovecot (provides POP3/IMAP service and acts as the mail delivery agent)&lt;/li&gt;
&lt;li&gt;Mutt (works as the email client, providing a friendly interface to write, send, and receive emails)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Since we are setting up the Ubuntu server as an email server, we need to assign it a suitable hostname and a fully qualified domain name.&lt;/p&gt;

&lt;p&gt;To change the hostname, run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo hostnamectl set-hostname "mailserver-name"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsn2i0o9t52ue40ye5gp6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsn2i0o9t52ue40ye5gp6.png" alt="Image description" width="800" height="36"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Open another terminal and you will see the hostname has changed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0e2h0zkqsohu3y8m3fk2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0e2h0zkqsohu3y8m3fk2.png" alt="Image description" width="800" height="127"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, we will change the fully qualified domain name:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nano /etc/hosts
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fynskv7l7ngy9vib8zlab.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fynskv7l7ngy9vib8zlab.png" alt="Image description" width="800" height="415"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Go to the terminal and find your IP address by running ifconfig (if "ifconfig" is not found, install it using the command shown in the image).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3k7x0ta8vgr8uh9g48wa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3k7x0ta8vgr8uh9g48wa.png" alt="Image description" width="800" height="438"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fivfdzu5xpys9htgpfeys.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fivfdzu5xpys9htgpfeys.png" alt="Image description" width="800" height="288"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Copy the IP address and paste it in the "/etc/hosts" file&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgn78t7nwumgjdf8jf3l2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgn78t7nwumgjdf8jf3l2.png" alt="Image description" width="800" height="457"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The hostname and fully qualified domain name are changed successfully.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flx12eu5jpfk6wl7rvomr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flx12eu5jpfk6wl7rvomr.png" alt="Image description" width="800" height="118"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1 - Postfix Installation&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt install postfix -y
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgwvl2mib6hh1ill7c0ic.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgwvl2mib6hh1ill7c0ic.png" alt="Image description" width="800" height="459"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F84fao00k752khg57ryqt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F84fao00k752khg57ryqt.png" alt="Image description" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Change the system mail name from "mail.sourav.com" to "sourav.com" and press Enter.&lt;/p&gt;

&lt;p&gt;Verify the installation of Postfix using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl status postfix
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F37n0tj095qsr9byi8kci.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F37n0tj095qsr9byi8kci.png" alt="Image description" width="800" height="146"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, let's change where Postfix saves incoming mail.&lt;/p&gt;

&lt;p&gt;We will point it at a "Maildir" folder inside each user's home directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo postconf "home_mailbox = Maildir/"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8jos6fpootenxt7xly1u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8jos6fpootenxt7xly1u.png" alt="Image description" width="800" height="437"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2 - "Dovecot" Installation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt install dovecot-imapd dovecot-pop3d dovecot-core
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify the installation using&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl status dovecot
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2z6518sg7i85972ph185.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2z6518sg7i85972ph185.png" alt="Image description" width="800" height="295"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To reload either service after a configuration change, use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl reload dovecot
sudo systemctl reload postfix
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's update the mail location in Dovecot's configuration file as well.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nano /etc/dovecot/conf.d/10-mail.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Comment out the old mail location and uncomment the new one, as shown in the image:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxkmg1rivt6c4t4l0pb73.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxkmg1rivt6c4t4l0pb73.png" alt="Image description" width="800" height="435"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Check the changed location using:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;doveconf -n
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flztir9v6i29cl99wfijc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flztir9v6i29cl99wfijc.png" alt="Image description" width="800" height="437"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Whenever a new user signs up on the mail server, we want the "Maildir" to be created automatically in the user's home directory. This "Maildir" will work as the Inbox for the user.&lt;/p&gt;

&lt;p&gt;To achieve this, we will go to /etc/skel/&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: Anything we create in this directory, whether a file or a folder, is copied into the home directory of every newly created user.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd /etc/skel
sudo mkdir -p Maildir/cur Maildir/new Maildir/tmp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;All 3 folders are created.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm2ymfm9hvani3drwupjd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm2ymfm9hvani3drwupjd.png" alt="Image description" width="800" height="80"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3 - Installing mutt&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Run&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt install mutt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We also want every new user to have Mutt set up automatically, so they get a friendly interface to write, send, and receive emails.&lt;/p&gt;

&lt;p&gt;To do this, we will place the Mutt configuration in the same location, /etc/skel.&lt;/p&gt;

&lt;p&gt;This way, every new user gets a working Mutt configuration without any manual setup.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mkdir /etc/skel/.mutt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fasg9snyse4gkf6ud1ddy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fasg9snyse4gkf6ud1ddy.png" alt="Image description" width="800" height="91"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We have created a hidden directory, .mutt.&lt;/p&gt;

&lt;p&gt;Now go inside .mutt and create a file named "muttrc":&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd .mutt/
sudo nano muttrc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and paste in this content:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;set imap_user = ""
set imap_pass = ""
set folder = imaps://mail
set spoolfile = +INBOX
set realname = ''
set from = "$imap_user"
set use_from = yes
set sort=reverse-date
mailboxes = INBOX
set timeout=1
set sidebar_visible = yes
source ~/.mutt/mutt_colors
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now create another file, mutt_colors:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nano mutt_colors
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and paste in the following content:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
# Colours for items in the index
color index brightcyan black ~N
# Hmm, don't like this.
# color index brightgreen black "~N (~x byers.world)|(~x byers.x)|(~x langly.levallois123.axialys.net)|(~x the.earth.li)"
color index brightyellow black ~F
color index black green ~T
color index brightred black ~D
mono index bold ~N
mono index bold ~F
mono index bold ~T
mono index bold ~D

# Highlights inside the body of a message.

# URLs
color body brightgreen black "(http|ftp|news|telnet|finger)://[^ \"\t\r\n]*"
color body brightgreen black "mailto:[-a-z_0-9.]+@[-a-z_0-9.]+"
mono body bold "(http|ftp|news|telnet|finger)://[^ \"\t\r\n]*"
mono body bold "mailto:[-a-z_0-9.]+@[-a-z_0-9.]+"

# email addresses
color body brightgreen black "[-a-z_0-9.%$]+@[-a-z_0-9.]+\\.[-a-z][-a-z]+"
mono body bold "[-a-z_0-9.%$]+@[-a-z_0-9.]+\\.[-a-z][-a-z]+"

# header
color header green black "^from:"
color header green black "^to:"
color header green black "^cc:"
color header green black "^date:"
color header yellow black "^newsgroups:"
color header yellow black "^reply-to:"
# color header brightcyan black "^subject:"
color header yellow black "^subject:"
color header red black "^x-spam-rule:"
color header green black "^x-mailer:"
color header yellow black "^message-id:"
color header yellow black "^Organization:"
color header yellow black "^Organisation:"
color header yellow black "^User-Agent:"
color header yellow black "^message-id: .*pine"
color header yellow black "^X-Fnord:"
color header yellow black "^X-WebTV-Stationery:"
color header yellow black "^X-Message-Flag:"
color header yellow black "^X-Spam-Status:"
color header yellow black "^X-SpamProbe:"
color header red black "^X-SpamProbe: SPAM"

# Coloring quoted text - coloring the first 7 levels:
color quoted cyan black
color quoted1 yellow black
color quoted2 red black
color quoted3 green black
color quoted4 cyan black
color quoted5 yellow black
color quoted6 red black
color quoted7 green black


# Default color definitions
#color hdrdefault white green
color signature brightmagenta black
color indicator black cyan
color attachment black green
color error red black
color message white black
color search brightwhite magenta
# color status brightyellow blue
color status blue black
color tree brightblue black
color normal white black
color tilde green black
color bold brightyellow black
color underline magenta black
color markers brightcyan black

# Colour definitions when on a mono screen
mono bold bold
mono underline underline
mono indicator reverse
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now set permissions on the directory so that its contents are copied automatically into each new user's home directory.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo chmod -R 700 /etc/skel/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 4 - Creating users&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Create two users with names user1 and user2 and set passwords for them.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo adduser --gecos "" user1
sudo adduser --gecos "" user2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fie6vjhlwzbarsfyc8dv3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fie6vjhlwzbarsfyc8dv3.png" alt="Image description" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we will switch to user1 and will view the files in the home directory.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg5ph3mwptfesw9wsd9lq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg5ph3mwptfesw9wsd9lq.png" alt="Image description" width="800" height="226"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbdetrdsr9hsnx1g108f2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbdetrdsr9hsnx1g108f2.png" alt="Image description" width="800" height="177"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now open this file and fill in the IMAP username, password, and real name so that the user doesn't have to enter the password every time they send or receive mail.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nano .mutt/muttrc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwbm0cz36yguhpitwv5u5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwbm0cz36yguhpitwv5u5.png" alt="Image description" width="800" height="394"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now open two terminals - one for user1 and one for user2.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyj8cr2pdqh6u81hm2mp7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyj8cr2pdqh6u81hm2mp7.png" alt="Image description" width="800" height="306"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Run&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mutt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will open the certificate prompt. Press "a" to accept the certificate. Do the same for user2.&lt;/p&gt;

&lt;p&gt;Inboxes for both users are open now.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5xksipxsbtyz9mm168r0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5xksipxsbtyz9mm168r0.png" alt="Image description" width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To send email from user1 to user2:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Go to the user1 terminal where the inbox is open.&lt;/li&gt;
&lt;li&gt;Press "m" to compose a new email.&lt;/li&gt;
&lt;li&gt;Enter user2's email address "&lt;a href="mailto:user2@sourav.com"&gt;user2@sourav.com&lt;/a&gt;" in the To: field and press Enter.&lt;/li&gt;
&lt;li&gt;Give any subject and press Enter.&lt;/li&gt;
&lt;li&gt;Write the message in the editor, then press Ctrl + X, "y", and Enter to save.&lt;/li&gt;
&lt;li&gt;Press "y" to send the email.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F672jgfr43fgb762y5hql.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F672jgfr43fgb762y5hql.png" alt="Image description" width="800" height="460"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Go to user2 terminal where inbox is open.&lt;/li&gt;
&lt;li&gt;Press Enter on the newly received email; this opens the message sent by user1.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn36v2vcnsbxh4wpiep9o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn36v2vcnsbxh4wpiep9o.png" alt="Image description" width="800" height="440"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And with that, we have set up the mail server successfully!&lt;/p&gt;

</description>
      <category>linux</category>
      <category>smtp</category>
      <category>bash</category>
      <category>devops</category>
    </item>
    <item>
      <title>Day-02 / 100 - Archive Older Files</title>
      <dc:creator>Sourav kumar</dc:creator>
      <pubDate>Mon, 17 Feb 2025 18:28:46 +0000</pubDate>
      <link>https://dev.to/sourav3366/archive-older-files-p67</link>
      <guid>https://dev.to/sourav3366/archive-older-files-p67</guid>
      <description>&lt;h2&gt;
  
  
  Project Requirement
&lt;/h2&gt;

&lt;p&gt;In a given directory, find files larger than a given size (e.g., 20MB) or older than a given number of days (e.g., 10 days).&lt;/p&gt;

&lt;p&gt;Compress those files and move them into an ‘archive’ folder.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Why are we making this script?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;We are creating this Bash script to automate the process of managing large and old files in a given directory. &lt;/li&gt;
&lt;li&gt;Over time, directories accumulate large and outdated files, consuming unnecessary disk space. Manually identifying and archiving these files is time-consuming and inefficient. &lt;/li&gt;
&lt;li&gt;This script helps automate the process by finding files based on size and age, compressing them, and moving them to an archive folder for better storage management.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  2. Purpose of the script
&lt;/h2&gt;

&lt;p&gt;The purpose of this script is to improve disk space management and system performance by:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Identifying large files exceeding a specific size (e.g., 20MB)&lt;/li&gt;
&lt;li&gt;Finding old files that have not been modified for a specified number of days (e.g., 10 days)&lt;/li&gt;
&lt;li&gt;Compressing files to reduce storage consumption&lt;/li&gt;
&lt;li&gt;Moving archived files to a separate directory for better organization&lt;/li&gt;
&lt;li&gt;Automating cleanup to avoid manual intervention&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This script is useful in DevOps workflows for log file management, backup automation, and system maintenance. 🚀&lt;/p&gt;

&lt;h2&gt;
  
  
  Steps of script:
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Defining Variables&lt;/li&gt;
&lt;li&gt;Checking if the Base Directory Exists&lt;/li&gt;
&lt;li&gt;Create ‘archive’ folder if not already present&lt;/li&gt;
&lt;li&gt;Find all files larger than 20MB or not modified for more than 10 days&lt;/li&gt;
&lt;li&gt;Compress each file and move the compressed files into the ‘archive’ folder&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;1. Defining Variables&lt;/strong&gt;&lt;br&gt;
At the beginning of the script, we define a few key variables:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;BASE=/home/sourav3366/tutorials/find_command  # The base directory to search files in
DAYS=10  # Age threshold (in days) for the -mtime test
DEPTH=1  # The depth level for the find command
RUN=0    # A control flag (currently set to always archive)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;BASE&lt;/strong&gt;: Defines the directory where we want to search for large files.&lt;br&gt;
&lt;strong&gt;DAYS&lt;/strong&gt;: The age threshold (in days) used by find's -mtime test to pick out old files.&lt;br&gt;
&lt;strong&gt;DEPTH&lt;/strong&gt;: Restricts how deep the find command should search in subdirectories.&lt;br&gt;
&lt;strong&gt;RUN&lt;/strong&gt;: This flag is currently set to 0, meaning the script will always run the archiving logic.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Checking if the Base Directory Exists&lt;/strong&gt;&lt;br&gt;
Before performing any operations, the script verifies if the target directory exists:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;if [ ! -d "$BASE" ]
then
    echo "directory does not exist: $BASE"
    exit 1
fi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;if [ ! -d "$BASE" ]: Checks whether "$BASE" is not a directory (quoting guards against spaces in the path).&lt;/li&gt;
&lt;li&gt;If the directory does not exist, the script prints an error message and exits.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;3. Create ‘archive’ folder if not already present&lt;/strong&gt;&lt;br&gt;
If an archive folder does not exist inside $BASE, the script creates one:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;if [ ! -d "$BASE/archive" ]
then
    mkdir "$BASE/archive"
fi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;mkdir $BASE/archive: Creates the archive folder to store compressed files.&lt;/li&gt;
&lt;li&gt;This ensures that files have a designated place for storage.&lt;/li&gt;
&lt;/ul&gt;
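&lt;p&gt;As a side note, the check-then-mkdir pair can be collapsed into a single mkdir -p call, which succeeds silently when the directory already exists. A minimal sketch, run in a throwaway temp directory:&lt;/p&gt;

```shell
# mkdir -p is a no-op when the directory already exists,
# so no prior existence check is needed (demo in a temp dir):
BASE=$(mktemp -d)
mkdir -p "$BASE/archive"
mkdir -p "$BASE/archive"   # second call also succeeds silently
if [ -d "$BASE/archive" ]; then echo "archive ready"; fi
```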

&lt;p&gt;&lt;strong&gt;4. Find all the files larger than 20MB or stored for more than 10 days&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The core functionality of this script is to locate files larger than 20MB or files older than 10 days and compress them:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;for i in $(find "$BASE" -maxdepth "$DEPTH" -type f \( -size +20M -o -mtime +"$DAYS" \))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;find $BASE → Starts searching from the base directory.&lt;/li&gt;
&lt;li&gt;-maxdepth $DEPTH → Limits search depth to avoid scanning deep subdirectories.&lt;/li&gt;
&lt;li&gt;-type f → Ensures only files (not directories) are selected.&lt;/li&gt;
&lt;li&gt;(...) → Groups conditions to apply the OR (-o) logic properly.&lt;/li&gt;
&lt;li&gt;-size +20M → Finds files larger than 20MB.&lt;/li&gt;
&lt;li&gt;-o (OR operator) → Ensures files matching either condition are selected.&lt;/li&gt;
&lt;li&gt;-mtime +"$DAYS" → Finds files older than $DAYS days.&lt;/li&gt;
&lt;/ul&gt;
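&lt;p&gt;One caveat with for i in $(find ...): word splitting breaks on filenames that contain spaces. A null-delimited loop is a safer variant; below is a sketch in a throwaway temp directory (GNU find and touch assumed):&lt;/p&gt;

```shell
# for i in $(find ...) splits on whitespace; -print0 with read -d ''
# handles any filename safely (demo in a temp dir, GNU tools assumed):
BASE=$(mktemp -d)
touch -d "15 days ago" "$BASE/old log file.log"   # name contains spaces
find "$BASE" -maxdepth 1 -type f \( -size +20M -o -mtime +10 \) -print0 |
while IFS= read -r -d '' f; do
    echo "would archive: $f"
done
```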

&lt;p&gt;&lt;strong&gt;5. Compressing and Moving the Files&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Inside the loop, the script checks the RUN variable before proceeding:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;if [ $RUN -eq 0 ]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Since RUN is set to 0, it executes the archiving process:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo "[ $(date "+%Y-%m-%d %H:%M:%S") ] archiving $i ==&amp;gt; $BASE/archive"
gzip "$i" || exit 1
mv "$i.gz" "$BASE/archive" || exit 1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;date "+%Y-%m-%d %H:%M:%S": Prints the timestamp for logging purposes.&lt;/li&gt;
&lt;li&gt;gzip $i || exit 1: Compresses the file using gzip. If compression fails, the script exits.&lt;/li&gt;
&lt;li&gt;mv $i.gz $BASE/archive || exit 1: Moves the compressed file (.gz format) to the archive directory.&lt;/li&gt;
&lt;/ul&gt;
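&lt;p&gt;The compress-and-move pair can be tried in isolation on a throwaway file (a minimal sketch in a temp directory; the file name is illustrative):&lt;/p&gt;

```shell
# gzip replaces test.log with test.log.gz in place;
# mv then relocates the compressed file into archive/:
BASE=$(mktemp -d)
mkdir "$BASE/archive"
echo "some log data" | tee "$BASE/test.log"
gzip "$BASE/test.log" || exit 1
mv "$BASE/test.log.gz" "$BASE/archive" || exit 1
ls "$BASE/archive"   # prints: test.log.gz
```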

&lt;p&gt;&lt;strong&gt;whole script&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash

BASE=/home/sourav3366/tutorials/find_command
DAYS=10
DEPTH=1
RUN=0

# Check if the directory is present or not
if [ ! -d "$BASE" ]; then
    echo "directory does not exist: $BASE"
    exit 1
fi

# Create 'archive' folder if not present
if [ ! -d "$BASE/archive" ]; then
    mkdir "$BASE/archive"
fi

# Find files that are either larger than 20MB OR older than 10 days
for i in $(find "$BASE" -maxdepth "$DEPTH" -type f \( -size +20M -o -mtime +"$DAYS" \)); 
do
    if [ "$RUN" -eq 0 ]; then
        echo "[ $(date "+%Y-%m-%d %H:%M:%S") ] archiving $i ==&amp;gt; $BASE/archive"
        gzip "$i" || exit 1
        mv "$i.gz" "$BASE/archive" || exit 1
    fi
done
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The logic handles these conditions:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A 5MB file that is 12 days old → ✅ Archived (because it's old).&lt;/li&gt;
&lt;li&gt;A 25MB file that is 3 days old → ✅ Archived (because it's large).&lt;/li&gt;
&lt;li&gt;A 5MB file that is 3 days old → ❌ Not Archived (doesn't match either condition).&lt;/li&gt;
&lt;/ol&gt;
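&lt;p&gt;These three cases can be reproduced in a scratch directory with truncate and touch -d (GNU coreutils assumed; file names are illustrative):&lt;/p&gt;

```shell
# Reproduce the three cases with sparse files and backdated mtimes:
BASE=$(mktemp -d)
truncate -s 5M  "$BASE/old_small.log"
touch -d "12 days ago" "$BASE/old_small.log"   # case 1: small but old
truncate -s 25M "$BASE/new_large.log"
touch -d "3 days ago" "$BASE/new_large.log"    # case 2: recent but large
truncate -s 5M  "$BASE/new_small.log"
touch -d "3 days ago" "$BASE/new_small.log"    # case 3: matches neither
# old_small and new_large are listed; new_small is not:
find "$BASE" -maxdepth 1 -type f \( -size +20M -o -mtime +10 \)
```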

&lt;p&gt;Save this as archive_script.sh, then give it execute permissions and run it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;chmod +x archive_script.sh
./archive_script.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;📌 Before Running the Script&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ls -lh /home/sourav3366/tutorials/find_command
-rw-r--r-- 1 sourav3366 users  25M Feb 12  test1.log
-rw-r--r-- 1 sourav3366 users  5M  Feb 1   test2.log
-rw-r--r-- 1 sourav3366 users  30M Feb 15  test3.log
-rw-r--r-- 1 sourav3366 users  10M Feb 10  test4.log

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output of the Script Execution&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[ 2025-02-17 14:30:45 ] archiving /home/sourav3366/tutorials/find_command/test1.log ==&amp;gt; /home/sourav3366/tutorials/find_command/archive
[ 2025-02-17 14:30:46 ] archiving /home/sourav3366/tutorials/find_command/test2.log ==&amp;gt; /home/sourav3366/tutorials/find_command/archive
[ 2025-02-17 14:30:47 ] archiving /home/sourav3366/tutorials/find_command/test3.log ==&amp;gt; /home/sourav3366/tutorials/find_command/archive
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;📂 After Running the Script&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ls -lh /home/sourav3366/tutorials/find_command
-rw-r--r-- 1 sourav3366 users  10M Feb 10  test4.log
drwxr-xr-x 2 sourav3366 users  4K  Feb 17  archive  # Archive folder created

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ls -lh /home/sourav3366/tutorials/find_command/archive
-rw-r--r-- 1 sourav3366 users  2M  Feb 17  test1.log.gz
-rw-r--r-- 1 sourav3366 users  500K Feb 17  test2.log.gz
-rw-r--r-- 1 sourav3366 users  2.5M Feb 17  test3.log.gz
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;✅ Final Outcome&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxfy03zj6c0324zxr7hbs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxfy03zj6c0324zxr7hbs.png" alt="Image description" width="800" height="196"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>bash</category>
      <category>devops</category>
      <category>cloud</category>
      <category>cloudcomputing</category>
    </item>
    <item>
      <title>Day- 01/100 - Monitoring Free RAM Space with a Bash Script</title>
      <dc:creator>Sourav kumar</dc:creator>
      <pubDate>Sun, 16 Feb 2025 16:05:40 +0000</pubDate>
      <link>https://dev.to/sourav3366/monitoring-free-ram-space-with-a-bash-script-2of2</link>
      <guid>https://dev.to/sourav3366/monitoring-free-ram-space-with-a-bash-script-2of2</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Monitoring free RAM is crucial for maintaining system performance, especially in production environments where resource constraints can impact applications. In this blog, we'll create a simple Bash script to monitor available RAM and send alerts if it falls below a specified threshold.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before proceeding, ensure you have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A Linux-based system (Ubuntu, CentOS, etc.)&lt;/li&gt;
&lt;li&gt;Basic knowledge of Bash scripting&lt;/li&gt;
&lt;li&gt;Access to the free, awk, and mail commands&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Understanding Free RAM Command
&lt;/h2&gt;

&lt;p&gt;To check available RAM on a Linux system, we use the free command:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxzxgoccorss2dy8ex5d0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxzxgoccorss2dy8ex5d0.png" alt="This is use of free command" width="800" height="129"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If we want a combined total of memory and swap, we can use "free -mt", which adds a Total row.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxioiourxwjb426920d27.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxioiourxwjb426920d27.png" alt="This is use of free -mt" width="800" height="148"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But this is not in a human-readable format. To make it human-readable, we can use the "-h" flag.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgvgcqg19uppdh9rugebi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgvgcqg19uppdh9rugebi.png" alt="This is use of of -h " width="800" height="148"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we can also extract only the Total row using "grep" (global regular expression print).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnpkdx5pr4ydge0s50471.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnpkdx5pr4ydge0s50471.png" alt="This is use of grep" width="800" height="185"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here, after grep, we used the pattern "Total", which matches every row containing the string "Total".&lt;/p&gt;

&lt;p&gt;We can also use awk after grep to extract a fixed column. For example, to extract only the "free" column, we can use awk '{print $4}', since "free" is the fourth column.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbv8iqbjaj4n7c652ufax.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbv8iqbjaj4n7c652ufax.png" alt="This is use of awk" width="800" height="221"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We will focus on the free column to determine how much memory is completely unused by the system.&lt;/p&gt;
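&lt;p&gt;Putting the pipeline together, here it is applied to a captured sample of "free -mt" output so the numbers are reproducible (the values are illustrative, not from a real machine):&lt;/p&gt;

```shell
# The grep + awk pipeline from the screenshots, on sample `free -mt` output:
sample='              total        used        free      shared  buff/cache   available
Mem:           7862        3210        1024         312        3628        4021
Swap:          2047           0        2047
Total:         9909        3210        3071'
echo "$sample" | grep "Total" | awk '{print $4}'   # prints: 3071
```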

&lt;h2&gt;
  
  
  Writing the Bash Script
&lt;/h2&gt;

&lt;p&gt;Let's create a Bash script that:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Retrieves free RAM.&lt;/li&gt;
&lt;li&gt;Compares it with a threshold value.&lt;/li&gt;
&lt;li&gt;Sends an alert if RAM is below the threshold.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Create the script file&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;touch monitor_free_ram.sh
chmod +x monitor_free_ram.sh
vi monitor_free_ram.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 2: Add the script content&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash

# Define RAM threshold (in MB)
THRESHOLD=500  # Adjust as needed

# Get free RAM
FREE_RAM=$(free -mt | grep "Total" | awk '{print $4}')

# Log current status
echo "Free RAM: $FREE_RAM MB"

# Check if free RAM is below threshold
if [[ "$FREE_RAM" -lt "$THRESHOLD" ]]; then
    MESSAGE="Warning: Low Free RAM! Only $FREE_RAM MB free."
    echo "$MESSAGE"

    # Send email alert (Requires configured mail service)
    echo "$MESSAGE" | mail -s "Low Free RAM Alert" souravbit3366@gmail.com
fi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;How the Script Works&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We set a threshold value (e.g., 500MB).&lt;/li&gt;
&lt;li&gt;The script extracts the free RAM using free -mt, grep, and awk.&lt;/li&gt;
&lt;li&gt;If the free RAM falls below the threshold, a warning message is printed and emailed.&lt;/li&gt;
&lt;/ul&gt;
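&lt;p&gt;The threshold comparison on its own is a plain integer test; here is a minimal sketch with illustrative hard-coded values:&lt;/p&gt;

```shell
# An integer comparison drives the alert; 312 is an illustrative value
# standing in for the output of the free/grep/awk pipeline:
THRESHOLD=500
FREE_RAM=312
if [ "$FREE_RAM" -lt "$THRESHOLD" ]; then
    echo "Warning: Low Free RAM! Only $FREE_RAM MB free."
fi
```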

&lt;p&gt;&lt;strong&gt;Automating the Script with Cron Job&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To run this script automatically, schedule a cron job:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;crontab -e
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add this line to check every 5 minutes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;*/5 * * * * /path/to/monitor_free_ram.sh &amp;gt;&amp;gt; /var/log/ram_monitor.log 2&amp;gt;&amp;amp;1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;This simple script helps monitor system memory and prevent performance issues. We can extend it by logging alerts, integrating it with monitoring tools like Prometheus, or sending notifications via Slack or Telegram.&lt;/p&gt;

</description>
      <category>bash</category>
      <category>script</category>
      <category>monitoring</category>
      <category>ram</category>
    </item>
  </channel>
</rss>
