<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Muhammad Kamran Kabeer</title>
    <description>The latest articles on DEV Community by Muhammad Kamran Kabeer (@muhammadkamrankabeeross).</description>
    <link>https://dev.to/muhammadkamrankabeeross</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3870183%2F4568d571-b1ba-46d9-97db-ebb02fea8d61.png</url>
      <title>DEV Community: Muhammad Kamran Kabeer</title>
      <link>https://dev.to/muhammadkamrankabeeross</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/muhammadkamrankabeeross"/>
    <language>en</language>
    <item>
      <title>I Built a Fully Automated DevOps Workstation Using Ansible (Terraform • VirtualBox • AWS CLI)</title>
      <dc:creator>Muhammad Kamran Kabeer</dc:creator>
      <pubDate>Fri, 01 May 2026 06:02:48 +0000</pubDate>
      <link>https://dev.to/muhammadkamrankabeeross/i-built-a-fully-automated-devops-workstation-using-ansible-terraform-virtualbox-aws-cli-2e2b</link>
      <guid>https://dev.to/muhammadkamrankabeeross/i-built-a-fully-automated-devops-workstation-using-ansible-terraform-virtualbox-aws-cli-2e2b</guid>
      <description>&lt;h1&gt;
  
  
  🚀 I Built a Fully Automated DevOps Workstation Using Ansible
&lt;/h1&gt;

&lt;p&gt;I recently built a fully automated &lt;strong&gt;DevOps workstation setup&lt;/strong&gt; using Ansible that provisions an entire Linux development environment in a single command.&lt;/p&gt;

&lt;p&gt;This project is designed for DevOps learners and automation practice, and runs on a lightweight Xubuntu system (Dell Latitude E7440).&lt;/p&gt;




&lt;h1&gt;
  
  
  🧠 What Problem I Solved
&lt;/h1&gt;

&lt;p&gt;Setting up a DevOps environment manually takes time:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Installing tools one by one&lt;/li&gt;
&lt;li&gt;Configuring VirtualBox&lt;/li&gt;
&lt;li&gt;Setting up Terraform, Vagrant, AWS CLI&lt;/li&gt;
&lt;li&gt;Fixing dependencies&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So I automated everything using &lt;strong&gt;Ansible&lt;/strong&gt;.&lt;/p&gt;




&lt;h1&gt;
  
  
  ⚙️ Tech Stack
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Ansible (Automation Engine)&lt;/li&gt;
&lt;li&gt;Terraform (IaC)&lt;/li&gt;
&lt;li&gt;Vagrant (VM Automation)&lt;/li&gt;
&lt;li&gt;VirtualBox (Virtualization)&lt;/li&gt;
&lt;li&gt;AWS CLI (Cloud Access)&lt;/li&gt;
&lt;li&gt;Linux (Xubuntu)&lt;/li&gt;
&lt;/ul&gt;




&lt;h1&gt;
  
  
  🏗️ Architecture
&lt;/h1&gt;

&lt;p&gt;Host Machine → Ansible Playbook → DevOps Tools → VirtualBox VMs → Jenkins / Docker / Monitoring Stack&lt;/p&gt;
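&lt;p&gt;A playbook wired into that flow might look like the sketch below. This is illustrative only: the package list and task names are my assumptions, not the project's actual playbook.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Provision DevOps workstation
  hosts: localhost
  connection: local
  become: yes
  tasks:
    - name: Install core tooling from apt
      apt:
        name: [virtualbox, vagrant, awscli]
        state: present
        update_cache: yes
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;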




&lt;h1&gt;
  
  
  ⚡ Features
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;One-command setup&lt;/li&gt;
&lt;li&gt;Idempotent automation&lt;/li&gt;
&lt;li&gt;Lightweight Linux optimization&lt;/li&gt;
&lt;li&gt;Cloud-ready CLI environment&lt;/li&gt;
&lt;li&gt;Auto dependency handling&lt;/li&gt;
&lt;/ul&gt;




&lt;h1&gt;
  
  
  🚀 How It Works
&lt;/h1&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
bash
git clone https://github.com/muhammadkamrankabeer-oss/devops-workstation-automation.git
cd devops-workstation-automation
ansible-playbook -i inventory setup.yml --ask-become-pass

📚 What I Learned
Infrastructure as Code (IaC)
Linux system automation
Real-world DevOps workflow design
GitHub Actions CI basics
Virtualization and cloud CLI setup
🔥 Future Improvements
Docker VM automation
Jenkins CI/CD pipeline
Prometheus + Grafana monitoring stack
Terraform cloud provisioning
👨‍💻 Author

Muhammad Kamran Kabeer
DevOps Learner | Linux Enthusiast | Automation Explorer

⭐ Project Link

https://github.com/muhammadkamrankabeer-oss/devops-workstation-automation

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

</description>
      <category>automation</category>
      <category>devops</category>
      <category>showdev</category>
      <category>terraform</category>
    </item>
    <item>
      <title>🌱 Reducing Carbon Footprint with Lightweight Docker Containers: A DevOps Experiment on Image Optimization for Sustainability</title>
      <dc:creator>Muhammad Kamran Kabeer</dc:creator>
      <pubDate>Fri, 17 Apr 2026 16:38:02 +0000</pubDate>
      <link>https://dev.to/muhammadkamrankabeeross/reducing-carbon-footprint-with-lightweight-docker-containers-a-devops-experiment-on-image-2068</link>
      <guid>https://dev.to/muhammadkamrankabeeross/reducing-carbon-footprint-with-lightweight-docker-containers-a-devops-experiment-on-image-2068</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwpu6ykiday83qixj35rz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwpu6ykiday83qixj35rz.png" alt=" " width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🧭 Introduction&lt;/p&gt;

&lt;p&gt;In modern DevOps workflows, containerization has become a standard practice. However, most developers focus only on functionality and performance — while ignoring the hidden cost of large container images.&lt;/p&gt;

&lt;p&gt;In this project, I explored a simple but important question:&lt;/p&gt;

&lt;p&gt;Can we make Docker images more environmentally efficient by making them smaller?&lt;/p&gt;

&lt;p&gt;To answer this, I built and compared two Docker images for the same application and measured the difference in size and efficiency.&lt;/p&gt;

&lt;p&gt;🧪 Project Overview&lt;/p&gt;

&lt;p&gt;I created a simple Flask web application and containerized it using two different approaches:&lt;/p&gt;

&lt;p&gt;A standard Python-based Docker image&lt;br&gt;
A lightweight Alpine-based Docker image&lt;/p&gt;

&lt;p&gt;Both containers run the same application, but their internal structure is very different.&lt;/p&gt;

&lt;p&gt;🧱 The Application&lt;/p&gt;

&lt;p&gt;A minimal Flask app:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    return "Eco Docker Demo Running 🌱"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;🐳 Docker Strategy&lt;/p&gt;

&lt;p&gt;1. Standard Image (Baseline)&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM python:3.10

WORKDIR /app
COPY app /app

RUN pip install flask

CMD ["python", "app.py"]
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;2. Lightweight Image (Optimized)&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM python:3.10-alpine

WORKDIR /app
COPY app /app

RUN pip install --no-cache-dir flask

CMD ["python", "app.py"]
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;📊 Results &amp;amp; Comparison&lt;/p&gt;

&lt;p&gt;After building both images, I analyzed their sizes:&lt;/p&gt;

&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;&lt;th&gt;Image Type&lt;/th&gt;&lt;th&gt;Size&lt;/th&gt;&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;Standard Python Image&lt;/td&gt;&lt;td&gt;~1.6 GB&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Alpine Optimized Image&lt;/td&gt;&lt;td&gt;~97.9 MB&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;📉 Key Result&lt;/p&gt;

&lt;p&gt;👉 The Alpine-based image is approximately 16x smaller&lt;/p&gt;

&lt;p&gt;🌍 Why This Matters (The Eco Perspective)&lt;/p&gt;

&lt;p&gt;While this may seem like a small optimization, container size directly impacts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;💾 Storage usage in registries&lt;/li&gt;
&lt;li&gt;🌐 Bandwidth consumption during deployment&lt;/li&gt;
&lt;li&gt;⚡ CI/CD pipeline efficiency&lt;/li&gt;
&lt;li&gt;🔋 Indirect energy usage in cloud infrastructure&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At scale, these small improvements can contribute to lower energy consumption across systems.&lt;/p&gt;

&lt;p&gt;🚀 Running the Project&lt;/p&gt;

&lt;p&gt;To run the optimized container:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -p 5000:5000 eco-alpine
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Then open:&lt;/p&gt;

&lt;p&gt;👉 &lt;a href="http://localhost:5000" rel="noopener noreferrer"&gt;http://localhost:5000&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Output:&lt;/p&gt;

&lt;p&gt;Eco Docker Demo Running 🌱&lt;/p&gt;

&lt;p&gt;🧠 What I Learned&lt;/p&gt;

&lt;p&gt;This experiment helped me understand:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How base images affect container size&lt;/li&gt;
&lt;li&gt;Why Alpine Linux is widely used in DevOps&lt;/li&gt;
&lt;li&gt;The hidden cost of large container images&lt;/li&gt;
&lt;li&gt;How optimization can align with sustainability goals&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;🔮 Final Thoughts&lt;/p&gt;

&lt;p&gt;Optimization in DevOps is not just about speed — it's also about efficiency and responsibility.&lt;br&gt;
During optimization, I explored AI-assisted suggestions (including Google Gemini) to evaluate Docker base images and improve efficiency.&lt;br&gt;
Even small improvements, like switching base images, can contribute to a more sustainable digital ecosystem.&lt;/p&gt;

&lt;h2&gt;
  
  
  🌍 Impact
&lt;/h2&gt;

&lt;p&gt;This project demonstrates how small optimizations in container design can contribute to reducing unnecessary compute usage and resource consumption in cloud environments.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/muhammadkamrankabeer-oss/eco-docker" rel="noopener noreferrer"&gt;View the Code on GitHub&lt;/a&gt;&lt;/p&gt;

</description>
      <category>earthday</category>
      <category>docker</category>
      <category>sustainability</category>
      <category>webdev</category>
    </item>
    <item>
      <title>How I Built a Self-Healing Database on a 10-Year-Old Laptop</title>
      <dc:creator>Muhammad Kamran Kabeer</dc:creator>
      <pubDate>Fri, 17 Apr 2026 04:41:21 +0000</pubDate>
      <link>https://dev.to/muhammadkamrankabeeross/how-i-built-a-self-healing-database-on-a-10-year-old-laptop-28g8</link>
      <guid>https://dev.to/muhammadkamrankabeeross/how-i-built-a-self-healing-database-on-a-10-year-old-laptop-28g8</guid>
      <description>&lt;h1&gt;
  
  
  How I Built a Self-Healing Database on a 10-Year-Old Laptop (Using Docker + Ansible)
&lt;/h1&gt;

&lt;blockquote&gt;
&lt;p&gt;A practical experiment in resilience engineering on aging hardware—with modern DevOps tools.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  🚀 Introduction
&lt;/h2&gt;

&lt;p&gt;Running production-grade systems on old hardware sounds like a bad idea… until you treat it as a lab.&lt;/p&gt;

&lt;p&gt;I set out to build a &lt;strong&gt;self-healing database system&lt;/strong&gt; on a &lt;strong&gt;10-year-old laptop&lt;/strong&gt;—but this time with a more modern approach:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;8 GB RAM&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;SSD (thankfully!)&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Docker for isolation&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Ansible for automation&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The goal wasn’t raw performance. It was &lt;strong&gt;resilience, repeatability, and recovery&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  🧠 What “Self-Healing” Meant in This Project
&lt;/h2&gt;

&lt;p&gt;In this setup, &lt;em&gt;self-healing&lt;/em&gt; means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Detecting failures automatically&lt;/li&gt;
&lt;li&gt;Restarting or replacing failed components&lt;/li&gt;
&lt;li&gt;Recovering corrupted or lost state&lt;/li&gt;
&lt;li&gt;Rebuilding the system with minimal manual intervention&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And most importantly:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Everything should be recoverable using code.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  🖥️ Why This Setup Works (Even on Old Hardware)
&lt;/h2&gt;

&lt;p&gt;The SSD made a huge difference compared to traditional HDD setups:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Faster I/O → better database responsiveness&lt;/li&gt;
&lt;li&gt;Quicker container restarts&lt;/li&gt;
&lt;li&gt;Improved log handling and recovery&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With &lt;strong&gt;8 GB RAM&lt;/strong&gt;, I had just enough room to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Run multiple containers&lt;/li&gt;
&lt;li&gt;Simulate primary + replica&lt;/li&gt;
&lt;li&gt;Keep monitoring lightweight&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  ⚙️ Architecture Overview
&lt;/h2&gt;

&lt;p&gt;The system is composed of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Primary database container&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Replica database container&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Monitoring container / scripts&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Backup service&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Ansible playbooks (control layer)&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Everything runs locally but is logically separated using Docker.&lt;/p&gt;




&lt;h2&gt;
  
  
  🐳 Containerized Database Setup
&lt;/h2&gt;

&lt;p&gt;I used Docker to run isolated database instances.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why Docker?
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Clean environment separation&lt;/li&gt;
&lt;li&gt;Easy restarts and redeployments&lt;/li&gt;
&lt;li&gt;Fault isolation&lt;/li&gt;
&lt;li&gt;Reproducibility&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Example (Simplified)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: '3'
services:
  db_primary:
    image: postgres:latest
    ports:
      - "5432:5432"

  db_replica:
    image: postgres:latest
    ports:
      - "5433:5432"
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Each container behaves like an independent node.&lt;/p&gt;




&lt;h2&gt;
  🔁 Replication Strategy
&lt;/h2&gt;

&lt;p&gt;Even on a single laptop, I implemented logical replication:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Primary handles writes&lt;/li&gt;
&lt;li&gt;Replica syncs asynchronously&lt;/li&gt;
&lt;li&gt;Replica stays ready for failover&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  Key Idea
&lt;/h3&gt;

&lt;p&gt;If the primary fails:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Promote the replica&lt;/li&gt;
&lt;li&gt;Spin up a new replica using automation&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  🤖 Automation with Ansible
&lt;/h2&gt;

&lt;p&gt;This is where things got interesting.&lt;/p&gt;

&lt;p&gt;Instead of manually fixing things, I used &lt;strong&gt;Ansible playbooks&lt;/strong&gt; to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Provision containers&lt;/li&gt;
&lt;li&gt;Configure replication&lt;/li&gt;
&lt;li&gt;Restart failed services&lt;/li&gt;
&lt;li&gt;Rebuild broken nodes&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  Example Playbook Task
&lt;/h3&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Ensure database container is running
  docker_container:
    name: db_primary
    image: postgres:latest
    state: started
    restart_policy: always
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;



&lt;p&gt;With this, recovery becomes:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Run a playbook → system fixes itself&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  👀 Health Monitoring
&lt;/h2&gt;

&lt;p&gt;I implemented lightweight monitoring using scripts + container checks:&lt;/p&gt;

&lt;h3&gt;
  
  
  What I monitored:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Container health/status&lt;/li&gt;
&lt;li&gt;Database connectivity&lt;/li&gt;
&lt;li&gt;Replication lag&lt;/li&gt;
&lt;li&gt;Disk usage&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Basic Logic
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;If container stops → restart it&lt;/li&gt;
&lt;li&gt;If DB not responding → recreate container&lt;/li&gt;
&lt;li&gt;If replication breaks → reconfigure replica via Ansible&lt;/li&gt;
&lt;/ul&gt;
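&lt;p&gt;A sketch of how that logic can live in a single task, assuming the community.docker collection and a Postgres container named db_primary (the health command is my assumption, not the lab's exact check):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Keep the primary running and health-checked
  community.docker.docker_container:
    name: db_primary
    image: postgres:latest
    state: started
    restart_policy: always
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 30s
      retries: 3
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;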




&lt;h2&gt;
  
  
  🔧 Self-Healing Mechanisms
&lt;/h2&gt;

&lt;p&gt;Here’s how the system heals itself:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Container Restart (First Line of Defense)
&lt;/h3&gt;

&lt;p&gt;Docker restart policies:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automatically restart failed containers&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  2. Ansible Reconciliation
&lt;/h3&gt;

&lt;p&gt;If something drifts from the desired state:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Re-run playbooks&lt;/li&gt;
&lt;li&gt;Recreate containers&lt;/li&gt;
&lt;li&gt;Reapply configs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This mimics &lt;strong&gt;Infrastructure as Code recovery&lt;/strong&gt;.&lt;/p&gt;




&lt;h3&gt;
  
  
  3. Replica Promotion
&lt;/h3&gt;

&lt;p&gt;If primary fails:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Stop primary container&lt;/li&gt;
&lt;li&gt;Redirect traffic to replica&lt;/li&gt;
&lt;li&gt;Promote replica to primary&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  4. Rebuild Failed Node
&lt;/h3&gt;

&lt;p&gt;Using Ansible:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Destroy broken container&lt;/li&gt;
&lt;li&gt;Recreate it&lt;/li&gt;
&lt;li&gt;Resync from current primary&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  5. Backup + Restore
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Periodic volume backups&lt;/li&gt;
&lt;li&gt;Fast restore using Docker volumes&lt;/li&gt;
&lt;/ul&gt;
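&lt;p&gt;A periodic backup along these lines can be expressed as an Ansible task; the volume path and destination here are illustrative, not the lab's exact setup:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Snapshot the primary's data volume
  community.general.archive:
    path: /var/lib/docker/volumes/db_primary_data/_data
    dest: "/backups/db_primary-{{ ansible_date_time.date }}.tar.gz"
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;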

&lt;p&gt;Even if both containers fail, recovery is still possible.&lt;/p&gt;




&lt;h2&gt;
  
  
  💾 Storage Strategy (SSD Advantage)
&lt;/h2&gt;

&lt;p&gt;Using an SSD improved:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;WAL/log write speed&lt;/li&gt;
&lt;li&gt;Backup performance&lt;/li&gt;
&lt;li&gt;Container startup time&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Docker Volumes
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Persistent storage for database data&lt;/li&gt;
&lt;li&gt;Survives container restarts&lt;/li&gt;
&lt;li&gt;Easily backed up&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  🔥 Failure Testing
&lt;/h2&gt;

&lt;p&gt;I intentionally broke the system multiple times:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;docker kill&lt;/code&gt; on primary&lt;/li&gt;
&lt;li&gt;Deleted volumes&lt;/li&gt;
&lt;li&gt;Simulated corruption&lt;/li&gt;
&lt;li&gt;Stopped replication&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Results
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Containers restarted automatically&lt;/li&gt;
&lt;li&gt;Ansible restored desired state quickly&lt;/li&gt;
&lt;li&gt;Replica promotion worked reliably&lt;/li&gt;
&lt;li&gt;Full recovery was possible from backups&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  📉 Trade-Offs
&lt;/h2&gt;

&lt;p&gt;This setup isn’t perfect.&lt;/p&gt;

&lt;h3&gt;
  
  
  Downsides:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Single physical machine = single point of failure&lt;/li&gt;
&lt;li&gt;Limited RAM → careful tuning required&lt;/li&gt;
&lt;li&gt;SSD wear over time&lt;/li&gt;
&lt;li&gt;Not truly “distributed”&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  But still valuable because:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;It simulates real-world failure scenarios&lt;/li&gt;
&lt;li&gt;Teaches recovery patterns&lt;/li&gt;
&lt;li&gt;Builds DevOps discipline&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  🧩 Key Lessons
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Docker + Ansible is a powerful combo
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Docker handles runtime&lt;/li&gt;
&lt;li&gt;Ansible handles desired state&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Together, they approximate orchestration.&lt;/p&gt;




&lt;h3&gt;
  
  
  2. Self-healing = automation + observability
&lt;/h3&gt;

&lt;p&gt;Without monitoring, automation is blind.&lt;/p&gt;




&lt;h3&gt;
  
  
  3. Old hardware is a great teacher
&lt;/h3&gt;

&lt;p&gt;Failures happen more often → faster learning.&lt;/p&gt;




&lt;h3&gt;
  
  
  4. Infrastructure as Code is the real backup
&lt;/h3&gt;

&lt;p&gt;If you can rebuild everything from playbooks:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;You’re already halfway to self-healing.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  🌱 What I’d Do Next
&lt;/h2&gt;

&lt;p&gt;To push this further:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Add Prometheus + Grafana for observability&lt;/li&gt;
&lt;li&gt;Introduce alerting (email/Slack)&lt;/li&gt;
&lt;li&gt;Use Docker Swarm or Kubernetes&lt;/li&gt;
&lt;li&gt;Move to multi-node setup (even with cheap machines)&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  🎯 Final Thoughts
&lt;/h2&gt;

&lt;p&gt;This project reinforced a simple idea:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Reliability is not about powerful hardware—it’s about good design.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Even on a 10-year-old laptop, using:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Docker&lt;/li&gt;
&lt;li&gt;Ansible&lt;/li&gt;
&lt;li&gt;Smart recovery strategies&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;…you can build a system that &lt;strong&gt;fails gracefully and recovers automatically&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  📌 GitHub Repo
&lt;/h2&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://github.com/muhammadkamrankabeer-oss/MK_Labs/tree/main/Lab4_Database" rel="noopener noreferrer"&gt;https://github.com/muhammadkamrankabeer-oss/MK_Labs/tree/main/Lab4_Database&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;If you’ve experimented with self-healing systems or run labs on constrained hardware, I’d love to hear how you approached it!&lt;/p&gt;

</description>
      <category>devops</category>
      <category>ansible</category>
      <category>docker</category>
      <category>database</category>
    </item>
    <item>
      <title>Building a Zero-Downtime Web Cluster on a Dell Latitude</title>
      <dc:creator>Muhammad Kamran Kabeer</dc:creator>
      <pubDate>Mon, 13 Apr 2026 15:45:47 +0000</pubDate>
      <link>https://dev.to/muhammadkamrankabeeross/building-a-zero-downtime-web-cluster-on-a-dell-latitude-4np7</link>
      <guid>https://dev.to/muhammadkamrankabeeross/building-a-zero-downtime-web-cluster-on-a-dell-latitude-4np7</guid>
      <description>&lt;p&gt;The Problem: The "Single Point of Failure"&lt;br&gt;
Most small businesses host their websites on a single server. If that server crashes, their business stops. In this lab, I solved that problem by building a Distributed System using Nginx and Ansible.&lt;/p&gt;

&lt;p&gt;The Architecture: The Traffic Cop&lt;br&gt;
I used a Load Balancer strategy to ensure that even if a server dies, the website stays live.&lt;/p&gt;

&lt;p&gt;Front-End: Nginx Load Balancer (Port 8888)&lt;/p&gt;

&lt;p&gt;Back-End: Two Nginx Workers (Ports 8081 &amp;amp; 8082)&lt;/p&gt;
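&lt;p&gt;The front-end side of that layout fits in a few lines of Nginx configuration. This sketch uses an illustrative upstream name, not the lab's exact file:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;upstream workers {
    server 127.0.0.1:8081;
    server 127.0.0.1:8082;
}

server {
    listen 8888;
    location / {
        proxy_pass http://workers;
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;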

&lt;p&gt;Key Technical Win: Fault Tolerance&lt;br&gt;
The highlight of this lab was the Chaos Test. By manually stopping one of the web server containers, I verified that the Load Balancer instantly redirected all traffic to the healthy node. The result? Zero downtime for the user.&lt;/p&gt;

&lt;p&gt;Tools Used:&lt;br&gt;
Ansible: To automate the deployment and ensure the configuration is repeatable.&lt;/p&gt;

&lt;p&gt;Docker: To isolate the services and simulate a multi-server environment on my Dell E7440.&lt;/p&gt;

&lt;p&gt;Check out the Standalone Code:&lt;br&gt;
🔗 &lt;a href="https://github.com/muhammadkamrankabeer-oss/MK_Labs/tree/main/Lab3_Standalone" rel="noopener noreferrer"&gt;https://github.com/muhammadkamrankabeer-oss/MK_Labs/tree/main/Lab3_Standalone&lt;/a&gt; &lt;/p&gt;

</description>
      <category>devops</category>
      <category>distributedsystems</category>
      <category>showdev</category>
      <category>sre</category>
    </item>
    <item>
      <title>How I Automated a Monitoring Stack on my Dell Latitude using Ansible &amp; Docker</title>
      <dc:creator>Muhammad Kamran Kabeer</dc:creator>
      <pubDate>Sun, 12 Apr 2026 09:16:23 +0000</pubDate>
      <link>https://dev.to/muhammadkamrankabeeross/how-i-automated-a-monitoring-stack-on-my-dell-latitude-using-ansible-docker-5b73</link>
      <guid>https://dev.to/muhammadkamrankabeeross/how-i-automated-a-monitoring-stack-on-my-dell-latitude-using-ansible-docker-5b73</guid>
      <description>&lt;p&gt;The Vision&lt;br&gt;
As part of my Technical Lab Roadmap, I am moving away from manual configurations. In the world of modern DevOps, if you have to do it twice, you should automate it. Today’s goal: Transforming my Xubuntu-powered Dell Latitude into a fully monitored node using Infrastructure as Code (IaC).&lt;/p&gt;

&lt;p&gt;The Architecture: A Three-Tier Observability Stack&lt;br&gt;
To monitor a system effectively, you need a pipeline. Data must be generated, collected, and visualized. Here is how I structured this lab:&lt;/p&gt;

&lt;p&gt;Generation (Node Exporter): A lightweight Go-based binary that exposes hardware metrics (CPU load, RAM usage, Disk I/O) via a web endpoint.&lt;/p&gt;

&lt;p&gt;Collection (Prometheus): The "brain" of the operation. It's a time-series database that "scrapes" the metrics from the exporter at defined intervals.&lt;/p&gt;

&lt;p&gt;Visualization (Grafana): The "eyes." It queries Prometheus to turn raw numbers into pulsing, real-time graphs.&lt;/p&gt;

&lt;p&gt;The "Aha!" Moment: Solving Networking Hurdles&lt;br&gt;
The biggest challenge was connectivity. When running Prometheus inside a Docker container, it views localhost as itself, not my laptop.&lt;/p&gt;

&lt;p&gt;The Solution:&lt;/p&gt;

&lt;p&gt;The Bridge: I used the Docker Gateway IP (172.17.0.1) to allow the container to look "outside" to the host hardware.&lt;/p&gt;

&lt;p&gt;The Guard: Xubuntu’s UFW (Uncomplicated Firewall) initially blocked these requests. I had to explicitly allow traffic on port 9100 from the Docker interface.&lt;/p&gt;
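&lt;p&gt;Concretely, this means the scrape job targets the gateway IP instead of localhost. A prometheus.yml entry along these lines would do it (the job name is illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;scrape_configs:
  - job_name: "node"
    static_configs:
      - targets: ["172.17.0.1:9100"]
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;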

&lt;p&gt;The Implementation: Ansible Playbook&lt;br&gt;
Instead of 20 terminal commands, I consolidated the entire setup into one Ansible Playbook. This ensures Idempotency—I can run this on any machine and get the exact same result.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Deploy Monitoring Stack
  hosts: localhost
  connection: local
  become: yes
  tasks:
    - name: Run Node Exporter (The Sensor)
      community.docker.docker_container:
        name: node-exporter
        image: prom/node-exporter:latest
        state: started
        restart_policy: always
        ports:
          - "9100:9100"

    - name: Run Prometheus (The Brain)
      community.docker.docker_container:
        name: prometheus
        image: prom/prometheus:latest
        state: started
        recreate: yes
        volumes:
          - "./prometheus.yml:/etc/prometheus/prometheus.yml"
          - "./alert_rules.yml:/etc/prometheus/alert_rules.yml"
        ports:
          - "9091:9090"

    - name: Run Grafana (The Visuals)
      community.docker.docker_container:
        name: grafana
        image: grafana/grafana:latest
        state: started
        ports:
          - "3000:3000"
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Going Pro: Proactive Alerting&lt;br&gt;
Monitoring is useless if you have to stare at the screen all day. I integrated Alertmanager with a custom rule:&lt;/p&gt;

&lt;p&gt;If CPU usage exceeds 85% for more than 2 minutes, fire a CRITICAL alert.&lt;/p&gt;
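&lt;p&gt;An alert_rules.yml rule matching that description could look like this sketch (the exact PromQL expression and labels are my assumptions, not the lab's actual rule):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;groups:
  - name: cpu_alerts
    rules:
      - alert: HighCPUUsage
        expr: 100 - (avg(rate(node_cpu_seconds_total{mode="idle"}[2m])) * 100) &amp;gt; 85
        for: 2m
        labels:
          severity: critical
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;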

&lt;p&gt;This moves the lab from "Basic Monitoring" to "Incident Response Readiness."&lt;/p&gt;

&lt;p&gt;Key Takeaways for Students &amp;amp; Peers&lt;br&gt;
Infrastructure is Code: Never install manually what you can automate.&lt;/p&gt;

&lt;p&gt;Firewalls Matter: If your data isn't flowing, check your UFW/Iptables first.&lt;/p&gt;

&lt;p&gt;Start Small: I’m doing this on an 8GB RAM Dell laptop. You don't need a cloud budget to learn high-level DevOps.&lt;/p&gt;

&lt;p&gt;Check out the full Source Code:&lt;br&gt;
🔗 &lt;a href="https://github.com/muhammadkamrankabeer-oss/Lab2_Monitoring_Automation" rel="noopener noreferrer"&gt;https://github.com/muhammadkamrankabeer-oss/Lab2_Monitoring_Automation&lt;/a&gt; &lt;/p&gt;

</description>
      <category>automation</category>
      <category>devops</category>
      <category>docker</category>
      <category>monitoring</category>
    </item>
    <item>
      <title>Stop Leaving Your Servers Open: Hardening Linux in 5 Minutes with Ansible</title>
      <dc:creator>Muhammad Kamran Kabeer</dc:creator>
      <pubDate>Sat, 11 Apr 2026 04:40:31 +0000</pubDate>
      <link>https://dev.to/muhammadkamrankabeeross/stop-leaving-your-servers-open-hardening-linux-in-5-minutes-with-ansible-46a2</link>
      <guid>https://dev.to/muhammadkamrankabeeross/stop-leaving-your-servers-open-hardening-linux-in-5-minutes-with-ansible-46a2</guid>
      <description>&lt;p&gt;Hello, World! I’m Muhammad Kamran Kabeer.&lt;/p&gt;

&lt;p&gt;As an IT Instructor and the founder of MK EduOps Solutions, I often see students and small businesses focus on "getting things to work" while completely ignoring "getting things secured." Today, I’m sharing Lab 1 from my new series: The Hardened Gateway. We will use Ansible to automate the security of a Linux server on a Dell Latitude E7440 (or any Ubuntu/Debian machine).&lt;/p&gt;

&lt;p&gt;🛡️ Why "Default Deny"?&lt;br&gt;
Most people try to block "bad" ports. The professional way is to deny everything and only open what you need. This is the "Zero-Trust" mindset.&lt;/p&gt;

&lt;p&gt;🛠️ The Automation Code&lt;br&gt;
Here is the Ansible block I use to secure my lab environments:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Lab 1 - The Hardened Gateway
  hosts: localhost
  become: yes
  tasks:
    - name: Ensure UFW is installed
      apt: { name: ufw, state: present }

    - name: Set Default Policies to DENY
      community.general.ufw: { state: enabled, policy: deny, direction: incoming }

    - name: Allow Essential Traffic
      community.general.ufw: { rule: allow, port: "{{ item }}", proto: tcp }
      loop: ['22', '80', '443', '81']
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;🚀 The Result&lt;br&gt;
Running this ensures that only SSH and Web traffic can enter. Everything else, from unsecured databases to internal APIs and forgotten services, is hidden from the world.&lt;/p&gt;

&lt;p&gt;Check out the full lab repository here:&lt;a href="https://github.com/muhammadkamrankabeer-oss/MK-EduOps-Labs" rel="noopener noreferrer"&gt;https://github.com/muhammadkamrankabeer-oss/MK-EduOps-Labs&lt;/a&gt;&lt;/p&gt;

</description>
      <category>automation</category>
      <category>linux</category>
      <category>security</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>How I Automated a Self-Healing WordPress Lab using Ansible &amp; Docker</title>
      <dc:creator>Muhammad Kamran Kabeer</dc:creator>
      <pubDate>Thu, 09 Apr 2026 16:21:08 +0000</pubDate>
      <link>https://dev.to/muhammadkamrankabeeross/how-i-automated-a-self-healing-wordpress-lab-using-ansible-docker-20m7</link>
      <guid>https://dev.to/muhammadkamrankabeeross/how-i-automated-a-self-healing-wordpress-lab-using-ansible-docker-20m7</guid>
      <description>&lt;p&gt;The Challenge&lt;/p&gt;

&lt;p&gt;As an IT educator, I wanted a lab environment that was stable, professional, and "self-healing." If a student (or a bug) crashes the site, I want it back up in seconds without manual work.&lt;/p&gt;

&lt;p&gt;The Solution&lt;/p&gt;

&lt;p&gt;I built a stack on my Dell Latitude E7440 using:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Vagrant &amp;amp; Debian: To create a clean, isolated sandbox.

Ansible: To automate the configuration (Infrastructure as Code).

Docker: To run WordPress and MariaDB.

Nginx Proxy Manager: To give it a professional URL (http://wordpress.test) instead of messy IP addresses.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Why this matters&lt;/p&gt;

&lt;p&gt;By using Docker's restart_policy: always, if the WordPress container fails, the system "heals" itself immediately.&lt;/p&gt;
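&lt;p&gt;In Ansible terms that policy is a single line on the container task; this sketch is illustrative (the container name and image are my assumptions):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Run WordPress with self-healing restarts
  community.docker.docker_container:
    name: wordpress
    image: wordpress:latest
    state: started
    restart_policy: always
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;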

&lt;p&gt;Work with me!&lt;/p&gt;

&lt;p&gt;I am a professional educator and DevOps practitioner. If you need:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Technical Writing: I can turn your complex code into clear tutorials.

Lab Setup: I can help you automate your teaching environments.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Feel free to reach out here or on LinkedIn!&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
