<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Oluchi Oraekwe</title>
    <description>The latest articles on DEV Community by Oluchi Oraekwe (@oluchi_oraekwe_b0bf2c5abc).</description>
    <link>https://dev.to/oluchi_oraekwe_b0bf2c5abc</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1712675%2F6f0f3d5a-8ed7-4da7-b2d3-be035eee3e7e.jpg</url>
      <title>DEV Community: Oluchi Oraekwe</title>
      <link>https://dev.to/oluchi_oraekwe_b0bf2c5abc</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/oluchi_oraekwe_b0bf2c5abc"/>
    <language>en</language>
    <item>
      <title>Monitoring and Alerting for Blue/Green Deployment</title>
      <dc:creator>Oluchi Oraekwe</dc:creator>
      <pubDate>Tue, 09 Dec 2025 14:44:10 +0000</pubDate>
      <link>https://dev.to/oluchi_oraekwe_b0bf2c5abc/monitoring-and-alerting-for-blue-green-deployment-59np</link>
      <guid>https://dev.to/oluchi_oraekwe_b0bf2c5abc/monitoring-and-alerting-for-blue-green-deployment-59np</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In the previous article, we explored the &lt;strong&gt;Blue/Green&lt;/strong&gt; deployment strategy using Nginx as a reverse proxy to maintain service availability whenever one of the upstream servers fails. You can read that article here: &lt;a href="https://dev.to/oluchi_oraekwe_b0bf2c5abc/blue-green-deployment-with-nginx-upstreams-99p"&gt;Blue/Green Deployment with Nginx Upstreams&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this follow-up article, we extend the concept by adding &lt;strong&gt;monitoring and alerting mechanisms&lt;/strong&gt; to the deployment.&lt;br&gt;
In DevOps, monitoring and alerting are critical for maintaining system reliability and availability. When a server fails or becomes unstable, the alerting system notifies the responsible team immediately so the issue can be resolved quickly, maintaining high availability.&lt;/p&gt;

&lt;p&gt;Here, we introduce a &lt;strong&gt;Watcher Service&lt;/strong&gt; that runs as a sidecar container in the Docker Compose stack. It monitors Nginx logs in real time and sends alerts to Slack whenever:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A failover event occurs, switching the traffic pool from Blue to Green&lt;/li&gt;
&lt;li&gt;The traffic pool switches back from Green to Blue&lt;/li&gt;
&lt;li&gt;There is a high error rate over a given period&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  1. Formatting Nginx Logs
&lt;/h2&gt;

&lt;p&gt;To enable meaningful monitoring, the Nginx access logs must include specific upstream details such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Application pool (&lt;code&gt;x-app-pool&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Release version (&lt;code&gt;x-release-id&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Upstream status&lt;/li&gt;
&lt;li&gt;Upstream request and response times&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These logs are stored in a shared volume so that the watcher application can also access them.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;log_format main '$remote_addr - $remote_user [$time_local] '
        '"$request" status=$status body_bytes_sent=$body_bytes_sent '
        'pool=$upstream_http_x_app_pool release=$upstream_http_x_release_id '
        'upstream_status=$upstream_status upstream_addr=$upstream_addr '
        'request_time=$request_time upstream_response_time=$upstream_response_time';

error_log /var/log/nginx/error.log warn;
access_log /var/log/nginx/access.log main;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  2. Creating the Log Watcher Application
&lt;/h2&gt;

&lt;p&gt;The watcher script performs these major tasks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Monitor logs in real time&lt;/strong&gt; and detect pool switches (Blue to Green and Green back to Blue)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Detect and alert on high failure rates&lt;/strong&gt; (e.g., Blue server repeatedly failing)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Implement alert cooldown&lt;/strong&gt; to prevent spamming your Slack channel&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Below are the major sections of the &lt;code&gt;watcher.py&lt;/code&gt; application.&lt;/p&gt;

&lt;h3&gt;
  
  
  Imports and Configuration
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import time
import os
import re
import requests
import logging
from collections import deque

LOG_FILE = os.getenv("NGINX_LOG_PATH")
SLACK_WEBHOOK_URL = os.getenv("SLACK_WEBHOOK_URL")
ERROR_RATE_THRESHOLD = float(os.getenv("ERROR_RATE_THRESHOLD", "2.0"))
WINDOW_SIZE = int(os.getenv("WINDOW_SIZE", "200"))
CHECK_INTERVAL = int(os.getenv("CHECK_INTERVAL", "10"))
ALERT_COOLDOWN = float(os.getenv("ALERT_COOLDOWN", "300"))

logging.basicConfig(
    level=logging.INFO,
    format="[%(asctime)s] %(levelname)s: %(message)s",
    datefmt="%H:%M:%S",
)

pattern = re.compile(
    r'(?P&amp;lt;ip&amp;gt;\S+) - - \[(?P&amp;lt;time&amp;gt;[^\]]+)\] "(?P&amp;lt;method&amp;gt;\S+) (?P&amp;lt;path&amp;gt;\S+) \S+" '
    r'status=(?P&amp;lt;status&amp;gt;\d+) [^ ]* pool=(?P&amp;lt;pool&amp;gt;\S+) release=(?P&amp;lt;release&amp;gt;\S+) '
    r'upstream_status=(?P&amp;lt;upstream_status&amp;gt;[0-9,\s]+) upstream_addr=(?P&amp;lt;upstream_addr&amp;gt;[0-9\.:,\s]+) '
    r'request_time=(?P&amp;lt;request_time&amp;gt;[\d\.]+) upstream_response_time=(?P&amp;lt;upstream_response_time&amp;gt;[\d\.,\s]+)'
)

recent = deque(maxlen=WINDOW_SIZE)
last_pool = os.getenv("ACTIVE_POOL", "blue")
last_check = 0.0
last_alert_time = {"failover": 0, "switch": 0, "error_rate": 0}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
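&lt;p&gt;To make the log format concrete, here is a small standalone sketch (not part of &lt;code&gt;watcher.py&lt;/code&gt;, which uses the full compiled regex above) that pulls the &lt;code&gt;key=value&lt;/code&gt; fields out of a single hypothetical access-log line; the sample line itself is illustrative only:&lt;/p&gt;

```python
# Simplified parsing sketch: split a hypothetical access-log line into its
# key=value fields. The real watcher uses the compiled regex instead.
sample = (
    '172.18.0.1 - - [09/Dec/2025:14:00:01] "GET /version HTTP/1.1" '
    'status=200 body_bytes_sent=52 pool=blue release=blue:v1.0.0 '
    'upstream_status=502 upstream_addr=172.18.0.2:4000 '
    'request_time=0.003 upstream_response_time=0.002'
)

# Every monitored field is a space-separated key=value token.
fields = dict(token.split("=", 1) for token in sample.split() if "=" in token)

print(fields["pool"])             # blue
print(fields["upstream_status"])  # 502
```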



&lt;h3&gt;
  
  
  Slack Notification Function
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def send_slack_alert(message: str):
    if not SLACK_WEBHOOK_URL:
        logging.warning("SLACK_WEBHOOK_URL not set. Cannot send alert.")
        return
    try:
        response = requests.post(SLACK_WEBHOOK_URL, json={"text": message}, timeout=5)  # timeout keeps a slow webhook from hanging the watcher
        response.raise_for_status()
        logging.info("Slack alert sent.")
    except requests.exceptions.RequestException as e:
        logging.error(f"Failed to send Slack alert: {e}")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  High Error Rate Detection
&lt;/h3&gt;

&lt;p&gt;This function computes the percentage of upstream &lt;code&gt;5xx&lt;/code&gt; errors over a sliding window of recent requests. Once the rate exceeds the configured threshold, it sends an alert notification to the Slack channel.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def check_alert():
    now = time.time()

    while recent and now - recent[0][0] &amp;gt; WINDOW_SIZE * CHECK_INTERVAL:
        recent.popleft()

    total = len(recent)
    if total == 0:
        return

    errors = sum(1 for _, _, upstream_status in recent if upstream_status.startswith("5"))
    rate = (errors / total) * 100

    if rate &amp;gt;= ERROR_RATE_THRESHOLD and now - last_alert_time["error_rate"] &amp;gt; ALERT_COOLDOWN:
        last_alert_time["error_rate"] = now
        send_slack_alert(
            f"🚨 *High Error Rate Detected!*\n"
            f"• Error rate: `{rate:.2f}%`\n"
            f"• Threshold: `{ERROR_RATE_THRESHOLD}%`\n"
            f"• Total requests: `{total}`\n"
            f"• Active Pool: `{last_pool}`\n"
            f"• Time: `{time.strftime('%Y-%m-%d %H:%M:%S')}`"
        )
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
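&lt;p&gt;The arithmetic can be checked in isolation. The sketch below fills the window with 10 hypothetical requests, one of which returned a &lt;code&gt;5xx&lt;/code&gt; upstream status:&lt;/p&gt;

```python
from collections import deque

# Hypothetical window of 10 parsed requests: (timestamp, pool, upstream_status)
recent = deque(maxlen=200)
statuses = ["200"] * 9 + ["502"]
for i, status in enumerate(statuses):
    recent.append((float(i), "blue", status))

# Same calculation as check_alert(): share of 5xx responses in the window.
errors = sum(1 for _, _, s in recent if s.startswith("5"))
rate = (errors / len(recent)) * 100
print(rate)  # 10.0 -- well above a 2.0% threshold, so an alert would fire
```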



&lt;h3&gt;
  
  
  Traffic Switch &amp;amp; Failover Monitoring
&lt;/h3&gt;

&lt;p&gt;This function tails the Nginx access log in real time and detects:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Failover events&lt;/li&gt;
&lt;li&gt;Pool changes&lt;/li&gt;
&lt;li&gt;Upstream server changes
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def monitor_log():
    global last_pool, last_check
    logging.info("Starting log watcher...")
    send_slack_alert("👀 Log watcher started. Monitoring Nginx logs...")

    while not os.path.exists(LOG_FILE):
        time.sleep(5)

    with open(LOG_FILE, "r") as f:
        f.seek(0, 2)
        while True:
            try:
                line = f.readline()
                if not line:
                    time.sleep(CHECK_INTERVAL)
                    continue

                match = pattern.search(line)
                if not match:
                    continue

                data = match.groupdict()
                pool = data["pool"]
                release = data["release"]
                upstream_status = data["upstream_status"]
                upstream = data["upstream_addr"]

                status_list = [s.strip() for s in upstream_status.split(",")]
                addr_list = [a.strip() for a in upstream.split(",")]

                latest_status = status_list[-1]
                previous_status = status_list[0]
                previous_upstream = addr_list[0]
                current_upstream = addr_list[-1]

                recent.append((time.time(), pool, upstream_status))

                # Failover detection
                if upstream_status.startswith("5") and time.time() - last_alert_time["failover"] &amp;gt; ALERT_COOLDOWN:
                    last_alert_time["failover"] = time.time()
                    send_slack_alert(
                        f"⚠️ *Failover Detected!*\n"
                        f"• Previous Pool: `{last_pool}`\n"
                        f"• New Pool: `{pool}`\n"
                        f"• Release: `{release}`\n"
                        f"• Upstream: `{current_upstream}`\n"
                        f"• Status: `{latest_status}`"
                    )

                # Traffic pool switch
                elif pool != last_pool and time.time() - last_alert_time["switch"] &amp;gt; ALERT_COOLDOWN:
                    last_alert_time["switch"] = time.time()
                    send_slack_alert(
                        f"🔄 *Traffic Switch Detected!*\n"
                        f"• `{last_pool}` → `{pool}`\n"
                        f"• Release: `{release}`\n"
                        f"• Upstream: `{current_upstream}`\n"
                        f"• Status: `{latest_status}`"
                    )

                last_pool = pool

                if time.time() - last_check &amp;gt;= CHECK_INTERVAL:
                    check_alert()
                    last_check = time.time()

            except Exception as e:
                logging.error(f"Error processing log line: {e}")
                time.sleep(2)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
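&lt;p&gt;One detail worth noting: when Nginx retries a failed request on the backup server, &lt;code&gt;$upstream_status&lt;/code&gt; and &lt;code&gt;$upstream_addr&lt;/code&gt; become comma-separated lists, which is why the watcher splits them. A short sketch with a hypothetical failover entry:&lt;/p&gt;

```python
# After a failover retry, Nginx logs both attempts in one line.
upstream_status = "502, 200"
upstream_addr = "172.18.0.2:4000, 172.18.0.3:4000"

status_list = [s.strip() for s in upstream_status.split(",")]
addr_list = [a.strip() for a in upstream_addr.split(",")]

print(status_list[0])   # 502 -- the primary failed
print(status_list[-1])  # 200 -- the backup answered
print(addr_list[-1])    # 172.18.0.3:4000 -- the server that actually responded
```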



&lt;h2&gt;
  
  
  3. Main Application Entry Point
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;if __name__ == "__main__":
    try:
        monitor_log()
    except KeyboardInterrupt:
        send_slack_alert("Log monitor stopped manually by user.")
        logging.info("Log watcher stopped manually.")
    except Exception as e:
        send_slack_alert(f"Log monitor stopped due to error: `{e}`")
        logging.error(f"Log watcher stopped due to error: {e}")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add &lt;code&gt;requests&lt;/code&gt; to your &lt;code&gt;requirements.txt&lt;/code&gt; file.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Running the Watcher in a Sidecar Container
&lt;/h2&gt;

&lt;p&gt;The watcher runs alongside Nginx using a sidecar pattern: a secondary container that runs next to the main application container. The shared log directory allows the watcher to read Nginx logs in real time, as shown in the excerpt from the Docker Compose file below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;monitor:
    image: python:3.11-slim
    container_name: monitor
    volumes:
      - ./logs:/var/log/nginx
      - ./log_watcher/watcher.py:/app/watcher.py
      - ./log_watcher/requirements.txt:/app/requirements.txt
    environment:
      - SLACK_WEBHOOK_URL=${SLACK_WEBHOOK_URL}
      - ACTIVE_POOL=${ACTIVE_POOL}
      - ERROR_RATE_THRESHOLD=${ERROR_RATE_THRESHOLD}
      - WINDOW_SIZE=${WINDOW_SIZE}
      - ALERT_COOLDOWN=${ALERT_COOLDOWN}
      - MAINTENANCE_MODE=${MAINTENANCE_MODE}
      - CHECK_INTERVAL=${CHECK_INTERVAL}
      - NGINX_LOG_PATH=/var/log/nginx/access.log
    working_dir: /app
    command: &amp;gt;
      /bin/sh -c "pip install -r requirements.txt &amp;amp;&amp;amp; python watcher.py"

networks:
  default:
    driver: bridge
volumes:
  nginx_logs:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  5. Run the Docker Compose file
&lt;/h2&gt;

&lt;p&gt;After adding the monitor service to the Docker Compose file (see the &lt;a href="https://dev.to/oluchi_oraekwe_b0bf2c5abc/blue-green-deployment-with-nginx-upstreams-99p"&gt;previous&lt;/a&gt; article for the existing Docker Compose file), start the application by running &lt;code&gt;docker compose up -d --build&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  6. Simulate Monitoring
&lt;/h2&gt;

&lt;p&gt;To simulate monitoring and alerting, the Bash script below sends 120 requests to the &lt;code&gt;http://localhost:8080/version&lt;/code&gt; endpoint, toggling chaos mode on the Blue server every 3 requests via &lt;code&gt;POST http://localhost:8081/chaos/start?mode=error&lt;/code&gt;. This produces enough upstream failures to push the error rate above the threshold and trigger a high-error-rate alert.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash

BASE_NGINX="http://localhost:8080/version"
CHAOS_ON="http://localhost:8081/chaos/start?mode=error"
CHAOS_OFF="http://localhost:8081/chaos/stop"
TOTAL_REQUESTS=120
TOGGLE_INTERVAL=3  # Toggle error every 3 requests
IN_ERROR_MODE=false

echo "Starting Chaos Simulation..."
echo "Sending $TOTAL_REQUESTS requests to $BASE_NGINX with chaos toggled every $TOGGLE_INTERVAL requests"
echo ""

for ((i=1; i&amp;lt;=TOTAL_REQUESTS; i++)); do
  # Toggle chaos mode every N requests
  if (( i % TOGGLE_INTERVAL == 0 )); then
    if [ "$IN_ERROR_MODE" = true ]; then
      echo -e "\n[$i] Turning OFF error mode on Blue..."
      if curl -s -X POST "$CHAOS_OFF" &amp;gt; /dev/null; then
        IN_ERROR_MODE=false
      else
        echo "Failed to stop chaos."
      fi
    else
      echo -e "\n[$i] Turning ON error mode on Blue..."
      if curl -s -X POST "$CHAOS_ON" &amp;gt; /dev/null; then
        IN_ERROR_MODE=true
      else
        echo "Failed to start chaos."
      fi
    fi
  fi

  # Send request to Nginx (load balancer)
  HTTP_STATUS=$(curl -s -o /dev/null -w "%{http_code}" "$BASE_NGINX")
  if [[ "$HTTP_STATUS" == "200" ]]; then
    echo "[$i] Status: $HTTP_STATUS"
  else
    echo "[$i] Error: $HTTP_STATUS"
  fi

  # Sleep 0.1 second
  sleep 0.1
done

# Ensure chaos mode is turned off at the end
echo -e "\n Stopping any remaining chaos mode..."
curl -s -X POST "$CHAOS_OFF" &amp;gt; /dev/null || echo "Cleanup failed."

echo -e "\n Simulation complete!"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The screenshots for the high error rate, the failover, and the traffic switch once the Blue server is healthy again are shown below:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa7rj3zz61wgg2whyk57f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa7rj3zz61wgg2whyk57f.png" alt="high error rate" width="800" height="218"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyv88fa5me3nck92k0n3i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyv88fa5me3nck92k0n3i.png" alt="fail over" width="768" height="309"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F80fruhqtkjtqsfdyvpxy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F80fruhqtkjtqsfdyvpxy.png" alt="server switch" width="747" height="248"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This monitoring and alerting system enhances &lt;strong&gt;Blue/Green&lt;/strong&gt; deployments by providing real-time insights into application stability. It ensures that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Failovers are detected immediately&lt;/li&gt;
&lt;li&gt;Traffic switches are tracked&lt;/li&gt;
&lt;li&gt;High error rates trigger alerts&lt;/li&gt;
&lt;li&gt;Your team can respond quickly to maintain maximum uptime&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is a simplified monitoring and alerting system; for more complex systems, dedicated observability tools such as Grafana Loki and Prometheus can be used.&lt;/p&gt;

</description>
      <category>python</category>
      <category>alert</category>
      <category>monitoring</category>
      <category>devops</category>
    </item>
    <item>
      <title>Blue/Green Deployment with Nginx Upstream</title>
      <dc:creator>Oluchi Oraekwe</dc:creator>
      <pubDate>Tue, 09 Dec 2025 11:46:57 +0000</pubDate>
      <link>https://dev.to/oluchi_oraekwe_b0bf2c5abc/blue-green-deployment-with-nginx-upstreams-99p</link>
      <guid>https://dev.to/oluchi_oraekwe_b0bf2c5abc/blue-green-deployment-with-nginx-upstreams-99p</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Ensuring minimal downtime and maintaining continuous service availability are crucial aspects of delivering software applications to users. Users expect a seamless experience at all times, even when failures occur behind the scenes. Servers or applications may fail due to high traffic, malfunctioning code, or unexpected system errors. When these situations arise, it is essential to keep services running with as little interruption as possible.&lt;/p&gt;

&lt;p&gt;In this article, we will explore how to maintain application availability using the &lt;strong&gt;Blue/Green&lt;/strong&gt; Deployment strategy. As the name suggests, this method relies on two environments: Blue and Green. These environments may be two separate servers or two instances of the same application running on a single server.&lt;/p&gt;

&lt;p&gt;To demonstrate this concept, we will run two versions of our application, blue and green, inside two separate Docker containers. The Blue server will act as the primary server, while the Green server will function as the backup. The following sections provide a walkthrough of the setup.&lt;/p&gt;

&lt;h2&gt;
  
  
  Nginx Server
&lt;/h2&gt;

&lt;p&gt;Nginx is a high-performance, open source web server commonly used for serving static content, reverse proxying, load balancing, and caching. In this setup, Nginx will function as a reverse proxy and a load balancer, receiving all user requests and forwarding them to either the Blue or Green application server.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2w9mk46hauiu17b13jnk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2w9mk46hauiu17b13jnk.png" alt="architecture" width="800" height="322"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the configuration file (&lt;code&gt;nginx.conf.template&lt;/code&gt;), the upstream servers are defined using Nginx’s upstream block. There are two &lt;code&gt;upstream&lt;/code&gt; configurations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;blue_primary&lt;/code&gt;: Blue is active, Green is backup&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;green_primary&lt;/code&gt;: Green is active, Blue is backup&lt;br&gt;
The &lt;code&gt;ACTIVE_POOL&lt;/code&gt; environment variable controls which upstream group is selected.&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;user  nginx;
worker_processes  auto;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {

    log_format main '$remote_addr - $remote_user [$time_local] '
        '"$request" status=$status body_bytes_sent=$body_bytes_sent '
        'pool=$upstream_http_x_app_pool release=$upstream_http_x_release_id '
        'upstream_status=$upstream_status upstream_addr=$upstream_addr '
        'request_time=$request_time upstream_response_time=$upstream_response_time';

    error_log /var/log/nginx/error.log warn;
    access_log /var/log/nginx/access.log main;

    upstream blue_primary {
        server app_blue:4000 max_fails=1 fail_timeout=5s;
        server app_green:4000 backup;
    }

    upstream green_primary {
        server app_green:4000 max_fails=1 fail_timeout=5s;
        server app_blue:4000 backup;
    }

    map "${ACTIVE_POOL}" $active_backend {
        default     blue_primary;
        blue        blue_primary;
        green       green_primary;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://$active_backend;
            proxy_pass_header X-App-Pool;
            proxy_pass_header X-Release-Id;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;

            proxy_connect_timeout 1s;
            proxy_send_timeout 1s;
            proxy_read_timeout 1s;
            proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
            proxy_next_upstream_tries 2;
        }
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For simulation purposes, the proxy timeouts were set to very low values to trigger fast failover during testing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Upstream Servers
&lt;/h2&gt;

&lt;p&gt;In Nginx terminology, &lt;em&gt;upstream servers&lt;/em&gt; are the backend servers that Nginx forwards client requests to. Nginx does not execute application logic; rather, it simply accepts incoming traffic and routes it to one or more upstream servers defined in its configuration.&lt;/p&gt;

&lt;p&gt;An upstream group may contain:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;One or multiple application servers&lt;/li&gt;
&lt;li&gt;A load-balancing strategy&lt;/li&gt;
&lt;li&gt;Failover rules&lt;/li&gt;
&lt;li&gt;Backup servers that only activate when the primary becomes unavailable&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In a &lt;strong&gt;Blue/Green&lt;/strong&gt; deployment, the upstream servers represent the two versions of the application. Nginx sends traffic to the active server (Blue or Green), based on the &lt;code&gt;ACTIVE_POOL&lt;/code&gt; variable, while the other server remains on standby. This guarantees seamless switching and minimises downtime during failures or deployments.&lt;/p&gt;

&lt;p&gt;The upstream servers in our setup are two &lt;strong&gt;identical Docker containers&lt;/strong&gt; created from the same Docker image. This ensures consistency and prevents users from experiencing different behaviours during failover.&lt;/p&gt;

&lt;p&gt;Below is the Docker Compose configuration managing Nginx, Blue, and Green containers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;services:
  nginx:
    image: nginx:latest
    container_name: nginx
    volumes:
      - ./nginx.conf.template:/etc/nginx/nginx.conf.template:ro
      - ./logs:/var/log/nginx
    ports:
      - "8080:80"
    environment:
      - ACTIVE_POOL=${ACTIVE_POOL}
    command: &amp;gt;
       /bin/sh -c "envsubst '$$ACTIVE_POOL' &amp;lt; /etc/nginx/nginx.conf.template &amp;gt; /etc/nginx/nginx.conf &amp;amp;&amp;amp; nginx -g 'daemon off;'"
    depends_on:
      - app_blue
      - app_green

  app_blue:
    image: ${BLUE_IMAGE}
    container_name: app_blue
    ports:
      - "8081:${PORT}"
    environment:
      - PORT=${PORT}
      - APP_POOL=blue
      - RELEASE_ID=${RELEASE_ID_BLUE}

  app_green:
    image: ${GREEN_IMAGE}
    container_name: app_green
    ports:
      - "8082:${PORT}"
    environment:
      - PORT=${PORT}
      - APP_POOL=green
      - RELEASE_ID=${RELEASE_ID_GREEN}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For this demonstration, the &lt;code&gt;APP_POOL&lt;/code&gt; and &lt;code&gt;RELEASE_ID&lt;/code&gt; values (exposed as the &lt;code&gt;X-App-Pool&lt;/code&gt; and &lt;code&gt;X-Release-Id&lt;/code&gt; headers) help differentiate between the two servers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step by Step Guide
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Set up Docker Compose
&lt;/h3&gt;

&lt;p&gt;Create your Docker services as shown in the Docker Compose file above: Blue (active) and Green (backup), both running identical applications.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Define environment variables
&lt;/h3&gt;

&lt;p&gt;These ensure unique identifiers for each server:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;BLUE_IMAGE=yimikaade/wonderful:devops-stage-two
GREEN_IMAGE=yimikaade/wonderful:devops-stage-two
ACTIVE_POOL=blue
RELEASE_ID_BLUE=blue:v1.0.0
RELEASE_ID_GREEN=green:v1.0.0
PORT=4000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  3. Configure Nginx
&lt;/h3&gt;

&lt;p&gt;Use the provided template to dynamically generate the active configuration using &lt;code&gt;envsubst&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Start the services
&lt;/h3&gt;

&lt;p&gt;Ensure Docker and Docker Compose are running, then start the system:&lt;br&gt;
&lt;code&gt;docker compose up -d --build&lt;br&gt;
&lt;/code&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  5. Test the Nginx endpoint
&lt;/h3&gt;

&lt;p&gt;Visit: &lt;code&gt;http://&amp;lt;IP&amp;gt;:8080&lt;/code&gt;. If you are running on your local machine, the IP is &lt;code&gt;localhost&lt;/code&gt; or &lt;code&gt;127.0.0.1&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwgxgd9rlyff4ganxt2h0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwgxgd9rlyff4ganxt2h0.png" alt="home page" width="800" height="399"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  6. Check the version headers
&lt;/h3&gt;

&lt;p&gt;Access &lt;code&gt;http://&amp;lt;IP&amp;gt;:8080/version&lt;/code&gt; to check the version and the server headers. The response is shown below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3q7v0zgya3sjtsugqhio.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3q7v0zgya3sjtsugqhio.png" alt="blue server" width="800" height="319"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  7. Introduce failure on the Blue server
&lt;/h3&gt;

&lt;p&gt;Trigger chaos mode with &lt;code&gt;POST http://&amp;lt;IP&amp;gt;:8081/chaos/start?mode=timeout&lt;/code&gt; or &lt;code&gt;POST http://&amp;lt;IP&amp;gt;:8081/chaos/start?mode=error&lt;/code&gt; so that Nginx can no longer reach the Blue server.&lt;br&gt;
Example response:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "message": "Simulation mode 'error' activated"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  8. Verify automatic failover
&lt;/h3&gt;

&lt;p&gt;Check the version endpoint again; you should now see the Green server is responding to requests.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjqtxtz8pw4fj6oun9ozz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjqtxtz8pw4fj6oun9ozz.png" alt="green server" width="800" height="286"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  9. Stop the chaos simulation
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;POST http://&amp;lt;IP&amp;gt;:8081/chaos/stop&lt;br&gt;
&lt;/code&gt;&lt;br&gt;
The Blue server becomes active again to serve requests.&lt;/p&gt;

&lt;h3&gt;
  
  
  10. Shut down your environment
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;docker compose down&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This walkthrough demonstrates how to implement a Blue/Green deployment strategy with automatic failover using Nginx as a reverse proxy and load balancer. For failover to work correctly, ensure that the standby server is marked with the &lt;code&gt;backup&lt;/code&gt; directive in the Nginx configuration.&lt;/p&gt;

&lt;p&gt;In the next article, I will explain how to &lt;strong&gt;monitor which server is active&lt;/strong&gt; at any given time.&lt;/p&gt;

&lt;p&gt;Before concluding, I want to briefly introduce &lt;strong&gt;backend.im&lt;/strong&gt;, a developer-friendly platform for deploying applications using the Claude Code CLI directly from your desktop. It integrates seamlessly with Claude CLI, allowing you to provision infrastructure and deploy code with just a few commands. I will cover this in more detail soon.&lt;/p&gt;

</description>
      <category>nginx</category>
      <category>loadbalancer</category>
      <category>dockercompose</category>
      <category>devops</category>
    </item>
    <item>
      <title>Creating Bash Scripts for User Management</title>
      <dc:creator>Oluchi Oraekwe</dc:creator>
      <pubDate>Mon, 01 Jul 2024 19:47:55 +0000</pubDate>
      <link>https://dev.to/oluchi_oraekwe_b0bf2c5abc/creating-bash-scripts-for-user-management-47ce</link>
      <guid>https://dev.to/oluchi_oraekwe_b0bf2c5abc/creating-bash-scripts-for-user-management-47ce</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;A Bash script is a file containing a sequence of commands executed by the Bash shell. It lets you chain actions together and automate repetitive tasks. This article examines how to use a Bash script to create users dynamically by reading a CSV (Comma-Separated Values) file, a plain-text format in which fields are separated by commas or another delimiter.&lt;/p&gt;

&lt;p&gt;Linux is a multi-user system, so a single machine often needs many separate accounts. This article demonstrates a Bash script that creates users from a CSV file, assigns them to groups, and generates a random password for each.&lt;/p&gt;

&lt;h2&gt;
  
  
  Objective
&lt;/h2&gt;

&lt;p&gt;The main purpose of this article is to show how to create users on a Linux system with a simple Bash script.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bash Script Overview
&lt;/h2&gt;

&lt;p&gt;We will follow a sequence of steps to create the Bash script. You can clone the full code from &lt;a href="https://github.com/chukwukelu2023/task-2-devops.git" rel="noopener noreferrer"&gt;this GitHub repository&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Reading the CSV File
&lt;/h3&gt;

&lt;p&gt;The first step is to read the CSV file, which the script takes as its first positional argument. If the file does not exist, the script prints an error and exits. Below is the snippet that validates the input:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;CSV_FILE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$1&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

&lt;span class="c"&gt;# Check if the file exists&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[[&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$CSV_FILE&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;]]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
    &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"File not found!"&lt;/span&gt;
    &lt;span class="nb"&gt;exit &lt;/span&gt;1
&lt;span class="k"&gt;fi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Creating a Directory and File to Store Users
&lt;/h3&gt;

&lt;p&gt;After validating the input file, we create a directory and a file to store the usernames and passwords of newly created users; the file is made readable and writable only by its owner (&lt;code&gt;chmod 600&lt;/code&gt;). To avoid recreating a directory that already exists, we check for it first:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;PASSWD_DIR&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"/var/secure"&lt;/span&gt;
&lt;span class="nv"&gt;PASSWD_FILE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"user_passwords.csv"&lt;/span&gt;

&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PASSWD_DIR&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
    &lt;/span&gt;&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PASSWD_DIR&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
    &lt;span class="nb"&gt;touch&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PASSWD_DIR&lt;/span&gt;&lt;span class="s2"&gt;/&lt;/span&gt;&lt;span class="nv"&gt;$PASSWD_FILE&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
    &lt;span class="nb"&gt;chmod &lt;/span&gt;600 &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PASSWD_DIR&lt;/span&gt;&lt;span class="s2"&gt;/&lt;/span&gt;&lt;span class="nv"&gt;$PASSWD_FILE&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="k"&gt;fi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
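&lt;p&gt;You can confirm that the password file really is readable only by its owner. This quick check is not part of the script itself; it demonstrates the technique on a temporary file:&lt;/p&gt;

```shell
# Verify a file's permissions: print the octal mode and owner.
# Point FILE at /var/secure/user_passwords.csv after running the script;
# here we demonstrate on a temporary file chmod-ed to 600.
FILE=$(mktemp)
chmod 600 "$FILE"
stat -c '%a %U' "$FILE"   # first field should read 600
```

&lt;p&gt;For the real file, &lt;code&gt;stat -c '%a' /var/secure/user_passwords.csv&lt;/code&gt; should print &lt;code&gt;600&lt;/code&gt;.&lt;/p&gt;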



&lt;h3&gt;
  
  
  Reading the Usernames and Group Names
&lt;/h3&gt;

&lt;p&gt;With the CSV file validated and the password file in place, we iterate through its rows, creating each user, generating a random password, and assigning the user to its groups. The snippet below shows how this is done:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;LOG_PATH&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"/var/secure/user_management.txt"&lt;/span&gt;

&lt;span class="c"&gt;# Split the user and group by ";"&lt;/span&gt;
&lt;span class="k"&gt;while &lt;/span&gt;&lt;span class="nv"&gt;IFS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;';'&lt;/span&gt; &lt;span class="nb"&gt;read&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; user &lt;span class="nb"&gt;groups&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do
    &lt;/span&gt;&lt;span class="nv"&gt;user&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="nv"&gt;$user&lt;/span&gt; | &lt;span class="nb"&gt;tr&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;' '&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
    &lt;span class="nb"&gt;groups&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="nv"&gt;$groups&lt;/span&gt; | &lt;span class="nb"&gt;tr&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;' '&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;

    &lt;span class="c"&gt;# Check if the user exists&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="nb"&gt;id&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$user&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &amp;amp;&amp;gt;/dev/null&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
        &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;date&lt;/span&gt; &lt;span class="s2"&gt;"+%Y-%m-%d %H:%M:%S"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt; user &lt;/span&gt;&lt;span class="nv"&gt;$user&lt;/span&gt;&lt;span class="s2"&gt; already exists."&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$LOG_PATH&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
    &lt;span class="k"&gt;else&lt;/span&gt;
        &lt;span class="c"&gt;# Generate random password&lt;/span&gt;
        &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;date&lt;/span&gt; &lt;span class="s2"&gt;"+%Y-%m-%d %H:%M:%S"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt; user with username &lt;/span&gt;&lt;span class="nv"&gt;$user&lt;/span&gt;&lt;span class="s2"&gt; already exist."&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="nv"&gt;$logPath&lt;/span&gt;

        &lt;span class="c"&gt;# Create user and assign password&lt;/span&gt;
        useradd &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$user&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
        &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$user&lt;/span&gt;&lt;span class="s2"&gt;:&lt;/span&gt;&lt;span class="nv"&gt;$password&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | chpasswd

        &lt;span class="c"&gt;# Store the created user and password&lt;/span&gt;
        &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$user&lt;/span&gt;&lt;span class="s2"&gt;,&lt;/span&gt;&lt;span class="nv"&gt;$password&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PASSWD_DIR&lt;/span&gt;&lt;span class="s2"&gt;/&lt;/span&gt;&lt;span class="nv"&gt;$PASSWD_FILE&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
        &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;date&lt;/span&gt; &lt;span class="s2"&gt;"+%Y-%m-%d %H:%M:%S"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt; user with username: &lt;/span&gt;&lt;span class="nv"&gt;$user&lt;/span&gt;&lt;span class="s2"&gt; cretaed by user &lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;whoami&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="nv"&gt;$logPath&lt;/span&gt;
    &lt;span class="k"&gt;fi&lt;/span&gt;

    &lt;span class="c"&gt;# Split the groups by comma and add user to each group&lt;/span&gt;
    &lt;span class="nv"&gt;IFS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;','&lt;/span&gt; &lt;span class="nb"&gt;read&lt;/span&gt; &lt;span class="nt"&gt;-ra&lt;/span&gt; GROUP_ARRAY &lt;span class="o"&gt;&amp;lt;&amp;lt;&amp;lt;&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$groups&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
    &lt;span class="k"&gt;for &lt;/span&gt;group &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;GROUP_ARRAY&lt;/span&gt;&lt;span class="p"&gt;[@]&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
        &lt;span class="c"&gt;# check for the existense of group before creating group&lt;/span&gt;
         &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;getent group &lt;span class="nv"&gt;$group&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
            &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;date&lt;/span&gt; &lt;span class="s2"&gt;"+%Y-%m-%d %H:%M:%S"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt; group: &lt;/span&gt;&lt;span class="nv"&gt;$group&lt;/span&gt;&lt;span class="s2"&gt; already exists."&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="nv"&gt;$logPath&lt;/span&gt;
         &lt;span class="k"&gt;else
            &lt;/span&gt;groupadd &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$group&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
            &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;date&lt;/span&gt; &lt;span class="s2"&gt;"+%Y-%m-%d %H:%M:%S"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt; group: &lt;/span&gt;&lt;span class="nv"&gt;$group&lt;/span&gt;&lt;span class="s2"&gt; created by user &lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;whoami&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;."&lt;/span&gt;  &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$LOG_PATH&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
        &lt;span class="k"&gt;fi&lt;/span&gt;

        &lt;span class="c"&gt;# check for the existense of user in a group before addding the user&lt;/span&gt;
        &lt;span class="k"&gt;if &lt;/span&gt;getent group &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$group&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-qw&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$user&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
            &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;date&lt;/span&gt; &lt;span class="s2"&gt;"+%Y-%m-%d %H:%M:%S"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt; user: &lt;/span&gt;&lt;span class="nv"&gt;$user&lt;/span&gt;&lt;span class="s2"&gt; is already in group: &lt;/span&gt;&lt;span class="nv"&gt;$group&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;  &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="nv"&gt;$logPath&lt;/span&gt;
        &lt;span class="k"&gt;else
            &lt;/span&gt;adduser &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$user&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$group&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
            &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;date&lt;/span&gt; &lt;span class="s2"&gt;"+%Y-%m-%d %H:%M:%S"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt; user with username: &lt;/span&gt;&lt;span class="nv"&gt;$user&lt;/span&gt;&lt;span class="s2"&gt; was added to group: &lt;/span&gt;&lt;span class="nv"&gt;$group&lt;/span&gt;&lt;span class="s2"&gt; by user &lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;whoami&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$LOG_PATH&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
        &lt;span class="k"&gt;fi


done&lt;/span&gt; &amp;lt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$CSV_FILE&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Important Notes
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Separators:&lt;/strong&gt; The CSV file uses &lt;code&gt;;&lt;/code&gt; to separate usernames and groups, and &lt;code&gt;,&lt;/code&gt; to separate multiple groups.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Logging:&lt;/strong&gt; Every user and group action is logged with a timestamp to &lt;code&gt;/var/secure/user_management.txt&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Running the Script
&lt;/h3&gt;

&lt;p&gt;To run the script, follow these steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Ensure you are running on a Linux system with root privileges or use the &lt;code&gt;sudo&lt;/code&gt; command.&lt;/li&gt;
&lt;li&gt;Clone the repository and navigate to the directory.&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create a sample CSV file &lt;code&gt;users.csv&lt;/code&gt; as shown below.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mary; developer,sys-admin
paul; sys-admin
peter; operations
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Execute the script as shown below, prefixing it with &lt;strong&gt;sudo&lt;/strong&gt; if you are not the root user:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;bash create_users.sh users.csv
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;After running the script, new users will be created, and their details will be stored in &lt;code&gt;/var/secure/user_passwords.csv&lt;/code&gt;. All actions will be logged in &lt;code&gt;/var/secure/user_management.txt&lt;/code&gt;.&lt;/p&gt;
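&lt;p&gt;To verify the result, a small helper function (hypothetical; not part of the repository script) can confirm that an account exists and show its group memberships:&lt;/p&gt;

```shell
# Report whether a user exists and, if so, which groups it belongs to.
check_user() {
    if getent passwd "$1" >/dev/null; then
        echo "$1 exists in groups: $(id -nG "$1")"
    else
        echo "$1 was not created" 1>&2
        return 1
    fi
}
```

&lt;p&gt;For example, run &lt;code&gt;check_user mary&lt;/code&gt; after processing the sample &lt;code&gt;users.csv&lt;/code&gt; above.&lt;/p&gt;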

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;The steps outlined in this article are not exhaustive; other resources, such as the official Bash documentation, can complement this walkthrough.&lt;br&gt;
To learn more about Bash scripting, join us at &lt;a href="https://hng.tech/internship" rel="noopener noreferrer"&gt;HNG Internship&lt;/a&gt;, or subscribe and become a premium member via &lt;a href="https://hng.tech/premium" rel="noopener noreferrer"&gt;HNG Premium&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>linux</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
