<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Tanvir Rahman</title>
    <description>The latest articles on DEV Community by Tanvir Rahman (@tanvirrahman).</description>
    <link>https://dev.to/tanvirrahman</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F36417%2F25d52751-f914-46b2-87f1-bcfa86702217.jpg</url>
      <title>DEV Community: Tanvir Rahman</title>
      <link>https://dev.to/tanvirrahman</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/tanvirrahman"/>
    <language>en</language>
    <item>
      <title>Automating Harbor Registry Cleanup with Kubernetes</title>
      <dc:creator>Tanvir Rahman</dc:creator>
      <pubDate>Mon, 26 May 2025 10:35:00 +0000</pubDate>
      <link>https://dev.to/tanvirrahman/automating-harbor-registry-cleanup-with-kubernetes-360a</link>
      <guid>https://dev.to/tanvirrahman/automating-harbor-registry-cleanup-with-kubernetes-360a</guid>
      <description>&lt;p&gt;Container registries like Harbor can quickly accumulate unused images, consuming valuable storage space. Manual cleanup is tedious, so this script will safely automates the process while respecting images currently in use by Kubernetes.&lt;/p&gt;

&lt;h2&gt;The Challenge&lt;/h2&gt;

&lt;p&gt;After running Harbor in production for several months, we noticed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Storage usage growing exponentially&lt;/li&gt;
&lt;li&gt;Hundreds of outdated image tags&lt;/li&gt;
&lt;li&gt;No easy way to identify which images were actually in use&lt;/li&gt;
&lt;li&gt;Fear of breaking production by deleting important images&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;The Script&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/usr/bin/env python3
"""
Harbor Image Cleanup Script

This script cleans up old/unused images from Harbor registry while ensuring images
used by Kubernetes deployments are preserved.

Features:
- Connects to Harbor API to list and delete images
- Checks Kubernetes deployments to identify images in use
- Applies configurable retention policies (by age or count)
- Provides detailed logging
- Can be run manually or as a cron job

Requirements:
- Python 3.6+
- Required packages: requests, kubernetes, pyyaml (argparse and logging are in the standard library)
"""

import os
import sys
import time
import re
import json
import base64
import logging
import argparse
import datetime
import yaml
import requests
from requests.auth import HTTPBasicAuth
from kubernetes import client, config
from kubernetes.client.rest import ApiException
from typing import Dict, List, Set, Tuple, Optional, Any

# Configure logging
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
    handlers=[
        logging.StreamHandler(sys.stdout),
        logging.FileHandler('harbor-cleanup.log')
    ]
)
logger = logging.getLogger('harbor-cleanup')

class HarborClient:
    """Client for interacting with Harbor API"""

    def __init__(self, harbor_url: str, username: str, password: str, verify_ssl: bool = True):
        """
        Initialize Harbor client

        Args:
            harbor_url: URL of Harbor instance (e.g., https://harbor.example.com)
            username: Harbor username
            password: Harbor password
            verify_ssl: Whether to verify SSL certificates
        """
        self.harbor_url = harbor_url.rstrip('/')
        self.api_url = f"{self.harbor_url}/api"
        self.username = username
        self.password = password
        self.verify_ssl = verify_ssl
        self.session = requests.Session()
        self.session.auth = HTTPBasicAuth(username, password)
        self.session.verify = verify_ssl
        # Add X-Xsrftoken header to all requests
        self.session.headers.update({'X-Xsrftoken': 'token'})

        # Test connection
        try:
            self.ping()
            logger.info(f"Successfully connected to Harbor at {harbor_url}")
        except Exception as e:
            logger.error(f"Failed to connect to Harbor: {str(e)}")
            raise

    def ping(self) -&amp;gt; bool:
        """Test connection to Harbor API"""
        response = self.session.get(f"{self.harbor_url}/api/ping")
        response.raise_for_status()
        return True

    def get_projects(self) -&amp;gt; List[Dict]:
        """Get list of projects from Harbor"""
        response = self.session.get(f"{self.harbor_url}/api/projects")
        response.raise_for_status()
        return response.json()

    def get_project_id(self, project_name: str) -&amp;gt; int:
        """
        Get project ID from project name

        Args:
            project_name: Name of the Harbor project

        Returns:
            Project ID as integer

        Raises:
            ValueError: If project not found
        """
        projects = self.get_projects()
        for project in projects:
            if project.get('name') == project_name:
                return project.get('project_id')
        raise ValueError(f"Project {project_name} not found")

    def get_repositories(self, project_name: str) -&amp;gt; List[Dict]:
        """
        Get list of repositories in a project

        Args:
            project_name: Name of the Harbor project

        Returns:
            List of repository objects
        """
        project_id = self.get_project_id(project_name)
        response = self.session.get(f"{self.harbor_url}/api/repositories?project_id={project_id}")
        response.raise_for_status()
        return response.json()

    def get_tags(self, repository_name: str) -&amp;gt; List[Dict]:
        """
        Get list of tags in a repository

        Args:
            repository_name: Full name of repository (project/repo)

        Returns:
            List of tag objects
        """
        # repository_name should be in format "project/repository"
        response = self.session.get(f"{self.harbor_url}/api/repositories/{repository_name}/tags")
        response.raise_for_status()
        return response.json()

    def delete_tag(self, repository_name: str, tag: str) -&amp;gt; bool:
        """
        Delete a tag from Harbor

        Args:
            repository_name: Full name of repository (project/repo)
            tag: Tag to delete

        Returns:
            True if deletion was successful
        """
        response = self.session.delete(
            f"{self.harbor_url}/api/repositories/{repository_name}/tags/{tag}"
        )
        response.raise_for_status()
        logger.info(f"Deleted tag {repository_name}:{tag}")
        return True


class KubernetesClient:
    """Client for interacting with Kubernetes API"""

    def __init__(self, kubeconfig: Optional[str] = None, context: Optional[str] = None):
        """
        Initialize Kubernetes client

        Args:
            kubeconfig: Path to kubeconfig file (defaults to ~/.kube/config)
            context: Kubernetes context to use
        """
        try:
            if kubeconfig:
                config.load_kube_config(kubeconfig, context=context)
            else:
                config.load_kube_config(context=context)
            self.apps_v1 = client.AppsV1Api()
            self.core_v1 = client.CoreV1Api()
            logger.info("Successfully connected to Kubernetes")
        except Exception as e:
            logger.error(f"Failed to connect to Kubernetes: {str(e)}")
            raise

    def get_images_in_use(self) -&amp;gt; Set[str]:
        """
        Get set of all images currently in use in the cluster

        Returns:
            Set of image references (including tags/digests)
        """
        images = set()

        # Check deployments
        try:
            deployments = self.apps_v1.list_deployment_for_all_namespaces()
            for deployment in deployments.items:
                for container in deployment.spec.template.spec.containers:
                    images.add(container.image)
                if deployment.spec.template.spec.init_containers:
                    for container in deployment.spec.template.spec.init_containers:
                        images.add(container.image)
        except ApiException as e:
            logger.error(f"Error getting deployments: {str(e)}")

        # Check statefulsets
        try:
            statefulsets = self.apps_v1.list_stateful_set_for_all_namespaces()
            for statefulset in statefulsets.items:
                for container in statefulset.spec.template.spec.containers:
                    images.add(container.image)
                if statefulset.spec.template.spec.init_containers:
                    for container in statefulset.spec.template.spec.init_containers:
                        images.add(container.image)
        except ApiException as e:
            logger.error(f"Error getting statefulsets: {str(e)}")

        # Check daemonsets
        try:
            daemonsets = self.apps_v1.list_daemon_set_for_all_namespaces()
            for daemonset in daemonsets.items:
                for container in daemonset.spec.template.spec.containers:
                    images.add(container.image)
                if daemonset.spec.template.spec.init_containers:
                    for container in daemonset.spec.template.spec.init_containers:
                        images.add(container.image)
        except ApiException as e:
            logger.error(f"Error getting daemonsets: {str(e)}")

        # Check pods (for CronJobs and Jobs)
        try:
            pods = self.core_v1.list_pod_for_all_namespaces()
            for pod in pods.items:
                for container in pod.spec.containers:
                    images.add(container.image)
                if pod.spec.init_containers:
                    for container in pod.spec.init_containers:
                        images.add(container.image)
        except ApiException as e:
            logger.error(f"Error getting pods: {str(e)}")

        logger.info(f"Found {len(images)} unique images in use in Kubernetes")
        return images


class HarborCleanup:
    """Main class for Harbor cleanup operations"""

    def __init__(
        self,
        harbor_url: str,
        harbor_username: str,
        harbor_password: str,
        kubeconfig: Optional[str] = None,
        kube_context: Optional[str] = None,
        dry_run: bool = False,
        projects: Optional[List[str]] = None,
        exclude_projects: Optional[List[str]] = None,
        keep_days: Optional[int] = None,
        keep_tags: Optional[int] = None,
        skip_in_use: bool = True,
        always_keep_tags: Optional[List[str]] = None,
        verify_ssl: bool = True,
    ):
        """
        Initialize Harbor cleanup

        Args:
            harbor_url: URL of Harbor instance
            harbor_username: Harbor username
            harbor_password: Harbor password
            kubeconfig: Path to kubeconfig file
            kube_context: Kubernetes context to use
            dry_run: If True, don't actually delete anything
            projects: List of projects to clean up (if None, clean all)
            exclude_projects: List of projects to exclude from cleanup
            keep_days: Keep images newer than this many days
            keep_tags: Keep this many most recent tags per repository
            skip_in_use: Skip images in use by Kubernetes
            always_keep_tags: List of tag patterns to always keep (e.g., ['latest', 'stable', 'prod-*'])
            verify_ssl: Whether to verify SSL certificates
        """
        self.harbor_client = HarborClient(harbor_url, harbor_username, harbor_password, verify_ssl)
        self.kube_client = KubernetesClient(kubeconfig, kube_context) if skip_in_use else None
        self.dry_run = dry_run
        self.projects_filter = projects
        self.exclude_projects = exclude_projects or []
        self.keep_days = keep_days
        self.keep_tags = keep_tags
        self.skip_in_use = skip_in_use
        self.always_keep_tags = always_keep_tags or ["latest", "stable", "master"]

        if not self.keep_days and not self.keep_tags:
            logger.warning("No retention policy specified, defaulting to keeping 30 days")
            self.keep_days = 30

        if self.dry_run:
            logger.info("DRY RUN MODE: No images will be deleted")

    def should_keep_tag(self, tag_name: str) -&amp;gt; bool:
        """
        Check if a tag should be kept based on tag patterns

        Args:
            tag_name: Name of the tag

        Returns:
            True if the tag should be kept
        """
        for pattern in self.always_keep_tags:
            if '*' in pattern:
                # Convert glob pattern to regex
                regex_pattern = pattern.replace('.', '\\.').replace('*', '.*')
                if re.match(f"^{regex_pattern}$", tag_name):
                    return True
            elif pattern == tag_name:
                return True
        return False

    def run(self):
        """Run the cleanup process"""
        # Get all images in use by Kubernetes
        k8s_images = self.kube_client.get_images_in_use() if self.skip_in_use else set()

        # Normalize Kubernetes image references to match Harbor format
        normalized_k8s_images = self._normalize_k8s_images(k8s_images)

        # Get projects to clean
        all_projects = self.harbor_client.get_projects()
        projects_to_clean = []

        for project in all_projects:
            project_name = project['name']

            if self.projects_filter and project_name not in self.projects_filter:
                logger.debug(f"Skipping project {project_name} (not in filter)")
                continue

            if project_name in self.exclude_projects:
                logger.debug(f"Skipping project {project_name} (in exclude list)")
                continue

            projects_to_clean.append(project_name)

        logger.info(f"Found {len(projects_to_clean)} projects to clean")

        # Process each project
        for project_name in projects_to_clean:
            self._clean_project(project_name, normalized_k8s_images)

    def _normalize_k8s_images(self, k8s_images: Set[str]) -&amp;gt; Dict[str, Set[str]]:
        """
        Normalize Kubernetes image references to match Harbor format

        Args:
            k8s_images: Set of image references from Kubernetes

        Returns:
            Dict mapping repository names to sets of tags/digests
        """
        normalized = {}
        harbor_domain = self.harbor_client.harbor_url.replace('https://', '').replace('http://', '')

        for image in k8s_images:
            # Skip images not from our Harbor
            if not image.startswith(harbor_domain):
                continue

            # Extract repository and tag/digest
            if '@sha256:' in image:
                repo, digest = image.split('@sha256:', 1)
                digest = f"sha256:{digest}"
                repo = repo.replace(f"{harbor_domain}/", '')

                if repo not in normalized:
                    normalized[repo] = set()
                normalized[repo].add(digest)
            else:
                if ':' in image[image.find('/')+1:]:  # Make sure we're not splitting on the port
                    repo, tag = image.rsplit(':', 1)
                    repo = repo.replace(f"{harbor_domain}/", '')

                    if repo not in normalized:
                        normalized[repo] = set()
                    normalized[repo].add(tag)
                else:
                    # Image without tag (implicitly 'latest')
                    repo = image.replace(f"{harbor_domain}/", '')

                    if repo not in normalized:
                        normalized[repo] = set()
                    normalized[repo].add('latest')

        return normalized

    def _clean_project(self, project_name: str, k8s_images: Dict[str, Set[str]]):
        """
        Clean a single Harbor project

        Args:
            project_name: Name of the Harbor project
            k8s_images: Dict mapping repository names to sets of tags/digests
        """
        logger.info(f"Cleaning project: {project_name}")

        # Get repositories in the project
        try:
            repositories = self.harbor_client.get_repositories(project_name)
        except Exception as e:
            logger.error(f"Error getting repositories for project {project_name}: {str(e)}")
            return

        logger.info(f"Found {len(repositories)} repositories in project {project_name}")

        for repo in repositories:
            repo_name = repo['name']  # Should be in format "project/repo"
            self._clean_repository(repo_name, k8s_images)

    def _clean_repository(self, repo_name: str, k8s_images: Dict[str, Set[str]]):
        """
        Clean a single Harbor repository

        Args:
            repo_name: Full name of the repository (project/repo)
            k8s_images: Dict mapping repository names to sets of tags/digests
        """
        logger.info(f"Cleaning repository: {repo_name}")

        # Get tags in the repository
        try:
            tags = self.harbor_client.get_tags(repo_name)
        except Exception as e:
            logger.error(f"Error getting tags for repository {repo_name}: {str(e)}")
            return

        # Sort tags by creation time (newest first)
        tags.sort(key=lambda x: x.get('created', ''), reverse=True)

        # Keep track of how many tags we've kept
        kept_count = 0

        for i, tag in enumerate(tags):
            tag_name = tag['name']
            created_time = tag.get('created')

            # Check if tag is in use by Kubernetes
            in_use = False
            if self.skip_in_use and repo_name in k8s_images:
                if tag_name in k8s_images[repo_name]:
                    in_use = True
                    logger.info(f"Keeping {repo_name}:{tag_name} (in use by Kubernetes)")

            # Check if we should keep this tag based on retention policies
            should_keep = False

            # Check if tag matches the always-keep patterns
            if self.should_keep_tag(tag_name):
                should_keep = True
                logger.info(f"Keeping {repo_name}:{tag_name} (matches always-keep pattern)")

            # Check age-based retention
            if not should_keep and self.keep_days is not None and created_time:
                created_date = datetime.datetime.fromisoformat(created_time.replace('Z', '+00:00'))
                age_days = (datetime.datetime.now(datetime.timezone.utc) - created_date).days
                if age_days &amp;lt; self.keep_days:
                    should_keep = True
                    logger.info(f"Keeping {repo_name}:{tag_name} (age: {age_days} days, keep_days: {self.keep_days})")

            # Check count-based retention
            if not should_keep and self.keep_tags is not None:
                if kept_count &amp;lt; self.keep_tags:
                    should_keep = True
                    logger.info(f"Keeping {repo_name}:{tag_name} (keep_tags: {self.keep_tags})")

            # Delete or keep tag
            if in_use or should_keep:
                kept_count += 1
            else:
                logger.info(f"Deleting {repo_name}:{tag_name}")

                if not self.dry_run:
                    try:
                        self.harbor_client.delete_tag(repo_name, tag_name)
                    except Exception as e:
                        logger.error(f"Error deleting tag {repo_name}:{tag_name}: {str(e)}")


def get_harbor_credentials_from_kube() -&amp;gt; Tuple[Optional[str], Optional[str], Optional[str]]:
    """
    Attempt to extract Harbor credentials from Kubernetes context

    Returns:
        Tuple of (harbor_url, username, password)
    """
    # Try to load current context
    try:
        config.load_kube_config()
        v1 = client.CoreV1Api()

        # First, try to find harbor-auth secret in harbor namespace
        try:
            secret = v1.read_namespaced_secret("harbor-auth", "harbor")
            harbor_url = base64.b64decode(secret.data.get("url", "")).decode("utf-8")
            username = base64.b64decode(secret.data.get("username", "")).decode("utf-8")
            password = base64.b64decode(secret.data.get("password", "")).decode("utf-8")
            return harbor_url, username, password
        except Exception:
            pass

        # Try to find any Harbor credentials in any namespace
        namespaces = v1.list_namespace()
        for ns in namespaces.items:
            ns_name = ns.metadata.name
            try:
                secrets = v1.list_namespaced_secret(ns_name)
                for secret in secrets.items:
                    if "harbor" in secret.metadata.name.lower():
                        data = secret.data
                        if data:
                            # Try to find url, username, password
                            harbor_url = None
                            username = None
                            password = None

                            for key in data:
                                value = base64.b64decode(data[key]).decode("utf-8")
                                key_lower = key.lower()

                                if "url" in key_lower or "host" in key_lower:
                                    harbor_url = value
                                elif "user" in key_lower:
                                    username = value
                                elif "pass" in key_lower:
                                    password = value

                            if harbor_url and username and password:
                                return harbor_url, username, password
            except Exception:
                continue
    except Exception:
        pass

    return None, None, None


def parse_args():
    """Parse command line arguments"""
    parser = argparse.ArgumentParser(
        description="Clean up old/unused images from Harbor registry"
    )

    # Harbor connection options
    harbor_group = parser.add_argument_group("Harbor Connection Options")
    harbor_group.add_argument(
        "--harbor-url", help="Harbor URL (e.g., https://harbor.example.com)"
    )
    harbor_group.add_argument("--harbor-username", help="Harbor username")
    harbor_group.add_argument("--harbor-password", help="Harbor password")
    harbor_group.add_argument(
        "--no-verify-ssl", action="store_true", help="Skip SSL certificate verification"
    )

    # Kubernetes options
    k8s_group = parser.add_argument_group("Kubernetes Options")
    k8s_group.add_argument("--kubeconfig", help="Path to kubeconfig file")
    k8s_group.add_argument("--kube-context", help="Kubernetes context to use")
    k8s_group.add_argument(
        "--no-skip-in-use", action="store_true", 
        help="Don't skip images in use by Kubernetes"
    )

    # Cleanup options
    cleanup_group = parser.add_argument_group("Cleanup Options")
    cleanup_group.add_argument(
        "--dry-run", action="store_true", help="Don't actually delete anything"
    )
    cleanup_group.add_argument(
        "--projects", nargs="*", help="Projects to clean up (if not specified, clean all)"
    )
    cleanup_group.add_argument(
        "--exclude-projects", nargs="*", help="Projects to exclude from cleanup"
    )
    cleanup_group.add_argument(
        "--keep-days", type=int, help="Keep images newer than this many days"
    )
    cleanup_group.add_argument(
        "--keep-tags", type=int, help="Keep this many most recent tags per repository"
    )
    cleanup_group.add_argument(
        "--always-keep-tags", nargs="*", 
        default=["latest", "stable", "master"],
        help="Tag patterns to always keep (supports wildcards, e.g., 'prod-*')"
    )

    # Other options
    parser.add_argument(
        "--config", help="Path to YAML config file (overrides command line options)"
    )
    parser.add_argument(
        "--use-kube-auth", action="store_true",
        help="Extract Harbor credentials from Kubernetes secrets"
    )
    parser.add_argument(
        "--verbose", "-v", action="count", default=0, help="Increase verbosity"
    )

    args = parser.parse_args()

    # Handle verbosity
    if args.verbose == 1:
        logger.setLevel(logging.INFO)
    elif args.verbose &amp;gt;= 2:
        logger.setLevel(logging.DEBUG)

    # Load config file if specified
    if args.config:
        try:
            with open(args.config, 'r') as f:
                config_data = yaml.safe_load(f)

            # Update args with config file values
            for key, value in config_data.items():
                if value is not None:
                    setattr(args, key.replace('-', '_'), value)
        except Exception as e:
            logger.error(f"Error loading config file: {str(e)}")

    # Try to extract Harbor credentials from Kubernetes if requested
    if args.use_kube_auth and not (args.harbor_url and args.harbor_username and args.harbor_password):
        harbor_url, username, password = get_harbor_credentials_from_kube()
        if harbor_url and username and password:
            logger.info("Successfully extracted Harbor credentials from Kubernetes")
            args.harbor_url = args.harbor_url or harbor_url
            args.harbor_username = args.harbor_username or username
            args.harbor_password = args.harbor_password or password
        else:
            logger.warning("Failed to extract Harbor credentials from Kubernetes")

    # Validate required options
    if not args.harbor_url:
        parser.error("Harbor URL is required")
    if not args.harbor_username:
        parser.error("Harbor username is required")
    if not args.harbor_password:
        parser.error("Harbor password is required")

    return args


def main():
    """Main entry point"""
    args = parse_args()

    cleanup = HarborCleanup(
        harbor_url=args.harbor_url,
        harbor_username=args.harbor_username,
        harbor_password=args.harbor_password,
        kubeconfig=args.kubeconfig,
        kube_context=args.kube_context,
        dry_run=args.dry_run,
        projects=args.projects,
        exclude_projects=args.exclude_projects,
        keep_days=args.keep_days,
        keep_tags=args.keep_tags,
        skip_in_use=not args.no_skip_in_use,
        always_keep_tags=args.always_keep_tags,
        verify_ssl=not args.no_verify_ssl,
    )

    try:
        cleanup.run()
        logger.info("Cleanup completed successfully")
    except KeyboardInterrupt:
        logger.info("Cleanup interrupted by user")
        sys.exit(1)
    except Exception as e:
        logger.error(f"Cleanup failed: {str(e)}")
        sys.exit(1)


if __name__ == "__main__":
    main()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;The Solution&lt;/h2&gt;

&lt;p&gt;A Python script that:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Integrates with Harbor's API&lt;/li&gt;
&lt;li&gt;Checks Kubernetes for active deployments&lt;/li&gt;
&lt;li&gt;Applies configurable retention policies&lt;/li&gt;
&lt;li&gt;Provides safety mechanisms like dry-run mode&lt;/li&gt;
&lt;/ol&gt;
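&lt;p&gt;The per-tag decision follows a fixed precedence: a tag in use by Kubernetes or matching a protected pattern is always kept, then the age-based rule and the count-based rule each get a chance to save it. Here is that ordering as a condensed, standalone sketch (simplified from the script's &lt;code&gt;_clean_repository&lt;/code&gt; loop):&lt;/p&gt;

```python
# Condensed sketch of the keep/delete decision: in-use and protected tags
# always win, then the age rule, then the count rule.
def decide(age_days, kept_so_far, in_use, protected, keep_days=30, keep_tags=5):
    """Return True if a tag should be kept under the combined policies."""
    if in_use or protected:
        return True
    # Age-based retention: keep anything younger than keep_days
    if keep_days is not None and age_days is not None and keep_days > age_days:
        return True
    # Count-based retention: keep until keep_tags tags have been retained
    if keep_tags is not None and keep_tags > kept_so_far:
        return True
    return False

print(decide(age_days=10, kept_so_far=7, in_use=False, protected=False))  # True (young enough)
print(decide(age_days=90, kept_so_far=7, in_use=False, protected=False))  # False (old, quota full)
```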

&lt;h2&gt;Key Features&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Sample configuration showing main features
&lt;/span&gt;&lt;span class="n"&gt;cleanup&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;HarborCleanup&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;harbor_url&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://harbor.example.com&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;harbor_username&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;admin&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;harbor_password&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;secret&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;keep_days&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;          &lt;span class="c1"&gt;# Keep images newer than 30 days
&lt;/span&gt;    &lt;span class="n"&gt;keep_tags&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;           &lt;span class="c1"&gt;# Keep 5 most recent tags per repo
&lt;/span&gt;    &lt;span class="n"&gt;skip_in_use&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;      &lt;span class="c1"&gt;# Don't delete images used by Kubernetes
&lt;/span&gt;    &lt;span class="n"&gt;always_keep_tags&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;     &lt;span class="c1"&gt;# Protected tag patterns
&lt;/span&gt;        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;latest&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;stable&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;prod-*&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="n"&gt;dry_run&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;           &lt;span class="c1"&gt;# Safety first!
&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;Safety Mechanisms&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Dry Run Mode&lt;/strong&gt; - Preview deletions without actually removing anything
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./harbor-cleanup.py --dry-run --keep-days 30
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Kubernetes Integration&lt;/strong&gt; - Automatically detects in-use images
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Gets images from:
# - Deployments
# - StatefulSets
# - DaemonSets
# - Pods (for Jobs/CronJobs)
k8s_images = kube_client.get_images_in_use()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Protected Tags&lt;/strong&gt; - Never delete important tags like latest or prod-*&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Dual Retention Policies&lt;/strong&gt; - Combine age-based and count-based rules&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
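&lt;p&gt;The protected-tag check boils down to glob matching. As a standalone sketch, Python's built-in &lt;code&gt;fnmatch&lt;/code&gt; gives equivalent behavior for the simple &lt;code&gt;*&lt;/code&gt; patterns used here (the script converts globs to regexes by hand instead):&lt;/p&gt;

```python
# Standalone sketch of the protected-tag check: fnmatch provides the same
# glob semantics ("prod-*" matches any tag starting with "prod-") as the
# script's manual glob-to-regex conversion.
from fnmatch import fnmatch

ALWAYS_KEEP = ["latest", "stable", "prod-*"]

def is_protected(tag):
    """Return True if the tag matches any always-keep pattern."""
    return any(fnmatch(tag, pattern) for pattern in ALWAYS_KEEP)

print(is_protected("prod-2025-05-26"))  # True
print(is_protected("pr-1234"))          # False
```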

&lt;h2&gt;Getting Started&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Install dependencies:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip3 install pyyaml kubernetes requests
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Run a dry run first:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./harbor-cleanup.py \
  --use-kube-auth \
  --dry-run \
  --keep-days 30 \
  --keep-tags 5 \
  --always-keep-tags latest stable master "prod-*"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;For production run (after verifying dry-run):
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./harbor-cleanup.py \
  --harbor-url https://harbor.example.com \
  --harbor-username $HARBOR_USER \
  --harbor-password $HARBOR_PASS \
  --keep-days 30 \
  --keep-tags 5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Advanced Usage
&lt;/h2&gt;

&lt;h2&gt;
  
  
  Configuration File
&lt;/h2&gt;

&lt;p&gt;Instead of command-line args, use a YAML config:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# config.yaml
harbor-url: https://harbor.example.com
harbor-username: admin
keep-days: 30
keep-tags: 5
exclude-projects:
  - infrastructure
  - legacy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then run with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./harbor-cleanup.py --config config.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
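&lt;h2&gt;
  
  
  Scheduling with a Kubernetes CronJob
&lt;/h2&gt;

&lt;p&gt;To make the cleanup fully hands-off, you can run the script on a schedule inside the cluster itself. The manifest below is only a sketch: the image name, ServiceAccount, and ConfigMap are placeholders you would replace with your own, and credentials should come from the mounted config or a Secret rather than being baked into the image.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: batch/v1
kind: CronJob
metadata:
  name: harbor-cleanup
spec:
  schedule: "0 2 * * 0"          # every Sunday at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: harbor-cleanup   # needs read access to workloads
          restartPolicy: Never
          containers:
          - name: cleanup
            image: registry.example.com/tools/harbor-cleanup:latest  # placeholder
            args: ["--config", "/etc/cleanup/config.yaml", "--use-kube-auth"]
            volumeMounts:
            - name: config
              mountPath: /etc/cleanup
          volumes:
          - name: config
            configMap:
              name: harbor-cleanup-config      # placeholder
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;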



&lt;p&gt;The script has helped us reduce Harbor storage usage by 70% while maintaining all production-critical images. The Kubernetes integration gives us confidence we won't break running applications.&lt;/p&gt;

&lt;p&gt;Would love to hear about your registry cleanup experiences in the comments!&lt;/p&gt;

</description>
      <category>devops</category>
      <category>kubernetes</category>
      <category>harbor</category>
      <category>linux</category>
    </item>
    <item>
      <title>Secure Your Linux Server with Fail2Ban (Step-by-Step Guide)</title>
      <dc:creator>Tanvir Rahman</dc:creator>
      <pubDate>Sun, 25 May 2025 11:09:00 +0000</pubDate>
      <link>https://dev.to/tanvirrahman/secure-your-linux-server-with-fail2ban-step-by-step-guide-3193</link>
      <guid>https://dev.to/tanvirrahman/secure-your-linux-server-with-fail2ban-step-by-step-guide-3193</guid>
      <description>&lt;p&gt;Brute force attacks on SSH, mail and web services are a constant threat to any internet-facing server. Fail2Ban is a powerful, lightweight tool that helps mitigate these attacks by automatically banning IPs that exhibit malicious behavior.&lt;/p&gt;

&lt;p&gt;In this guide, we’ll walk through the &lt;strong&gt;complete installation and configuration of Fail2Ban&lt;/strong&gt; on a Linux server to secure SSH and Postfix (mail server). We’ll also set up email alerts and configure persistent bans.&lt;/p&gt;




&lt;h2&gt;
  
  
  🧰 Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;A Linux server (Debian/Ubuntu or CentOS/RHEL)&lt;/li&gt;
&lt;li&gt;Root or sudo access&lt;/li&gt;
&lt;li&gt;SSH access to the server&lt;/li&gt;
&lt;li&gt;(Optional) A mail server (Postfix) if you want to monitor email service&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  🛠️ 1. Update System Packages
&lt;/h2&gt;

&lt;p&gt;Before installing anything, make sure your package index is up to date.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;sudo &lt;/span&gt;apt upgrade &lt;span class="nt"&gt;-y&lt;/span&gt;  &lt;span class="c"&gt;# For Debian/Ubuntu&lt;/span&gt;
&lt;span class="c"&gt;# or&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;yum update &lt;span class="nt"&gt;-y&lt;/span&gt;                      &lt;span class="c"&gt;# For CentOS/RHEL&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  📦 2. Install Fail2Ban and Dependencies
&lt;/h2&gt;

&lt;p&gt;Install Fail2Ban using your system’s package manager:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; fail2ban            &lt;span class="c"&gt;# Debian/Ubuntu&lt;/span&gt;
&lt;span class="c"&gt;# or&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;yum &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; fail2ban            &lt;span class="c"&gt;# CentOS/RHEL&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  📁 3. Create Custom Jail Configuration
&lt;/h2&gt;

&lt;p&gt;Fail2Ban uses &lt;em&gt;jails&lt;/em&gt; to define which services to monitor. Let’s back up the default config and create a custom one.&lt;/p&gt;

&lt;h3&gt;
  
  
  Backup the Default Config
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo cp&lt;/span&gt; /etc/fail2ban/jail.conf /etc/fail2ban/jail.local
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Modify Basic Settings
&lt;/h3&gt;

&lt;p&gt;Set a 24-hour ban time. The default 10-minute find time and maximum of 5 retries already match what we want, so only &lt;code&gt;bantime&lt;/code&gt; needs to change:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo sed&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="s1"&gt;'s/bantime  = 10m/bantime  = 24h/;s/findtime  = 10m/findtime  = 600/;s/maxretry = 5/maxretry = 5/'&lt;/span&gt; /etc/fail2ban/jail.local
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  📄 4. Configure Jails for SSH and Postfix
&lt;/h2&gt;

&lt;p&gt;Now, let’s create a separate jail configuration file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo tee&lt;/span&gt; /etc/fail2ban/jail.d/custom.conf &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;'
[sshd]
enabled = true
port = 22
filter = sshd
logpath = /var/log/auth.log
maxretry = 5

[postfix]
enabled = true
port = 25
filter = postfix
logpath = /var/log/mail.log
maxretry = 5
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;On CentOS/RHEL, point &lt;code&gt;logpath&lt;/code&gt; at &lt;code&gt;/var/log/secure&lt;/code&gt; for the sshd jail and &lt;code&gt;/var/log/maillog&lt;/code&gt; for Postfix instead.&lt;/p&gt;






&lt;h2&gt;
  
  
  📧 5. Configure Email Notifications
&lt;/h2&gt;

&lt;p&gt;Ensure &lt;code&gt;mailutils&lt;/code&gt; is installed and the &lt;code&gt;mail&lt;/code&gt; command is available:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;which mail
&lt;span class="c"&gt;# Should return: /usr/bin/mail&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
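
&lt;p&gt;If the command is missing, install it first (package names differ slightly by distro):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt install -y mailutils     # Debian/Ubuntu
# or
sudo yum install -y mailx         # CentOS/RHEL
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;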



&lt;p&gt;Then, add your email settings in &lt;code&gt;jail.local&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo tee&lt;/span&gt; &lt;span class="nt"&gt;-a&lt;/span&gt; /etc/fail2ban/jail.local &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;'

[DEFAULT]
banaction = iptables-multiport
banaction_allports = iptables-allports
loglevel = DEBUG
ignoreself = false
action = %(action_mwl)s
sender = fail2ban@yourdomain.com
destemail = admin@yourdomain.com
mta = sendmail
&lt;/span&gt;&lt;span class="no"&gt;
EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace &lt;code&gt;yourdomain.com&lt;/code&gt; and the email addresses with your actual values.&lt;/p&gt;




&lt;h2&gt;
  
  
  📊 6. Enable and Start Fail2Ban
&lt;/h2&gt;

&lt;p&gt;Enable the service to start on boot and start it now:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl &lt;span class="nb"&gt;enable &lt;/span&gt;fail2ban
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl start fail2ban
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl status fail2ban
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h3&gt;
  
  
  Restart Fail2Ban to Apply All Changes
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl restart fail2ban
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  🧪 7. Testing &amp;amp; Monitoring
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Simulate a Ban
&lt;/h3&gt;

&lt;p&gt;Try multiple failed SSH login attempts from another IP.&lt;/p&gt;
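
&lt;p&gt;For example, from a second machine (using a source IP you can safely unban afterwards), deliberately fail a few logins; &lt;code&gt;your-server-ip&lt;/code&gt; is a placeholder:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Run this 5+ times with a wrong password; the source IP should get banned
ssh wronguser@your-server-ip
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;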

&lt;h3&gt;
  
  
  Check Logs
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo tail&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; /var/log/fail2ban.log
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  View Real-Time Jail Status
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;fail2ban-client status sshd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Unban an IP
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;fail2ban-client &lt;span class="nb"&gt;set &lt;/span&gt;sshd unbanip &amp;lt;IP_ADDRESS&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  🔧 Optional: Adjust UFW if Installed
&lt;/h2&gt;

&lt;p&gt;If you use &lt;code&gt;ufw&lt;/code&gt;, remove conflicting rules and allow SSH manually:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;ufw delete limit 22/tcp
&lt;span class="nb"&gt;sudo &lt;/span&gt;ufw delete limit 22/tcp &lt;span class="s2"&gt;"v6"&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;ufw allow 22/tcp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  ✅ Summary
&lt;/h2&gt;

&lt;p&gt;Here’s what we’ve accomplished:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;✅ Updated system packages&lt;/li&gt;
&lt;li&gt;✅ Installed and configured Fail2Ban&lt;/li&gt;
&lt;li&gt;✅ Created custom jails for SSH and Postfix&lt;/li&gt;
&lt;li&gt;✅ Set ban time (24h), find time (10m), max retry (5)&lt;/li&gt;
&lt;li&gt;✅ Configured email alerts&lt;/li&gt;
&lt;li&gt;✅ Enabled persistent bans and logging&lt;/li&gt;
&lt;li&gt;✅ Tested the setup&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Your server is now better protected against brute force attacks. 🎉&lt;/p&gt;




&lt;h2&gt;
  
  
  📌 Next Steps
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Add additional jails (e.g., Nginx, Apache) if you run web servers&lt;/li&gt;
&lt;li&gt;Whitelist internal or VPN IPs (/etc/fail2ban/jail.d/whitelist.conf)&lt;/li&gt;
&lt;li&gt;Monitor logs regularly for suspicious activity&lt;/li&gt;
&lt;li&gt;Fine-tune thresholds if you experience false positives&lt;/li&gt;
&lt;/ul&gt;
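
&lt;p&gt;A whitelist file can be as simple as the following sketch; adjust the networks to your own ranges:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# /etc/fail2ban/jail.d/whitelist.conf
[DEFAULT]
ignoreip = 127.0.0.1/8 10.0.0.0/8 192.168.1.0/24
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;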

&lt;p&gt;Stay safe and secure! 🔐&lt;/p&gt;

</description>
      <category>linux</category>
      <category>devops</category>
      <category>fail2ban</category>
      <category>security</category>
    </item>
    <item>
      <title>📬 Setting Up Postfix with SendGrid Relay on Ubuntu: A Step-by-Step Guide</title>
      <dc:creator>Tanvir Rahman</dc:creator>
      <pubDate>Mon, 19 May 2025 10:14:20 +0000</pubDate>
      <link>https://dev.to/tanvirrahman/setting-up-postfix-with-sendgrid-relay-on-ubuntu-a-step-by-step-guide-29c4</link>
      <guid>https://dev.to/tanvirrahman/setting-up-postfix-with-sendgrid-relay-on-ubuntu-a-step-by-step-guide-29c4</guid>
      <description>&lt;p&gt;If you’ve ever tried to send email directly from a Linux server, you know it can be a hassle—especially when dealing with deliverability and authentication. Fortunately, with Postfix and a reliable SMTP provider like SendGrid, you can offload the heavy lifting and ensure your emails land safely in inboxes.&lt;/p&gt;

&lt;p&gt;In this guide, I’ll walk you through how to configure Postfix to relay mail through SendGrid, complete with authentication, sender rewriting, and testing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🛠️ Prerequisites&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A Linux server (tested on Ubuntu)&lt;/li&gt;
&lt;li&gt;A SendGrid account and SMTP credentials&lt;/li&gt;
&lt;li&gt;Root or sudo access&lt;/li&gt;
&lt;li&gt;Basic comfort with the command line&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;✅ Step 1: Install Postfix&lt;/strong&gt;&lt;br&gt;
Let’s start by installing Postfix non-interactively to avoid configuration prompts:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;DEBIAN_FRONTEND=noninteractive sudo apt-get install -y postfix
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;🔐 Step 2: Set Up SASL Authentication for SendGrid&lt;/strong&gt;&lt;br&gt;
To authenticate with SendGrid’s SMTP server, we’ll create a file containing your credentials:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo "[smtp.sendgrid.net]:587 username:password" | sudo tee /etc/postfix/sasl_passwd
sudo chmod 600 /etc/postfix/sasl_passwd
sudo postmap /etc/postfix/sasl_passwd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace username:password with your actual SendGrid credentials (your username is usually "apikey", and the password is your generated API key).&lt;/p&gt;
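
&lt;p&gt;You can verify that the hashed map resolves correctly with &lt;code&gt;postmap -q&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo postmap -q "[smtp.sendgrid.net]:587" /etc/postfix/sasl_passwd
# Should print your username:password pair
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;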

&lt;p&gt;&lt;strong&gt;📨 Step 3: Configure Sender Canonical Mapping&lt;/strong&gt;&lt;br&gt;
Sometimes you want all outgoing emails to appear as if they came from a specific address. This is where sender_canonical comes in.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo "/.+@mail\.example\.com/ fromAddress@example.com
/^root@.*/ fromAddress@example.com
/@.*/ fromAddress@example.com" | sudo tee /etc/postfix/sender_canonical
sudo postmap /etc/postfix/sender_canonical
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These regex patterns help normalize the sender addresses. For instance, even if an internal process sends an email as &lt;a href="mailto:root@mail.example.com"&gt;root@mail.example.com&lt;/a&gt;, it will appear to come from &lt;a href="mailto:fromAddress@example.com"&gt;fromAddress@example.com&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;✉️ Step 4: (Optional) Create a Generic Mapping File&lt;/strong&gt;&lt;br&gt;
If you’re rewriting sender addresses more broadly, this is useful. Otherwise, you can skip it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo "username@example.com relayuser@sendgrid.net" | sudo tee /etc/postfix/generic
sudo postmap /etc/postfix/generic
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Make sure this matches your intended email addresses.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;⚙️ Step 5: Configure Postfix (main.cf)&lt;/strong&gt;&lt;br&gt;
Now for the core config. We’ll overwrite the main.cf file to define everything, including SendGrid relay, network settings, and sender rewriting.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo tee /etc/postfix/main.cf &amp;gt; /dev/null &amp;lt;&amp;lt; 'EOL'
# Basic Settings
compatibility_level = 2
queue_directory = /var/spool/postfix
command_directory = /usr/sbin
daemon_directory = /usr/lib/postfix/sbin
data_directory = /var/lib/postfix
mail_owner = postfix

# Network Settings
myhostname = mail.example.com
mydomain = example.com
myorigin = example.com
inet_interfaces = all
inet_protocols = ipv4
mydestination = $myhostname, localhost.$mydomain, localhost

# SendGrid Relay Configuration
relayhost = [smtp.sendgrid.net]:587
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
# Require STARTTLS so the API key is never sent in plain text
smtp_tls_security_level = encrypt

# Additional Settings
smtpd_banner = $myhostname ESMTP
biff = no
append_dot_mydomain = no
readme_directory = no

# Sender Settings
smtp_generic_maps = hash:/etc/postfix/generic

# SMTP Client Restrictions

# Message Size Limits
message_size_limit = 10240000
mailbox_size_limit = 0

# Local Recipient Settings
alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases
local_recipient_maps = proxy:unix:passwd.byname $alias_maps
local_transport = local:$myhostname

# Network Security
mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
sender_canonical_maps = regexp:/etc/postfix/sender_canonical
EOL
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Adjust myhostname, mydomain, and &lt;a href="mailto:fromAddress@example.com"&gt;fromAddress@example.com&lt;/a&gt; to match your real domain and sending address.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;👥 Step 6: Update Aliases and Restart Postfix&lt;/strong&gt;&lt;br&gt;
Make sure aliases are recognized and apply the config changes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo newaliases &amp;amp;&amp;amp; sudo systemctl restart postfix &amp;amp;&amp;amp; sudo systemctl status postfix
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see Active: active (running) in the output.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🧪 Step 7: Send a Test Email&lt;/strong&gt;&lt;br&gt;
Let’s test if your relay setup is working:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo "This is a test email from Postfix with SendGrid relay" | mail -s "Test Email from Postfix" -r fromAddress@example.com toAddress@example.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Make sure mailutils is installed if this fails:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt-get install -y mailutils
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;📄 Step 8: Monitor Logs&lt;/strong&gt;&lt;br&gt;
To debug or confirm delivery, keep an eye on the logs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;tail -f /var/log/mail.log
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This log is your go-to source for tracking down any delivery issues.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;📂 Bonus: View Postfix Config and Files&lt;/strong&gt;&lt;br&gt;
Here’s a snapshot of how your /etc/postfix/ directory should look after setup:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ls -la /etc/postfix/ &amp;amp;&amp;amp; echo -e "\nMain config file contents:" &amp;amp;&amp;amp; cat /etc/postfix/main.cf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see key files like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;main.cf – your main configuration&lt;/li&gt;
&lt;li&gt;sasl_passwd &amp;amp; sasl_passwd.db – credentials for SendGrid&lt;/li&gt;
&lt;li&gt;sender_canonical &amp;amp; sender_canonical.db – for rewriting sender addresses&lt;/li&gt;
&lt;li&gt;generic &amp;amp; generic.db – for optional address rewriting&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;✅ Summary&lt;/strong&gt;&lt;br&gt;
You’ve now got a reliable, authenticated Postfix setup using SendGrid’s SMTP relay. Here's what you've accomplished:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Installed and configured Postfix&lt;/li&gt;
&lt;li&gt;Authenticated with SendGrid securely&lt;/li&gt;
&lt;li&gt;Rewritten sender addresses as needed&lt;/li&gt;
&lt;li&gt;Sent and verified test emails&lt;/li&gt;
&lt;li&gt;Monitored logs and verified configuration&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This setup is great for sending system alerts, contact form emails, or even automating reports via cron.&lt;/p&gt;

</description>
      <category>linux</category>
      <category>postfix</category>
      <category>sendgrid</category>
      <category>mailserver</category>
    </item>
    <item>
      <title>🚨 Auto-Reboot Your Server on High CPU / Memory Load (With Safety Checks)</title>
      <dc:creator>Tanvir Rahman</dc:creator>
      <pubDate>Thu, 15 May 2025 06:39:33 +0000</pubDate>
      <link>https://dev.to/tanvirrahman/auto-reboot-your-server-on-high-cpu-load-with-safety-checks-16od</link>
      <guid>https://dev.to/tanvirrahman/auto-reboot-your-server-on-high-cpu-load-with-safety-checks-16od</guid>
      <description>&lt;p&gt;Sometimes a Linux server can get overwhelmed by sustained high CPU / Memory load — due to runaway processes, DDoS attacks or rogue scripts. Manually catching and fixing it in real-time isn't always possible. This guide will show you how to automatically reboot your server when CPU load is critically high for multiple minutes — but safely.&lt;/p&gt;

&lt;p&gt;We’ll create a lightweight bash script, track load over time, and schedule it with cron to run every minute.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;📜 Step 1: Create the Auto-Reboot Script&lt;/strong&gt;&lt;br&gt;
Let’s start by writing the script that checks system load and initiates a reboot after 3 consecutive high-load readings.&lt;/p&gt;

&lt;p&gt;✏️ Create or Edit the Script&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nano /usr/local/bin/reboot-on-high-load.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Paste this logic into the file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash

# Configuration
CPU_THRESHOLD=0.8                # 80% load per CPU core
MEM_THRESHOLD=90                 # 90% memory usage threshold
MAX_RETRIES=3                    # Reboot if high load/memory persists N times
CHECK_FILE="/tmp/highload.counter"
LOG_FILE="/var/log/reboot-load.log"
MAX_LOG_SIZE=$((10 * 1024 * 1024))  # 10MB

# Auto-truncate log if too large
if [ -f "$LOG_FILE" ] &amp;amp;&amp;amp; [ "$(stat -c%s "$LOG_FILE")" -ge "$MAX_LOG_SIZE" ]; then
    echo "[!] Log file exceeded 10MB. Truncating..." &amp;gt;&amp;gt; "$LOG_FILE"
    truncate -s 0 "$LOG_FILE"
fi

# Detect CPU cores and load threshold
CPU_CORES=$(nproc)
LOAD_THRESHOLD=$(echo "$CPU_CORES * $CPU_THRESHOLD" | bc)

# CPU Load Check
LOAD_AVG=$(awk '{print $1}' /proc/loadavg)
LOAD_OK=$(echo "$LOAD_AVG &amp;lt; $LOAD_THRESHOLD" | bc)

# Memory Check
MEM_USED_PERCENT=$(free | awk '/Mem:/ { printf("%.0f", $3/$2 * 100) }')
MEM_OK=$( [ "$MEM_USED_PERCENT" -lt "$MEM_THRESHOLD" ] &amp;amp;&amp;amp; echo 1 || echo 0 )

# Initialize counter file if needed
if [ ! -f "$CHECK_FILE" ]; then
    echo 0 &amp;gt; "$CHECK_FILE"
fi

COUNTER=$(cat "$CHECK_FILE")

# Evaluate system status
if [ "$LOAD_OK" -eq 0 ] || [ "$MEM_OK" -eq 0 ]; then
    echo "[!] High resource usage detected at $(date):" &amp;gt;&amp;gt; "$LOG_FILE"
    [ "$LOAD_OK" -eq 0 ] &amp;amp;&amp;amp; echo "    - CPU Load: $LOAD_AVG / Threshold: $LOAD_THRESHOLD" &amp;gt;&amp;gt; "$LOG_FILE"
    [ "$MEM_OK" -eq 0 ] &amp;amp;&amp;amp; echo "    - Memory Usage: $MEM_USED_PERCENT% / Threshold: $MEM_THRESHOLD%" &amp;gt;&amp;gt; "$LOG_FILE"
    COUNTER=$((COUNTER + 1))
    echo "$COUNTER" &amp;gt; "$CHECK_FILE"
else
    if [ "$COUNTER" -ne 0 ]; then
        echo "[✓] Resources back to normal at $(date): Load = $LOAD_AVG, Mem = $MEM_USED_PERCENT%" &amp;gt;&amp;gt; "$LOG_FILE"
    fi
    echo 0 &amp;gt; "$CHECK_FILE"
fi

# Reboot if over threshold for too long
if [ "$COUNTER" -ge "$MAX_RETRIES" ]; then
    echo "[!!!] High resource usage sustained for $MAX_RETRIES checks. Rebooting at $(date)..." &amp;gt;&amp;gt; "$LOG_FILE"
    rm -f "$CHECK_FILE"
    /sbin/shutdown -r now
fi

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
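
&lt;p&gt;Note that the script relies on &lt;code&gt;bc&lt;/code&gt; for the floating-point load comparison, which is not present on every minimal install:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt install -y bc   # Debian/Ubuntu
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;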



&lt;p&gt;&lt;strong&gt;🔓 Step 2: Make the Script Executable&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo chmod +x /usr/local/bin/reboot-on-high-load.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;⏰ Step 3: Schedule It to Run Every Minute&lt;/strong&gt;&lt;br&gt;
Open root crontab:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo crontab -e
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add this line to the bottom:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;* * * * * /usr/local/bin/reboot-on-high-load.sh &amp;gt;&amp;gt; /var/log/reboot-load.log 2&amp;gt;&amp;amp;1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This schedules the script to run every minute and logs its output to /var/log/reboot-load.log.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🛡️ Why This Is Safe&lt;/strong&gt;&lt;br&gt;
✅ No reboots on single spikes — It only reboots if load stays high for 3 checks.&lt;/p&gt;

&lt;p&gt;✅ Self-resetting — If load normalizes, the counter resets.&lt;/p&gt;

&lt;p&gt;✅ Persistent state tracking — Uses /tmp/highload.counter.&lt;/p&gt;

&lt;p&gt;✅ Simple logging — Outputs to /var/log/reboot-load.log.&lt;/p&gt;

&lt;p&gt;✅ Cron-scheduled — Lightweight, runs every 60 seconds.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🧪 Optional: Test the Script by Simulating High Load&lt;/strong&gt;&lt;br&gt;
🛠️ 1. Install a CPU Stress Tool&lt;br&gt;
On Ubuntu/Debian:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt update &amp;amp;&amp;amp; sudo apt install -y stress
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;On Alpine Linux:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apk add stress
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If stress is not available, use yes as a simple CPU loader (see below).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🚀 2. Simulate High Load for Over 3 Minutes&lt;/strong&gt;&lt;br&gt;
Option A: With stress (preferred)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;stress --cpu $(nproc) --timeout 200
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;--cpu $(nproc) starts 1 thread per core.&lt;/p&gt;

&lt;p&gt;--timeout 200 runs for 200 seconds (~3.3 minutes).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Option B: With yes Command&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;for i in $(seq 1 $(nproc)); do yes &amp;gt; /dev/null &amp;amp; done
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let it run for at least 3 minutes, then stop it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;killall yes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;📝 3. Monitor the Logs&lt;/strong&gt;&lt;br&gt;
In a second terminal, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;tail -f /var/log/reboot-load.log
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You’ll see output like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;yaml
Copy
Edit
[!] Load is high: 7.23 / Threshold: 6.40
[!] Load is high: 7.45 / Threshold: 6.40
[!!!] High load sustained for 3 checks. Rebooting...
Then the system will automatically reboot.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;✅ Recap&lt;/strong&gt;&lt;br&gt;
By following this guide, you’ve set up a safe and efficient way to auto-reboot your Linux server during persistent high-load events:&lt;/p&gt;

&lt;p&gt;✅ Script with thresholds and state tracking&lt;/p&gt;

&lt;p&gt;✅ Cronjob for regular checks&lt;/p&gt;

&lt;p&gt;✅ No false positives from single spikes&lt;/p&gt;

&lt;p&gt;✅ Full log output for auditing&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Comprehensive Guide to Optimizing Nginx Configuration</title>
      <dc:creator>Tanvir Rahman</dc:creator>
      <pubDate>Wed, 21 Feb 2024 17:35:31 +0000</pubDate>
      <link>https://dev.to/tanvirrahman/comprehensive-guide-to-optimizing-nginx-configuration-3hb9</link>
      <guid>https://dev.to/tanvirrahman/comprehensive-guide-to-optimizing-nginx-configuration-3hb9</guid>
      <description>&lt;p&gt;Nginx stands out as a high-performance web server and reverse proxy known for its scalability and extensive configuration options. In this comprehensive guide, we'll thoroughly explore each directive within a sample Nginx configuration file, dissecting its functionality and discussing best practices for optimization, all while preserving the integrity of the original configuration.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding the Nginx Configuration File
&lt;/h2&gt;

&lt;p&gt;Let's commence our exploration by carefully examining the provided Nginx configuration file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight nginx"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Setting the number of worker processes to auto to dynamically adjust based on available system resources&lt;/span&gt;
&lt;span class="k"&gt;worker_processes&lt;/span&gt; &lt;span class="s"&gt;auto&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;# Configuring event handling, including setting maximum worker connections&lt;/span&gt;
&lt;span class="k"&gt;events&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;# Checking the system limit for file descriptors using 'ulimit -n'&lt;/span&gt;
    &lt;span class="kn"&gt;worker_connections&lt;/span&gt; &lt;span class="mi"&gt;1024&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;# HTTP block containing global settings&lt;/span&gt;
&lt;span class="k"&gt;http&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;# Including MIME types configuration file for proper content type detection&lt;/span&gt;
    &lt;span class="kn"&gt;include&lt;/span&gt; &lt;span class="s"&gt;mime.types&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="c1"&gt;# Configuring buffer sizes for handling client requests&lt;/span&gt;
    &lt;span class="kn"&gt;client_body_buffer_size&lt;/span&gt; &lt;span class="mi"&gt;10K&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kn"&gt;client_max_body_size&lt;/span&gt; &lt;span class="mi"&gt;8m&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kn"&gt;client_header_buffer_size&lt;/span&gt; &lt;span class="mi"&gt;1k&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="c1"&gt;# Configuring timeouts for handling client requests&lt;/span&gt;
    &lt;span class="kn"&gt;client_body_timeout&lt;/span&gt; &lt;span class="mi"&gt;12&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kn"&gt;client_header_timeout&lt;/span&gt; &lt;span class="mi"&gt;12&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kn"&gt;keepalive_timeout&lt;/span&gt; &lt;span class="mi"&gt;15&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kn"&gt;send_timeout&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="c1"&gt;# Enabling sendfile for efficient file transmission&lt;/span&gt;
    &lt;span class="kn"&gt;sendfile&lt;/span&gt; &lt;span class="no"&gt;on&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="c1"&gt;# Optimizing sendfile packets for better performance&lt;/span&gt;
    &lt;span class="kn"&gt;tcp_nopush&lt;/span&gt; &lt;span class="no"&gt;on&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="c1"&gt;# Server block defining HTTP server settings&lt;/span&gt;
    &lt;span class="kn"&gt;server&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kn"&gt;listen&lt;/span&gt; &lt;span class="mi"&gt;80&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;server_name&lt;/span&gt; &lt;span class="mf"&gt;172.16&lt;/span&gt;&lt;span class="s"&gt;.133.129&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;root&lt;/span&gt; &lt;span class="n"&gt;/home/vagrant/sites/blog&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;try_files&lt;/span&gt; &lt;span class="nv"&gt;$uri&lt;/span&gt; &lt;span class="n"&gt;/not-found&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

        &lt;span class="c1"&gt;# Location block for handling root path&lt;/span&gt;
        &lt;span class="kn"&gt;location&lt;/span&gt; &lt;span class="n"&gt;/&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="kn"&gt;index&lt;/span&gt; &lt;span class="s"&gt;index.html&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;

        &lt;span class="c1"&gt;# Location block for handling /welcome path&lt;/span&gt;
        &lt;span class="kn"&gt;location&lt;/span&gt; &lt;span class="n"&gt;/welcome&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="kn"&gt;return&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt; &lt;span class="s"&gt;'Hello&lt;/span&gt; &lt;span class="s"&gt;from&lt;/span&gt; &lt;span class="s"&gt;welcome&lt;/span&gt; &lt;span class="s"&gt;page'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;


        &lt;span class="c1"&gt;# Exact match - will only match greet&lt;/span&gt;
        &lt;span class="c1"&gt;# location = /welcome {&lt;/span&gt;
        &lt;span class="c1"&gt;#     return 200 'Hello from welcome page';&lt;/span&gt;
        &lt;span class="c1"&gt;# }&lt;/span&gt;

        &lt;span class="c1"&gt;# Regex match&lt;/span&gt;
        &lt;span class="c1"&gt;# location ~ /welcome[0-9] {&lt;/span&gt;
        &lt;span class="c1"&gt;#     return 200 'Hello from welcome page';&lt;/span&gt;
        &lt;span class="c1"&gt;# }&lt;/span&gt;

        &lt;span class="c1"&gt;# Preferred&lt;/span&gt;
        &lt;span class="c1"&gt;# location ^~ /welcome\d {&lt;/span&gt;
        &lt;span class="c1"&gt;#     return 200 'Hello from welcome page';&lt;/span&gt;
        &lt;span class="c1"&gt;# }&lt;/span&gt;

        &lt;span class="c1"&gt;# Regex match and case insensitive&lt;/span&gt;
        &lt;span class="c1"&gt;# location ~* /welcome[0-9] {&lt;/span&gt;
        &lt;span class="c1"&gt;#     return 200 'Hello from welcome page';&lt;/span&gt;
        &lt;span class="c1"&gt;# }&lt;/span&gt;

        &lt;span class="c1"&gt;# Location block for handling /arguments path&lt;/span&gt;
        &lt;span class="kn"&gt;location&lt;/span&gt; &lt;span class="n"&gt;/arguments&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="kn"&gt;return&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt; &lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$arg_name&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;

        &lt;span class="c1"&gt;# Location block for handling /get-weekend path&lt;/span&gt;
        &lt;span class="kn"&gt;location&lt;/span&gt; &lt;span class="n"&gt;/get-weekend&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="kn"&gt;return&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt; &lt;span class="s"&gt;'&lt;/span&gt;&lt;span class="nv"&gt;$weekend&lt;/span&gt; &lt;span class="nv"&gt;$date_local&lt;/span&gt;&lt;span class="s"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;

        &lt;span class="c1"&gt;# Capture part of the request and rewrite it&lt;/span&gt;
        &lt;span class="kn"&gt;location&lt;/span&gt; &lt;span class="p"&gt;~&lt;/span&gt; &lt;span class="sr"&gt;^/week/(\w+)$&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="kn"&gt;rewrite&lt;/span&gt; &lt;span class="s"&gt;^/week/(&lt;/span&gt;&lt;span class="err"&gt;\&lt;/span&gt;&lt;span class="s"&gt;w+)&lt;/span&gt;$ &lt;span class="n"&gt;/weekend/&lt;/span&gt;&lt;span class="nv"&gt;$1&lt;/span&gt; &lt;span class="s"&gt;last&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;

        &lt;span class="c1"&gt;# Match the rewritten request and return the captured value&lt;/span&gt;
        &lt;span class="kn"&gt;location&lt;/span&gt; &lt;span class="p"&gt;~&lt;/span&gt; &lt;span class="sr"&gt;^/weekend/(?&amp;lt;day&amp;gt;\w+)$&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="kn"&gt;return&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt; &lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$day&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;

        &lt;span class="c1"&gt;# Redirect&lt;/span&gt;
        &lt;span class="kn"&gt;location&lt;/span&gt; &lt;span class="n"&gt;/logo&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="kn"&gt;return&lt;/span&gt; &lt;span class="mi"&gt;307&lt;/span&gt; &lt;span class="n"&gt;/assets/brand/bootstrap-logo.svg&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;

        &lt;span class="c1"&gt;# Location block for handling /secret path&lt;/span&gt;
        &lt;span class="kn"&gt;location&lt;/span&gt; &lt;span class="n"&gt;/secret&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="kn"&gt;access_log&lt;/span&gt; &lt;span class="n"&gt;/var/log/nginx/secret.access.log&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
            &lt;span class="kn"&gt;access_log&lt;/span&gt; &lt;span class="n"&gt;/var/log/nginx/access.log&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
            &lt;span class="kn"&gt;return&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt; &lt;span class="s"&gt;"Welcome&lt;/span&gt; &lt;span class="s"&gt;to&lt;/span&gt; &lt;span class="s"&gt;secret&lt;/span&gt; &lt;span class="s"&gt;area"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;

        &lt;span class="c1"&gt;# Location block for handling /most-secret path&lt;/span&gt;
        &lt;span class="kn"&gt;location&lt;/span&gt; &lt;span class="n"&gt;/most-secret&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="kn"&gt;access_log&lt;/span&gt; &lt;span class="no"&gt;off&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
            &lt;span class="kn"&gt;return&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt; &lt;span class="s"&gt;"Welcome&lt;/span&gt; &lt;span class="s"&gt;to&lt;/span&gt; &lt;span class="s"&gt;most&lt;/span&gt; &lt;span class="s"&gt;secret&lt;/span&gt; &lt;span class="s"&gt;area"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;

        &lt;span class="c1"&gt;# Location block for handling non-existent resources&lt;/span&gt;
        &lt;span class="kn"&gt;location&lt;/span&gt; &lt;span class="n"&gt;/not-found&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="kn"&gt;return&lt;/span&gt; &lt;span class="mi"&gt;404&lt;/span&gt; &lt;span class="s"&gt;'Page&lt;/span&gt; &lt;span class="s"&gt;Not&lt;/span&gt; &lt;span class="s"&gt;Found'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;

        &lt;span class="c1"&gt;# Additional location blocks...&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Detailed Explanation of Directives
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Worker Processes and Events
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;worker_processes&lt;/code&gt; directive sets the number of worker processes Nginx should use to handle incoming connections. When set to &lt;code&gt;auto&lt;/code&gt;, Nginx dynamically adjusts the number of worker processes based on available system resources. Inside the &lt;code&gt;events&lt;/code&gt; block, &lt;code&gt;worker_connections&lt;/code&gt; determines the maximum number of simultaneous connections each worker process can handle. It's crucial to adjust this value according to your server's capacity and expected traffic levels.&lt;/p&gt;
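<p>For reference, a minimal top-of-file sketch of these two directives (values illustrative, not a recommendation):</p>

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# One worker per CPU core, chosen automatically
worker_processes auto;

events {
    # Maximum simultaneous connections per worker; tune to your capacity
    worker_connections 1024;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;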

&lt;h3&gt;
  
  
  HTTP Settings
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;http&lt;/code&gt; block encapsulates global settings related to HTTP functionality. The &lt;code&gt;include mime.types&lt;/code&gt; directive is essential for proper content type detection, as it includes a file (&lt;code&gt;mime.types&lt;/code&gt;) mapping file extensions to MIME types. This ensures accurate interpretation of file types when serving content.&lt;/p&gt;
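<p>For illustration, &lt;code&gt;mime.types&lt;/code&gt; is simply a &lt;code&gt;types&lt;/code&gt; block mapping file extensions to MIME types; a trimmed excerpt looks like this:</p>

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;types {
    text/html                html htm shtml;
    text/css                 css;
    image/svg+xml            svg svgz;
    application/javascript   js;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;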

&lt;h3&gt;
  
  
  Buffer Settings
&lt;/h3&gt;

&lt;p&gt;Nginx employs buffers to efficiently manage client requests and responses. The &lt;code&gt;client_body_buffer_size&lt;/code&gt; and &lt;code&gt;client_header_buffer_size&lt;/code&gt; directives set the buffer sizes for reading client request bodies and headers, respectively. Proper adjustment of these values is crucial, especially for handling large requests. Additionally, &lt;code&gt;client_max_body_size&lt;/code&gt; specifies the maximum size of client request bodies, helping prevent denial-of-service attacks and ensuring server stability.&lt;/p&gt;
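<p>The effect of &lt;code&gt;client_max_body_size&lt;/code&gt; is easy to observe. Assuming the server configured above is reachable at its address, a request body larger than 8m is rejected with a 413 before it reaches any handler:</p>

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create a 10 MB file and POST it; expect a 413 response
head -c 10M /dev/urandom &gt; big.bin
curl -i -X POST --data-binary @big.bin http://172.16.133.129/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;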

&lt;h3&gt;
  
  
  Timeout Settings
&lt;/h3&gt;

&lt;p&gt;Timeouts play a vital role in managing client connections and preventing resource exhaustion. The &lt;code&gt;client_body_timeout&lt;/code&gt; and &lt;code&gt;client_header_timeout&lt;/code&gt; directives define the maximum time allowed for reading client request bodies and headers, respectively. Fine-tuning these values is essential for optimizing server performance and mitigating potential security risks. Similarly, &lt;code&gt;keepalive_timeout&lt;/code&gt; determines how long Nginx should keep a connection open for subsequent requests from the same client, reducing latency and improving user experience. The &lt;code&gt;send_timeout&lt;/code&gt; directive sets a timeout for transmitting the response to the client; it applies between two successive write operations rather than to the whole response, and lets Nginx close connections whose clients have stopped reading.&lt;/p&gt;

&lt;h3&gt;
  
  
  File Handling and Optimization
&lt;/h3&gt;

&lt;p&gt;Nginx offers efficient mechanisms for handling static files. Enabling &lt;code&gt;sendfile&lt;/code&gt; allows Nginx to use the operating system's &lt;code&gt;sendfile&lt;/code&gt; system call to transmit files directly, significantly improving file transmission speed and reducing server load. The &lt;code&gt;tcp_nopush&lt;/code&gt; directive, used together with &lt;code&gt;sendfile&lt;/code&gt;, optimizes transmission by filling each TCP packet before it is sent, so the response headers and the start of the file go out in full packets rather than many small ones, improving throughput.&lt;/p&gt;

&lt;h3&gt;
  
  
  Server Block
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;server&lt;/code&gt; block defines settings specific to the HTTP server. Here, we configure Nginx to listen on port 80 for incoming requests and specify the server's IP address or domain name. Additionally, we set the root directory for serving files and define a fallback mechanism (&lt;code&gt;try_files&lt;/code&gt;) for handling requests to non-existent resources.&lt;/p&gt;
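<p>With this setup, &lt;code&gt;try_files&lt;/code&gt; first looks for a file matching the URI under the root directory and only falls back to &lt;code&gt;/not-found&lt;/code&gt; when nothing matches. Assuming the server above is running, both paths can be exercised with curl:</p>

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Served from /home/vagrant/sites/blog if the file exists
curl -i http://172.16.133.129/index.html

# No matching file, so try_files falls through to /not-found (404)
curl -i http://172.16.133.129/missing.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;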

&lt;h3&gt;
  
  
  Location Block Matching Options
&lt;/h3&gt;

&lt;p&gt;In Nginx, location blocks are used to define how the server should respond to different URI patterns. There are several matching options available, each serving a specific purpose. Let's explore some of these options and their implications:&lt;/p&gt;

&lt;h4&gt;
  
  
  Exact Match
&lt;/h4&gt;

&lt;p&gt;The &lt;code&gt;location = /welcome&lt;/code&gt; block, when uncommented, specifies an exact match for the URI &lt;code&gt;/welcome&lt;/code&gt;. This means that only requests to &lt;code&gt;/welcome&lt;/code&gt; will be handled by this location block. Any other requests, such as &lt;code&gt;/welcome/&lt;/code&gt; or &lt;code&gt;/welcome.html&lt;/code&gt;, will not match this block.&lt;/p&gt;
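<p>If the exact-match block were enabled, the difference is easy to see with two hypothetical requests against the server above:</p>

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Exact URI: handled by the location = /welcome block
curl http://172.16.133.129/welcome

# Trailing slash: no exact match, so the request falls through to other rules
curl http://172.16.133.129/welcome/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;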

&lt;h4&gt;
  
  
  Regex Match
&lt;/h4&gt;

&lt;p&gt;The &lt;code&gt;location ~ /welcome[0-9]&lt;/code&gt; block, when uncommented, uses a regular expression to match URIs that start with &lt;code&gt;/welcome&lt;/code&gt; followed by a numeric digit. For example, it would match URIs like &lt;code&gt;/welcome1&lt;/code&gt;, &lt;code&gt;/welcome2&lt;/code&gt;, etc. &lt;/p&gt;

&lt;h4&gt;
  
  
  Preferred Match
&lt;/h4&gt;

&lt;p&gt;The &lt;code&gt;location ^~ /welcome\d&lt;/code&gt; block, when uncommented, uses the &lt;code&gt;^~&lt;/code&gt; modifier, which performs a preferential prefix match: if this is the longest matching prefix, Nginx selects it and skips the regular-expression location checks entirely. Note that &lt;code&gt;^~&lt;/code&gt; takes a literal prefix rather than a regex, so the &lt;code&gt;\d&lt;/code&gt; here is matched as the literal characters &lt;code&gt;\d&lt;/code&gt;, not as a digit class; a plain prefix such as &lt;code&gt;/welcome&lt;/code&gt; is the usual form.&lt;/p&gt;
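<p>A sketch of why &lt;code&gt;^~&lt;/code&gt; matters (hypothetical locations, not part of the config above): without it, a matching regex location wins over a plain prefix match; with it, the prefix location short-circuits the regex checks entirely:</p>

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# For /images/logo.png, this block wins; the regex below is never consulted
location ^~ /images/ {
    return 200 'prefix match';
}

location ~ \.png$ {
    return 200 'regex match';
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;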

&lt;h4&gt;
  
  
  Case-Insensitive Regex Match
&lt;/h4&gt;

&lt;p&gt;The &lt;code&gt;location ~* /welcome[0-9]&lt;/code&gt; block, when uncommented, performs a case-insensitive regex match for URIs starting with &lt;code&gt;/welcome&lt;/code&gt; followed by a numeric digit. This means it will match URIs like &lt;code&gt;/welcome1&lt;/code&gt;, &lt;code&gt;/Welcome2&lt;/code&gt;, &lt;code&gt;/WELCOME3&lt;/code&gt;, etc., regardless of case.&lt;/p&gt;

&lt;h4&gt;
  
  
  Rewrite and Redirect
&lt;/h4&gt;

&lt;p&gt;Within location blocks, the &lt;code&gt;rewrite&lt;/code&gt; directive modifies the URI of incoming requests. For example, the &lt;code&gt;rewrite&lt;/code&gt; directive captures part of the request and rewrites it to another URI pattern. Additionally, the &lt;code&gt;return&lt;/code&gt; directive performs HTTP redirects. For instance, the &lt;code&gt;/logo&lt;/code&gt; location block issues a 307 redirect to &lt;code&gt;/assets/brand/bootstrap-logo.svg&lt;/code&gt;.&lt;/p&gt;
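<p>Assuming the server above is running, the redirect can be confirmed with &lt;code&gt;curl -i&lt;/code&gt;, which shows the status line and the &lt;code&gt;Location&lt;/code&gt; header:</p>

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Expect a 307 Temporary Redirect with a Location header
# pointing at /assets/brand/bootstrap-logo.svg
curl -i http://172.16.133.129/logo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;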

&lt;h4&gt;
  
  
  Handling Arguments
&lt;/h4&gt;

&lt;p&gt;The &lt;code&gt;/arguments&lt;/code&gt; location block illustrates how Nginx handles query string arguments. The &lt;code&gt;$arg_name&lt;/code&gt; variable retrieves the value of the &lt;code&gt;name&lt;/code&gt; parameter from the request's query string and returns it as the response.&lt;/p&gt;
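<p>For example, assuming the server above is running, a query string parameter is echoed straight back:</p>

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# $arg_name picks up the 'name' parameter; this should print: tanvir
curl 'http://172.16.133.129/arguments?name=tanvir'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;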

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Mastering Nginx configuration is essential for optimizing server performance, enhancing security, and delivering a seamless user experience. By understanding each directive in the configuration file and applying best practices for optimization, you can create a highly efficient and secure Nginx setup tailored to your application's requirements. Experiment with different configurations, monitor server metrics, and stay updated on Nginx developments to continually refine and improve your web server infrastructure.&lt;/p&gt;

</description>
      <category>nginx</category>
      <category>devops</category>
    </item>
    <item>
      <title>Part 1: Installing Nginx</title>
      <dc:creator>Tanvir Rahman</dc:creator>
      <pubDate>Mon, 05 Feb 2024 17:02:18 +0000</pubDate>
      <link>https://dev.to/tanvirrahman/part-1-installing-nginx-1gdi</link>
      <guid>https://dev.to/tanvirrahman/part-1-installing-nginx-1gdi</guid>
      <description>&lt;h1&gt;
  
  
  Downloading and Installing Nginx from Source
&lt;/h1&gt;

&lt;p&gt;Nginx is a powerful and efficient web server that's widely used for hosting websites and applications. In this blog, we'll walk through the steps to download and install Nginx from source on a Linux system. This method is particularly useful when you need a specific version of Nginx or want to customize the build.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before beginning, ensure you have a Linux system with internet access and root privileges. Then update the package list and install net-tools, which provides the &lt;code&gt;netstat&lt;/code&gt; utility we'll use later to verify that Nginx is running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt-get update
sudo apt-get install net-tools
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 1: Downloading the Nginx Package
&lt;/h2&gt;

&lt;p&gt;First, download the desired version of Nginx. As of this writing, let's use version 1.22.1. You can download it using the &lt;code&gt;wget&lt;/code&gt; command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;wget https://nginx.org/download/nginx-1.22.1.tar.gz
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command fetches the tar.gz file containing the Nginx source code.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2: Extracting the File
&lt;/h2&gt;

&lt;p&gt;Once the download is complete, extract the file and navigate to the extracted directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;tar -zxvf nginx-1.22.1.tar.gz
cd nginx-1.22.1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 3: Installing Build Essentials
&lt;/h2&gt;

&lt;p&gt;Before compiling Nginx, you need to install &lt;code&gt;build-essential&lt;/code&gt;, a meta-package that pulls in the compiler toolchain (gcc, make, and related utilities) required for building software:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apt-get install build-essential
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, initiate the configuration process. Don't worry if this first run fails with errors about missing dependencies; we install those in the next step:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./configure
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 4: Installing Dependencies
&lt;/h2&gt;

&lt;p&gt;Nginx has several dependencies that need to be installed for it to function correctly. These include libraries for regular expression processing, encryption, and compression. Install these using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apt-get install libpcre3 libpcre3-dev zlib1g zlib1g-dev libssl-dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 5: Final Configuration and Installation
&lt;/h2&gt;

&lt;p&gt;After installing the dependencies, run the configure script again. This time, it should complete without any errors:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./configure
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 6: Additional Configuration for Nginx
&lt;/h2&gt;

&lt;p&gt;This command configures Nginx with specific paths and options:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./configure --sbin-path=/usr/bin/nginx \
            --conf-path=/etc/nginx/nginx.conf \
            --error-log-path=/var/log/nginx/error.log \
            --http-log-path=/var/log/nginx/access.log \
            --with-pcre \
            --pid-path=/var/run/nginx.pid \
            --with-http_ssl_module
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Explanation of the configuration options:&lt;br&gt;
&lt;code&gt;--sbin-path&lt;/code&gt;: Specifies the path where the nginx executable will be located.&lt;br&gt;
&lt;code&gt;--conf-path&lt;/code&gt;: Sets the path for the nginx configuration file.&lt;br&gt;
&lt;code&gt;--error-log-path&lt;/code&gt;: Designates the location for the nginx error log.&lt;br&gt;
&lt;code&gt;--http-log-path&lt;/code&gt;: Specifies the path for the nginx access log.&lt;br&gt;
&lt;code&gt;--with-pcre&lt;/code&gt;: Enables usage of Perl Compatible Regular Expressions for location matching.&lt;br&gt;
&lt;code&gt;--pid-path&lt;/code&gt;: Sets the path of the nginx PID file.&lt;br&gt;
&lt;code&gt;--with-http_ssl_module&lt;/code&gt;: Enables the SSL module, which is essential for handling HTTPS traffic.&lt;/p&gt;

&lt;p&gt;This configuration provides a structured approach to organizing the Nginx installation.&lt;br&gt;
It helps in managing logs, configuration, and executable files in a standard and accessible manner.&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 7: Compiling and Installing Nginx
&lt;/h2&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;make install
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This sequence compiles the Nginx source code and then installs it on your system.&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 8: Verifying the Installation
&lt;/h2&gt;

&lt;p&gt;List the contents of the Nginx configuration directory to confirm the files are in place:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ls -l /etc/nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command will show the files in the '/etc/nginx' directory, ensuring that Nginx is installed correctly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 9: Checking the Nginx Version and Configuration
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nginx -V
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
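<p>The &lt;code&gt;-V&lt;/code&gt; flag prints the version together with the configure arguments the binary was built with, so the flags from Step 6 should appear in its output. A small (illustrative) pipeline to list just those flags:</p>

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# nginx -V writes to stderr, hence the redirection
nginx -V 2&gt;&amp;1 | tr ' ' '\n' | grep '^--'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;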



&lt;h2&gt;
  
  
  Step 10: Running Nginx
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyzrd7uaxxjz9hpmtmor4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyzrd7uaxxjz9hpmtmor4.png" alt="Image description" width="800" height="238"&gt;&lt;/a&gt;&lt;/p&gt;
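<p>Nginx can be started by invoking the binary directly (its path was set by &lt;code&gt;--sbin-path&lt;/code&gt; in Step 6), and &lt;code&gt;net-tools&lt;/code&gt;, installed earlier, lets you confirm it is listening on port 80:</p>

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nginx
netstat -tlpn | grep nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;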

&lt;p&gt;To stop Nginx, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo /usr/bin/nginx -s stop
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>nginx</category>
      <category>devops</category>
    </item>
    <item>
      <title>Configuring and Launching an EC2 Instance: A Comprehensive Guide with Detailed Commands</title>
      <dc:creator>Tanvir Rahman</dc:creator>
      <pubDate>Sat, 25 Nov 2023 10:53:43 +0000</pubDate>
      <link>https://dev.to/tanvirrahman/configuring-and-launching-an-ec2-instance-a-comprehensive-guide-with-detailed-commands-28mi</link>
      <guid>https://dev.to/tanvirrahman/configuring-and-launching-an-ec2-instance-a-comprehensive-guide-with-detailed-commands-28mi</guid>
      <description>&lt;p&gt;Launching an Amazon EC2 (Elastic Compute Cloud) instance is a foundational skill for leveraging the AWS cloud platform. This guide will walk you through the steps required to set up, connect to, and manage an EC2 instance, including specific commands for accessing instance metadata.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcymd3nstwqt1ksaqhxow.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcymd3nstwqt1ksaqhxow.png" alt="Image description" width="800" height="382"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpzyghuv0b2i4ket12sn7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpzyghuv0b2i4ket12sn7.png" alt="Image description" width="800" height="319"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Initial Setup: Naming, Tagging, and Image Selection&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Name and Tags Section&lt;/strong&gt;: Assign a name to your EC2 instance for easy identification. Tags, defined as Key/Value pairs, aren't mandatory but highly recommended for efficient organization in production environments.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmnmlum8tumu77geah06l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmnmlum8tumu77geah06l.png" alt="Image description" width="800" height="574"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Application and OS Images&lt;/strong&gt;: Select 'Amazon Linux' under Quick Start options. Amazon provides a range of AMIs, featuring various versions of Linux and Windows, each pre-configured with essential software packages.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyrsujt4627ee3rk4yfly.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyrsujt4627ee3rk4yfly.png" alt="Image description" width="800" height="567"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Selecting Instance Type and Key Pair Configuration&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Instance Type Selection&lt;/strong&gt;: Review the list of available instance types, focusing on their hardware resources like CPU and memory.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcssssx9bnmbfykl35nw7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcssssx9bnmbfykl35nw7.png" alt="Image description" width="800" height="296"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Key Pair Creation&lt;/strong&gt;: Generate a new key pair for secure SSH access. Keep the default settings and download the &lt;code&gt;.pem&lt;/code&gt; file containing your private key.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvi04prtaerx9pez5z6yt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvi04prtaerx9pez5z6yt.png" alt="Image description" width="800" height="251"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk2b70gmx1d2jswepxfra.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk2b70gmx1d2jswepxfra.png" alt="Image description" width="655" height="609"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Network and Storage Settings&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Network Settings&lt;/strong&gt;: Ensure SSH traffic is allowed in the Security Groups section. Allowing it from 'Anywhere' is convenient for this walkthrough, but in production you should restrict the source to your own IP range.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhsbjjilt3iu2h4ke7vx1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhsbjjilt3iu2h4ke7vx1.png" alt="Image description" width="784" height="607"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Configure Storage&lt;/strong&gt;: Stick with the default 8 GiB gp3 root volume unless your application demands otherwise.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fas5aesawrj40562zr54n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fas5aesawrj40562zr54n.png" alt="Image description" width="800" height="430"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Launching and Monitoring the Instance&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Launching the Instance&lt;/strong&gt;: Launch your instance after reviewing your configurations. Monitor its deployment on the EC2 console's Instances screen.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fif1u9x802j5cugicoj9c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fif1u9x802j5cugicoj9c.png" alt="Image description" width="413" height="610"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fihqah0zatstyc8gr1ma0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fihqah0zatstyc8gr1ma0.png" alt="Image description" width="800" height="470"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Establishing an SSH Connection&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Set the correct permissions for your key file: &lt;code&gt;chmod 400 /path/to/your/keypair.pem&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Connect to your instance: &lt;code&gt;ssh -i /path/to/your/keypair.pem ec2-user@server-ip&lt;/code&gt;, where &lt;code&gt;server-ip&lt;/code&gt; is your instance's Public IP.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Accessing EC2 Instance Metadata&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Instance metadata provides valuable information about your running instance.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Creating a Token&lt;/strong&gt;: Generate a token for secure metadata access: &lt;code&gt;TOKEN=$(curl -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 3600")&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftz42h7agr9qefb70h3ju.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftz42h7agr9qefb70h3ju.png" alt="Image description" width="800" height="299"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Listing Metadata&lt;/strong&gt;: Retrieve all instance metadata with the command: &lt;code&gt;curl -w "\n" -H "X-aws-ec2-metadata-token: $TOKEN" -v http://169.254.169.254/latest/meta-data/&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Extracting Specific Metadata&lt;/strong&gt;: Use the following commands to get detailed information:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Security groups: &lt;code&gt;curl -w "\n" -H "X-aws-ec2-metadata-token: $TOKEN" -v http://169.254.169.254/latest/meta-data/security-groups&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;AMI ID: &lt;code&gt;curl -w "\n" -H "X-aws-ec2-metadata-token: $TOKEN" -v http://169.254.169.254/latest/meta-data/ami-id&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Hostname: &lt;code&gt;curl -w "\n" -H "X-aws-ec2-metadata-token: $TOKEN" -v http://169.254.169.254/latest/meta-data/hostname&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Instance ID: &lt;code&gt;curl -w "\n" -H "X-aws-ec2-metadata-token: $TOKEN" -v http://169.254.169.254/latest/meta-data/instance-id&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Instance type: &lt;code&gt;curl -w "\n" -H "X-aws-ec2-metadata-token: $TOKEN" -v http://169.254.169.254/latest/meta-data/instance-type&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
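&lt;p&gt;The token-plus-header pattern above can be wrapped in a small helper so the base URL and header are written once. A sketch (it only returns real data when run on an EC2 instance with IMDSv2 enabled; &lt;code&gt;imds_url&lt;/code&gt; and &lt;code&gt;get_metadata&lt;/code&gt; are illustrative names):&lt;/p&gt;

```shell
IMDS_BASE="http://169.254.169.254/latest"

# Build the full metadata URL for a given path, e.g. "instance-id".
imds_url() { echo "$IMDS_BASE/meta-data/$1"; }

# Fetch one metadata value; TOKEN must hold a session token obtained
# from the PUT /latest/api/token call shown above.
get_metadata() {
  curl -s -w "\n" -H "X-aws-ec2-metadata-token: $TOKEN" "$(imds_url "$1")"
}
```

&lt;p&gt;For example, &lt;code&gt;get_metadata instance-id&lt;/code&gt; or &lt;code&gt;get_metadata ami-id&lt;/code&gt;.&lt;/p&gt;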

&lt;p&gt;&lt;strong&gt;Instance Termination Process&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Finally, learn to terminate your instance via the AWS console, an important step for controlling costs and cleaning up unused resources.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the AWS console, navigate to 'Instances', select your instance, and choose 'Terminate instance' from the 'Instance State' dropdown.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvvj2e6yfofyxaueo5jrx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvvj2e6yfofyxaueo5jrx.png" alt="Image description" width="800" height="160"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This guide walked through launching and managing an EC2 instance, including the commands for accessing instance metadata, leaving you well-equipped to use this core AWS service.&lt;/p&gt;

</description>
      <category>ec2</category>
      <category>devops</category>
      <category>aws</category>
      <category>codenewbie</category>
    </item>
    <item>
      <title>DevContainer and Makefile: A Duo for Simplified GCP Workflows</title>
      <dc:creator>Tanvir Rahman</dc:creator>
      <pubDate>Sun, 27 Aug 2023 16:12:45 +0000</pubDate>
      <link>https://dev.to/tanvirrahman/devcontainer-and-makefile-a-duo-for-simplified-gcp-workflows-ii9</link>
      <guid>https://dev.to/tanvirrahman/devcontainer-and-makefile-a-duo-for-simplified-gcp-workflows-ii9</guid>
      <description>&lt;p&gt;Setting up a development environment can sometimes feel like orchestrating a symphony with disparate instruments, given how many moving parts are involved. However, DevContainers within Visual Studio Code (VSCode) are here to change that tune. Below, let's explore how to streamline this process and create a seamless, shareable, and consistent development environment.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1: Installing the DevContainer Extension in VSCode
&lt;/h2&gt;

&lt;p&gt;Before diving into the code, ensure that the DevContainer extension is installed within your VSCode setup. This extension is your gateway to crafting isolated and reproducible development environments.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbi5ssc5wosv3pa8j03fq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbi5ssc5wosv3pa8j03fq.png" alt="Image description" width="800" height="634"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2: Setting Up the Directory Structure
&lt;/h2&gt;

&lt;p&gt;Create a new folder that will act as the root of your project.&lt;br&gt;
Inside this folder, create another directory named &lt;code&gt;.devcontainer&lt;/code&gt;.&lt;br&gt;
Within &lt;code&gt;.devcontainer&lt;/code&gt;, create two essential files: &lt;code&gt;devcontainer.json&lt;/code&gt; and &lt;code&gt;Dockerfile&lt;/code&gt;.&lt;/p&gt;
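&lt;p&gt;The layout above can be scaffolded in one go from a terminal (a minimal sketch; &lt;code&gt;my-project&lt;/code&gt; is a placeholder name):&lt;/p&gt;

```shell
# Create the project root plus the .devcontainer directory and its two files.
mkdir -p my-project/.devcontainer
touch my-project/.devcontainer/devcontainer.json \
      my-project/.devcontainer/Dockerfile
```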

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkh5lqkoputg2zxc2r6m6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkh5lqkoputg2zxc2r6m6.png" alt="Image description" width="800" height="375"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 3: Crafting the Dockerfile
&lt;/h2&gt;

&lt;p&gt;The Dockerfile is like a recipe, dictating the ingredients and steps required to create your development environment. Below is an example using Ubuntu as the base image and installing Google Cloud SDK among other packages.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Use an official Ubuntu as a parent image&lt;/span&gt;
FROM ubuntu:latest

&lt;span class="c"&gt;# Set environment variables to non-interactive (this prevents some prompts)&lt;/span&gt;
ENV &lt;span class="nv"&gt;DEBIAN_FRONTEND&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;non-interactive

&lt;span class="c"&gt;# Run package updates, install packages, and clean up&lt;/span&gt;
RUN apt-get update &lt;span class="nt"&gt;-y&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="c"&gt;# Install common packages and openssh-client&lt;/span&gt;
    &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
        ca-certificates &lt;span class="se"&gt;\&lt;/span&gt;
        curl &lt;span class="se"&gt;\&lt;/span&gt;
        gnupg &lt;span class="se"&gt;\&lt;/span&gt;
        openssh-client &lt;span class="se"&gt;\&lt;/span&gt;
        python3 &lt;span class="se"&gt;\&lt;/span&gt;
        make &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="c"&gt;# Add Google Cloud SDK repo and its key&lt;/span&gt;
    &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"deb [signed-by=/usr/share/keyrings/cloud.google.gpg] http://packages.cloud.google.com/apt cloud-sdk main"&lt;/span&gt; | &lt;span class="nb"&gt;tee&lt;/span&gt; &lt;span class="nt"&gt;-a&lt;/span&gt; /etc/apt/sources.list.d/google-cloud-sdk.list &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key &lt;span class="nt"&gt;--keyring&lt;/span&gt; /usr/share/keyrings/cloud.google.gpg add - &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; apt-get update &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; google-cloud-sdk &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="c"&gt;# Remove unnecessary files&lt;/span&gt;
    &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-rf&lt;/span&gt; /var/lib/apt/lists/&lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="c"&gt;# Generate SSH Key&lt;/span&gt;
    &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; ssh-keygen &lt;span class="nt"&gt;-t&lt;/span&gt; rsa &lt;span class="nt"&gt;-b&lt;/span&gt; 4096 &lt;span class="nt"&gt;-f&lt;/span&gt; /root/.ssh/id_rsa &lt;span class="nt"&gt;-N&lt;/span&gt; &lt;span class="s2"&gt;""&lt;/span&gt;

WORKDIR /automation
COPY &lt;span class="nb"&gt;.&lt;/span&gt; /automation/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 4: Configuring devcontainer.json
&lt;/h2&gt;

&lt;p&gt;This JSON file is the control center for the DevContainer. It specifies how the container should behave when we open VSCode inside it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;// For format details, see https://aka.ms/devcontainer.json. For config options, see the README at:
&lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="s2"&gt;"name"&lt;/span&gt;: &lt;span class="s2"&gt;"Code space configuration"&lt;/span&gt;,
    &lt;span class="s2"&gt;"build"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="s2"&gt;"dockerfile"&lt;/span&gt;: &lt;span class="s2"&gt;"Dockerfile"&lt;/span&gt;,
        // &lt;span class="s2"&gt;"args"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt; 
        //  // Update &lt;span class="s1"&gt;'VARIANT'&lt;/span&gt; to pick a .NET Core version: 2.1, 3.1, 5.0
        //  &lt;span class="s2"&gt;"VARIANT"&lt;/span&gt;: &lt;span class="s2"&gt;"5.0"&lt;/span&gt;,
        //  // Options
        //  &lt;span class="s2"&gt;"INSTALL_NODE"&lt;/span&gt;: &lt;span class="s2"&gt;"true"&lt;/span&gt;,
        //  &lt;span class="s2"&gt;"NODE_VERSION"&lt;/span&gt;: &lt;span class="s2"&gt;"lts/*"&lt;/span&gt;,
        //  &lt;span class="s2"&gt;"INSTALL_AZURE_CLI"&lt;/span&gt;: &lt;span class="s2"&gt;"true"&lt;/span&gt;,
        //  &lt;span class="s2"&gt;"INSTALL_TERRAFORM"&lt;/span&gt;: &lt;span class="s2"&gt;"true"&lt;/span&gt;,
        //  &lt;span class="s2"&gt;"TERRAFORM_FILE"&lt;/span&gt;:&lt;span class="s2"&gt;"terraform_1.0.4_linux_amd64.zip"&lt;/span&gt;,
        //  &lt;span class="s2"&gt;"TERRAFORM_VERSION"&lt;/span&gt;:&lt;span class="s2"&gt;"https://releases.hashicorp.com/terraform/1.0.4/terraform_1.0.4_linux_amd64.zip"&lt;/span&gt;
        // &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;,

    // Set &lt;span class="k"&gt;*&lt;/span&gt;default&lt;span class="k"&gt;*&lt;/span&gt; container specific settings.json values on container create.
    &lt;span class="s2"&gt;"settings"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="s2"&gt;"terminal.integrated.shell.linux"&lt;/span&gt;: &lt;span class="s2"&gt;"/bin/zsh"&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;,

    // Add the IDs of extensions you want installed when the container is created.
    &lt;span class="s2"&gt;"extensions"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
        // &lt;span class="s2"&gt;"ms-dotnettools.csharp"&lt;/span&gt;,
        // &lt;span class="s2"&gt;"hashicorp.terraform"&lt;/span&gt;,
        // &lt;span class="s2"&gt;"ms-vscode.azure-account"&lt;/span&gt;,
        // &lt;span class="s2"&gt;"ms-azuretools.vscode-azurefunctions"&lt;/span&gt;,
        // &lt;span class="s2"&gt;"ms-azuretools.vscode-azureresourcegroups"&lt;/span&gt;,
        // &lt;span class="s2"&gt;"github.copilot"&lt;/span&gt;,
        // &lt;span class="s2"&gt;"ms-mssql.mssql"&lt;/span&gt;,
        // &lt;span class="s2"&gt;"hookyqr.beautify"&lt;/span&gt;,
        // &lt;span class="s2"&gt;"ms-azuretools.vscode-docker"&lt;/span&gt;
    &lt;span class="o"&gt;]&lt;/span&gt;,

    // Use &lt;span class="s1"&gt;'forwardPorts'&lt;/span&gt; to make a list of ports inside the container available locally.
    // &lt;span class="s2"&gt;"forwardPorts"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;5000, 5001],

    // Use &lt;span class="s1"&gt;'postCreateCommand'&lt;/span&gt; to run commands after the container is created.
    &lt;span class="s2"&gt;"postCreateCommand"&lt;/span&gt;: &lt;span class="s2"&gt;"uname -a"&lt;/span&gt;,

    // Connect as root; see https://aka.ms/vscode-remote/containers/non-root for running as a non-root user.
    &lt;span class="s2"&gt;"remoteUser"&lt;/span&gt;: &lt;span class="s2"&gt;"root"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 5: Initiating the DevContainer
&lt;/h2&gt;

&lt;p&gt;Run the &lt;strong&gt;Dev Containers: Reopen in Container&lt;/strong&gt; command from VSCode. If everything is set up correctly, you'll land inside the container, where you can check the Google Cloud SDK version, among other things.&lt;/p&gt;

&lt;p&gt;If you encounter issues, you can debug by running&lt;br&gt;
&lt;br&gt;
 &lt;code&gt;docker run -it ubuntu:latest /bin/bash&lt;/code&gt;&lt;br&gt;
&lt;br&gt;
 and manually entering the commands from your Dockerfile one at a time to see which step fails.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz9sl6rn89kct9lh6k4rb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz9sl6rn89kct9lh6k4rb.png" alt="Image description" width="800" height="469"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F52vfyrtzhxb3r9fh7iwc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F52vfyrtzhxb3r9fh7iwc.png" alt="Image description" width="800" height="231"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 6: Automating Tasks with Makefile
&lt;/h2&gt;

&lt;p&gt;You can further enhance your DevContainer experience with a Makefile, a task runner that executes a series of commands for you. Here's a sample that prints a simple "hello" (note that recipe lines in a Makefile must be indented with a tab, not spaces):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;greet :&lt;span class="o"&gt;=&lt;/span&gt; hello
&lt;span class="nb"&gt;echo&lt;/span&gt;:
    @ &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;greet&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwq7xrpuc83ja44aw7aex.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwq7xrpuc83ja44aw7aex.png" alt="Image description" width="800" height="383"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 7: Integrating Google Cloud Platform
&lt;/h2&gt;

&lt;p&gt;To start working with Google Cloud Platform (GCP), obtain service account credentials and place them in an &lt;code&gt;auth.json&lt;/code&gt; file. Then, update your Makefile to interact with GCP resources:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx3tcy15bday3x75mvy6a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx3tcy15bday3x75mvy6a.png" alt="Image description" width="800" height="474"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Makefile for Google Cloud Platform (GCP)&lt;/span&gt;

&lt;span class="c"&gt;# Variables&lt;/span&gt;
KEY_FILE :&lt;span class="o"&gt;=&lt;/span&gt; auth.json
PROJECT_ID :&lt;span class="o"&gt;=&lt;/span&gt; playground-s-11-b2eb9fb3
VPC_NAME :&lt;span class="o"&gt;=&lt;/span&gt; vpc-tr
SUBNET_NAME :&lt;span class="o"&gt;=&lt;/span&gt; subnet-tr
REGION :&lt;span class="o"&gt;=&lt;/span&gt; us-central1
COMPUTE_ZONE :&lt;span class="o"&gt;=&lt;/span&gt; us-central1-a
CIDR_BLOCK :&lt;span class="o"&gt;=&lt;/span&gt; 10.10.0.0/16
VM_NAME :&lt;span class="o"&gt;=&lt;/span&gt; vm-tr
VM_TYPE :&lt;span class="o"&gt;=&lt;/span&gt; n1-standard-1
VM_IMAGE_FAMILY :&lt;span class="o"&gt;=&lt;/span&gt; debian-11
VM_IMAGE_PROJECT :&lt;span class="o"&gt;=&lt;/span&gt; debian-cloud
INTERNAL_IP_RANGE :&lt;span class="o"&gt;=&lt;/span&gt; 31.43.23.23

SSH_KEY_PATH :&lt;span class="o"&gt;=&lt;/span&gt; /root/.ssh/id_rsa.pub
&lt;span class="c"&gt;# Read the SSH public key into a variable&lt;/span&gt;
SSH_PUBLIC_KEY :&lt;span class="o"&gt;=&lt;/span&gt; root &lt;span class="si"&gt;$(&lt;/span&gt;shell &lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;SSH_KEY_PATH&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;

&lt;span class="c"&gt;# Default rule&lt;/span&gt;
default: gcloud_login gcloud_set_project gcloud_list_networks

&lt;span class="c"&gt;# Rule for logging into GCP&lt;/span&gt;
gcloud_login:
    @ gcloud auth activate-service-account &lt;span class="nt"&gt;--key-file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;KEY_FILE&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;

&lt;span class="c"&gt;# Rule for setting GCP project&lt;/span&gt;
gcloud_set_project:
    @ gcloud config &lt;span class="nb"&gt;set &lt;/span&gt;project &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;PROJECT_ID&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;

&lt;span class="c"&gt;# Rule for setting GCP compute zone&lt;/span&gt;
gcloud_set_zone:
    @ gcloud config &lt;span class="nb"&gt;set &lt;/span&gt;compute/zone &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;COMPUTE_ZONE&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;

&lt;span class="c"&gt;# Rule for listing GCP networks&lt;/span&gt;
gcloud_list_networks:
    @ gcloud compute networks list

&lt;span class="c"&gt;# Rule for creating a custom VPC&lt;/span&gt;
create_vpc:
    @ gcloud compute networks create &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;VPC_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="nt"&gt;--subnet-mode&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;custom

&lt;span class="c"&gt;# Rule for creating a custom VPC with subnets&lt;/span&gt;
create_vpc_with_subnets: create_vpc
    @ gcloud compute networks subnets create &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;SUBNET_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="nt"&gt;--network&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;VPC_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="nt"&gt;--region&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;REGION&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="nt"&gt;--range&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CIDR_BLOCK&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;

&lt;span class="c"&gt;# Rule for deleting a VPC&lt;/span&gt;
delete_vpc:
    @ gcloud compute networks delete &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;VPC_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="nt"&gt;--quiet&lt;/span&gt;

&lt;span class="c"&gt;# Rule for creating firewall rule for internal traffic&lt;/span&gt;
create_firewall_internal:
    @ gcloud compute firewall-rules create allow-internal &lt;span class="nt"&gt;--network&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;VPC_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="nt"&gt;--allow&lt;/span&gt; tcp,udp,icmp &lt;span class="nt"&gt;--source-ranges&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;INTERNAL_IP_RANGE&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;

&lt;span class="c"&gt;# Rule for creating firewall rule to allow all IP ranges&lt;/span&gt;
create_firewall_allow_all:
    @ gcloud compute firewall-rules create allow-all-&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;VPC_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="nt"&gt;--network&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;VPC_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="nt"&gt;--allow&lt;/span&gt; tcp,udp,icmp &lt;span class="nt"&gt;--source-ranges&lt;/span&gt; 0.0.0.0/0

&lt;span class="c"&gt;# Rule for creating firewall rule for SSH, RDP, and ICMP&lt;/span&gt;
create_firewall_ssh_rdp_icmp:
    @ &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Creating firewall rules for VPC: &lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;VPC_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
    @ gcloud compute firewall-rules create allow-ssh-rdp-icmp-&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;VPC_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="nt"&gt;--network&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;VPC_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="nt"&gt;--allow&lt;/span&gt; tcp:22,tcp:3389,icmp

&lt;span class="c"&gt;# Rule for describing a specific GCP network&lt;/span&gt;
describe_network:
    @ gcloud compute networks describe &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;VPC_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;

&lt;span class="c"&gt;# Rule for listing subnets in the VPC&lt;/span&gt;
gcloud_list_subnets:
    @ gcloud compute networks subnets list &lt;span class="nt"&gt;--filter&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"network:&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;VPC_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

&lt;span class="c"&gt;# Rule for creating a VM instance&lt;/span&gt;
gcloud_create_vm:
    @ gcloud compute instances create &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;VM_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;--zone&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;COMPUTE_ZONE&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;--machine-type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;VM_TYPE&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;--image-family&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;VM_IMAGE_FAMILY&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;--image-project&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;VM_IMAGE_PROJECT&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;--boot-disk-size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;10GB &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;--metadata&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;ssh-keys&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;SSH_PUBLIC_KEY&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

&lt;span class="c"&gt;# Rule for deleting a VM instance&lt;/span&gt;
gcloud_delete_vm:
    @ gcloud compute instances delete &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;VM_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="nt"&gt;--zone&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;COMPUTE_ZONE&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="nt"&gt;--quiet&lt;/span&gt;

&lt;span class="c"&gt;# Rule for deploying a GCP function&lt;/span&gt;
deploy_function:
    @ gcloud functions deploy FUNCTION_NAME &lt;span class="nt"&gt;--runtime&lt;/span&gt; RUNTIME &lt;span class="nt"&gt;--trigger-http&lt;/span&gt; &lt;span class="nt"&gt;--allow-unauthenticated&lt;/span&gt;

&lt;span class="c"&gt;# Phony targets&lt;/span&gt;
.PHONY: gcloud_login gcloud_set_project gcloud_set_zone gcloud_list_networks create_vpc delete_vpc create_firewall_internal create_firewall_allow_all create_firewall_ssh_rdp_icmp describe_network create_vpc_with_subnets gcloud_list_subnets gcloud_create_vm gcloud_delete_vm deploy_function
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
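&lt;p&gt;Targets are then invoked individually, e.g. &lt;code&gt;make create_vpc_with_subnets&lt;/code&gt; or &lt;code&gt;make gcloud_create_vm&lt;/code&gt;. A self-contained sketch of the same variable-and-target pattern (&lt;code&gt;demo.mk&lt;/code&gt; is a hypothetical file, and &lt;code&gt;echo&lt;/code&gt; stands in for &lt;code&gt;gcloud&lt;/code&gt; so the command is only printed, not run):&lt;/p&gt;

```shell
# Write a tiny Makefile using the same ${VAR} style, then run one target.
# printf's \t emits the literal tab that make requires before recipe lines.
printf 'VPC_NAME := vpc-tr\ncreate_vpc:\n\t@ echo gcloud compute networks create ${VPC_NAME} --subnet-mode=custom\n' > demo.mk
make -f demo.mk create_vpc
```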



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F24rcgd4yexf4e0wws0kd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F24rcgd4yexf4e0wws0kd.png" alt="Image description" width="800" height="493"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Running Any File
&lt;/h2&gt;

&lt;p&gt;You're not limited to specific languages or files. The DevContainer allows you to run any script, be it Shell, Python, or even a simple HTTP server.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fks9ojh7d4s5d49f2q9l8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fks9ojh7d4s5d49f2q9l8.png" alt="Image description" width="800" height="348"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;//hello.sh
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"hello"&lt;/span&gt;

&lt;span class="c"&gt;# Dockerfile&lt;/span&gt;
FROM nginx

&lt;span class="c"&gt;# Makefile&lt;/span&gt;
image_name:&lt;span class="o"&gt;=&lt;/span&gt;tanvir/web
tag:&lt;span class="o"&gt;=&lt;/span&gt;v1.1
container_name:&lt;span class="o"&gt;=&lt;/span&gt;web
build:
    @ docker build &lt;span class="nt"&gt;-t&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;image_name&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;:&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;tag&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="nb"&gt;.&lt;/span&gt;
run:
    @ docker run &lt;span class="nt"&gt;--name&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;container_name&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; 8080:80 &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;image_name&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;:&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;tag&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;
b:
    @ &lt;span class="nb"&gt;chmod&lt;/span&gt; +x hello.sh 
    @ ./hello.sh

files:&lt;span class="o"&gt;=&lt;/span&gt;hello.sh hello1.sh hello2.sh
&lt;span class="nb"&gt;chmod&lt;/span&gt;:
    @ &lt;span class="nb"&gt;chmod&lt;/span&gt; +x &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;files&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;
run1:
    @ ./hello.sh
    @ ./hello1.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With that, you have a modular, version-controlled, and fully functional development environment. The best part is that all of this is contained within a DevContainer, ensuring that each team member can replicate the exact environment, thereby eliminating the "it works on my machine" syndrome. Happy coding!&lt;/p&gt;

</description>
      <category>devops</category>
      <category>docker</category>
      <category>gcp</category>
      <category>codenewbie</category>
    </item>
    <item>
      <title>Supercharge Your Container Networking: Seamless Host Communication with VxLAN and Docker</title>
      <dc:creator>Tanvir Rahman</dc:creator>
      <pubDate>Sat, 15 Jul 2023 05:19:42 +0000</pubDate>
      <link>https://dev.to/tanvirrahman/supercharge-your-container-networking-seamless-host-communication-with-vxlan-and-docker-39o</link>
      <guid>https://dev.to/tanvirrahman/supercharge-your-container-networking-seamless-host-communication-with-vxlan-and-docker-39o</guid>
      <description>&lt;p&gt;In this hands-on demo, we will explore how to set up communication between two hosts using Virtual Extensible LAN (VxLAN) and Docker. The goal is to create a VxLAN overlay network tunnel between the hosts' containers, allowing them to communicate with each other. Let's dive into the steps involved in this process.&lt;/p&gt;

&lt;h2&gt;
  
  
  What are we going to cover in this hands-on demo?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Creating two VMs and installing Docker:&lt;/strong&gt; We will use two virtual machines (VMs) and install Docker to run the containers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Creating separate subnets and assigning static IP addresses:&lt;/strong&gt; To simplify the setup, we'll create separate subnets for each VM and assign static IP addresses.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Creating a VxLAN bridge:&lt;/strong&gt; We'll utilize the Linux &lt;code&gt;ip link&lt;/code&gt; VxLAN support to create a VxLAN bridge.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Binding the VxLAN to the Docker bridge:&lt;/strong&gt; We'll bind the VxLAN to the Docker bridge to establish the tunnel.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Verifying communication between containers:&lt;/strong&gt; Finally, we'll test the communication between containers on different hosts.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Let's get started!&lt;/strong&gt;&lt;br&gt;
To facilitate effective communication between hosts, we need to deploy two VMs using any hypervisor or virtualization technology. It is crucial to ensure that both VMs are connected to the same network.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Creating the VMs&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We begin by creating two &lt;a href="https://help.ubuntu.com/community/Installation/MinimalCD" rel="noopener noreferrer"&gt;Lubuntu VMs&lt;/a&gt; using &lt;a href="https://mac.getutm.app/" rel="noopener noreferrer"&gt;UTM&lt;/a&gt; / &lt;a href="https://ubuntu.com/server/docs/virtualization-multipass" rel="noopener noreferrer"&gt;Multipass&lt;/a&gt;. These VMs will serve as the hosts for our containers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Installing necessary tools and Docker&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this step, we will install several necessary tools and Docker on our Ubuntu VMs. Here's a brief overview of the tools we are going to install:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;net-tools&lt;/code&gt;: This package includes important network management tools such as &lt;code&gt;ifconfig&lt;/code&gt;, &lt;code&gt;netstat&lt;/code&gt;, &lt;code&gt;route&lt;/code&gt;, etc.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;iputils-ping&lt;/code&gt;: This package provides the &lt;code&gt;ping&lt;/code&gt; command, which is used to test the reachability of a network host.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;bridge-utils&lt;/code&gt;: This package provides utilities for configuring Ethernet bridging on Linux.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To install these tools and Docker, execute the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;apt-get update
apt-get &lt;span class="nb"&gt;install &lt;/span&gt;net-tools
apt-get &lt;span class="nb"&gt;install &lt;/span&gt;iputils-ping
apt-get &lt;span class="nb"&gt;install &lt;/span&gt;bridge-utils
apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; docker.io
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Additionally, if both VMs happen to receive the same IP address, change one of them to avoid a conflict. For example, you can assign &lt;strong&gt;192.168.64.7/24&lt;/strong&gt; to one VM using the following commands (note that changes made with &lt;code&gt;ifconfig&lt;/code&gt; do not persist across reboots):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ifconfig eth0 192.168.64.7/24
route add default gw 192.168.64.1 eth0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To configure DNS resolution, open the resolv.conf file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nano /etc/resolv.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Delete the existing content and add the following lines:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;nameserver 8.8.8.8
nameserver 8.8.4.4
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftrdnewwtoi3v3f5yjcig.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftrdnewwtoi3v3f5yjcig.png" alt="Image 1" width="800" height="318"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, let's proceed to set up the containers and their communication.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Creating a Dedicated Docker Bridge Network&lt;/strong&gt;&lt;br&gt;
First, create a separate Docker bridge network on each host. For Host 1 (amicable-hyena):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# create a separate docker bridge network &lt;/span&gt;
docker network create &lt;span class="nt"&gt;--subnet&lt;/span&gt; 172.18.0.0/16 vxlan-net

&lt;span class="c"&gt;# list all networks in docker&lt;/span&gt;
docker network &lt;span class="nb"&gt;ls&lt;/span&gt;
&lt;span class="c"&gt;# The output should include the newly created vxlan-net network.&lt;/span&gt;
NETWORK ID     NAME        DRIVER    SCOPE
53be5ce8e682   bridge      bridge    &lt;span class="nb"&gt;local
&lt;/span&gt;757eb3a14b73   host        host      &lt;span class="nb"&gt;local
&lt;/span&gt;c1c0e4f01fa6   none        null      &lt;span class="nb"&gt;local
&lt;/span&gt;08bdd2dc3a82   vxlan-net   bridge    &lt;span class="nb"&gt;local&lt;/span&gt;

&lt;span class="c"&gt;# Check interfaces&lt;/span&gt;
ip a
1: lo: &amp;lt;LOOPBACK,UP,LOWER_UP&amp;gt; mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    &lt;span class="nb"&gt;link&lt;/span&gt;/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s2: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc fq_codel state UP group default qlen 1000
    &lt;span class="nb"&gt;link&lt;/span&gt;/ether 36:db:7c:3f:11:58 brd ff:ff:ff:ff:ff:ff
    inet 192.168.64.9/24 brd 192.168.64.255 scope global dynamic enp0s2
       valid_lft 85395sec preferred_lft 85395sec
    inet6 fde4:cfbc:2cca:cd78:34db:7cff:fe3f:1158/64 scope global dynamic mngtmpaddr noprefixroute
       valid_lft 2591923sec preferred_lft 604723sec
    inet6 fe80::34db:7cff:fe3f:1158/64 scope &lt;span class="nb"&gt;link
       &lt;/span&gt;valid_lft forever preferred_lft forever
3: docker0: &amp;lt;NO-CARRIER,BROADCAST,MULTICAST,UP&amp;gt; mtu 1500 qdisc noqueue state DOWN group default
    &lt;span class="nb"&gt;link&lt;/span&gt;/ether 02:42:aa:59:e4:e3 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
4: br-08bdd2dc3a82: &amp;lt;NO-CARRIER,BROADCAST,MULTICAST,UP&amp;gt; mtu 1500 qdisc noqueue state DOWN group default
    &lt;span class="nb"&gt;link&lt;/span&gt;/ether 02:42:4b:42:67:6b brd ff:ff:ff:ff:ff:ff
    inet 172.18.0.1/16 brd 172.18.255.255 scope global br-08bdd2dc3a82
       valid_lft forever preferred_lft forever
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For Host 2 (affluent-toad):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker network create &lt;span class="nt"&gt;--subnet&lt;/span&gt; 172.18.0.0/16 vxlan-net

docker network &lt;span class="nb"&gt;ls
&lt;/span&gt;cdc5ceed0de2   bridge      bridge    &lt;span class="nb"&gt;local
&lt;/span&gt;11ed966e63df   host        host      &lt;span class="nb"&gt;local
&lt;/span&gt;5ea8856ad30c   none        null      &lt;span class="nb"&gt;local
&lt;/span&gt;55316818ea9f   vxlan-net   bridge    &lt;span class="nb"&gt;local

&lt;/span&gt;ip a
1: lo: &amp;lt;LOOPBACK,UP,LOWER_UP&amp;gt; mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    &lt;span class="nb"&gt;link&lt;/span&gt;/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s2: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc fq_codel state UP group default qlen 1000
    &lt;span class="nb"&gt;link&lt;/span&gt;/ether f2:5f:8d:d3:9f:c9 brd ff:ff:ff:ff:ff:ff
    inet 192.168.64.10/24 brd 192.168.64.255 scope global dynamic enp0s2
       valid_lft 85276sec preferred_lft 85276sec
    inet6 fde4:cfbc:2cca:cd78:f05f:8dff:fed3:9fc9/64 scope global dynamic mngtmpaddr noprefixroute
       valid_lft 2591908sec preferred_lft 604708sec
    inet6 fe80::f05f:8dff:fed3:9fc9/64 scope &lt;span class="nb"&gt;link
       &lt;/span&gt;valid_lft forever preferred_lft forever
3: docker0: &amp;lt;NO-CARRIER,BROADCAST,MULTICAST,UP&amp;gt; mtu 1500 qdisc noqueue state DOWN group default
    &lt;span class="nb"&gt;link&lt;/span&gt;/ether 02:42:b3:cb:9a:b9 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
4: br-55316818ea9f: &amp;lt;NO-CARRIER,BROADCAST,MULTICAST,UP&amp;gt; mtu 1500 qdisc noqueue state DOWN group default
    &lt;span class="nb"&gt;link&lt;/span&gt;/ether 02:42:2c:83:cd:df brd ff:ff:ff:ff:ff:ff
    inet 172.18.0.1/16 brd 172.18.255.255 scope global br-55316818ea9f
       valid_lft forever preferred_lft forever
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 4: Running the Containers&lt;/strong&gt;&lt;br&gt;
Let's run a container on top of the newly created Docker bridge network on each host and ping the bridge to confirm local connectivity.&lt;/p&gt;

&lt;p&gt;For Host 1,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# running alpine container with "sleep 3000" and a static ip&lt;/span&gt;
docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;--net&lt;/span&gt; vxlan-net &lt;span class="nt"&gt;--ip&lt;/span&gt; 172.18.0.11 alpine &lt;span class="nb"&gt;sleep &lt;/span&gt;3000

&lt;span class="c"&gt;# check the container running or not&lt;/span&gt;
docker ps
CONTAINER ID   IMAGE     COMMAND        CREATED              STATUS              PORTS     NAMES
d2b96eddea7a   alpine    &lt;span class="s2"&gt;"sleep 3000"&lt;/span&gt;   About a minute ago   Up About a minute             funny_jepsen

&lt;span class="c"&gt;# check the IPAddress to make sure that the ip assigned properly&lt;/span&gt;
docker inspect d2 | &lt;span class="nb"&gt;grep &lt;/span&gt;IPAddress
            &lt;span class="s2"&gt;"SecondaryIPAddresses"&lt;/span&gt;: null,
            &lt;span class="s2"&gt;"IPAddress"&lt;/span&gt;: &lt;span class="s2"&gt;""&lt;/span&gt;,
                    &lt;span class="s2"&gt;"IPAddress"&lt;/span&gt;: &lt;span class="s2"&gt;"172.18.0.11"&lt;/span&gt;,

&lt;span class="c"&gt;# ping the docker bridge ip to see whether the traffic can pass&lt;/span&gt;
ping 172.18.0.1 &lt;span class="nt"&gt;-c&lt;/span&gt; 2
PING 172.18.0.1 &lt;span class="o"&gt;(&lt;/span&gt;172.18.0.1&lt;span class="o"&gt;)&lt;/span&gt; 56&lt;span class="o"&gt;(&lt;/span&gt;84&lt;span class="o"&gt;)&lt;/span&gt; bytes of data.
64 bytes from 172.18.0.1: &lt;span class="nv"&gt;icmp_seq&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1 &lt;span class="nv"&gt;ttl&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;64 &lt;span class="nb"&gt;time&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0.166 ms
64 bytes from 172.18.0.1: &lt;span class="nv"&gt;icmp_seq&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2 &lt;span class="nv"&gt;ttl&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;64 &lt;span class="nb"&gt;time&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0.050 ms

&lt;span class="nt"&gt;---&lt;/span&gt; 172.18.0.1 ping statistics &lt;span class="nt"&gt;---&lt;/span&gt;
2 packets transmitted, 2 received, 0% packet loss, &lt;span class="nb"&gt;time &lt;/span&gt;1018ms
rtt min/avg/max/mdev &lt;span class="o"&gt;=&lt;/span&gt; 0.050/0.108/0.166/0.058 ms
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For Host 2,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;--net&lt;/span&gt; vxlan-net &lt;span class="nt"&gt;--ip&lt;/span&gt; 172.18.0.12 alpine &lt;span class="nb"&gt;sleep &lt;/span&gt;3000

docker ps
CONTAINER ID   IMAGE     COMMAND        CREATED         STATUS         PORTS     NAMES
9d2e17598b8b   alpine    &lt;span class="s2"&gt;"sleep 3000"&lt;/span&gt;   3 seconds ago   Up 2 seconds             eloquent_black

docker inspect 9d | &lt;span class="nb"&gt;grep &lt;/span&gt;IPAddress
            &lt;span class="s2"&gt;"SecondaryIPAddresses"&lt;/span&gt;: null,
            &lt;span class="s2"&gt;"IPAddress"&lt;/span&gt;: &lt;span class="s2"&gt;""&lt;/span&gt;,
                    &lt;span class="s2"&gt;"IPAddress"&lt;/span&gt;: &lt;span class="s2"&gt;"172.18.0.12"&lt;/span&gt;,

ping 172.18.0.1 &lt;span class="nt"&gt;-c&lt;/span&gt; 2

PING 172.18.0.1 &lt;span class="o"&gt;(&lt;/span&gt;172.18.0.1&lt;span class="o"&gt;)&lt;/span&gt; 56&lt;span class="o"&gt;(&lt;/span&gt;84&lt;span class="o"&gt;)&lt;/span&gt; bytes of data.
64 bytes from 172.18.0.1: &lt;span class="nv"&gt;icmp_seq&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1 &lt;span class="nv"&gt;ttl&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;64 &lt;span class="nb"&gt;time&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0.193 ms
64 bytes from 172.18.0.1: &lt;span class="nv"&gt;icmp_seq&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2 &lt;span class="nv"&gt;ttl&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;64 &lt;span class="nb"&gt;time&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0.067 ms

&lt;span class="nt"&gt;---&lt;/span&gt; 172.18.0.1 ping statistics &lt;span class="nt"&gt;---&lt;/span&gt;
2 packets transmitted, 2 received, 0% packet loss, &lt;span class="nb"&gt;time &lt;/span&gt;1017ms
rtt min/avg/max/mdev &lt;span class="o"&gt;=&lt;/span&gt; 0.067/0.130/0.193/0.063 ms
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 5: Testing Communication&lt;/strong&gt;&lt;br&gt;
Now, let's enter one of the running containers and test communication. Containers on the same host can already reach each other, but container-to-container communication across hosts will fail at this stage because no tunnel exists yet to carry the traffic between the two Docker bridges.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; d2 sh

apk update
apk add net-tools
apk add iputils-ping

&lt;span class="c"&gt;# the container on the other host is unreachable: no tunnel exists yet&lt;/span&gt;
ping 172.18.0.12 &lt;span class="nt"&gt;-c&lt;/span&gt; 2
ping: &lt;span class="nt"&gt;-c&lt;/span&gt;: Try again

ping &lt;span class="nt"&gt;-c&lt;/span&gt; 2 192.168.64.10
PING 192.168.64.10 &lt;span class="o"&gt;(&lt;/span&gt;192.168.64.10&lt;span class="o"&gt;)&lt;/span&gt; 56&lt;span class="o"&gt;(&lt;/span&gt;84&lt;span class="o"&gt;)&lt;/span&gt; bytes of data.
64 bytes from 192.168.64.10: &lt;span class="nv"&gt;icmp_seq&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1 &lt;span class="nv"&gt;ttl&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;63 &lt;span class="nb"&gt;time&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;3.93 ms
64 bytes from 192.168.64.10: &lt;span class="nv"&gt;icmp_seq&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2 &lt;span class="nv"&gt;ttl&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;63 &lt;span class="nb"&gt;time&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0.967 ms

&lt;span class="nt"&gt;---&lt;/span&gt; 192.168.64.10 ping statistics &lt;span class="nt"&gt;---&lt;/span&gt;
2 packets transmitted, 2 received, 0% packet loss, &lt;span class="nb"&gt;time &lt;/span&gt;1001ms
rtt min/avg/max/mdev &lt;span class="o"&gt;=&lt;/span&gt; 0.967/2.447/3.928/1.480 ms
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 6: Creating the VxLAN Tunnel&lt;/strong&gt;&lt;br&gt;
To establish communication between the containers, we need to create a VxLAN tunnel and attach it to the Docker bridge. Make sure the VNI (Virtual Network Identifier) ID is the same for both hosts.&lt;/p&gt;

&lt;p&gt;For Host 1,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# check the bridges list on the hosts&lt;/span&gt;
brctl show

bridge name bridge &lt;span class="nb"&gt;id       &lt;/span&gt;STP enabled interfaces
br-08bdd2dc3a82     8000.02424b42676b   no
docker0     8000.0242aa59e4e3   no

&lt;span class="c"&gt;# create a vxlan&lt;/span&gt;
&lt;span class="c"&gt;# 'vxlan-demo' is the name of the interface, type should be vxlan&lt;/span&gt;
&lt;span class="c"&gt;# VNI ID is 100&lt;/span&gt;
&lt;span class="c"&gt;# dstport should be 4789 which a udp standard port for vxlan communication&lt;/span&gt;
&lt;span class="c"&gt;# 192.168.64.10  is the ip of another host&lt;/span&gt;
ip &lt;span class="nb"&gt;link &lt;/span&gt;add vxlan-demo &lt;span class="nb"&gt;type &lt;/span&gt;vxlan &lt;span class="nb"&gt;id &lt;/span&gt;100 remote 192.168.64.10 dstport 4789 dev enp0s2

&lt;span class="c"&gt;# check interface list if the vxlan interface created&lt;/span&gt;
ip a | &lt;span class="nb"&gt;grep &lt;/span&gt;vxla

7: vxlan-demo: &amp;lt;BROADCAST,MULTICAST&amp;gt; mtu 1450 qdisc noop state DOWN group default qlen 1000

&lt;span class="c"&gt;# make the interface up&lt;/span&gt;
ip &lt;span class="nb"&gt;link set &lt;/span&gt;vxlan-demo up

&lt;span class="c"&gt;# now attach the newly created vxlan interface to the docker bridge we created&lt;/span&gt;
brctl addif br-08bdd2dc3a82 vxlan-demo

&lt;span class="c"&gt;# check the route to ensure everything is okay. here '172.18.0.0' part is our concern part.&lt;/span&gt;
route &lt;span class="nt"&gt;-n&lt;/span&gt;

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.64.1    0.0.0.0         UG    100    0        0 enp0s2
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
172.18.0.0      0.0.0.0         255.255.0.0     U     0      0        0 br-08bdd2dc3a82
192.168.64.0    0.0.0.0         255.255.255.0   U     0      0        0 enp0s2
192.168.64.1    0.0.0.0         255.255.255.255 UH    100    0        0 enp0s2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For Host 2,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;brctl show

bridge name bridge &lt;span class="nb"&gt;id       &lt;/span&gt;STP enabled interfaces
br-55316818ea9f     8000.02422c83cddf   no
docker0     8000.0242b3cb9ab9

ip &lt;span class="nb"&gt;link &lt;/span&gt;add vxlan-demo &lt;span class="nb"&gt;type &lt;/span&gt;vxlan &lt;span class="nb"&gt;id &lt;/span&gt;100 remote 192.168.64.9 dstport 4789 dev enp0s2

ip a | &lt;span class="nb"&gt;grep &lt;/span&gt;vxlan

7: vxlan-demo: &amp;lt;BROADCAST,MULTICAST&amp;gt; mtu 1450 qdisc noop state DOWN group default qlen 1000

ip &lt;span class="nb"&gt;link set &lt;/span&gt;vxlan-demo up
brctl addif br-55316818ea9f vxlan-demo

route &lt;span class="nt"&gt;-n&lt;/span&gt;

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.64.1    0.0.0.0         UG    100    0        0 enp0s2
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
172.18.0.0      0.0.0.0         255.255.0.0     U     0      0        0 br-55316818ea9f
192.168.64.0    0.0.0.0         255.255.255.0   U     0      0        0 enp0s2
192.168.64.1    0.0.0.0         255.255.255.255 UH    100    0        0 enp0s2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 7: Testing Communication between Containers&lt;/strong&gt;&lt;br&gt;
Now that the VxLAN overlay network tunnel has been created, let's test the communication between the containers on different hosts.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; a9 bash
ping 192.168.64.10 &lt;span class="nt"&gt;-c&lt;/span&gt; 2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The results of the ping are shown in the image below. [Note: the containers on both hosts were re-run, as the previous ones had already exited after their 3000-second sleep.]&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F33l8m5pr7pidcv9pa9sw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F33l8m5pr7pidcv9pa9sw.png" alt="Image 3" width="800" height="309"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You should see successful ping responses, indicating that communication between the containers on different hosts is now established.&lt;/p&gt;

&lt;p&gt;Congratulations! You have successfully set up communication between hosts using VxLAN and Docker.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reference:&lt;/strong&gt;&lt;br&gt;
The demonstration code and steps mentioned in this blog post were adapted from the following GitHub repository: &lt;a href="https://github.com/faysalmehedi/vxlan-docker-hands-on" rel="noopener noreferrer"&gt;vxlan-docker-hands-on&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Feel free to explore the repository for more in-depth details and examples.&lt;/p&gt;

&lt;p&gt;Thank you for reading this blog post, and I hope you found it informative and useful.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>devops</category>
      <category>linux</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Simplifying Underlay - Overlay Networks, VxLAN and Packet Walk: A Journey of Networks</title>
      <dc:creator>Tanvir Rahman</dc:creator>
      <pubDate>Thu, 13 Jul 2023 20:09:39 +0000</pubDate>
      <link>https://dev.to/tanvirrahman/simplifying-underlay-overlay-networks-vxlan-and-packet-walk-a-journey-of-networks-2jfb</link>
      <guid>https://dev.to/tanvirrahman/simplifying-underlay-overlay-networks-vxlan-and-packet-walk-a-journey-of-networks-2jfb</guid>
      <description>&lt;p&gt;In the intricate world of networking, there are two crucial concepts: Underlay Networks and Overlay Networks. These concepts play a vital role in establishing efficient and secure data transmission. Let's unravel these concepts and explore their significance in our interconnected world.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is an Underlay Network?
&lt;/h3&gt;

&lt;p&gt;An Underlay Network forms the physical infrastructure on which overlay networks are built. It acts as the foundational layer responsible for delivering data packets across networks. Underlay networks operate at different layers of the OSI model, such as Layer 2 (Data Link Layer) or Layer 3 (Network Layer). For example, Ethernet-based Layer 2 underlay networks use Virtual Local Area Networks (VLANs) for segmentation. The Internet itself serves as a familiar example of a Layer 3 underlay network, providing the foundation for various overlay networks to operate.&lt;/p&gt;

&lt;h3&gt;
  
  
  Delving into Overlay Networks
&lt;/h3&gt;

&lt;p&gt;Overlay Networks are virtual networks constructed on top of the underlying network infrastructure, i.e., the underlay network. They abstract the physical network, creating a virtualized environment where overlay nodes (e.g., routers) communicate over the physical network. Overlay networks implement network virtualization concepts and employ Layer 2 and Layer 3 tunneling encapsulation protocols like VXLAN, GRE, and IPSec. These protocols enable the transportation of data packets across the overlay network, bridging the gap between different network segments or sites that may not natively support direct communication.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Making the Connection Between Underlay and Overlay Networks:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To better grasp the relationship between underlay and overlay networks, let's use a relatable example. Imagine embarking on a road trip from New York to Los Angeles. The physical highways, streets, bridges, and tunnels that you traverse represent the underlay network—the tangible pathways guiding your journey.&lt;/p&gt;

&lt;p&gt;Now, consider relying on a GPS navigation system throughout your trip. The GPS system utilizes the physical road infrastructure (the underlay network) but creates a virtual pathway (the overlay network) specifically guiding you from your starting point to your destination. The GPS system is not concerned with the physical attributes of the roads; its focus is on virtually connecting your origin and endpoint.&lt;/p&gt;

&lt;p&gt;In the context of computer networks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The Underlay Network resembles the physical highways and roads comprising physical routers, switches, and cables. Data packets use this infrastructure to travel across the network.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The Overlay Network acts like the GPS system. It is a virtual network built on top of the underlay network, providing a specific, optimized path for data packets to travel from source to destination.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To summarize, the Underlay Network represents the physical infrastructure facilitating data packet movement, while the Overlay Network creates virtual pathways within the underlying physical network to optimize data transmission.&lt;/p&gt;

&lt;h2&gt;
  
  
  VxLAN: Extending Layer 2 Networks Beyond Limits
&lt;/h2&gt;

&lt;p&gt;VxLAN, short for Virtual Extensible LAN, revolutionizes network virtualization by extending Layer 2 networks across Layer 3 infrastructures. This technology proves invaluable in dynamic data centers with multi-tenant Virtual Machines (VMs). VxLAN segments, or overlays, establish virtual communication paths, enabling VMs within the same segment to communicate seamlessly, even across different physical networks. Each segment is identified by a unique VNI (VxLAN Network Identifier), allowing for up to 16 million segments within the same administrative domain.&lt;/p&gt;

&lt;h2&gt;
  
  
  VNI: Unleashing Network Virtualization Potential
&lt;/h2&gt;

&lt;p&gt;The VxLAN Network Identifier (VNI) unlocks network virtualization capabilities by providing a vast address space. Unlike VLANs, VxLAN's 24-bit ID space allows for approximately 16 million unique VNIs. Each VNI acts as a unique identifier, similar to a postcode, ensuring data reaches the intended destination accurately. This extensive range of VNIs guarantees uniqueness across the entire network, enhancing flexibility and scalability.&lt;/p&gt;
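&lt;p&gt;The jump from thousands to millions of segments follows directly from the width of the ID field. A quick, purely illustrative calculation makes it concrete:&lt;/p&gt;

```python
# VLAN IDs are carried in a 12-bit field; VxLAN VNIs in a 24-bit field.
vlan_id_space = 2 ** 12
vni_id_space = 2 ** 24

print(vlan_id_space)                  # 4096
print(vni_id_space)                   # 16777216, roughly 16 million
print(vni_id_space // vlan_id_space)  # 4096 times more segments than VLANs
```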

&lt;h2&gt;
  
  
  VTEP: Orchestrating Traffic Encapsulation
&lt;/h2&gt;

&lt;p&gt;VxLAN traffic encapsulation and direction are handled by VTEP (VxLAN Tunnel End Point), acting as the orchestrator. VTEPs create stateless tunnels across the network, encapsulating traffic from the source switch and delivering it to the destination switch. Equipped with an IP address in the underlay network and associated with one or more VNIs, VTEPs perform the intricate task of adding and removing headers to ensure seamless frame delivery across the network.&lt;/p&gt;
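&lt;p&gt;To make the "adding and removing headers" part concrete, here is a minimal Python sketch of the 8-byte VxLAN header a VTEP prepends, following the layout in RFC 7348 (a flags byte with the VNI-present bit, reserved bytes, and the 24-bit VNI). This is a toy model for illustration, not a real VTEP implementation:&lt;/p&gt;

```python
def vxlan_header(vni):
    """Build the 8-byte VxLAN header defined in RFC 7348.

    Byte 0:    flags, where 0x08 means "VNI present"
    Bytes 1-3: reserved (zero)
    Bytes 4-6: the 24-bit VNI
    Byte 7:    reserved (zero)
    """
    assert vni in range(2 ** 24), "VNI must fit in 24 bits"
    return bytes([0x08, 0, 0, 0]) + vni.to_bytes(3, "big") + bytes([0])

hdr = vxlan_header(5000)                # an arbitrary example VNI
print(len(hdr))                         # 8
print(int.from_bytes(hdr[4:7], "big"))  # 5000
```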

&lt;p&gt;&lt;strong&gt;Making Sense of VxLAN, VNI, and VTEP with an Engaging Analogy&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Imagine you're sending letters to friends residing in different cities. Your goal is to make it seem like all these letters originate from your home city. In this scenario, VxLAN acts as a clever post office system that enables precisely that!&lt;/p&gt;

&lt;h2&gt;
  
  
  VxLAN: The Clever Post Office
&lt;/h2&gt;

&lt;p&gt;VxLAN, the virtual post office, empowers your computer (representing your home city) to send data to different parts of a network (different cities) as if they were part of your local network. This technology proves especially useful in large data centers where numerous computers (or VMs) need to communicate as if they were in the same location, regardless of their physical separation.&lt;/p&gt;

&lt;h2&gt;
  
  
  VNI: The Unique Postcodes
&lt;/h2&gt;

&lt;p&gt;Each letter (or data piece) requires a unique postcode to reach the correct friend (or VM). The VNI or VxLAN Network Identifier, fulfills this role. Operating as a unique identifier, similar to a postcode, the VNI ensures data reaches its intended destination accurately. With VxLAN, you have the capability to employ up to 16 million unique postcodes, far surpassing the limitations of traditional VLAN systems.&lt;/p&gt;

&lt;h2&gt;
  
  
  VTEP: The Reliable Postman
&lt;/h2&gt;

&lt;p&gt;The diligent postman, VTEP (VxLAN Tunnel End Point), encapsulates your letter (data) into an envelope (encapsulation) adorned with the correct address (IP and UDP headers). At the recipient's end, the envelope is opened (decapsulation), and the letter is safely delivered to the intended friend (the corresponding VM).&lt;/p&gt;

&lt;p&gt;In essence, VxLAN serves as our clever post office system, facilitating the seamless delivery of data to different friends (VMs) residing in various cities (networks) while maintaining the illusion that all letters originate from your home city (your local network). Each letter receives a unique postcode (VNI), and VTEP acts as the reliable postman, ensuring accurate delivery.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding Packet Walk in a VxLAN Network: A Letter's Journey
&lt;/h2&gt;

&lt;p&gt;Let's simplify it using an analogy: the journey of a letter from one city to another.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Starting the Journey&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The journey commences when a letter (data) arrives at a switch, acting as our post office. The letter originates from a host (home) and arrives through an untagged access port (regular pathway). The post office assigns an area code (VLAN) to the letter.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Deciding the Destination&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The post office determines the letter's destination is a remote post office (switch) located in another city (location). This remote post office is connected to the local post office through an array of roads (an IP network).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Preparing for the Journey&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The letter's area code (VLAN) is associated with a specific mailbox (VNI). To prepare the letter for its journey, it is placed in a dedicated envelope (VxLAN header applied). The local post office (VTEP) wraps the letter in additional envelopes (encapsulation) comprising UDP and IP headers. The letter sets off on its journey along the roads (IP network).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4: Arriving at the Destination&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Upon reaching the remote city (remote switch), the local post office receives the letter and removes the additional envelopes (decapsulation). The original letter, complete with its area code (a regular layer-2 frame with a VLAN ID), remains.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5: Delivery to the Recipient&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The remote post office selects an egress port (destination) based on the recipient's address (normal MAC lookups). The letter proceeds on its final leg of the journey, reaching the intended recipient just as expected.&lt;/p&gt;

&lt;p&gt;In summary, VxLAN technology enables our "letter" to travel seamlessly from one "home" to another, across cities, while maintaining the illusion of a neighborhood environment.&lt;/p&gt;
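&lt;p&gt;The five steps above can be sketched as a tiny encapsulate/decapsulate round trip. This illustrative Python model keeps only the VxLAN header itself; a real VTEP also wraps the result in outer UDP (destination port 4789), IP, and Ethernet headers, which the kernel builds for it:&lt;/p&gt;

```python
VXLAN_UDP_PORT = 4789  # standard destination port for the outer UDP header

def encapsulate(inner_frame, vni):
    # Source VTEP: prepend the 8-byte VxLAN header (flags 0x08, 24-bit VNI).
    header = bytes([0x08, 0, 0, 0]) + vni.to_bytes(3, "big") + bytes([0])
    return header + inner_frame

def decapsulate(packet):
    # Destination VTEP: verify the VNI-present flag, strip the header,
    # and hand back the VNI plus the original layer-2 frame.
    assert packet[0] == 0x08, "VNI-present flag not set"
    vni = int.from_bytes(packet[4:7], "big")
    return vni, packet[8:]

frame = b"original layer-2 frame"  # stand-in for the inner Ethernet frame
vni, recovered = decapsulate(encapsulate(frame, 5000))
print(vni)                 # 5000
print(recovered == frame)  # True
```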

&lt;p&gt;Thank you for joining us on this exploration of Underlay and Overlay Networks, VxLAN, and the Packet Walk. These concepts are essential in building efficient and secure networks. By understanding the roles of underlay and overlay networks, the power of VxLAN technology, and the journey of data packets, we gain valuable insights into the world of networking. Keep exploring and embracing the possibilities that networking offers in our interconnected world. Stay connected and informed!&lt;/p&gt;

&lt;p&gt;References:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://datatracker.ietf.org/doc/html/rfc7348" rel="noopener noreferrer"&gt;https://datatracker.ietf.org/doc/html/rfc7348&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="http://ce.sc.edu/cyberinfra/workshops/Material/SDN/Lab%205%20-Configuring%20VXLAN%20to%20Provide%20Network%20Traffic%20Isolation.pdf" rel="noopener noreferrer"&gt;http://ce.sc.edu/cyberinfra/workshops/Material/SDN/Lab%205%20-Configuring%20VXLAN%20to%20Provide%20Network%20Traffic%20Isolation.pdf&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://vincent.bernat.ch/en/blog/2017-vxlan-linux" rel="noopener noreferrer"&gt;https://vincent.bernat.ch/en/blog/2017-vxlan-linux&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/faysalmehedi/vxlan-docker-hands-on" rel="noopener noreferrer"&gt;https://github.com/faysalmehedi/vxlan-docker-hands-on&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>networking</category>
      <category>vxlan</category>
      <category>packet</category>
      <category>overlay</category>
    </item>
    <item>
      <title>Unveiling the Mysteries of Network Management in Linux</title>
      <dc:creator>Tanvir Rahman</dc:creator>
      <pubDate>Thu, 22 Jun 2023 19:21:21 +0000</pubDate>
      <link>https://dev.to/tanvirrahman/unveiling-the-mysteries-of-network-management-in-linux-4haj</link>
      <guid>https://dev.to/tanvirrahman/unveiling-the-mysteries-of-network-management-in-linux-4haj</guid>
      <description>&lt;p&gt;To successfully navigate the labyrinth of network configuration in a Linux environment, we must first ensure that our system is appropriately primed. This preparatory stage involves keeping our software up-to-date and confirming the presence of essential networking utilities. Here's how we do it:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;sudo apt update&lt;/code&gt;: This command refreshes our local index of software packages, paving the way for us to access the most recent versions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;sudo apt upgrade -y&lt;/code&gt;: An execution of this command results in an upgrade of all the installed software on our system. The '-y' flag automatically approves any prompts that might arise during the process.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;sudo apt install iproute2&lt;/code&gt;: This command sets about installing 'iproute2', a bundle of essential utilities for handling TCP/IP networking and traffic control in Linux.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;sudo apt install net-tools&lt;/code&gt;: By running this command, we are introducing 'net-tools' into our system. This package provides commands like 'ifconfig' that are instrumental in configuring network interfaces.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;With these preparatory steps complete, we've established a sturdy foundation that will support our subsequent foray into network exploration and manipulation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Mastering Network Configuration
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Illustration
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fol27bzhflfrvtloctrz7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fol27bzhflfrvtloctrz7.png" alt="Overall Concepts" width="800" height="657"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Building Our Network Bridge
&lt;/h3&gt;

&lt;p&gt;Our first order of business is to erect a new network bridge, which we'll christen 'v-bridge'. The subsequent commands breathe life into 'v-bridge', set it to an active (UP) state, and assign it an IP address:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ip link add dev v-bridge type bridge
ip link set v-bridge up
ip addr add 192.168.0.1/24 dev v-bridge
ip addr show dev v-bridge
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frtng2ze10xwkn9v1bs5l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frtng2ze10xwkn9v1bs5l.png" alt="Check Bridge" width="800" height="163"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Carving Out Network Namespaces
&lt;/h3&gt;

&lt;p&gt;Next, we shift our attention towards creating three distinct network namespaces, appropriately dubbed "red," "green," and "blue":&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ip netns add red
ip netns add green
ip netns add blue
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can verify the successful creation of our namespaces by executing &lt;code&gt;ip netns list&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Crafting Virtual Ethernet Interfaces
&lt;/h3&gt;

&lt;p&gt;Our next stride involves the creation of virtual Ethernet (veth) pairs. These pairs function as a conduit, allowing seamless network communication between two endpoints:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ip link add veth-red-ns type veth peer name veth-red-br
ip link add veth-green-ns type veth peer name veth-green-br
ip link add veth-blue-ns type veth peer name veth-blue-br
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Connecting Virtual Ethernet Interfaces
&lt;/h3&gt;

&lt;p&gt;Our freshly minted veth interfaces are then linked to their corresponding network namespaces and our primary 'v-bridge':&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ip link set dev veth-red-ns netns red
ip link set dev veth-green-ns netns green
ip link set dev veth-blue-ns netns blue
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ip link set dev veth-red-br master v-bridge
ip link set dev veth-green-br master v-bridge
ip link set dev veth-blue-br master v-bridge
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To activate these interfaces and prepare them for network communication, we execute:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ip link set dev veth-red-br up
ip link set dev veth-green-br up
ip link set dev veth-blue-br up
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ip netns exec red ip link set dev veth-red-ns up


ip netns exec green ip link set dev veth-green-ns up
ip netns exec blue ip link set dev veth-blue-ns up
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Assigning IP Addresses and Default Routes in Namespaces
&lt;/h3&gt;

&lt;p&gt;Now we dive into the network namespaces to configure IP addresses and default routes for our veth interfaces:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ip netns exec red ip address add 192.168.0.2/24 dev veth-red-ns
ip netns exec red ip route add default via 192.168.0.1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and similar commands for the "green" and "blue" namespaces.&lt;/p&gt;
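&lt;p&gt;Spelled out, those commands would look like the following. The addresses here are an assumption: 'green' takes 192.168.0.3 and 'blue' takes 192.168.0.4, continuing the 192.168.0.0/24 scheme used for 'red' and matching the address pinged in the verification step.&lt;/p&gt;

```shell
# Assumed addressing: green = 192.168.0.3, blue = 192.168.0.4,
# following the 192.168.0.0/24 scheme used for "red" above
ip netns exec green ip address add 192.168.0.3/24 dev veth-green-ns
ip netns exec green ip route add default via 192.168.0.1
ip netns exec blue ip address add 192.168.0.4/24 dev veth-blue-ns
ip netns exec blue ip route add default via 192.168.0.1
```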

&lt;p&gt;We've now configured our network namespaces, laying the groundwork for network communication and establishing a common gateway (192.168.0.1/24) for outbound traffic.&lt;/p&gt;

&lt;p&gt;We can confirm inter-namespace communication by pinging one namespace from another, as shown below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ip netns exec red ping -c 2 192.168.0.4
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn27h2sri86axj0xcu4rj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn27h2sri86axj0xcu4rj.png" alt="Ping Namespaces" width="800" height="242"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Enabling Internet Connectivity
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Activating IP Forwarding
&lt;/h3&gt;

&lt;p&gt;We start this stage by activating IP forwarding on our system, accomplished by setting the 'net.ipv4.ip_forward' sysctl parameter to 1: &lt;code&gt;sysctl -w net.ipv4.ip_forward=1&lt;/code&gt;.&lt;/p&gt;
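&lt;p&gt;Note that &lt;code&gt;sysctl -w&lt;/code&gt; only changes the running kernel; the setting is lost on reboot. A sketch for verifying the current value and persisting it, assuming a stock &lt;code&gt;/etc/sysctl.conf&lt;/code&gt; layout:&lt;/p&gt;

```shell
# Print the running value (1 = forwarding enabled)
sysctl -n net.ipv4.ip_forward

# Persist across reboots by appending to /etc/sysctl.conf,
# then reload the file so it takes effect immediately
echo 'net.ipv4.ip_forward = 1' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
```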

&lt;h3&gt;
  
  
  Configuring NAT and Firewall Rules
&lt;/h3&gt;

&lt;p&gt;Next, we employ iptables to configure NAT. This allows our network namespaces to access the internet via the "enp0s2" interface: &lt;code&gt;iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -o enp0s2 -j MASQUERADE&lt;/code&gt;.&lt;/p&gt;
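&lt;p&gt;If the FORWARD chain's default policy is DROP, as it is on many hardened hosts, the MASQUERADE rule alone is not enough: traffic crossing between the bridge and the outside interface must also be explicitly accepted. A hedged sketch, reusing the "enp0s2" interface name from above:&lt;/p&gt;

```shell
# Allow traffic from the namespace subnet out to the internet...
iptables -A FORWARD -s 192.168.0.0/24 -o enp0s2 -j ACCEPT

# ...and allow reply traffic back in to the namespaces
iptables -A FORWARD -d 192.168.0.0/24 -i enp0s2 \
  -m state --state RELATED,ESTABLISHED -j ACCEPT
```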

&lt;p&gt;The snapshot below provides a glimpse into the NAT configuration process:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz0qmfpumoilm2v2pvjfp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz0qmfpumoilm2v2pvjfp.png" alt="Configure NAT" width="800" height="206"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can verify internet connectivity by initiating a ping from any network namespace to an external IP address. An example follows: &lt;code&gt;ip netns exec red ping -c 2 8.8.8.8&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The successful ping to an external IP is displayed below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F85m2pgtcsdcl0ee7t9z6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F85m2pgtcsdcl0ee7t9z6.png" alt="Ping" width="800" height="281"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  In Conclusion
&lt;/h2&gt;

&lt;p&gt;In this detailed guide, we've navigated the complexities of managing network configuration in a Linux environment. We've delved into creating network namespaces and virtual Ethernet pairs, connecting them via a network bridge, assigning IP addresses and default routes within namespaces, and establishing communication between namespaces. Further, we've covered the enabling of IP forwarding, NAT configuration, and firewall rule setup to allow internet access to our network namespaces.&lt;/p&gt;

&lt;p&gt;Should you want to dive deeper into these subjects or connect professionally, I'm always open to stimulating discussions. Feel free to connect with me on &lt;a href="https://www.linkedin.com/in/tanvir-rahman/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>linux</category>
      <category>networking</category>
    </item>
    <item>
      <title>Exploring Linux: Kernel Space, User Space, Namespaces and Network Chaining Unveiled</title>
      <dc:creator>Tanvir Rahman</dc:creator>
      <pubDate>Thu, 22 Jun 2023 04:25:43 +0000</pubDate>
      <link>https://dev.to/tanvirrahman/exploring-linux-kernel-space-user-space-namespaces-and-network-chaining-unveiled-4874</link>
      <guid>https://dev.to/tanvirrahman/exploring-linux-kernel-space-user-space-namespaces-and-network-chaining-unveiled-4874</guid>
      <description>&lt;p&gt;In the sprawling world of Linux, terms like kernel space, user space and namespaces are often bandied about. These concepts serve as the backbone of how a Linux system operates, making them indispensable knowledge for any system administrator or developer. Today, we're going to delve into what these terms mean and their importance in the Linux ecosystem&lt;/p&gt;

&lt;h3&gt;
  
  
  The Realm of Kernel Space and User Space
&lt;/h3&gt;

&lt;p&gt;A Linux operating system's memory space is divided into two main parts – kernel space and user space. This segregation is instrumental in maintaining system performance, security, and efficiency.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Kernel Space:&lt;/strong&gt; Kernel space is the privileged realm where the heart of the operating system, the kernel, resides. Responsible for interacting directly with the hardware, controlling memory, and managing system services, the kernel is a critical part of the OS that requires a secure, isolated space - hence, the kernel space.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;User Space:&lt;/strong&gt; User space is the unprivileged territory where user mode applications get to play. Each application gets its private memory space, ensuring that it cannot interfere with others or the kernel. This separation safeguards system integrity and facilitates efficient task management.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
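&lt;p&gt;The boundary between the two spaces is crossed via system calls. As a small illustration (a sketch that assumes an x86-64 Linux machine, where &lt;code&gt;getpid&lt;/code&gt; is syscall number 39), the same answer can be reached through the C library's user-space wrapper or by entering the kernel directly:&lt;/p&gt;

```python
import ctypes
import os

# Load the C library; its syscall() function is the raw doorway
# from user space into kernel space
libc = ctypes.CDLL(None, use_errno=True)

SYS_getpid = 39  # x86-64 syscall number for getpid (architecture-specific)

pid_via_wrapper = os.getpid()               # the usual user-space wrapper
pid_via_syscall = libc.syscall(SYS_getpid)  # the raw kernel entry point

# Both paths end up in the same kernel-space handler
assert pid_via_wrapper == pid_via_syscall
print(pid_via_wrapper)
```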

&lt;h3&gt;
  
  
  Decoding Linux Namespaces
&lt;/h3&gt;

&lt;p&gt;Linux namespaces are a powerful feature of the Linux kernel that isolates different system resources for different sets of processes. Each namespace gives the processes inside it the illusion of having their own dedicated system resources.&lt;/p&gt;

&lt;p&gt;For instance, the PID namespace provides an isolated environment where process IDs are unique within the namespace. Similarly, the network namespace provides independent network devices, IP routing tables, and port numbers. The mount namespace isolates the set of filesystem mount points, giving processes their unique view of the filesystem hierarchy.&lt;/p&gt;

&lt;p&gt;This resource isolation makes processes feel like they're running on their own private machine, paving the way for lightweight virtualization solutions such as Docker.&lt;/p&gt;
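&lt;p&gt;On a running system, each process's namespace membership is visible under &lt;code&gt;/proc&lt;/code&gt;: every namespace is identified by an inode number, and two processes whose links resolve to the same number share that namespace. A minimal sketch:&lt;/p&gt;

```python
import os

# Each entry in /proc/<pid>/ns is a symlink whose target encodes
# the namespace type and its inode number, e.g. 'net:[4026531992]'
for ns_type in ("pid", "net", "mnt"):
    target = os.readlink(f"/proc/self/ns/{ns_type}")
    print(ns_type, "->", target)
```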

&lt;h3&gt;
  
  
  The Intricate Relations
&lt;/h3&gt;

&lt;p&gt;The division between kernel space and user space is a fundamental design aspect of Linux, ensuring system stability and security. User applications running in user space interact with hardware indirectly through the kernel, which operates in kernel space and has direct hardware access.&lt;/p&gt;

&lt;p&gt;Linux namespaces, a feature of the Linux kernel, operate within kernel space. The kernel manages these namespaces, providing resource isolation and allocation. However, applications running in user space can reside within namespaces, offering them an isolated system view and mimicking the feeling of running on a private machine.&lt;/p&gt;

&lt;h2&gt;
  
  
  Navigating Through Prerouting, Postrouting, Forwarding, Input, and Output Chains
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkhbi9r83do0s2i3ph2kz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkhbi9r83do0s2i3ph2kz.png" alt="Routing Chains" width="800" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;These terms are often used to describe various stages of processing within the Netfilter framework, which is part of the Linux kernel. Netfilter provides a means for packet filtering, network address [and port] translation (NAT/NAPT) and other packet mangling. It is the infrastructure that facilitates building network-facing services in the kernel and is used by the &lt;code&gt;iptables&lt;/code&gt; tool.&lt;/p&gt;

&lt;p&gt;Here is an explanation of each term, along with an example:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Prerouting&lt;/strong&gt;: This is the very first chain that an incoming network packet will hit. At this point, any decisions about where to route the packet (i.e., to a local process or to another network interface for forwarding on) are yet to be made. This is also the stage where Destination NAT (DNAT) happens.&lt;br&gt;
&lt;strong&gt;Example&lt;/strong&gt;: Imagine you have a Linux machine set up as a router, and it receives a packet destined for an IP that the router has been configured to handle (via DNAT). The Prerouting chain would be used to translate the destination address to the appropriate internal IP.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Input&lt;/strong&gt;: If the packet is intended for a local process (the destination IP is one of the server's own addresses), it will go to the Input chain next. This is where you would typically place rules to handle packets destined for the server itself.&lt;br&gt;
&lt;strong&gt;Example&lt;/strong&gt;: If a packet arrives at the server that is destined for port 22 (SSH), you could have a rule in the Input chain to accept such packets.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Forward&lt;/strong&gt;: If the packet is intended to be forwarded to another device (the destination IP is not one of the server's own addresses), it will be sent to the Forward chain next. This is where decisions are made about forwarding packets to other networks.&lt;br&gt;
&lt;strong&gt;Example&lt;/strong&gt;: If a packet arrives at the router destined for a different subnet, the Forward chain is used to decide if the packet should be forwarded, and to which interface it should be sent.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Output&lt;/strong&gt;: This is the first chain that packets coming from local processes will hit. Any packet that your server is sending out will go through this chain.&lt;br&gt;
&lt;strong&gt;Example&lt;/strong&gt;: If a local process, such as a web server, is sending out a packet in response to a request, it would go through the Output chain.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Postrouting&lt;/strong&gt;: This is the last chain that a packet will hit before it leaves the server. This is typically where you would do Source NAT (SNAT), where you change the source address of outgoing packets so they appear to come from the router itself.&lt;br&gt;
&lt;strong&gt;Example&lt;/strong&gt;: Imagine you have a private network and a router that connects your network to the internet. When a packet is leaving your network, the Postrouting chain would be used to translate the source IP to the public IP of the router.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
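&lt;p&gt;The five examples above can be condensed into &lt;code&gt;iptables&lt;/code&gt; sketches. The interface names, ports, and addresses here are illustrative placeholders, not taken from a real configuration:&lt;/p&gt;

```shell
# Prerouting: DNAT a public port to an internal host
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
  -j DNAT --to-destination 10.0.0.10:80

# Input: accept SSH traffic destined for the server itself
iptables -A INPUT -p tcp --dport 22 -j ACCEPT

# Forward: allow packets from the internal subnet to be relayed out
iptables -A FORWARD -s 10.0.0.0/24 -o eth0 -j ACCEPT

# Output: allow the local web server's replies out
iptables -A OUTPUT -p tcp --sport 80 -j ACCEPT

# Postrouting: masquerade outgoing traffic behind the router's public IP
iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o eth0 -j MASQUERADE
```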




&lt;p&gt;Well, that wraps up our deep-dive into the intricate world of Linux's Kernel Space, User Space, Namespaces and the all-important network chains. As we've seen, the kernel forms the heart of Linux, with namespaces and network chains acting as its vital arteries, ensuring everything runs smoothly and efficiently. &lt;/p&gt;

&lt;p&gt;The beauty of Linux is that it gives us, the users, an unprecedented level of control over our systems. Understanding these fundamental concepts is the first step towards leveraging that control to create powerful, efficient and secure systems. &lt;/p&gt;

&lt;p&gt;Remember, there's always more to learn in the fascinating world of Linux. So, keep exploring, keep tinkering and keep building. And of course, if you found this blog useful, do share it with your peers who might also benefit from it.&lt;/p&gt;

&lt;p&gt;Until next time, happy coding!&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
