<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Syed Omair</title>
    <description>The latest articles on DEV Community by Syed Omair (@syed_omair).</description>
    <link>https://dev.to/syed_omair</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1544631%2Fd695ffe1-f356-47f8-bba7-2d5d61d4bf0e.png</url>
      <title>DEV Community: Syed Omair</title>
      <link>https://dev.to/syed_omair</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/syed_omair"/>
    <language>en</language>
    <item>
      <title>Creating Pods Without Deployment or Service: The Bare-Metal Kubernetes Experience</title>
      <dc:creator>Syed Omair</dc:creator>
      <pubDate>Sat, 06 Dec 2025 17:26:54 +0000</pubDate>
      <link>https://dev.to/syed_omair/creating-pods-without-deployment-or-service-the-bare-metal-kubernetes-experience-4j0n</link>
      <guid>https://dev.to/syed_omair/creating-pods-without-deployment-or-service-the-bare-metal-kubernetes-experience-4j0n</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In Kubernetes, while Deployments and Services are the recommended approach for managing applications, there are legitimate scenarios where you might want to create a Pod directly without these abstractions. This "bare-metal" approach to Pod management gives you maximum control but comes with important trade-offs. Let's explore when, why, and how to use standalone Pods in Kubernetes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating a Standalone Pod: Basic Examples
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Example 1: The Simplest Pod Definition&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# simple-pod.yaml&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;standalone-nginx&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-container&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx:latest&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create and verify the pod:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; simple-pod.yaml
kubectl get pods
kubectl describe pod standalone-nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Example 2: A Batch Job Pod (Without Job Controller)&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# batch-pod.yaml&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;data-processor&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;restartPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;OnFailure&lt;/span&gt;  &lt;span class="c1"&gt;# Note: Not "Always"&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;processor&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;busybox&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/bin/sh"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
    &lt;span class="na"&gt;args&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;-c"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;echo&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;'Processing&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;data...';&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;sleep&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;30;&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;echo&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;'Done!'"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create and verify the pod:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; batch-pod.yaml
kubectl get pods
kubectl describe pod data-processor
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Why Create Pods Without Deployments or Services?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. One-Off Tasks and Ephemeral Workloads&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# cleanup-job.yaml&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;temp-cleanup&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;restartPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Never&lt;/span&gt;  &lt;span class="c1"&gt;# We don't want this to restart&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cleaner&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;busybox&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/bin/sh"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
    &lt;span class="na"&gt;args&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;-c"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;rm&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;-rf&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;/tmp/old-files/*"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Use Case: Cleaning up temporary files, running database migrations, or executing a single batch operation where automatic restart would be harmful.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Debugging and Troubleshooting&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# network-debug-pod.yaml&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;network-tester&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;tester&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nicolaka/netshoot&lt;/span&gt;  &lt;span class="c1"&gt;# Popular network troubleshooting image&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sleep"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;infinity"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create and debug:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; network-debug-pod.yaml
kubectl &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; network-tester &lt;span class="nt"&gt;--&lt;/span&gt; bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now you can run curl, dig, ping, tcpdump, etc.&lt;br&gt;
Advantage: Quick spin-up without deployment overhead. Perfect for temporary diagnostic tools.&lt;/p&gt;
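&lt;p&gt;As a quick sketch of such a session (the service name and IP below are placeholders, not resources defined in this article):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Inside the network-tester shell
dig my-service.default.svc.cluster.local              # check DNS resolution
curl -v http://my-service.default.svc.cluster.local   # check HTTP connectivity
ping -c 3 10.96.0.1                                   # example cluster IP
tcpdump -i eth0 -c 20 port 80                         # capture a few packets
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;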

&lt;p&gt;&lt;strong&gt;3. Init Containers (Within Pods)&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# init-container-pod.yaml&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app-with-init&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;main-app&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myapp:latest&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8080&lt;/span&gt;
  &lt;span class="na"&gt;initContainers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;init-db&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;busybox&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;sh'&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;-c'&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;until&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;nslookup&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;database-service;&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;do&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;echo&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;waiting&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;for&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;database;&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;sleep&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;2;&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;done;'&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note: Init containers are part of Pod spec, not separate resources.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Learning and Experimentation&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# learning-pod.yaml&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;learning-experiment&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;experiment&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
    &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;LEARNING_MODE&lt;/span&gt;
      &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;true"&lt;/span&gt;
    &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;64Mi"&lt;/span&gt;
        &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;250m"&lt;/span&gt;
      &lt;span class="na"&gt;limits&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;128Mi"&lt;/span&gt;
        &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;500m"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Benefit: Direct exposure to Pod lifecycle without controller abstractions.&lt;/p&gt;
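&lt;p&gt;For example, you can watch the Pod's lifecycle directly with plain kubectl commands (a sketch using the learning-experiment pod above):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get pod learning-experiment -w    &lt;span class="c"&gt;# watch phase transitions live&lt;/span&gt;
kubectl get pod learning-experiment -o &lt;span class="nv"&gt;jsonpath&lt;/span&gt;=&lt;span class="s1"&gt;'{.status.phase}'&lt;/span&gt;
kubectl delete pod learning-experiment    &lt;span class="c"&gt;# nothing recreates it afterwards&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;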

&lt;p&gt;&lt;strong&gt;5. Specialized Node Operations&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# node-specific-pod.yaml&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;node-maintenance&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;nodeName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;node-01&lt;/span&gt;  &lt;span class="c1"&gt;# Direct node assignment&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;maintenance&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/bin/bash"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
    &lt;span class="na"&gt;args&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;-c"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;apt-get&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;update&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;amp;&amp;amp;&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;apt-get&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;upgrade&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;-y"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Use Case: Node-specific maintenance tasks where you need to target a specific node. Keep in mind that without host namespaces or hostPath volume mounts, commands like the &lt;code&gt;apt-get&lt;/code&gt; calls above run against the container's own filesystem, not the node's.&lt;/p&gt;

&lt;h2&gt;
  
  
  Advantages of Standalone Pods
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Simplicity and Clarity&lt;/li&gt;
&lt;li&gt;Direct Control&lt;/li&gt;
&lt;li&gt;Quick Iteration&lt;/li&gt;
&lt;li&gt;Explicit Failure Handling&lt;/li&gt;
&lt;li&gt;Resource Efficiency

&lt;ul&gt;
&lt;li&gt;No deployment controller overhead&lt;/li&gt;
&lt;li&gt;No reconciliation loops&lt;/li&gt;
&lt;li&gt;No status tracking&lt;/li&gt;
&lt;li&gt;Less etcd traffic&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Disadvantages and Production Concerns
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;No Self-Healing&lt;/li&gt;
&lt;li&gt;No Rolling Updates&lt;/li&gt;
&lt;li&gt;No Scaling Capabilities&lt;/li&gt;
&lt;li&gt;Limited Service Integration&lt;/li&gt;
&lt;li&gt;Manual Lifecycle Management&lt;/li&gt;
&lt;/ol&gt;
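&lt;p&gt;The first point is easy to demonstrate (a sketch; the pod name is from the earlier example):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# A standalone pod stays deleted&lt;/span&gt;
kubectl delete pod standalone-nginx
kubectl get pod standalone-nginx    &lt;span class="c"&gt;# Error from server (NotFound)&lt;/span&gt;

&lt;span class="c"&gt;# By contrast, deleting a Deployment-managed pod just triggers a replacement&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;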

&lt;h2&gt;
  
  
  When to Use (and When Not to Use)
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Use Standalone Pods When:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Temporary debugging or troubleshooting&lt;/li&gt;
&lt;li&gt;One-off batch jobs (though consider the Job resource)&lt;/li&gt;
&lt;li&gt;Learning and experimentation&lt;/li&gt;
&lt;li&gt;Init containers within other pods&lt;/li&gt;
&lt;li&gt;Simple, stateless utilities with no HA requirements&lt;/li&gt;
&lt;li&gt;Node-specific operations requiring direct assignment&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Avoid Standalone Pods When:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Production applications requiring high availability&lt;/li&gt;
&lt;li&gt;Services needing load balancing&lt;/li&gt;
&lt;li&gt;Applications requiring zero-downtime updates&lt;/li&gt;
&lt;li&gt;Workloads that need automatic scaling&lt;/li&gt;
&lt;li&gt;Stateful applications (use StatefulSet instead)&lt;/li&gt;
&lt;li&gt;Long-running services accessed by other pods&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Standalone Pods in Kubernetes serve an important niche in the container orchestration ecosystem. They provide the raw, unabstracted access to container execution that's essential for debugging, one-off tasks, and learning. However, they lack the robustness and automation features that make Kubernetes powerful for production workloads.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Takeaways:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use standalone Pods strategically for their intended purposes&lt;/li&gt;
&lt;li&gt;Understand the trade-offs between control and automation&lt;/li&gt;
&lt;li&gt;Know when to graduate to Deployments and Services&lt;/li&gt;
&lt;li&gt;Always clean up temporary standalone Pods&lt;/li&gt;
&lt;li&gt;Document the purpose of each standalone Pod for team clarity&lt;/li&gt;
&lt;/ul&gt;
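&lt;p&gt;For the cleanup point, a couple of commands go a long way (a sketch; the pod names are from the examples above):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Delete specific standalone pods once you are done with them&lt;/span&gt;
kubectl delete pod network-tester temp-cleanup

&lt;span class="c"&gt;# Or sweep all pods that have already run to completion&lt;/span&gt;
kubectl delete pods &lt;span class="nt"&gt;--field-selector&lt;/span&gt;=status.phase=Succeeded
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;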

&lt;p&gt;&lt;strong&gt;Remember&lt;/strong&gt;: Kubernetes is about abstraction and automation. While standalone Pods give you escape hatches from these abstractions, they should be the exception rather than the rule in a well-architected Kubernetes environment.&lt;/p&gt;

</description>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Kubernetes Port Forwarding: A Practical Guide with Examples</title>
      <dc:creator>Syed Omair</dc:creator>
      <pubDate>Sat, 06 Dec 2025 16:11:37 +0000</pubDate>
      <link>https://dev.to/syed_omair/kubernetes-port-forwarding-a-practical-guide-with-examples-25o6</link>
      <guid>https://dev.to/syed_omair/kubernetes-port-forwarding-a-practical-guide-with-examples-25o6</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Kubernetes port forwarding is a powerful feature that allows you to access and debug applications running inside your cluster directly from your local machine. Whether you're a developer testing a service, an operator troubleshooting issues, or someone who needs temporary access to internal resources, port forwarding provides a convenient bridge between your local environment and the Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;In this technical deep dive, we'll explore how port forwarding works under the hood, walk through practical examples, and discuss best practices for using this feature effectively.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Port Forwarding in Kubernetes?
&lt;/h2&gt;

&lt;p&gt;Port forwarding creates a secure tunnel between your local machine and a specific pod or service running in your Kubernetes cluster. This allows you to access internal cluster resources as if they were running locally on your machine.&lt;/p&gt;

&lt;h2&gt;
  
  
  How It Works: The Technical Details
&lt;/h2&gt;

&lt;p&gt;When you run &lt;code&gt;kubectl port-forward&lt;/code&gt;, here's what happens:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The kubectl client initiates a request to the Kubernetes API server&lt;/li&gt;
&lt;li&gt;The API server authenticates and authorizes the request&lt;/li&gt;
&lt;li&gt;The kubelet on the target node establishes the connection&lt;/li&gt;
&lt;li&gt;A tunnel is created using the SPDY or WebSocket protocol&lt;/li&gt;
&lt;li&gt;Traffic flows through the API server proxy to the target pod&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The architecture looks like this:&lt;br&gt;
&lt;code&gt;Local Machine → kubectl → API Server → Kubelet → Target Pod&lt;/code&gt;&lt;/p&gt;
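&lt;p&gt;You can observe these steps yourself by raising kubectl's log verbosity; the request to the pod's portforward subresource shows up in the output (a sketch; &lt;code&gt;my-pod&lt;/code&gt; is a placeholder pod name):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl port-forward pod/my-pod 8080:80 &lt;span class="nt"&gt;-v&lt;/span&gt;=6
&lt;span class="c"&gt;# The logs include a POST to the portforward subresource, roughly:&lt;/span&gt;
&lt;span class="c"&gt;#   POST https://API_SERVER/api/v1/namespaces/default/pods/my-pod/portforward&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;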
&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before we begin, ensure you have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;kubectl installed and configured&lt;/li&gt;
&lt;li&gt;Access to a Kubernetes cluster&lt;/li&gt;
&lt;li&gt;Basic understanding of pods, services, and deployments&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Basic Port Forwarding Examples
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Example 1: Forwarding to a Pod&lt;/strong&gt;&lt;br&gt;
Let's start with the most common use case: forwarding to a specific pod.&lt;/p&gt;

&lt;p&gt;First, let's create a simple nginx pod:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# nginx-pod.yaml&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-demo&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx:latest&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create the pod:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; nginx-pod.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, let's forward local port 2224 to the pod's port 80:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl port-forward pod/nginx-demo 2224:80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You'll see output similar to:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Forwarding from 127.0.0.1:2224 -&amp;gt; 80&lt;br&gt;
Forwarding from [::1]:2224 -&amp;gt; 80&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now open your browser and navigate to &lt;a href="http://localhost:2224" rel="noopener noreferrer"&gt;http://localhost:2224&lt;/a&gt;, or use curl:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl http://localhost:2224
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Either way, you'll see the nginx welcome page!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example 2: Forwarding to a Deployment&lt;/strong&gt;&lt;br&gt;
When working with deployments, you typically forward to a specific pod within the deployment. First, let's create a deployment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# web-app-deployment.yaml&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;web-app&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;web-app&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;web-app&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;web&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx:latest&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create the deployment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; web-app-deployment.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Get the pod name:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get pods &lt;span class="nt"&gt;-l&lt;/span&gt; &lt;span class="nv"&gt;app&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;web-app
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Forward to one of the pods:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl port-forward pod/web-app-7d75f47444-abc123 8080:80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Example 3: Forwarding to a Service
&lt;/h2&gt;

&lt;p&gt;You can also forward directly to a service. This is convenient because you don't need to look up pod names, but note that kubectl resolves the service to a single backing pod and forwards all traffic to that pod; port forwarding does not load balance across the service's endpoints.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# web-service.yaml&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;web-service&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;web-app&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
    &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create the service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; web-service.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Forward to the service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl port-forward svc/web-service 8080:80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Advanced Port Forwarding Techniques
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Multiple Port Forwarding&lt;/strong&gt;&lt;br&gt;
You can forward multiple ports simultaneously. This is useful for applications that expose multiple services:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl port-forward pod/multi-port-app 8080:80 8443:443 3000:3000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Background Port Forwarding&lt;/strong&gt;&lt;br&gt;
For long-running port forwarding sessions, you might want to run it in the background:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Start port forwarding in background&lt;/span&gt;
kubectl port-forward pod/nginx-demo 8080:80 &amp;amp;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save the process ID so you can stop it later:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;PORT_FORWARD_PID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$!&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When you're done, kill the process:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;kill&lt;/span&gt; &lt;span class="nv"&gt;$PORT_FORWARD_PID&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
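&lt;p&gt;If you script this start/stop lifecycle often, it can also be driven from Go with os/exec. This is a hedged sketch, not part of kubectl itself: it only builds the command, and the target pod, ports, and a kubectl binary on PATH are assumptions:&lt;/p&gt;

```go
package main

import (
	"fmt"
	"os/exec"
)

// portForwardCmd builds (but does not run) the same command the
// shell snippets above use: kubectl port-forward TARGET LOCAL:REMOTE.
func portForwardCmd(target string, localPort, remotePort int) *exec.Cmd {
	return exec.Command("kubectl", "port-forward", target,
		fmt.Sprintf("%d:%d", localPort, remotePort))
}

func main() {
	cmd := portForwardCmd("pod/nginx-demo", 8080, 80)
	fmt.Println(cmd.Args) // prints "[kubectl port-forward pod/nginx-demo 8080:80]"

	// To actually run it in the background, call cmd.Start(), keep
	// cmd.Process, and call cmd.Process.Kill() when finished: the Go
	// equivalent of backgrounding the command and killing the saved PID.
}
```

&lt;p&gt;Wrapping the command this way makes it easy to guarantee cleanup with defer, which addresses the "clean up after your debugging sessions" advice below.&lt;/p&gt;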



&lt;h2&gt;
  
  
  Security Considerations
&lt;/h2&gt;

&lt;p&gt;While port forwarding is incredibly useful, it's important to use it securely:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Authentication&lt;/strong&gt;: Port forwarding uses your kubeconfig credentials&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Authorization&lt;/strong&gt;: RBAC rules apply to port forwarding requests&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Network Policies&lt;/strong&gt;: They don't apply to port-forwarded traffic, which tunnels through the API server instead of the pod network&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Temporary Access&lt;/strong&gt;: Use it for debugging, not for permanent access&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Audit Logging&lt;/strong&gt;: Port forwarding requests are logged in the API server audit logs&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Port forwarding is an essential tool in the Kubernetes practitioner's toolkit. It provides a simple yet powerful way to bridge the gap between local development and cluster-based services. While perfect for debugging and development scenarios, remember that it's not designed for production traffic or permanent access solutions.&lt;/p&gt;

&lt;p&gt;By understanding how to use port forwarding effectively—from basic single-port forwarding to advanced multi-port scenarios—you can significantly improve your development and debugging workflow in Kubernetes environments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Remember&lt;/strong&gt;: With great power comes great responsibility. Use port forwarding judiciously, always consider security implications, and clean up after your debugging sessions to maintain a secure and efficient Kubernetes environment.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>portforwarding</category>
    </item>
    <item>
      <title>The Hidden Costs of Inefficient Code: A Case Study</title>
      <dc:creator>Syed Omair</dc:creator>
      <pubDate>Wed, 19 Mar 2025 21:10:08 +0000</pubDate>
      <link>https://dev.to/syed_omair/the-hidden-costs-of-inefficient-code-a-case-study-4ka9</link>
      <guid>https://dev.to/syed_omair/the-hidden-costs-of-inefficient-code-a-case-study-4ka9</guid>
      <description>&lt;p&gt;As developers, we often focus on getting our code to work, but sometimes overlook the performance implications of our design choices. In this post, we'll explore how a seemingly minor difference in code can significantly impact performance, using a real-world example from our own system.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Scenario
&lt;/h2&gt;

&lt;p&gt;We have two versions of code that achieve the same goal: fetching user points from a point server. The first version is optimized, while the second is less efficient. Let's dive into both and see why one outperforms the other.&lt;/p&gt;

&lt;h2&gt;
  
  
  Version 1: Optimized Code
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;userList, count, err := u.repo.GetAllUserDB(limit, offset, orderBy, sort)
if err != nil {
    return err
}

conn, err := u.pointServiceConnectionPool.Get()
if err != nil {
    return fmt.Errorf("failed to get connection from pool: %v", err)
}
defer u.pointServiceConnectionPool.Put(conn)

client := pb.NewPointServerClient(conn)

ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()

userIDs := []string{}
for _, user := range userList {
    userIDs = append(userIDs, user.ID)
}

r, err := client.GetUserListPoints(ctx, &amp;amp;pb.UserListRequest{UserIds: userIDs})
if err != nil {
    u.logger.Error("failed to get points for users", zap.Error(err), zap.Any("userIDs", userIDs))
    return err
}
userPoints := r.GetUserPoints()
for k, v := range userPoints {
    u.logger.Debug("user points", zap.String("user_id", k), zap.Any("points", v))
}

userList = updateUserListWithPoints(userList, userPoints)

return nil

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2d0w0l10obs5ekf08tbl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2d0w0l10obs5ekf08tbl.png" alt=" " width="800" height="386"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Version 2: Inefficient Code
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;userList, count, err := u.repo.GetAllUserDB(limit, offset, orderBy, sort)
if err != nil {
    return err
}

conn, err := u.pointServiceConnectionPool.Get()
if err != nil {
    return fmt.Errorf("failed to get connection from pool: %v", err)
}
defer u.pointServiceConnectionPool.Put(conn)

client := pb.NewPointServerClient(conn)

for _, user := range userList {
    u.logger.Debug("fetch user points", zap.String("user_id", user.ID))

    ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    defer cancel()

    r, err := client.GetUserPoints(ctx, &amp;amp;pb.PointRequest{UserId: user.ID})
    if err != nil {
        u.logger.Error("failed to get user points", zap.Error(err), zap.String("userID", user.ID))
        continue
    }
    u.logger.Debug("user points", zap.String("user_id", user.ID), zap.String("user points", r.GetUserPoint()))

    point, err := strconv.Atoi(r.GetUserPoint())
    if err != nil {
        point = 0
    }
    user.Point = point
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frtash670prrldqo7tvnb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frtash670prrldqo7tvnb.png" alt=" " width="800" height="386"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Prometheus query used for the response-time graphs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;histogram_quantile(0.95, rate(http_response_time_seconds_bucket[5m]))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  What's the Difference?
&lt;/h2&gt;

&lt;p&gt;The key difference between these two versions lies in how they interact with the point server:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Version 1 fetches points for all users in a single request using GetUserListPoints. This approach minimizes the number of requests to the server, reducing overhead and latency.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Version 2 fetches points for each user individually using GetUserPoints. This results in multiple requests to the server, increasing both the number of network calls and the overall processing time.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Performance Impact
&lt;/h2&gt;

&lt;p&gt;When we look at the Prometheus graphs for these two versions, the difference is striking. The response time for Version 1 is significantly lower than Version 2. This is because Version 1 reduces the number of requests to the point server, minimizing network latency and server load.&lt;/p&gt;

&lt;h2&gt;
  
  
  Lessons Learned
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Batching Requests: When possible, batching requests can significantly improve performance by reducing the number of network calls.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Efficient API Usage: Using APIs designed for bulk operations can reduce overhead compared to making individual requests.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Monitoring Performance: Tools like Prometheus are invaluable for identifying performance bottlenecks and optimizing code.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
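&lt;p&gt;The batching lesson can be sketched as a small helper that groups user IDs into bounded batches before each bulk call. This is an illustration only, not the service code above; the chunk size and names are assumptions:&lt;/p&gt;

```go
package main

import "fmt"

// chunkIDs splits userIDs into batches of at most n, so each
// batch can travel in a single bulk request such as GetUserListPoints.
func chunkIDs(userIDs []string, n int) [][]string {
	var batches [][]string
	for len(userIDs) != 0 {
		end := min(n, len(userIDs)) // built-in min, Go 1.21+
		batches = append(batches, userIDs[:end])
		userIDs = userIDs[end:]
	}
	return batches
}

func main() {
	ids := []string{"u1", "u2", "u3", "u4", "u5"}
	for _, batch := range chunkIDs(ids, 2) {
		fmt.Println(batch) // three batches: [u1 u2], [u3 u4], [u5]
	}
}
```

&lt;p&gt;For very large user lists, even a single bulk endpoint benefits from this: one request per batch caps request size while still avoiding one round trip per user.&lt;/p&gt;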

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Even seemingly minor differences in code can have a profound impact on performance. By optimizing how we interact with external services and leveraging batching, we can significantly improve response times and reduce system load. Always keep an eye on performance metrics and be mindful of how your code interacts with external systems. Small changes can add up to make a big difference in the efficiency and scalability of your applications.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>performance</category>
      <category>go</category>
      <category>monitoring</category>
    </item>
    <item>
      <title>Improving Performance with Concurrency in Golang: A Case Study</title>
      <dc:creator>Syed Omair</dc:creator>
      <pubDate>Mon, 03 Mar 2025 09:01:47 +0000</pubDate>
      <link>https://dev.to/syed_omair/improving-performance-with-concurrency-in-golang-a-case-study-3dip</link>
      <guid>https://dev.to/syed_omair/improving-performance-with-concurrency-in-golang-a-case-study-3dip</guid>
      <description>&lt;p&gt;In modern software development, performance is often a critical factor, especially when dealing with applications that require fetching and processing large amounts of data. Recently, I worked on optimizing a Go method that fetches user data and statistics from a database. By introducing concurrency, I was able to significantly reduce the execution time of the method. In this post, I’ll walk you through the changes I made and the results I achieved.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem
&lt;/h2&gt;

&lt;p&gt;The method in question, GetAllUsersData, is responsible for fetching a list of users along with various statistics such as the highest and lowest age, average age, highest and lowest salary, and average salary. Initially, the method was implemented sequentially, meaning each database query was executed one after the other. Here’s a simplified version of the original implementation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// GetAllUsersData fetches user data and statistics 
func (c *Controller) GetAllUsersData(limit, offset int, orderBy, sort string) (map[string]interface{}, error) {
    methodName := "GetAllUsersData"
    c.Logger.Debug("method start", zap.String("method", methodName))
    start := time.Now()

    var (
        userList          []*models.User
        count             string
        intUserHighAge    int
        intUserLowAge     int
        fltUserAvgAge     float64
        fltUserAvgSalary  float64
        fltUserLowSalary  float64
        fltUserHighSalary float64
        err               error
    )

    userList, count, err = c.Repo.GetAllUserDB(limit, offset, orderBy, sort)
    if err != nil {
        return nil, err
    }
    intUserHighAge, err = c.Repo.GetUserHighAge()
    if err != nil {
        return nil, err
    }
    intUserLowAge, err = c.Repo.GetUserLowAge()
    if err != nil {
        return nil, err
    }
    fltUserAvgAge, err = c.Repo.GetUserAvgAge()
    if err != nil {
        return nil, err
    }
    fltUserLowSalary, err = c.Repo.GetUserLowSalary()
    if err != nil {
        return nil, err
    }
    fltUserHighSalary, err = c.Repo.GetUserHighSalary()
    if err != nil {
        return nil, err
    }
    fltUserAvgSalary, err = c.Repo.GetUserAvgSalary()
    if err != nil {
        return nil, err
    }
    responseUserObj := models.ResponseUser{
        HighAge:    strconv.Itoa(intUserHighAge),
        LowAge:     strconv.Itoa(intUserLowAge),
        AvgAge:     fmt.Sprintf("%.2f", fltUserAvgAge),
        HighSalary: fmt.Sprintf("%.2f", fltUserHighSalary),
        LowSalary:  fmt.Sprintf("%.2f", fltUserLowSalary),
        AvgSalary:  fmt.Sprintf("%.2f", fltUserAvgSalary),
        Count:      count,
        List:       userList,
    }

    var responseObj map[string]interface{}
    err = mapstructure.Decode(responseUserObj, &amp;amp;responseObj)
    if err != nil {
        return nil, err
    }

    duration := time.Since(start)
    seconds := int(duration.Seconds()) % 60
    milliseconds := duration.Milliseconds() % 1000
    strDuration := fmt.Sprintf("%02d.%03d", seconds, milliseconds)
    c.Logger.Debug("method end", zap.String("method", methodName), zap.String("Time elapsed for this method:", strDuration))
    return responseObj, nil

}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;While this approach worked, it was not optimal. Each database call had to wait for the previous one to complete, leading to longer execution times. For example, the method took 64 milliseconds to complete in its sequential form.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Solution: Introducing Concurrency
&lt;/h2&gt;

&lt;p&gt;To improve performance, I decided to leverage Go’s concurrency model using Goroutines and the errgroup package. The idea was to execute all database queries concurrently, allowing them to run in parallel and reduce the total execution time.&lt;/p&gt;

&lt;p&gt;Here’s the updated version of the method:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// GetAllUsersData fetches user data and statistics concurrently.
func (c *Controller) GetAllUsersData(limit, offset int, orderBy, sort string) (map[string]interface{}, error) {
    methodName := "GetAllUsersData"
    c.Logger.Debug("method start", zap.String("method", methodName))
    start := time.Now()
    //ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    //defer cancel()

    g, _ := errgroup.WithContext(context.Background())

    var (
        userList          []*models.User
        count             string
        intUserHighAge    int
        intUserLowAge     int
        fltUserAvgAge     float64
        fltUserAvgSalary  float64
        fltUserLowSalary  float64
        fltUserHighSalary float64
    )

    g.Go(func() error {
        var err error
        userList, count, err = c.Repo.GetAllUserDB(limit, offset, orderBy, sort)
        if err != nil {
            return err
        }
        return nil
    })

    g.Go(func() error {
        var err error
        intUserHighAge, err = c.Repo.GetUserHighAge()
        if err != nil {
            return err
        }
        return nil
    })

    g.Go(func() error {
        var err error
        intUserLowAge, err = c.Repo.GetUserLowAge()
        if err != nil {
            return err
        }
        return nil
    })

    g.Go(func() error {
        var err error
        fltUserAvgAge, err = c.Repo.GetUserAvgAge()
        if err != nil {
            return err
        }
        return nil
    })

    g.Go(func() error {
        var err error
        fltUserLowSalary, err = c.Repo.GetUserLowSalary()
        if err != nil {
            return err
        }
        return nil
    })

    g.Go(func() error {
        var err error
        fltUserHighSalary, err = c.Repo.GetUserHighSalary()
        if err != nil {
            return err
        }
        return nil
    })

    g.Go(func() error {
        var err error
        fltUserAvgSalary, err = c.Repo.GetUserAvgSalary()
        if err != nil {
            return err
        }
        return nil
    })

    if err := g.Wait(); err != nil {
        return nil, err
    }
    responseUserObj := models.ResponseUser{
        HighAge:    strconv.Itoa(intUserHighAge),
        LowAge:     strconv.Itoa(intUserLowAge),
        AvgAge:     fmt.Sprintf("%.2f", fltUserAvgAge),
        HighSalary: fmt.Sprintf("%.2f", fltUserHighSalary),
        LowSalary:  fmt.Sprintf("%.2f", fltUserLowSalary),
        AvgSalary:  fmt.Sprintf("%.2f", fltUserAvgSalary),
        Count:      count,
        List:       userList,
    }

    var responseObj map[string]interface{}
    err := mapstructure.Decode(responseUserObj, &amp;amp;responseObj)
    if err != nil {
        return nil, err
    }

    duration := time.Since(start)
    seconds := int(duration.Seconds()) % 60
    milliseconds := duration.Milliseconds() % 1000
    strDuration := fmt.Sprintf("%02d.%03d", seconds, milliseconds)
    c.Logger.Debug("method end", zap.String("method", methodName), zap.String("Time elapsed for this method:", strDuration))
    return responseObj, nil
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this version, each database query is executed in a separate Goroutine. The errgroup package ensures that all Goroutines complete successfully and handles any errors that may occur.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Results
&lt;/h2&gt;

&lt;p&gt;After implementing concurrency, I ran the method and compared its performance with the original sequential version. The results were impressive:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Without Concurrency: 64 milliseconds&lt;br&gt;
With Concurrency: 24 milliseconds&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;By running the database queries concurrently, the method’s execution time was reduced by &lt;strong&gt;62.5%&lt;/strong&gt;. This is a significant improvement, especially in scenarios where the method is called frequently or where low latency is critical.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Concurrency is Powerful: Go’s Goroutines make it easy to implement concurrency, allowing you to execute multiple independent tasks in parallel and improve performance.&lt;/li&gt;
&lt;li&gt;Error Handling Matters: Using errgroup simplifies error handling in concurrent operations, ensuring that any errors are properly propagated and handled.&lt;/li&gt;
&lt;li&gt;Measure and Optimize: Always measure the performance of your code before and after optimization to quantify the impact of your changes.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Introducing concurrency in the GetAllUsersData method was a straightforward yet highly effective way to improve its performance. By running database queries concurrently, I was able to reduce the execution time from 64 milliseconds to 24 milliseconds—a significant improvement. If you’re working on similar performance-critical applications, I highly recommend exploring Go’s concurrency features to unlock faster and more efficient code.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>go</category>
      <category>programming</category>
    </item>
    <item>
      <title>The Singleton Pattern in Golang</title>
      <dc:creator>Syed Omair</dc:creator>
      <pubDate>Mon, 03 Mar 2025 06:10:43 +0000</pubDate>
      <link>https://dev.to/syed_omair/the-singleton-pattern-in-golang-29lp</link>
      <guid>https://dev.to/syed_omair/the-singleton-pattern-in-golang-29lp</guid>
      <description>&lt;p&gt;In software development, there are times when you need to ensure that only one instance of a particular class exists. This is where the Singleton pattern comes into play. It's a creational design pattern that restricts the instantiation of a class to a single object.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Use Singleton?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Resource Management: When dealing with resources like database connections, file systems, or configuration settings, having multiple instances can lead to conflicts and inefficiencies.&lt;/li&gt;
&lt;li&gt;Centralized Access: Singletons provide a global point of access, making it easy for different parts of your application to interact with a shared resource.&lt;/li&gt;
&lt;li&gt;Configuration: A singleton can hold application-wide configuration settings.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Go Implementation: Database Connection Example
&lt;/h2&gt;

&lt;p&gt;Let's illustrate the Singleton pattern with a practical example: managing database connections in Go.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;type PostgresAdapter struct {
    dbUrl         string
    dbMaxIdle     int
    dbMaxOpen     int
    dbMaxLifeTime int
    dbMaxIdleTime int
    gormConf      string
    db            *gorm.DB
    mu            sync.Mutex
}

func NewPostgresAdapter(url string, dbMaxIdle, dbMaxOpen, dbMaxLifeTime, dbMaxIdleTime int, gormConf string) *PostgresAdapter {
    return &amp;amp;PostgresAdapter{dbUrl: url,
        dbMaxIdle:     dbMaxIdle,
        dbMaxOpen:     dbMaxOpen,
        dbMaxLifeTime: dbMaxLifeTime,
        dbMaxIdleTime: dbMaxIdleTime,
        gormConf:      gormConf,
    }
}

func (p *PostgresAdapter) MakeConnection() (*gorm.DB, error) {
    p.mu.Lock()
    defer p.mu.Unlock()

    if p.db != nil {
        return p.db, nil
    }

    db, err := makeConnection(postgres.Open(p.dbUrl), p.dbUrl, p.dbMaxIdle, p.dbMaxOpen, p.dbMaxLifeTime, p.dbMaxIdleTime, p.gormConf)
    if err != nil {
        return nil, err
    }

    p.db = db
    return db, nil
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this code, we have a PostgresAdapter that handles database connections using the gorm library. We want to ensure that each adapter creates only one database connection. Here's how the Singleton pattern is implemented:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Private Instance:&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each adapter (PostgresAdapter) has a db *gorm.DB field to store the database connection.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Mutex for Thread Safety:&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A sync.Mutex (mu) is used to protect the critical section where the database connection is created. This prevents race conditions in concurrent environments.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Check for Existing Instance:&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The MakeConnection() method checks if the db field is already populated. If it is, the existing connection is returned.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Instance Creation:&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If the db field is nil, the makeConnection() function is called to create the database connection.&lt;br&gt;
 The newly created connection is then stored in the db field.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Singleton Elements in Action:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;The mu.Lock() and mu.Unlock() calls ensure that only one goroutine can create the database connection at a time.&lt;/li&gt;
&lt;li&gt;The if p.db != nil checks prevent redundant database connection creation.&lt;/li&gt;
&lt;li&gt;By storing the database connection in the adapter struct, we maintain a single, persistent connection throughout the adapter's lifecycle.&lt;/li&gt;
&lt;/ul&gt;
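&lt;p&gt;When the singleton lives at package level rather than inside an adapter struct, the standard library's sync.Once expresses the same guarantee more directly. A hedged sketch; the config type and its field are illustrative, not taken from the adapter above:&lt;/p&gt;

```go
package main

import (
	"fmt"
	"sync"
)

// config is the singleton; once guards its one-time creation.
type config struct {
	dbURL string
}

var (
	once     sync.Once
	instance *config
)

// getConfig returns the same *config on every call. The function
// passed to once.Do runs exactly once, even with concurrent callers,
// replacing the manual mutex-and-nil-check in MakeConnection.
func getConfig() *config {
	once.Do(func() {
		instance = new(config)
		instance.dbURL = "postgres://localhost:5432/app"
	})
	return instance
}

func main() {
	a := getConfig()
	b := getConfig()
	fmt.Println(a == b) // prints "true": both calls see one instance
}
```

&lt;p&gt;The struct-based variant in the article has the advantage of one singleton per adapter instance, while sync.Once suits one singleton per process.&lt;/p&gt;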

&lt;h2&gt;
  
  
  Benefits of This Approach:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Efficiency: We avoid the overhead of creating multiple database connections.&lt;/li&gt;
&lt;li&gt;Consistency: All parts of the application using the adapter share the same database connection.&lt;/li&gt;
&lt;li&gt;Thread Safety: The mutex ensures that the Singleton implementation is safe for concurrent use.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Important Considerations:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Testability: Singletons can make unit testing more challenging due to their global state. Consider using dependency injection to make your code more testable.&lt;/li&gt;
&lt;li&gt;Overuse: Avoid using the Singleton pattern when a simple object instance would suffice.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion:
&lt;/h2&gt;

&lt;p&gt;The Singleton pattern is a valuable tool for managing resources and ensuring centralized access in your applications. By understanding its principles and implementing it carefully, you can create more efficient and robust software. In our database connection example, the Singleton pattern helps us maintain a single, thread-safe connection, improving performance and resource utilization.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>go</category>
    </item>
    <item>
      <title>Using a Reverse Proxy to Expose Multiple Microservices Through a Single Port in Docker Compose</title>
      <dc:creator>Syed Omair</dc:creator>
      <pubDate>Mon, 03 Mar 2025 05:13:11 +0000</pubDate>
      <link>https://dev.to/syed_omair/using-a-reverse-proxy-to-expose-multiple-microservices-through-a-single-port-in-docker-compose-4h9e</link>
      <guid>https://dev.to/syed_omair/using-a-reverse-proxy-to-expose-multiple-microservices-through-a-single-port-in-docker-compose-4h9e</guid>
      <description>&lt;p&gt;In modern microservices architectures, applications are often broken down into smaller, independent services that communicate with each other. Each microservice typically runs on its own port, which can lead to challenges when exposing these services to the outside world. For example, managing multiple ports, handling cross-origin requests, and ensuring consistent API gateways can become cumbersome.&lt;/p&gt;

&lt;p&gt;A &lt;strong&gt;reverse proxy&lt;/strong&gt; is a powerful solution to these challenges. By using a reverse proxy like &lt;strong&gt;NGINX&lt;/strong&gt;, you can expose multiple microservices through a single port, simplifying access and improving manageability. Let’s dive into how this works and the benefits it brings.&lt;/p&gt;

&lt;h2&gt;
  
  
  How It Works
&lt;/h2&gt;

&lt;p&gt;A reverse proxy acts as an intermediary between clients (e.g., browsers, mobile apps) and your microservices. It listens on a single port (e.g., 80 or 8180) and routes incoming requests to the appropriate microservice based on the request path.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example Setup&lt;/strong&gt;&lt;br&gt;
Here’s a simplified example using Docker Compose and NGINX:&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Docker Compose File:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Define your microservices (user_service, department_service, etc.).&lt;/li&gt;
&lt;li&gt;Add an NGINX service as the reverse proxy.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: "3.8"

services:
  reverse-proxy:
    image: nginx:latest
    ports:
      - "8180:80"  # Expose NGINX on port 8180
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf  # Custom NGINX configuration
    depends_on:
      - user_service
      - department_service

  user_service:
    image: your_user_service_image
    ports:
      - "5001:8185"  # Host port 5001 maps to container port 8185
    environment:
      - PORT=8185

  department_service:
    image: your_department_service_image
    ports:
      - "5002:8185"  # Host port 5002 maps to container port 8185
    environment:
      - PORT=8185
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  2. NGINX Configuration:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Define routing rules in nginx.conf to map paths to microservices.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;events {}

http {
  server {
    listen 80;

    # Route requests to the user_service
    location /api/users/ {
      proxy_pass http://user_service:8185/;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto $scheme;
    }

    # Route requests to the department_service
    location /api/departments/ {
      proxy_pass http://department_service:8185/;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto $scheme;
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  3. Accessing the Services:
&lt;/h2&gt;

&lt;p&gt;Clients can now access the microservices through the reverse proxy:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="http://localhost:8180/api/users" rel="noopener noreferrer"&gt;http://localhost:8180/api/users&lt;/a&gt; → Routes to user_service.&lt;/li&gt;
&lt;li&gt;
&lt;a href="http://localhost:8180/api/departments" rel="noopener noreferrer"&gt;http://localhost:8180/api/departments&lt;/a&gt; → Routes to department_service.&lt;/li&gt;
&lt;/ul&gt;
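&lt;p&gt;The routing decision NGINX makes above boils down to a prefix match on the request path. A hypothetical Go sketch of that mapping, reusing the compose service names (the helper itself is illustrative, not part of NGINX):&lt;/p&gt;

```go
package main

import (
	"fmt"
	"strings"
)

// backendFor mirrors the nginx location blocks: the request path
// prefix decides which upstream receives the request.
func backendFor(path string) string {
	switch {
	case strings.HasPrefix(path, "/api/users/"):
		return "http://user_service:8185/"
	case strings.HasPrefix(path, "/api/departments/"):
		return "http://department_service:8185/"
	default:
		return "" // no matching location: nginx would answer 404
	}
}

func main() {
	fmt.Println(backendFor("/api/users/42"))      // user_service upstream
	fmt.Println(backendFor("/api/departments/7")) // department_service upstream
}
```

&lt;p&gt;Because both upstreams listen on the same internal port, only the path prefix distinguishes them, which is what lets a single exposed port (8180) front the whole system.&lt;/p&gt;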

</description>
      <category>webdev</category>
      <category>docker</category>
      <category>nginx</category>
    </item>
    <item>
      <title>How to Deploy a Container from GitHub to AWS ECR and ECS through OIDC</title>
      <dc:creator>Syed Omair</dc:creator>
      <pubDate>Thu, 27 Feb 2025 16:15:04 +0000</pubDate>
      <link>https://dev.to/syed_omair/how-to-deploy-a-container-from-github-to-aws-ecr-through-oidc-2ma5</link>
      <guid>https://dev.to/syed_omair/how-to-deploy-a-container-from-github-to-aws-ecr-through-oidc-2ma5</guid>
      <description>&lt;p&gt;Deploying a container from GitHub Actions to AWS Elastic Container Registry (ECR) and AWS Elastic Container Service (ECS) can be done securely using OpenID Connect (OIDC). This method eliminates the need to store long-lived AWS credentials, making your CI/CD pipeline more secure. This guide will walk you through setting up OIDC authentication for GitHub Actions to push Docker images to AWS ECR.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1: Enable OIDC Provider in AWS
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Sign in to AWS Console and navigate to IAM.&lt;/li&gt;
&lt;li&gt;Go to Identity providers &amp;gt; Add provider.&lt;/li&gt;
&lt;li&gt;Select OpenID Connect as the provider type.&lt;/li&gt;
&lt;li&gt;Enter the Provider URL:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;https://token.actions.githubusercontent.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Click Get thumbprint (AWS will auto-populate this).&lt;/li&gt;
&lt;li&gt;Under Audience, enter:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;https://github.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Click Add provider.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Step 2: Create an IAM Role for GitHub Actions
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;In the AWS Console, go to IAM &amp;gt; Roles &amp;gt; Create Role.&lt;/li&gt;
&lt;li&gt;Select Web identity as the trusted entity type.&lt;/li&gt;
&lt;li&gt;Choose the OIDC provider you just created.&lt;/li&gt;
&lt;li&gt;Under 'Audience' enter &lt;a href="https://github.com" rel="noopener noreferrer"&gt;https://github.com&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Under 'GitHub organization' enter your GitHub organization or username.&lt;/li&gt;
&lt;li&gt;Click Next.&lt;/li&gt;
&lt;li&gt;Under 'Name' enter github-role&lt;/li&gt;
&lt;li&gt;Click Create&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Step 3: Attach Policies for ECR and ECS Access
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Create a custom policy for fine-grained control over ECR access:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ecr:GetAuthorizationToken",
                "ecr:BatchCheckLayerAvailability",
                "ecr:PutImage",
                "ecr:InitiateLayerUpload",
                "ecr:UploadLayerPart",
                "ecr:CompleteLayerUpload",
                "ecr:DescribeRepositories",
                "ecr:CreateRepository",
                "ecr:ListImages",
                "ecr:BatchDeleteImage"
            ],
            "Resource": "*"
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Click Next, then give the policy a name (e.g., GitHubActionsECR).&lt;/li&gt;
&lt;li&gt;Click Create policy.&lt;/li&gt;
&lt;li&gt;Add another policy, named githubECS, to allow ECS service updates:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ecs:UpdateService",
                "ecs:DescribeServices"
            ],
            "Resource": [
                "&amp;lt;ARN of ECS service 1",
                "&amp;lt;ARN of ECS service 2",
                "&amp;lt;ARN of ECS service n"
            ]
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frc90pdr21chxcaa7ky79.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frc90pdr21chxcaa7ky79.png" alt=" " width="800" height="377"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 4: Update the Trust Policy
&lt;/h2&gt;

&lt;p&gt;Modify the trust policy to allow GitHub Actions to assume this role:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Go to IAM &amp;gt; Roles &amp;gt; Select your role (github-role).&lt;/li&gt;
&lt;li&gt;Click Trust relationships &amp;gt; Edit trust policy.&lt;/li&gt;
&lt;li&gt;Replace the existing policy with:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Federated": "arn:aws:iam::&amp;lt;AWS_ACCOUNT_ID&amp;gt;:oidc-provider/token.actions.githubusercontent.com"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringEquals": {
                    "token.actions.githubusercontent.com:aud": "https://github.com"
                },
                "StringLike": {
                    "token.actions.githubusercontent.com:sub": "repo:&amp;lt;GITHUB_ORG_OR_USER&amp;gt;/&amp;lt;REPO_NAME&amp;gt;:*"
                }
            }
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;(Replace &amp;lt;AWS_ACCOUNT_ID&amp;gt; with your AWS account ID.)&lt;br&gt;
(Replace &amp;lt;GITHUB_ORG_OR_USER&amp;gt;/&amp;lt;REPO_NAME&amp;gt; with your GitHub organization or username and the repository name.)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffi48zhygy9ncvj4kp7cy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffi48zhygy9ncvj4kp7cy.png" alt=" " width="800" height="725"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Click Update policy.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Step 5: Configure GitHub Actions Secrets
&lt;/h2&gt;

&lt;p&gt;In your GitHub repository:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Go to Settings &amp;gt; Secrets and variables &amp;gt; Actions &amp;gt; New repository secret, and add the following secrets:&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Name: AWS_DB_URL&lt;br&gt;
Value: the full AWS RDS connection string, for example postgres://db_username:password@rds-instance-hostname:5432/db_name&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Name: AWS_ECR_URI&lt;br&gt;
Value: the ECR repository path, for example AWS_AccountID.dkr.ecr.us-east-2.amazonaws.com/backend-microservices/user-service-stage&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Name: AWS_REGION&lt;br&gt;
Value: us-east-2 &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Name: AWS_ROLE_ARN&lt;br&gt;
Value: The ARN of the IAM role you created (found in AWS IAM).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiuhx3xqa3zr8xrgcuvmn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiuhx3xqa3zr8xrgcuvmn.png" alt=" " width="800" height="207"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 6: Update GitHub Actions Workflow (.github/workflows/deploy.yml)
&lt;/h2&gt;

&lt;p&gt;Modify your workflow YAML file to assume the IAM role:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Deploy services to Stage AWS ECR and ECS

on:
  push:
    branches:
      - stage
jobs:
  deploy:
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: read
    steps:
      - name: Checkout code
        uses: actions/checkout@v2

      - name: Set up Go
        uses: actions/setup-go@v4
        with:
          go-version: 1.23

      - name: Install dependencies
        run: go mod tidy

      - name: Run Integration Tests
        run: go test -v ./integration_test/...

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v3
        with:
          role-to-assume: ${{ secrets.AWS_ROLE_ARN }}
          role-session-name: GitHubActionsSession
          aws-region: ${{ secrets.AWS_REGION }}
          audience: https://github.com

      - name: Load Environment Variables
        run: |
          cat .env_stage.example &amp;gt;&amp;gt; $GITHUB_ENV

      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v2

      - name: Build Docker Image for user-service
        run: |
          docker build \
            --build-arg logLevelEnvVar=${LOG_LEVEL} \
            --build-arg databaseURLEnvVar=${{secrets.AWS_DB_URL}} \
            --build-arg portEnvVar=${PORT} \
            --build-arg dBEnvVar=${DB} \
            --build-arg dBMaxIdleEnvVar=${DB_MAX_IDLE} \
            --build-arg dBMaxOpenEnvVar=${DB_MAX_OPEN} \
            --build-arg dBMaxLifeTimeEnvVar=${DB_MAX_LIFE_TIME} \
            --build-arg dBMaxIdleTimeEnvVar=${DB_MAX_IDLE_TIME} \
            --build-arg zapConf=${ZAP_CONF} \
            --build-arg gormConf=${GORM_CONF} \
            --build-arg pprofEnable=${PPROF_ENABLE}  \
            -t ${{secrets.AWS_ECR_URI}}user-service-stage:latest \
            -f service/user_service/Dockerfile  \
            .
          docker push ${{secrets.AWS_ECR_URI}}user-service-stage:latest

      - name: Deploy user-service to AWS ECS
        run: |
          aws ecs update-service --cluster cluster-backend-microservice --service service-user --force-new-deployment
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 7: Test the Workflow
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Push a commit to the repository.&lt;/li&gt;
&lt;li&gt;Navigate to Actions in GitHub and verify that the workflow runs successfully.&lt;/li&gt;
&lt;li&gt;Your Docker image should now be pushed to Amazon ECR, and a new ECS deployment should be triggered.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Now your GitHub Actions workflow can push images to AWS ECR and ECS without requiring long-lived AWS credentials. This setup is more secure and efficient, enabling seamless container deployments from GitHub to AWS. 🎉&lt;/p&gt;

</description>
      <category>githubactions</category>
      <category>aws</category>
      <category>cicd</category>
      <category>github</category>
    </item>
    <item>
      <title>Understanding Dependency Injection Through a Practical Golang Example</title>
      <dc:creator>Syed Omair</dc:creator>
      <pubDate>Tue, 25 Feb 2025 12:19:12 +0000</pubDate>
      <link>https://dev.to/syed_omair/understanding-dependency-injection-through-a-practical-golang-example-4b3k</link>
      <guid>https://dev.to/syed_omair/understanding-dependency-injection-through-a-practical-golang-example-4b3k</guid>
      <description>&lt;p&gt;Dependency Injection (DI) is a design pattern that promotes loose coupling and testability in software applications. It allows you to inject dependencies (e.g., services, configurations, or databases) into a component rather than having the component create or manage them directly. This makes your code more modular, maintainable, and easier to test.&lt;/p&gt;

&lt;p&gt;In this post, we’ll explore the Dependency Injection pattern using a practical Golang example. We’ll break down the code and explain how DI is implemented in a real-world scenario.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Example: A Container for Managing Dependencies&lt;/strong&gt;&lt;br&gt;
The example consists of three Go files that work together to create a "container" for managing dependencies like a logger, database connection, and configuration. Let’s dive into the code and see how DI is applied.&lt;/p&gt;
&lt;h2&gt;
  
  
  1. Logger Setup
&lt;/h2&gt;

&lt;p&gt;The first file sets up a logger using the zap logging library. The logger is initialized using a configuration file, and the NewLogger function is responsible for creating the logger instance.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func NewLogger(zapConfig string) (*zap.Logger, error) {
    file, err := os.Open(zapConfig)
    if err != nil {
        return nil, fmt.Errorf("failed to open logger config file")
    }
    defer file.Close()

    var cfg zap.Config
    if err := json.NewDecoder(file).Decode(&amp;amp;cfg); err != nil {
        return nil, fmt.Errorf("failed to parse logger config json")
    }

    logger, err := cfg.Build()
    if err != nil {
        return nil, err
    }

    // Note: Sync() is better deferred by the caller (e.g. in main);
    // a defer here flushes as soon as the constructor returns.
    logger.Debug("logger construction succeeded")
    return logger, nil
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, the NewLogger function takes a configuration file path (zapConfig) as input and returns a zap.Logger instance. This is an example of constructor injection, where the dependency (logger configuration) is injected into the function.&lt;/p&gt;
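&lt;p&gt;For reference, the file passed to NewLogger is a serialized zap.Config. A minimal sketch of what it might contain (keys follow zap's documented config format; the values here are illustrative, not from the article):&lt;/p&gt;

```json
{
  "level": "debug",
  "encoding": "json",
  "outputPaths": ["stdout"],
  "errorOutputPaths": ["stderr"],
  "encoderConfig": {
    "messageKey": "msg",
    "levelKey": "level",
    "levelEncoder": "lowercase",
    "timeKey": "ts",
    "timeEncoder": "iso8601"
  }
}
```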

&lt;h2&gt;
  
  
  2. Database Connection Setup
&lt;/h2&gt;

&lt;p&gt;The second file handles database connections using the gorm library. It defines an interface Db and two implementations (PostgresAdapter and MySQLAdapter) for connecting to PostgreSQL and MySQL databases.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;type Db interface {
    MakeConnection() (*gorm.DB, error)
}

func NewDBConnectionAdapter(dbName, url string, dbMaxIdle, dbMaxOpen, dbMaxLifeTime, dbMaxIdleTime int, gormConf string) Db {
    switch dbName {
    case Mysql:
        return &amp;amp;MySQLAdapter{dbUrl: url, dbMaxIdle: dbMaxIdle, dbMaxOpen: dbMaxOpen, dbMaxLifeTime: dbMaxLifeTime, dbMaxIdleTime: dbMaxIdleTime, gormConf: gormConf}
    default: // Postgres is also the fallback for unrecognized names
        return &amp;amp;PostgresAdapter{dbUrl: url, dbMaxIdle: dbMaxIdle, dbMaxOpen: dbMaxOpen, dbMaxLifeTime: dbMaxLifeTime, dbMaxIdleTime: dbMaxIdleTime, gormConf: gormConf}
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The NewDBConnectionAdapter function acts as a factory, creating the appropriate database adapter based on the dbName parameter. This is an example of factory injection, where the factory decides which implementation to inject.&lt;/p&gt;
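&lt;p&gt;To see the factory behavior in isolation, here is a stripped-down, database-free sketch: the Engine method and the string constants are stand-ins for the article's gorm-backed adapters, so the example runs without a real database.&lt;/p&gt;

```go
package main

import "fmt"

// Db mirrors the shape of the article's interface, simplified for illustration.
type Db interface {
	Engine() string
}

type PostgresAdapter struct{ dbUrl string }

func (p PostgresAdapter) Engine() string { return "postgres" }

type MySQLAdapter struct{ dbUrl string }

func (m MySQLAdapter) Engine() string { return "mysql" }

// NewDBConnectionAdapter picks an implementation by name,
// falling back to Postgres like the original factory.
func NewDBConnectionAdapter(dbName, url string) Db {
	switch dbName {
	case "mysql":
		return MySQLAdapter{dbUrl: url}
	default:
		return PostgresAdapter{dbUrl: url}
	}
}

func main() {
	fmt.Println(NewDBConnectionAdapter("mysql", "dsn").Engine())
	fmt.Println(NewDBConnectionAdapter("unknown", "dsn").Engine())
}
```

Because callers only see the Db interface, swapping engines never touches the calling code.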

&lt;h2&gt;
  
  
  3. Container for Managing Dependencies
&lt;/h2&gt;

&lt;p&gt;The third file defines a Container interface and its implementation. The container is responsible for managing all dependencies (logger, database, etc.) and injecting them where needed.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;type Container interface {
    Logger() *zap.Logger
    Db() *gorm.DB
    Port() string
    PprofEnable() string
}

type container struct {
    logger               *zap.Logger
    db                   *gorm.DB
    port                 string
    pprofEnable          string
    environmentVariables map[string]string
}

func New(envVars map[string]string) (Container, error) {
    c := &amp;amp;container{environmentVariables: envVars}

    var err error
    c.db, err = c.dbSetup()
    if err != nil {
        return c, err
    }
    c.logger, err = c.loggerSetup()
    if err != nil {
        return c, err
    }
    c.port, err = c.portSetup()
    if err != nil {
        return c, err
    }
    c.pprofEnable, err = c.pprofEnableSetup()
    if err != nil {
        return c, err
    }
    return c, nil
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The New function initializes the container by setting up all dependencies. It uses constructor injection to pass environment variables and configuration to the container. Each dependency (logger, database, etc.) is initialized separately, making the code modular and easy to test.&lt;/p&gt;
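&lt;p&gt;The article doesn't show the individual setup methods; a hypothetical, self-contained sketch of what a step like portSetup might do with the injected map (the function shape is assumed):&lt;/p&gt;

```go
package main

import (
	"errors"
	"fmt"
)

// portSetup reads from the injected environment map rather than
// calling os.Getenv directly - that indirection is what makes the
// container easy to exercise in tests.
func portSetup(env map[string]string) (string, error) {
	port := env["PORT"]
	if port == "" {
		return "", errors.New("PORT is not set")
	}
	return port, nil
}

func main() {
	port, err := portSetup(map[string]string{"PORT": "8080"})
	fmt.Println(port, err)
}
```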

&lt;h2&gt;
  
  
  Key Benefits of Dependency Injection in This Example
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Loose Coupling&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;The Container does not directly create its dependencies. Instead, it relies on external configurations and factories to provide them. This makes the code more flexible and easier to modify.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Testability&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;Since dependencies are injected, you can easily mock them during testing. For example, you can replace the real database connection with a mock database for unit tests.&lt;/p&gt;
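&lt;p&gt;A minimal sketch of swapping in a mock. It assumes a simplified Db interface whose MakeConnection returns a string instead of a gorm handle, so the example runs without a real database:&lt;/p&gt;

```go
package main

import "fmt"

// Db is a simplified stand-in for the article's interface.
type Db interface {
	MakeConnection() (string, error)
}

// mockDb satisfies Db without touching any real database.
type mockDb struct{}

func (mockDb) MakeConnection() (string, error) {
	return "mock-dsn", nil
}

// connect represents any component that receives its Db
// through injection instead of constructing one itself.
func connect(d Db) (string, error) {
	return d.MakeConnection()
}

func main() {
	conn, err := connect(mockDb{})
	fmt.Println(conn, err)
}
```

In a unit test you would pass mockDb{} where production code passes the real adapter; no code inside connect changes.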

&lt;p&gt;&lt;strong&gt;Single Responsibility Principle&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;Each component (logger, database adapter, etc.) has a single responsibility. The Container is only responsible for managing dependencies, not for creating them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reusability&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;The Db interface and its implementations can be reused across different parts of the application. You can switch between PostgreSQL and MySQL without changing the core logic.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to Use the Container
&lt;/h2&gt;

&lt;p&gt;Here’s how you can use the Container in your application:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
func main() {
        c, err := container.New(map[string]string{
        container.LogLevelEnvVar:      os.Getenv(container.LogLevelEnvVar),
        container.DatabaseURLEnvVar:   os.Getenv(container.DatabaseURLEnvVar),
        container.PortEnvVar:          os.Getenv(container.PortEnvVar),
        container.DBMaxIdleEnvVar:     os.Getenv(container.DBMaxIdleEnvVar),
        container.DBMaxOpenEnvVar:     os.Getenv(container.DBMaxOpenEnvVar),
        container.DBMaxLifeTimeEnvVar: os.Getenv(container.DBMaxLifeTimeEnvVar),
        container.DBMaxIdleTimeEnvVar: os.Getenv(container.DBMaxIdleTimeEnvVar),
        container.ZapConf:             os.Getenv(container.ZapConf),
        container.GormConf:            os.Getenv(container.GormConf),
        container.PprofEnable:         os.Getenv(container.PprofEnable),
    })
    if err != nil {
        // fmt.Println does not interpret %w; panic with a formatted message instead
        panic(fmt.Sprintf("server initialization failed: %v", err))
    }


    logger := c.Logger()
    db := c.Db()

    logger.Info("Application started", zap.String("port", c.Port()))
    // Use db and logger in your application...
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The Dependency Injection pattern is a powerful tool for building modular, testable, and maintainable applications. In this example, we saw how DI can be implemented in Go using interfaces, factories, and a container to manage dependencies.&lt;/p&gt;

&lt;p&gt;By adopting DI, you can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Decouple your application’s components.&lt;/li&gt;
&lt;li&gt;Improve testability.&lt;/li&gt;
&lt;li&gt;Make your code more reusable and maintainable.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you’re new to Dependency Injection, I encourage you to try implementing it in your own projects. Start small, and gradually refactor your code to use DI where it makes sense. Happy coding!&lt;/p&gt;

</description>
      <category>dependencyinjection</category>
      <category>go</category>
      <category>containers</category>
    </item>
  </channel>
</rss>
