<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Arun SD</title>
    <description>The latest articles on DEV Community by Arun SD (@arunjagadishsd).</description>
    <link>https://dev.to/arunjagadishsd</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F379514%2F297c6893-fef1-43db-a5d4-bf21f374333c.jpeg</url>
      <title>DEV Community: Arun SD</title>
      <link>https://dev.to/arunjagadishsd</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/arunjagadishsd"/>
    <language>en</language>
    <item>
      <title>A Developer’s Journey to the Cloud 6: My Path to Kubernetes and IaC</title>
      <dc:creator>Arun SD</dc:creator>
      <pubDate>Mon, 18 Aug 2025 11:56:43 +0000</pubDate>
      <link>https://dev.to/arunjagadishsd/a-developers-journey-to-the-cloud-6-my-path-to-kubernetes-and-iac-39f1</link>
      <guid>https://dev.to/arunjagadishsd/a-developers-journey-to-the-cloud-6-my-path-to-kubernetes-and-iac-39f1</guid>
      <description>&lt;h2&gt;
  
  
  From Herding Servers to Building Worlds with Code
&lt;/h2&gt;

&lt;p&gt;I had done it. I had achieved high availability. My application was running on a fleet of two identical servers, managed by a smart load balancer. If one server went down, the other would seamlessly take over. My application was resilient. It was professional. I felt invincible.&lt;/p&gt;

&lt;p&gt;That feeling lasted until it was time to deploy a new feature.&lt;/p&gt;

&lt;p&gt;My beautiful, simple CI/CD pipeline was now obsolete. It was designed to update one server. How was I supposed to update a whole fleet? My first attempt was a clumsy bash script—a for loop that would SSH into each server, one by one, pull the latest code, and restart the container.&lt;/p&gt;
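&lt;p&gt;A reconstruction of that first clumsy script (the hostnames and the stubbed-out SSH command are placeholders for illustration):&lt;/p&gt;

```shell
#!/usr/bin/env bash
# Naive fleet update: walk the servers one by one and restart the app on each.
# Hostnames are placeholders; the real SSH step is shown as a comment.
SERVERS=("server-1.example.com" "server-2.example.com")
updated=0
for host in "${SERVERS[@]}"; do
  echo "updating $host"
  # In the real script, each iteration ran roughly:
  # ssh "deploy@$host" "docker pull myusername/my-awesome-app:latest; docker restart app-container"
  updated=$((updated + 1))
done
echo "updated $updated servers"
```

&lt;p&gt;Notice there is no safety net here: if server #1 fails to come back before the loop moves on to server #2, the whole fleet is briefly down.&lt;/p&gt;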

&lt;p&gt;The first time I ran it, my heart was in my throat. I watched the logs scroll by, praying that server #1 would come back online before server #2 went down. It was a "rolling update" in the most literal, terrifying sense of the word. My fleet wasn't a clean, unified entity; it was a messy collection of individuals that I had to wrangle personally. I wasn't a developer anymore; I was the stressed-out admiral of a small, complicated armada, and I was spending all my time just keeping the ships sailing in the same direction.&lt;/p&gt;

&lt;h2&gt;
  
  
  Chapter 1: The Fleet Commander
&lt;/h2&gt;

&lt;p&gt;My life was no longer about building features. It was about managing the fleet. My evenings were spent writing and debugging deployment scripts. My anxieties shifted. "What if the deployment script fails halfway through?" "How do I roll back an update across all servers at once?" "What happens when I need to scale from two servers to five? Or ten?" I was back to micromanaging machines, and it felt like a huge step backward.&lt;/p&gt;

&lt;p&gt;This constant, low-grade fear of things getting out of sync led me down a late-night research rabbit hole of "container orchestration." My first stop was my cloud provider's own solution, Amazon ECS (Elastic Container Service). It seemed like the logical next step—simple, deeply integrated, and less complex than the other options. It felt like the "easy" path.&lt;/p&gt;

&lt;p&gt;But then I hesitated. A familiar feeling crept in—the same feeling I had when I chose the "easy" path of running Redis in a Docker container. Was I about to tie my entire application's fate to a single cloud provider's proprietary system? What if I wanted to move to another cloud in the future? Or run a hybrid setup? All my knowledge of ECS would be useless. I would be locked in. I had learned my lesson: the easy path is often a trap.&lt;/p&gt;

&lt;p&gt;This time, I decided to invest in the long term. I chose the other path, the one that was known for being more complex, but also more powerful and universal: Kubernetes.&lt;/p&gt;

&lt;p&gt;Learning Kubernetes felt like learning a new language. The initial tutorials were a flood of new concepts: Pods, Services, Deployments, ReplicaSets. It wasn't a tool you could master in an afternoon. But as I pushed through, a fundamental, game-changing idea began to crystallize.&lt;/p&gt;

&lt;p&gt;With my bash script, I was giving the servers a list of imperative commands: "Go here. Stop this. Pull that. Start this." I was the micromanager.&lt;/p&gt;

&lt;p&gt;Kubernetes didn't want my instructions. It wanted my intent.&lt;/p&gt;

&lt;p&gt;I stopped telling my servers what to do. Instead, I wrote a configuration file that declared the state I wanted, and Kubernetes worked tirelessly, like a powerful robot, to make that state a reality.&lt;/p&gt;

&lt;p&gt;I no longer commanded; I declared.&lt;/p&gt;

&lt;p&gt;Instead of a script that says "update server 1, then update server 2," I now wrote a Deployment manifest—a simple YAML file that acted as the sheet music for my application's orchestra.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# deployment.yaml&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-awesome-app&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt; &lt;span class="c1"&gt;# &amp;lt;-- I declare I want 3 copies.&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app-container&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myusername/my-awesome-app:v1.1&lt;/span&gt; &lt;span class="c1"&gt;# &amp;lt;-- I declare which version to run.&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3001&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To perform an update, I just changed the image tag in this file and applied it. Kubernetes handled the zero-downtime rolling update. If a server died, Kubernetes would just reschedule its containers elsewhere. The individual machines had become an invisible, abstract resource. I had finally stopped being an admiral and could go back to being an architect.&lt;/p&gt;
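&lt;p&gt;The whole update workflow collapsed into a handful of &lt;code&gt;kubectl&lt;/code&gt; commands. A sketch (the deployment name matches the manifest above; these are the standard rollout subcommands, nothing exotic):&lt;/p&gt;

```shell
# After bumping the image tag in deployment.yaml, declare the new state:
kubectl apply -f deployment.yaml

# Watch Kubernetes perform the zero-downtime rolling update:
kubectl rollout status deployment/my-awesome-app

# And if the new version misbehaves, one command rolls it back:
kubectl rollout undo deployment/my-awesome-app
```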

&lt;h2&gt;
  
  
  Chapter 2: The Fragile Ground Beneath My Feet
&lt;/h2&gt;

&lt;p&gt;I had done it. My application was now managed by a powerful, automated fleet commander. I felt unstoppable. I decided to create a staging environment—a perfect replica of production for testing. So I went to my cloud provider's console to start building it all again.&lt;/p&gt;

&lt;p&gt;And that's when a quiet, sinking feeling set in.&lt;/p&gt;

&lt;p&gt;How did I create my production Kubernetes cluster in the first place? I had clicked through dozens of web UI forms. I had configured VPCs, subnets, security groups, and IAM roles manually. It had taken me a whole day. I had no record of what I did. I couldn't remember every setting. My entire production environment, the ground upon which my perfect Kubernetes setup stood, was a fragile, hand-made artifact. How could I ever hope to recreate it perfectly? What if I accidentally deleted something? The whole thing felt like a house of cards.&lt;/p&gt;

&lt;p&gt;I had automated my application, but the infrastructure itself was still a manual, brittle mess. This led me to my next discovery: Infrastructure as Code (IaC). The idea is to do for your infrastructure what Docker did for your application environment: define it all in code. For this, I found a powerful duo: Terraform for provisioning the infrastructure, and Ansible for configuring it.&lt;/p&gt;

&lt;p&gt;With Terraform, I could write files that described my entire cloud setup—the "what."&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="c1"&gt;# main.tf&lt;/span&gt;

&lt;span class="c1"&gt;# Define the Virtual Private Cloud (VPC)&lt;/span&gt;
&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_vpc"&lt;/span&gt; &lt;span class="s2"&gt;"main"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;cidr_block&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"10.0.0.0/16"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;# Define a managed Kubernetes cluster within our VPC&lt;/span&gt;
&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_eks_cluster"&lt;/span&gt; &lt;span class="s2"&gt;"production"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"my-awesome-app-cluster"&lt;/span&gt;
  &lt;span class="nx"&gt;role_arn&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_iam_role&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;eks_cluster_role&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;arn&lt;/span&gt;

  &lt;span class="nx"&gt;vpc_config&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;subnet_ids&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt; &lt;span class="p"&gt;...&lt;/span&gt; &lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;# Provision a separate EC2 instance to be our secure "bastion" host&lt;/span&gt;
&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_instance"&lt;/span&gt; &lt;span class="s2"&gt;"bastion"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;ami&lt;/span&gt;           &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ami-0c55b159cbfafe1f0"&lt;/span&gt; &lt;span class="c1"&gt;# An Amazon Linux 2 AMI&lt;/span&gt;
  &lt;span class="nx"&gt;instance_type&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"t2.micro"&lt;/span&gt;
  &lt;span class="nx"&gt;subnet_id&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="c1"&gt;# ...&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Terraform was brilliant at creating the empty house, but how did I install the specific tools I needed on that bastion host? That's where Ansible came in. It handled the "how"—the configuration of the software on the machines Terraform built. I wrote an Ansible "playbook" to define the desired state of my bastion server.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# playbook.yml&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bastion_hosts&lt;/span&gt; &lt;span class="c1"&gt;# This group is defined in an inventory file&lt;/span&gt;
  &lt;span class="na"&gt;become&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;yes&lt;/span&gt;
  &lt;span class="na"&gt;tasks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Ensure standard monitoring tools are installed&lt;/span&gt;
      &lt;span class="na"&gt;apt&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;htop&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ncdu&lt;/span&gt;
        &lt;span class="na"&gt;state&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;latest&lt;/span&gt;
        &lt;span class="na"&gt;update_cache&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;yes&lt;/span&gt;

    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Create a specific user for developers&lt;/span&gt;
      &lt;span class="na"&gt;user&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dev_user&lt;/span&gt;
        &lt;span class="na"&gt;state&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;present&lt;/span&gt;
        &lt;span class="na"&gt;shell&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/bin/bash&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc9xrj3ed6j81wwlbvmfd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc9xrj3ed6j81wwlbvmfd.png" alt="Architectural diagram with k8s, terraform and ansible" width="689" height="692"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I spent a week converting my entire hand-clicked setup into these declarative files. When I was done, I could destroy and recreate my entire production network, Kubernetes cluster, and all their supporting services from scratch with just two commands: &lt;code&gt;terraform apply&lt;/code&gt;, followed by &lt;code&gt;ansible-playbook playbook.yml&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;My infrastructure was no longer a fragile artifact; it was now a set of version-controlled text files living in my Git repository. Creating an identical staging environment was now as simple as running the same commands with a different variable.&lt;/p&gt;
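&lt;p&gt;A sketch of what spinning up staging looked like in practice. The &lt;code&gt;environment&lt;/code&gt; variable, the workspace name, and the inventory file are hypothetical stand-ins for illustration, not my exact setup:&lt;/p&gt;

```shell
# Provision an identical stack for a given environment.
# "environment" is a hypothetical Terraform input variable.
terraform workspace select staging
terraform apply -var="environment=staging"

# Then configure the machines Terraform just built:
ansible-playbook -i staging_inventory playbook.yml
```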

&lt;p&gt;I had finally reached a new level of automation. Everything, from the virtual network cables up to the application replicas, was now code. With a few keystrokes, I could scale my application containers, and with a few more, I could scale the very cluster they ran on. The system felt unstoppable. As more users flocked to the app, I watched my cluster effortlessly add resources to meet the demand. But all that traffic, all those new users, funneled into one place. The bottleneck had moved again. My application servers and infrastructure were an army, but they were all trying to get through a single door: my database.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Stay tuned for the next post: A Developer’s Journey to the Cloud 7: Advanced Database Scaling.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>ansible</category>
      <category>terraform</category>
    </item>
    <item>
      <title>A Developer’s Journey to the Cloud 5: Load Balancers &amp; Multiple Servers</title>
      <dc:creator>Arun SD</dc:creator>
      <pubDate>Mon, 18 Aug 2025 11:42:10 +0000</pubDate>
      <link>https://dev.to/arunjagadishsd/a-developers-journey-to-the-cloud-5-load-balancers-multiple-servers-3edc</link>
      <guid>https://dev.to/arunjagadishsd/a-developers-journey-to-the-cloud-5-load-balancers-multiple-servers-3edc</guid>
      <description>&lt;h2&gt;
  
  
  My Server Was a Superhero, and That Was the Problem
&lt;/h2&gt;

&lt;p&gt;I had finally done it. My application was a well-oiled machine.&lt;br&gt;&lt;br&gt;
The database and cache were offloaded to managed services, so they could scale on their own.&lt;br&gt;&lt;br&gt;
My deployments were a one-command, automated dream.&lt;br&gt;&lt;br&gt;
My single server was humming along, its memory usage was stable, and the app was faster than ever.  &lt;/p&gt;

&lt;p&gt;For the first time, I felt like I had a truly &lt;strong&gt;professional setup&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;And then, one Tuesday morning, AWS sent a routine email:  &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Scheduled maintenance for hardware upgrades in your server's host region. Expect a brief reboot..."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;My blood ran cold.&lt;/p&gt;

&lt;p&gt;A reboot. A &lt;strong&gt;brief&lt;/strong&gt; reboot. My entire application, my whole online presence, was going to just… turn off.&lt;br&gt;&lt;br&gt;
Sure, maybe it would only be for five minutes, but in that instant the &lt;em&gt;fragility&lt;/em&gt; of my architecture hit me.&lt;br&gt;&lt;br&gt;
Everything I had built—every feature, every user account, every bit of hard work—depended entirely on one single machine staying on.&lt;/p&gt;

&lt;p&gt;My server wasn't just a server; it was a superhero, single-handedly holding up my entire digital world.&lt;br&gt;&lt;br&gt;
And even superheroes have to sleep.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Vertical Scaling Trap
&lt;/h2&gt;

&lt;p&gt;My first instinct?&lt;br&gt;&lt;br&gt;
&lt;em&gt;"Maybe I just need a better server."&lt;/em&gt;  &lt;/p&gt;

&lt;p&gt;It’s an appealing idea—click a button, pay more money, and upgrade to a machine with more CPU cores and RAM.&lt;br&gt;&lt;br&gt;
That’s &lt;strong&gt;vertical scaling&lt;/strong&gt;: making your one thing bigger and stronger.&lt;/p&gt;

&lt;p&gt;But the maintenance email proved a brutal truth: even the biggest, most expensive server is still &lt;strong&gt;just one server&lt;/strong&gt;.&lt;br&gt;&lt;br&gt;
It still has to be rebooted. Its hard drive can still fail. Its power supply can still die.  &lt;/p&gt;

&lt;p&gt;Scaling vertically is like buying an “unsinkable” ship—it feels safe, but you’re still betting everything on a single vessel.&lt;br&gt;&lt;br&gt;
It doesn’t fix the real flaw: the &lt;strong&gt;single point of failure&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;I didn’t need a bigger boat.&lt;br&gt;&lt;br&gt;
I needed a &lt;strong&gt;fleet&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Power of &lt;em&gt;More&lt;/em&gt;, Not &lt;em&gt;Bigger&lt;/em&gt;
&lt;/h2&gt;

&lt;p&gt;The only way to survive the failure of one thing is to have &lt;strong&gt;more than one&lt;/strong&gt; of it.&lt;br&gt;&lt;br&gt;
That’s &lt;strong&gt;horizontal scaling&lt;/strong&gt;.  &lt;/p&gt;

&lt;p&gt;Instead of one big server, what if I had two smaller, identical ones?&lt;br&gt;&lt;br&gt;
If one went down for maintenance or failed unexpectedly, the other could keep running—and my users would never even know.&lt;/p&gt;

&lt;p&gt;This was the path to true resilience.&lt;br&gt;&lt;br&gt;
But it raised a new question:  &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“If I have two servers, which one do my users connect to? And how is the traffic split?”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That led me to AWS’s dashboard, where I met my new best friend: the &lt;strong&gt;Application Load Balancer (ALB)&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  Building the Fleet
&lt;/h2&gt;

&lt;p&gt;Surprisingly, the plan was straightforward.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Launch a Twin&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
I spun up a second, identical VM and deployed my same Dockerized application to it.&lt;br&gt;&lt;br&gt;
Now I had &lt;strong&gt;two servers&lt;/strong&gt;, side-by-side, each capable of running the whole app.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Hire the Traffic Cop&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
The ALB doesn’t just point at individual server IPs.&lt;br&gt;&lt;br&gt;
First, I created a &lt;strong&gt;Target Group&lt;/strong&gt;—a logical container for my servers.&lt;br&gt;&lt;br&gt;
I set up a health check that pinged &lt;code&gt;/health&lt;/code&gt; every 30 seconds.&lt;br&gt;&lt;br&gt;
If it got a &lt;code&gt;200 OK&lt;/code&gt;, the server was marked healthy.&lt;br&gt;&lt;br&gt;
&lt;em&gt;(Think of it like a backstage manager making sure every performer is ready before sending them on stage.)&lt;/em&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Set Up the Listener&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
On the ALB itself, I configured a &lt;strong&gt;Listener&lt;/strong&gt; for port 80.&lt;br&gt;&lt;br&gt;
Its rule was simple:  &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“When a request comes in, send it to a healthy server in my Target Group.”&lt;br&gt;&lt;br&gt;
The ALB would automatically distribute requests evenly.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Update the Address&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
The magic moment—updating my DNS.&lt;br&gt;&lt;br&gt;
Instead of pointing &lt;code&gt;myapp.com&lt;/code&gt; to my server’s IP, I pointed it to the &lt;strong&gt;public DNS name of the ALB&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
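&lt;p&gt;The health check itself is nothing magical, just an HTTP probe. You can simulate what the ALB does by hand (the hostname here is a placeholder):&lt;/p&gt;

```shell
# Simulate the ALB's health check manually; the hostname is a placeholder.
curl -s -o /dev/null -w "%{http_code}\n" http://server-1.example.com/health
# A healthy server answers 200; anything else gets it pulled from rotation.
```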




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F901kz2zyottxi7zq0ro2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F901kz2zyottxi7zq0ro2.png" alt="Architectural diagram with ALB" width="539" height="690"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The First Test
&lt;/h2&gt;

&lt;p&gt;I shut down one server manually, refreshed my site… and nothing happened.&lt;br&gt;&lt;br&gt;
It stayed online, smooth as ever.  &lt;/p&gt;

&lt;p&gt;Behind the scenes, the load balancer had noticed the outage during a health check and was silently routing all traffic to the surviving server.&lt;br&gt;&lt;br&gt;
When I restarted the downed server, the ALB welcomed it back into rotation without any downtime.&lt;/p&gt;

&lt;p&gt;That was it—I had built &lt;strong&gt;high availability&lt;/strong&gt;.&lt;br&gt;&lt;br&gt;
No single point of failure.&lt;br&gt;&lt;br&gt;
A system that could take a punch and keep running.&lt;/p&gt;




&lt;h2&gt;
  
  
  The New Problem
&lt;/h2&gt;

&lt;p&gt;As I admired my two-server fleet, a thought crept in.  &lt;/p&gt;

&lt;p&gt;My CI/CD pipeline was perfect for one server.&lt;br&gt;&lt;br&gt;
But now? Two servers meant two deployments.  &lt;/p&gt;

&lt;p&gt;What if I needed &lt;strong&gt;five servers&lt;/strong&gt;?&lt;br&gt;&lt;br&gt;
Or ten?&lt;br&gt;&lt;br&gt;
How would I update them all at once without breaking things?  &lt;/p&gt;

&lt;p&gt;I could already picture the nightmare:&lt;br&gt;&lt;br&gt;
half my servers running old code, the other half on a new version, users getting inconsistent results.  &lt;/p&gt;

&lt;p&gt;I had solved the single-point-of-failure problem…&lt;br&gt;&lt;br&gt;
and opened the door to the &lt;strong&gt;complexity-at-scale problem&lt;/strong&gt;.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Next up:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
&lt;em&gt;A Developer’s Journey to the Cloud 6: My Path to Kubernetes and IaC&lt;/em&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>microservices</category>
      <category>devops</category>
    </item>
    <item>
      <title>A Developer’s Journey to the Cloud 4: Caching with Redis</title>
      <dc:creator>Arun SD</dc:creator>
      <pubDate>Mon, 18 Aug 2025 11:38:30 +0000</pubDate>
      <link>https://dev.to/arunjagadishsd/a-developers-journey-to-the-cloud-4-caching-with-redis-3h8k</link>
      <guid>https://dev.to/arunjagadishsd/a-developers-journey-to-the-cloud-4-caching-with-redis-3h8k</guid>
      <description>&lt;h2&gt;
  
  
  My App Was Getting Popular, and It Was Starting to Hurt
&lt;/h2&gt;

&lt;p&gt;For the first time in this journey, I felt a sense of &lt;strong&gt;true peace&lt;/strong&gt;. My deployments were fully automated. I could push a new feature, walk away to make a cup of tea, and return to find it live in production.&lt;br&gt;&lt;br&gt;
No manual checklists. No “Did I forget to restart that service?” anxiety. The high-wire act of deploying by hand was gone. For a developer, this was bliss. I could finally focus purely on the application itself.&lt;/p&gt;

&lt;p&gt;And that’s when I started to notice things.&lt;/p&gt;

&lt;p&gt;The main dashboard took &lt;em&gt;just a hair&lt;/em&gt; longer to load.&lt;br&gt;&lt;br&gt;
A user emailed me to say their profile page “felt sticky.”&lt;br&gt;&lt;br&gt;
Nothing was crashing. Nothing was broken. But a new kind of unease began to creep in — &lt;strong&gt;the quiet, creeping dread of a system that is silently starting to buckle under its own weight&lt;/strong&gt;.&lt;/p&gt;


&lt;h2&gt;
  
  
  The Investigation: A Different Kind of Broken
&lt;/h2&gt;

&lt;p&gt;My first instinct was the usual: check the server’s health.&lt;br&gt;&lt;br&gt;
I SSH’d in, ran my standard CPU and memory checks… all green. No spikes. No memory leaks. So why did everything feel &lt;em&gt;sluggish&lt;/em&gt;?&lt;/p&gt;

&lt;p&gt;I dug deeper — one layer down — into my managed database’s monitoring dashboard. And that’s when the story changed. The CPU utilization graph looked like an EKG for a hummingbird: constant, jagged peaks. My database was &lt;em&gt;working incredibly hard&lt;/em&gt;.  &lt;/p&gt;

&lt;p&gt;I enabled query logging, leaned back in my chair, and watched the flood of requests pour in.&lt;/p&gt;

&lt;p&gt;And then I saw it: my application was asking my database &lt;strong&gt;the exact same questions&lt;/strong&gt; over and over.&lt;br&gt;&lt;br&gt;
It was like sending the same intern to the library hundreds of times a minute to fetch the same book.&lt;br&gt;&lt;br&gt;
The database, bless its heart, dutifully sprinted to the shelves every single time — never pausing to wonder if maybe it could just keep a copy on its desk.&lt;/p&gt;

&lt;p&gt;My app wasn’t broken. It was just… tired. And it was tiring out my database.&lt;/p&gt;


&lt;h2&gt;
  
  
  The "Easy" Fix — and the Mistake I Didn’t See Coming
&lt;/h2&gt;

&lt;p&gt;The solution seemed obvious: give my app a short-term memory.&lt;br&gt;&lt;br&gt;
In other words — &lt;strong&gt;caching&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Redis was the obvious choice. It’s an in-memory, high-speed store designed for exactly this problem.&lt;/p&gt;

&lt;p&gt;I already had my &lt;code&gt;docker-compose.yml&lt;/code&gt; set up. What’s one more service?&lt;br&gt;&lt;br&gt;
It felt clean. Simple. No meetings required.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# docker-compose.yml (The "easy" but flawed approach)&lt;/span&gt;
&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;3.8'&lt;/span&gt;
&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="c1"&gt;# ... my app config ...&lt;/span&gt;
    &lt;span class="na"&gt;depends_on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;db&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;cache&lt;/span&gt; &lt;span class="c1"&gt;# &amp;lt;-- Added this dependency&lt;/span&gt;

  &lt;span class="na"&gt;db&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="c1"&gt;# ... my managed db is external now, so this is gone ...&lt;/span&gt;

  &lt;span class="na"&gt;cache&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="c1"&gt;# &amp;lt;-- The new service&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;redis:6-alpine&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;6379:6379"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I wired the caching logic into my code, redeployed, and the results were instant.&lt;br&gt;&lt;br&gt;
The app was flying. The once frantic database CPU graph became a calm, glass-smooth line.&lt;/p&gt;
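&lt;p&gt;The caching logic itself was the classic cache-aside pattern: check the cache first, fall back to the database on a miss, and store the result with an expiry. Sketched at the &lt;code&gt;redis-cli&lt;/code&gt; level (the key name, value, and TTL are illustrative, not my real schema):&lt;/p&gt;

```shell
# Cache-aside by hand with redis-cli; key, value, and TTL are illustrative.
# 1. Try the cache first.
redis-cli GET user:42:profile
# 2. On a miss, query the database, then store the result with a TTL
#    so stale entries expire on their own instead of piling up.
redis-cli SET user:42:profile '{"name":"Arun"}' EX 60
```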

&lt;p&gt;For a few days, I walked taller. I’d done it again — problem solved.&lt;/p&gt;


&lt;h2&gt;
  
  
  The Crash That Felt Familiar
&lt;/h2&gt;

&lt;p&gt;Then, one afternoon, the alerts started firing.&lt;br&gt;&lt;br&gt;
The site wasn’t just slow — it was timing out.&lt;/p&gt;

&lt;p&gt;My stomach tightened as I SSH’d into the server and ran:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;top
The truth stared back:  
MEM &lt;span class="nt"&gt;---&lt;/span&gt; 98.7% used
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;My little server’s RAM was choking. Sometimes Redis was the culprit. Other times, my Node.js process. Either way, the system was suffocating.&lt;/p&gt;

&lt;p&gt;And there it was — the wave of déjà vu.&lt;br&gt;&lt;br&gt;
Just months ago, I’d been losing sleep over my database. Now I was losing sleep over my cache.&lt;br&gt;&lt;br&gt;
I hadn’t really solved the problem. I’d just moved the stress around, like shifting a heavy box from one arm to the other.&lt;/p&gt;

&lt;p&gt;The lesson was starting to crystallize:&lt;br&gt;&lt;br&gt;
&lt;strong&gt;The goal isn’t just to use the right tool — it’s to use it in a way that reduces your operational anxiety, not just relocates it.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxs1ntomqqm6ybc2rsgag.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxs1ntomqqm6ybc2rsgag.png" alt="Architectural diagram with docker redis" width="779" height="240"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;h2&gt;
  
  
  The Real Fix: Outsourcing My Anxiety (Again)
&lt;/h2&gt;

&lt;p&gt;Humbled, I shut down the Redis container.&lt;br&gt;&lt;br&gt;
Then I went shopping for the &lt;em&gt;right&lt;/em&gt; kind of Redis.&lt;/p&gt;

&lt;p&gt;My cloud provider had exactly what I needed: &lt;strong&gt;Amazon ElastiCache&lt;/strong&gt;, a fully managed Redis service.&lt;br&gt;&lt;br&gt;
A few clicks later, I had a production-grade cache that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Didn’t touch my app server’s RAM&lt;/li&gt;
&lt;li&gt;Was scalable and secure&lt;/li&gt;
&lt;li&gt;Was patched and monitored by people whose &lt;em&gt;full-time job&lt;/em&gt; was making Redis run perfectly&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The migration was almost embarrassingly simple. All I had to do was swap the connection string in my &lt;code&gt;.env&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;From this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;REDIS_URL=redis://cache:6379
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;REDIS_URL=redis://my-app-cache.x1y2z.ng.0001.aps1.cache.amazonaws.com:6379
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I redeployed.&lt;br&gt;&lt;br&gt;
The app was still fast.&lt;br&gt;&lt;br&gt;
My server’s RAM sat at a comfortable 30%.&lt;br&gt;&lt;br&gt;
And for the first time in weeks, I wasn’t worried about my cache exploding at 2 a.m.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8vcsrxyjhqndxsz7p9qe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8vcsrxyjhqndxsz7p9qe.png" alt="Architectural diagram with managed redis" width="724" height="475"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  The Next Problem, Already Knocking
&lt;/h2&gt;

&lt;p&gt;But as I watched my healthy server hum along, a new thought crept in.&lt;/p&gt;

&lt;p&gt;I’d offloaded my database.&lt;br&gt;&lt;br&gt;
I’d offloaded my cache.&lt;br&gt;&lt;br&gt;
But my &lt;strong&gt;application code&lt;/strong&gt; — the heart of the product — still lived on &lt;em&gt;one&lt;/em&gt; single server.&lt;/p&gt;

&lt;p&gt;What happens when, even without the extra baggage, my app needs more CPU or RAM than one box can give?&lt;br&gt;&lt;br&gt;
What happens when &lt;em&gt;the process itself&lt;/em&gt; becomes the bottleneck?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stay tuned for the next post:&lt;br&gt;&lt;br&gt;
&lt;em&gt;A Developer’s Journey to the Cloud 5: Load Balancers &amp;amp; Multiple Servers&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>redis</category>
      <category>devops</category>
      <category>cloud</category>
      <category>architecture</category>
    </item>
    <item>
      <title>A Developer’s Journey to the Cloud 3: Building a CI/CD Pipeline</title>
      <dc:creator>Arun SD</dc:creator>
      <pubDate>Wed, 13 Aug 2025 19:03:45 +0000</pubDate>
      <link>https://dev.to/arunjagadishsd/a-developers-journey-to-the-cloud-3-building-a-cicd-pipeline-3c3f</link>
      <guid>https://dev.to/arunjagadishsd/a-developers-journey-to-the-cloud-3-building-a-cicd-pipeline-3c3f</guid>
      <description>&lt;h2&gt;
  
  
  My Deployments Were a Ritual, Not a Process
&lt;/h2&gt;

&lt;p&gt;We've come so far. Our application is neatly containerized in Docker, and our data is safe and sound in managed cloud services. I had eliminated the &lt;em&gt;"works on my machine"&lt;/em&gt; curse and outsourced my 3 AM data-loss fears. I should have been happy.&lt;/p&gt;

&lt;p&gt;But a new kind of dread was creeping in—a dread that arrived every time I had to ship a new feature: &lt;strong&gt;the deployment ritual&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Manual Dance
&lt;/h2&gt;

&lt;p&gt;It was a clunky, manual dance that I had perfected:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;git push&lt;/code&gt; my changes.
&lt;/li&gt;
&lt;li&gt;Open a terminal and SSH into my server.
&lt;/li&gt;
&lt;li&gt;Navigate to the project folder.
&lt;/li&gt;
&lt;li&gt;Run &lt;code&gt;docker-compose down&lt;/code&gt;, &lt;code&gt;git pull&lt;/code&gt;, and finally &lt;code&gt;docker-compose up --build&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Every. Single. Time.&lt;/p&gt;

&lt;p&gt;It &lt;em&gt;worked&lt;/em&gt;, but it felt wrong. It was slow. It was nerve-wracking—what if I accidentally typed the wrong command on my production server?  &lt;/p&gt;

&lt;p&gt;Most importantly, it was a &lt;strong&gt;bottleneck&lt;/strong&gt;. I was the only one who could do it. I had become the &lt;em&gt;deployment guy&lt;/em&gt;, a title I never wanted.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Day I Locked Myself Out
&lt;/h2&gt;

&lt;p&gt;The breaking point came in a coffee shop on shaky Wi-Fi. I SSH'd into my server to deploy a critical hotfix, but my connection dropped midway through.&lt;/p&gt;

&lt;p&gt;The application stopped.&lt;br&gt;&lt;br&gt;
The server never got the command to bring it back up.&lt;br&gt;&lt;br&gt;
The site was down.&lt;/p&gt;

&lt;p&gt;It took me ten frantic minutes to get a stable connection and fix it, but the damage was done.&lt;/p&gt;

&lt;p&gt;I realized my manual process wasn’t just inefficient—it was &lt;strong&gt;fragile&lt;/strong&gt;. It relied on me, my laptop, and a stable internet connection.&lt;/p&gt;


&lt;h2&gt;
  
  
  What If Deployment Just… Happened?
&lt;/h2&gt;

&lt;p&gt;That night, I asked myself:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;What if deploying wasn't a ritual I had to perform? What if it was just… something that happened automatically when the code was ready?&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That question led me to &lt;strong&gt;CI/CD&lt;/strong&gt; (Continuous Integration / Continuous Deployment)—an &lt;em&gt;assembly line for code&lt;/em&gt;.  &lt;/p&gt;

&lt;p&gt;Tools like GitHub Actions or GitLab CI act as robots on this assembly line: you give them a recipe, and they execute it perfectly, every time.&lt;/p&gt;


&lt;h2&gt;
  
  
  Giving the Robot My Ritual
&lt;/h2&gt;

&lt;p&gt;I built a recipe so that GitHub itself would:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Build and push my Docker image.
&lt;/li&gt;
&lt;li&gt;Tell my server to pull and run the new version.
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;After a day of tinkering, I had my workflow:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# .github/workflows/deploy.yml&lt;/span&gt;
&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deploy to Production&lt;/span&gt;

&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;push&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;branches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt; &lt;span class="nv"&gt;main&lt;/span&gt; &lt;span class="pi"&gt;]&lt;/span&gt; &lt;span class="c1"&gt;# Run on every push to main branch&lt;/span&gt;

&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;build-and-deploy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Checkout Code&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v3&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Login to Docker Hub&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker/login-action@v2&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;username&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.DOCKERHUB_USERNAME }}&lt;/span&gt;
          &lt;span class="na"&gt;password&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.DOCKERHUB_TOKEN }}&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Build and Push Docker Image&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker/build-push-action@v4&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;context&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;.&lt;/span&gt;
          &lt;span class="na"&gt;push&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
          &lt;span class="na"&gt;tags&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myusername/my-awesome-app:${{ github.sha }}&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deploy to Server&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;appleboy/ssh-action@master&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.SERVER_HOST }}&lt;/span&gt;
          &lt;span class="na"&gt;username&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.SERVER_USER }}&lt;/span&gt;
          &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.SSH_PRIVATE_KEY }}&lt;/span&gt;
          &lt;span class="na"&gt;script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
            &lt;span class="s"&gt;cd /home/user/app&lt;/span&gt;
            &lt;span class="s"&gt;export IMAGE_TAG=${{ github.sha }}&lt;/span&gt;
            &lt;span class="s"&gt;docker-compose up -d --no-build&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
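&lt;p&gt;One detail the workflow quietly depends on: for &lt;code&gt;export IMAGE_TAG=${{ github.sha }}&lt;/code&gt; followed by &lt;code&gt;docker-compose up -d --no-build&lt;/code&gt; to actually deploy the new build, the compose file on the server has to reference that variable. A sketch of what that looks like (service and image names illustrative, not my exact file):&lt;/p&gt;

```yaml
# docker-compose.yml on the server (sketch): the app service points at
# the pushed image instead of building locally, so `--no-build` just
# pulls and runs whatever tag the pipeline exported.
services:
  app:
    image: myusername/my-awesome-app:${IMAGE_TAG}
    ports:
      - "3001:3001"
```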



&lt;p&gt;The first time the pipeline ran successfully, it felt like magic.&lt;br&gt;&lt;br&gt;
I pushed my code, and a few minutes later—&lt;strong&gt;changes were live&lt;/strong&gt;.  &lt;/p&gt;

&lt;p&gt;I hadn’t even opened my terminal. My server was no longer a sacred place I had to log into. It was just a machine that ran containers. The “keys” now lived securely in GitHub’s secrets, not in my pocket.&lt;/p&gt;




&lt;h2&gt;
  
  
  From Ritual to Repeatable Code
&lt;/h2&gt;

&lt;p&gt;My deployment process was no longer trapped in my head—it was now &lt;strong&gt;code&lt;/strong&gt; in my repository. Version-controlled. Repeatable. Secure.&lt;/p&gt;

&lt;p&gt;Deployments went from being a 10-minute, high-anxiety event to a complete non-event. They just… happened.&lt;/p&gt;




&lt;h2&gt;
  
  
  A New Problem Emerges
&lt;/h2&gt;

&lt;p&gt;For the first time, I felt real peace. I could focus entirely on the application itself.&lt;/p&gt;

&lt;p&gt;But then I noticed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A slight lag when the dashboard loaded.
&lt;/li&gt;
&lt;li&gt;A user emailing to say their profile page &lt;em&gt;"felt sticky"&lt;/em&gt;.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Nothing was crashing. Nothing was broken. But the system was &lt;strong&gt;straining&lt;/strong&gt;.&lt;br&gt;&lt;br&gt;
The irony? My new, efficient deployments made it easier for more users to sign up—creating the very load that was slowing things down.&lt;/p&gt;

&lt;p&gt;Diving into the logs, I saw the truth:&lt;br&gt;&lt;br&gt;
My database was getting hammered with the same queries over and over.  &lt;/p&gt;

&lt;p&gt;The app wasn’t broken. It was tired.&lt;br&gt;&lt;br&gt;
It was time to give it a break.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Stay tuned for the next post:&lt;/strong&gt; &lt;em&gt;A Developer’s Journey to the Cloud 4: Caching with Redis&lt;/em&gt;&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>devops</category>
      <category>cicd</category>
      <category>git</category>
    </item>
    <item>
      <title>A Developer’s Journey to the Cloud 2: My Database Lived in a Shoebox, and I Didn’t Even Know It</title>
      <dc:creator>Arun SD</dc:creator>
      <pubDate>Wed, 13 Aug 2025 19:03:25 +0000</pubDate>
      <link>https://dev.to/arunjagadishsd/a-developers-journey-to-the-cloud-2-my-database-lived-in-a-shoebox-and-i-didnt-even-know-it-3ei6</link>
      <guid>https://dev.to/arunjagadishsd/a-developers-journey-to-the-cloud-2-my-database-lived-in-a-shoebox-and-i-didnt-even-know-it-3ei6</guid>
      <description>&lt;p&gt;Previous post: &lt;/p&gt;

&lt;h2&gt;
  
  
  My Database Lived in a Shoebox, and I Didn’t Even Know It
&lt;/h2&gt;

&lt;p&gt;We did it. In the last post, we took our application, boxed it up with Docker, and shipped it to a server. It was running, stable, and consistent. The "works on my machine" curse was broken. I felt like I had conquered the cloud.  &lt;/p&gt;

&lt;p&gt;For about a week, I was a DevOps king, basking in the glory of my perfectly containerized world.  &lt;/p&gt;

&lt;p&gt;Then, one evening, as I was about to shut my laptop, a cold thought washed over me:  &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Where does my data actually live?&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  The Shoebox Realization
&lt;/h2&gt;

&lt;p&gt;It hit me like a bad database query: my entire database — every user, every post, every precious row of information — was running inside that same Docker container, on that same single server.  &lt;/p&gt;

&lt;p&gt;And it wasn’t just the database. My user-uploaded images? Just sitting in a &lt;code&gt;/uploads&lt;/code&gt; folder on that same hard drive, quietly piling up like old photos in a forgotten attic.  &lt;/p&gt;

&lt;p&gt;The whole thing was one fragile digital shoebox. If the lid blew off (or the drive failed), it would all scatter into the void.&lt;/p&gt;




&lt;h2&gt;
  
  
  The 3 AM Fear
&lt;/h2&gt;

&lt;p&gt;That night I lay in bed thinking about &lt;code&gt;rm -rf /&lt;/code&gt; nightmares and spinning disks giving their last click of life.  &lt;/p&gt;

&lt;p&gt;What if the server’s hard drive failed? It’s just a machine, after all. Everything would be gone. Instantly.  &lt;/p&gt;

&lt;p&gt;What about backups? Sure, I could write a script, maybe a cron job:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pg_dump mydb &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; backup.sql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
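&lt;p&gt;The cron version of that idea would have been a one-liner (schedule and paths purely illustrative):&lt;/p&gt;

```shell
# crontab entry: dump the database every night at 02:00...
# ...into a folder on the very same disk.
0 2 * * * pg_dump mydb > /home/user/backups/backup.sql
```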



&lt;p&gt;But… where would that backup go?&lt;br&gt;
Another folder? On the same server?&lt;br&gt;
That’s like hiding your spare house key under the doormat of a house that’s on fire.&lt;/p&gt;

&lt;p&gt;The more I thought about it, the more absurd it became.&lt;/p&gt;
&lt;h3&gt;
  
  
  Googling Myself Into DBA Territory
&lt;/h3&gt;

&lt;p&gt;I started Googling “how to back up a database properly” and promptly fell into a black hole: replication strategies, point-in-time recovery, WAL archiving, security patching.&lt;/p&gt;

&lt;p&gt;I wasn’t just a developer anymore — I was now an unwilling, unqualified, and mildly terrified part-time Database Administrator and Storage Manager.&lt;/p&gt;

&lt;p&gt;This wasn’t the dream.&lt;br&gt;
The dream was building my app, not babysitting a database and a pile of user images like some digital hoarder.&lt;/p&gt;
&lt;h3&gt;
  
  
  The Cloud’s Best-Kept Secret
&lt;/h3&gt;

&lt;p&gt;Defeated, I wandered through my cloud provider’s dashboard, clicking through services with names I didn’t fully understand.&lt;/p&gt;

&lt;p&gt;And then I saw them — two shiny lifeboats in a sea of uncertainty:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Relational Database Service (RDS): “A managed relational database service... handles provisioning, patching, backup, recovery, failure detection, and repair.”&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Simple Storage Service (S3): “Object storage designed to store and retrieve any amount of data... with 99.999999999% durability.”&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It was almost comical. Of course the cloud companies were good at this. This is their entire business!&lt;/p&gt;

&lt;p&gt;Here I was, ready to script a janky nightly backup, while they had teams of engineers whose only job was to make sure data never disappears.&lt;/p&gt;
&lt;h3&gt;
  
  
  Handing Over the Keys
&lt;/h3&gt;

&lt;p&gt;The next day, I stopped being stubborn and started migrating.&lt;/p&gt;
&lt;h4&gt;
  
  
  Database Migration
&lt;/h4&gt;

&lt;p&gt;With a few clicks, I spun up an RDS instance. Automatic backups? Done.&lt;br&gt;
High availability? Done.&lt;br&gt;
Security patches? Done.&lt;/p&gt;

&lt;p&gt;I just updated my app’s connection string:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;DATABASE_URL=postgres://user:password@database-1.abcdefghij12.us-east-1.rds.amazonaws.com:5432/mydb
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  File Storage Migration
&lt;/h3&gt;

&lt;p&gt;Instead of saving files locally, I integrated the S3 SDK and changed my upload logic:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;s3&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;upload&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;Bucket&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;my-app-bucket&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;Key&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`uploads/&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;file&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;Body&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;file&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Suddenly, my images weren’t trapped in &lt;code&gt;/uploads&lt;/code&gt;; they were in a globally redundant, highly durable vault.&lt;/p&gt;

&lt;h3&gt;
  
  
  A Stronger Foundation
&lt;/h3&gt;

&lt;p&gt;From the outside, my app looked exactly the same.&lt;br&gt;
But beneath the surface, the foundation had gone from a shoebox on a wobbly shelf to a bank vault inside a fortress.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3ma8sl4wdoobmvzqx28d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3ma8sl4wdoobmvzqx28d.png" alt="Architectural diagram with managed database and storage" width="424" height="504"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I was no longer the single point of failure. I could finally focus on writing code without the looming fear of catastrophic data loss.&lt;/p&gt;

&lt;h3&gt;
  
  
  But One Problem Remained…
&lt;/h3&gt;

&lt;p&gt;Even with the data safe, I still had to deploy my code the old-fashioned way: SSH into the server, run some commands, cross my fingers, and hope nothing broke.&lt;/p&gt;

&lt;p&gt;It felt clunky. Slow. Archaic. There had to be a better way.&lt;/p&gt;

&lt;p&gt;Next up: &lt;a href="https://dev.to/arunjagadishsd/a-developers-journey-to-the-cloud-3-building-a-cicd-pipeline-3c3f"&gt;A Developer’s Journey to the Cloud 3: Building a CI/CD Pipeline&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>database</category>
      <category>devops</category>
      <category>cloud</category>
      <category>postgressql</category>
    </item>
    <item>
      <title>A Developer’s Journey to the Cloud 1: From Localhost to Dockerized Deployment</title>
      <dc:creator>Arun SD</dc:creator>
      <pubDate>Wed, 13 Aug 2025 19:02:32 +0000</pubDate>
      <link>https://dev.to/arunjagadishsd/a-developers-journey-to-the-cloud-1-from-localhost-to-dockerized-deployment-25d3</link>
      <guid>https://dev.to/arunjagadishsd/a-developers-journey-to-the-cloud-1-from-localhost-to-dockerized-deployment-25d3</guid>
      <description>&lt;h2&gt;
  
  
  About This Series
&lt;/h2&gt;

&lt;p&gt;Over the past 8 years, I’ve built and deployed a variety of applications—each with its own unique set of challenges, lessons, and occasionally, hard-earned scars. Instead of presenting those experiences as isolated technical write-ups, I’ve woven them into a single, continuous narrative: A Developer’s Journey to the Cloud.&lt;/p&gt;

&lt;p&gt;While the “developer” in this story is fictional, the struggles, breakthroughs, and aha-moments are all drawn from real projects I’ve worked on—spanning multiple tech stacks, deployment models, and problem domains. Each post captures the why and what behind key decisions and technologies, without drowning in step-by-step tutorials.&lt;/p&gt;

&lt;p&gt;Think of it as a mix between a memoir and a guide—part storytelling, part practical insight—chronicling the messy, funny, and sometimes painful path of learning to build in the cloud.&lt;/p&gt;




&lt;h2&gt;
  
  
  That Time I Thought localhost Was Lying to Me
&lt;/h2&gt;

&lt;p&gt;It all starts with that beautiful, electric feeling. The final line of code clicks into place, and your application—your glorious, bug-free masterpiece—purrs to life on localhost. You've tested every button, every form, every feature. It's perfect. All that's left is to share it with the world.&lt;/p&gt;

&lt;p&gt;"How hard can that be?" I thought, basking in the glow of my monitor. And so began my glorious, agonizing, and unintentionally hilarious journey into the wild.&lt;/p&gt;

&lt;h2&gt;
  
  
  The First Launch
&lt;/h2&gt;

&lt;p&gt;My plan was simple: rent the cheapest virtual server I could find. For a few hundred rupees, I was the proud owner of a blank command line with a public IP address. It felt like being handed the keys to the internet.&lt;/p&gt;

&lt;p&gt;With the confidence of someone who had successfully used npm install more than once, I manually installed Node.js and PostgreSQL on the server. Then came the big moment: getting my code onto the server. The process was painfully manual.&lt;/p&gt;

&lt;p&gt;First, I prepared my project files locally:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
&lt;span class="c"&gt;# On my laptop&lt;/span&gt;
zip &lt;span class="nt"&gt;-r&lt;/span&gt; my-awesome-app.zip &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, I used scp (Secure Copy) to upload the zipped file to the server:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
&lt;span class="c"&gt;# On my laptop&lt;/span&gt;
scp my-awesome-app.zip user@your_server_ip:/home/user/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, I logged into the server to unpack and run everything:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
&lt;span class="c"&gt;# On the server&lt;/span&gt;
ssh user@your_server_ip
unzip my-awesome-app.zip &lt;span class="nt"&gt;-d&lt;/span&gt; app
&lt;span class="nb"&gt;cd &lt;/span&gt;app
npm &lt;span class="nb"&gt;install
&lt;/span&gt;npm start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I typed the IP address into my browser. It worked. My app was alive. I was, for all intents and purposes, a genius.&lt;/p&gt;

&lt;p&gt;The first crack appeared a day later. I found a tiny typo. "Easy fix," I thought. I corrected the text, zipped everything up again, and repeated the entire upload-unzip-restart dance.&lt;/p&gt;

&lt;p&gt;And the entire thing crashed.&lt;/p&gt;

&lt;p&gt;Somehow, in that simple process, something had gone terribly wrong. I had no history, no undo button. It took me an hour of frantic re-uploading to get it back online. I decided the "S" in scp definitely didn't stand for "safe."&lt;/p&gt;

&lt;h3&gt;
  
  
  Getting Smarter, a Little
&lt;/h3&gt;

&lt;p&gt;Okay, no more zip files. I was a professional, and professionals use Git. I SSH'd into my server and set up a "bare" repository, a special repo just for receiving pushes.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
&lt;span class="c"&gt;# On the server&lt;/span&gt;
git init &lt;span class="nt"&gt;--bare&lt;/span&gt; /var/repos/app.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I configured a hook that would automatically check out the code into my live directory whenever I pushed to it. My deployment process was now a sleek and sophisticated git push production main. I had leveled up.&lt;/p&gt;
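&lt;p&gt;The hook itself was the classic &lt;code&gt;post-receive&lt;/code&gt; recipe. Here’s a self-contained sketch of the whole setup, squeezed into one demo script (all paths hypothetical):&lt;/p&gt;

```shell
set -e
DEMO=$(mktemp -d)

# The "server": a bare repo to push to, plus a live directory to serve from.
git init --bare -q "$DEMO/app.git"
mkdir -p "$DEMO/live"

# The post-receive hook: on every push, force-check-out the pushed
# branch into the live directory.
cat > "$DEMO/app.git/hooks/post-receive" <<EOF
#!/bin/sh
git --work-tree="$DEMO/live" checkout -f main
EOF
chmod +x "$DEMO/app.git/hooks/post-receive"

# The "laptop": commit some code and push it to the server repo.
git init -q "$DEMO/src"
cd "$DEMO/src"
git checkout -q -b main
echo "console.log('hello')" > index.js
git add index.js
git -c user.email=me@example.com -c user.name=me commit -qm "first deploy"
git push -q "$DEMO/app.git" main

# The hook has already populated the live directory.
ls "$DEMO/live"   # -> index.js
```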

&lt;p&gt;This new system worked beautifully for weeks. I built a major new feature, a complex image upload and processing tool. To get access to some new performance improvements, I developed it locally using the &lt;strong&gt;latest&lt;/strong&gt; version of Node.js. As always, it ran like a dream on my laptop. I pushed the code, and the deployment hook ran. I restarted the app with a confident smirk.&lt;/p&gt;

&lt;p&gt;It crashed. Instantly.&lt;/p&gt;

&lt;p&gt;The error message was a nightmare. A function I was using simply didn't exist. But... I had just used it. It was right there in my code. I spent the next six hours in a state of pure disbelief.&lt;/p&gt;

&lt;p&gt;Then, it hit me. My server, being a server, had been prudently set up with the stable, Long-Term Support (LTS) version of Node.js. My feature, built with the shiny new tools of the "latest" version, was trying to call a function that hadn't been introduced in the "stable" release yet. It was then I understood that the most soul-crushing phrase in our industry, "but it works on my machine," isn't a joke. It's a curse. My code wasn't the problem; the entire universe it was running in was fundamentally, fatally different.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmqw0z87hsvw8hyzazuhk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmqw0z87hsvw8hyzazuhk.png" alt="1 tier server" width="800" height="702"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Shipping the Entire Universe
&lt;/h3&gt;

&lt;p&gt;Defeated, I started searching for answers, and I kept stumbling upon the same word: Docker. The promise was simple: what if, instead of just shipping your code, you could ship your code's entire world along with it?&lt;/p&gt;

&lt;p&gt;The idea was to define your exact environment in a text file—a Dockerfile. This file acts as a blueprint to create a "container," a lightweight, standardized box holding your app and its perfect environment.&lt;/p&gt;

&lt;p&gt;I spent a weekend tinkering. My Dockerfile looked something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;
&lt;span class="c"&gt;# Dockerfile&lt;/span&gt;
&lt;span class="c"&gt;# Use the 'latest' Node.js version we need&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; node:latest&lt;/span&gt;

&lt;span class="c"&gt;# Set the working directory in the container&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;

&lt;span class="c"&gt;# Copy package files and install dependencies&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; package*.json ./&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--only&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;production

&lt;span class="c"&gt;# Copy the rest of our application code&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; . .&lt;/span&gt;

&lt;span class="c"&gt;# Expose the port and start the server&lt;/span&gt;
&lt;span class="k"&gt;EXPOSE&lt;/span&gt;&lt;span class="s"&gt; 3001&lt;/span&gt;
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; [ "node", "index.js" ]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To run my database alongside my app and manage secrets properly, I created a &lt;code&gt;docker-compose.yml&lt;/code&gt; file and a separate &lt;code&gt;.env&lt;/code&gt; file for my credentials.&lt;/p&gt;

&lt;p&gt;This is the &lt;code&gt;.env&lt;/code&gt; file, which should never be committed to Git:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Code snippet

# .env
# Database credentials
POSTGRES_USER=myuser
POSTGRES_PASSWORD=mysecretpassword
POSTGRES_DB=myapp_db
And this is the `docker-compose.yml` that uses it:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;
&lt;span class="c1"&gt;# docker-compose.yml&lt;/span&gt;
&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;3.8'&lt;/span&gt;
&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;.&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;3001:3001"&lt;/span&gt;
    &lt;span class="na"&gt;depends_on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;db&lt;/span&gt;
    &lt;span class="c1"&gt;# Load environment variables from the .env file&lt;/span&gt;
    &lt;span class="na"&gt;env_file&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;.env&lt;/span&gt;
    &lt;span class="c1"&gt;# Pass necessary variables to the app container&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;PGHOST&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;db&lt;/span&gt;
      &lt;span class="na"&gt;PGUSER&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${POSTGRES_USER}&lt;/span&gt;
      &lt;span class="na"&gt;PGPASSWORD&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${POSTGRES_PASSWORD}&lt;/span&gt;
      &lt;span class="na"&gt;PGDATABASE&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${POSTGRES_DB}&lt;/span&gt;
      &lt;span class="na"&gt;PGPORT&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5432&lt;/span&gt;

  &lt;span class="na"&gt;db&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres:13&lt;/span&gt;
    &lt;span class="c1"&gt;# Use the same .env file to configure the database&lt;/span&gt;
    &lt;span class="na"&gt;env_file&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;.env&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
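&lt;p&gt;For reference, the &lt;code&gt;.env&lt;/code&gt; file that both services read might look something like this. The variable names come from the compose file above; the values are placeholders, not real credentials:&lt;/p&gt;

```shell
# .env -- placeholder values for illustration only; use real secrets in production.
# docker-compose reads this file both for ${...} interpolation and via env_file.
POSTGRES_USER=appuser
POSTGRES_PASSWORD=change-me
POSTGRES_DB=appdb
```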



&lt;p&gt;I ran &lt;code&gt;docker-compose up&lt;/code&gt; on my laptop. It worked. But this was the real test. I installed Docker on my server, copied my project over (including the &lt;code&gt;.env&lt;/code&gt; file), and ran the exact same command.&lt;/p&gt;
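&lt;p&gt;Concretely, the whole deploy boiled down to a copy and one command. A minimal sketch, assuming a hypothetical server at &lt;code&gt;user@my-server&lt;/code&gt; and a project directory of &lt;code&gt;~/myapp&lt;/code&gt; (adjust both to your setup):&lt;/p&gt;

```shell
# Hypothetical host and path -- adjust to your environment.
# Copy the whole project directory (the .env file included) to the server:
scp -r ./myapp user@my-server:~/myapp

# Then run the exact same command that worked on the laptop;
# -d runs the containers detached so they survive the SSH session ending.
ssh user@my-server 'cd ~/myapp && docker-compose up -d'
```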

&lt;p&gt;And it just... worked. No errors. No version conflicts. No drama.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F25fwep5c0qvgsq8roxhm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F25fwep5c0qvgsq8roxhm.png" alt="1 tier server with docker" width="800" height="443"&gt;&lt;/a&gt;&lt;br&gt;
It wasn't magic. It was simply that for the first time, the environment on my server was not just similar to my laptop's; it was identical. I hadn't just deployed my code. I had shipped its entire universe in a box, and it didn't care where it was opened.&lt;/p&gt;

&lt;p&gt;That's when it clicked. The goal was never just to get the code onto a server. It was to get a predictable, repeatable result, every single time. And my journey, I realized, was just getting started.&lt;/p&gt;

&lt;p&gt;Our code is now safe, but our data is living dangerously. Let's fix that.&lt;/p&gt;

&lt;p&gt;Stay tuned for the next post: &lt;a href="https://dev.to/arunjagadishsd/a-developers-journey-to-the-cloud-2-my-database-lived-in-a-shoebox-and-i-didnt-even-know-it-3ei6"&gt;A Developer’s Journey to the Cloud 2: Managed Databases &amp;amp; Cloud Storage&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>docker</category>
      <category>git</category>
      <category>devops</category>
    </item>
    <item>
      <title>AI vs(and?) Software Engineers</title>
      <dc:creator>Arun SD</dc:creator>
      <pubDate>Thu, 19 Dec 2024 12:36:39 +0000</pubDate>
      <link>https://dev.to/arunjagadishsd/ai-vsand-software-engineers-462k</link>
      <guid>https://dev.to/arunjagadishsd/ai-vsand-software-engineers-462k</guid>
      <description>&lt;p&gt;Artificial intelligence is revolutionizing industries across the board, and software engineering is no exception. From automating repetitive tasks to assisting in the design of user interfaces, AI tools are becoming integral to developers’ daily workflows. As these tools evolve, they prompt critical questions about the future of the profession and the skills needed to stay ahead in an increasingly AI-driven world.&lt;/p&gt;

&lt;h2&gt;
  
  
  The AI Advantage: A Powerful Ally
&lt;/h2&gt;

&lt;p&gt;Let’s start by recognizing the undeniable advantages AI brings to software development. Tasks that were once time-consuming—like writing boilerplate code, generating basic UI elements, creating database schemas, or drafting initial logic—are now much quicker and easier with AI assistance. By handling these repetitive tasks, AI enables developers to dedicate more time to the complex, creative aspects of their work.&lt;/p&gt;

&lt;h3&gt;
  
  
  Here is how I personally used AI over the past few months:
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Boilerplate Reduction:&lt;/strong&gt; Generating repetitive code structures, saving both time and mental energy.&lt;br&gt;
&lt;strong&gt;Rapid UI Prototyping:&lt;/strong&gt; Quickly creating basic UI layouts for faster iteration and experimentation.&lt;br&gt;
&lt;strong&gt;Database Schema Generation:&lt;/strong&gt; Automating database table creation, ensuring consistency and reducing potential errors.&lt;br&gt;
&lt;strong&gt;Logic Drafting:&lt;/strong&gt; Offering initial code drafts, which I can then refine and optimize.&lt;br&gt;
&lt;strong&gt;Code Refactoring:&lt;/strong&gt; Automatically suggesting improvements in existing code to enhance readability and performance.&lt;br&gt;
&lt;strong&gt;Unit Test Generation:&lt;/strong&gt; Creating basic test cases to validate functionality, reducing the effort of manual test creation.&lt;br&gt;
&lt;strong&gt;Documentation Assistance:&lt;/strong&gt; Auto-generating docstrings or comments based on code functionality, which helps in maintaining better documentation.&lt;/p&gt;

&lt;p&gt;This increased efficiency accelerates development cycles, reduces costs, and ultimately leads to more innovative software solutions.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Trust Factor: A Double-Edged Sword
&lt;/h2&gt;

&lt;p&gt;However, with AI’s ability to generate code comes a crucial question: How much should we trust the code it produces? While AI can often provide functional and well-structured code, it’s not perfect. Bugs, inefficiencies, and security vulnerabilities can easily slip through the cracks. This raises an important question: do we, as software engineers, have the expertise to thoroughly review, refine, and adapt AI-generated code for our specific needs?&lt;/p&gt;

&lt;p&gt;AI’s speed and convenience are incredibly helpful, but it’s essential that we remain vigilant. Blindly trusting AI-generated code without proper scrutiny risks building software on an unstable foundation. We must retain the ability to understand the underlying logic, identify potential flaws, and adapt the code accordingly.&lt;/p&gt;

&lt;h2&gt;
  
  
  The "Black Box" (like credit scores :D) Problem
&lt;/h2&gt;

&lt;p&gt;One big problem is that we don't always know how AI makes decisions. It's like a "black box"—we see the output (the code), but not the process. This makes it harder to find and fix problems in the code. We need better ways to understand how AI writes code.&lt;br&gt;
I created the image below with PlantUML (a great tool for creating UML diagrams; try it).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frefb64sg27tutu1lqffv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frefb64sg27tutu1lqffv.png" alt="AI code generation representation" width="333" height="85"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The New Engineer Dilemma: Learning vs. Reliance
&lt;/h2&gt;

&lt;p&gt;One of the most pressing concerns about AI’s role in development is its potential impact on new engineers entering the field. If a recent graduate starts relying heavily on AI tools right away, will they truly learn the fundamental principles of software engineering? Will they develop the critical thinking and problem-solving skills necessary to debug complex issues and design effective solutions?&lt;/p&gt;

&lt;h2&gt;
  
  
  The concern here is that over-reliance on AI could result in:
&lt;/h2&gt;

&lt;p&gt;Reduced "Learning by Doing": Hands-on experience is key to mastering software development. If AI does much of the work, new engineers might miss out on this essential learning process.&lt;br&gt;
Diminished Debugging Skills: Debugging is a core competency for software engineers. If AI takes over the debugging process, new engineers may not develop the diagnostic skills needed to tackle complex issues.&lt;br&gt;
Stunted Problem-Solving Abilities: Software engineering is about solving problems. If AI supplies ready-made solutions, engineers may not hone their critical thinking skills and might struggle to tackle challenges independently.&lt;br&gt;
The Path Forward: Collaboration, Not Replacement&lt;/p&gt;

&lt;p&gt;It’s crucial to emphasize that AI is not here to replace software engineers. Rather, it’s a powerful tool that can enhance our capabilities. The goal is to find a balance—leveraging AI’s strengths while maintaining our expertise and creativity.&lt;/p&gt;

&lt;h2&gt;
  
  
  Here are a few key takeaways:
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Embrace AI as a Tool:&lt;/strong&gt; Use AI to automate repetitive tasks, boost productivity, and free up time for more creative and high-value work.&lt;br&gt;
&lt;strong&gt;Maintain Critical Thinking:&lt;/strong&gt; Always review and understand the code AI generates. Don’t rely on it blindly.&lt;br&gt;
&lt;strong&gt;Focus on Fundamentals:&lt;/strong&gt; Ensure a solid grasp of core software engineering principles—these will always be the foundation of great development, regardless of technological advances.&lt;br&gt;
&lt;strong&gt;Prioritize Continuous Learning:&lt;/strong&gt; Stay current with both AI developments and the latest software engineering techniques. The two will evolve together, and staying informed ensures we remain competitive.&lt;br&gt;
&lt;strong&gt;Foster Collaboration:&lt;/strong&gt; Rather than viewing AI as a threat, embrace it as a partner that can augment human creativity and problem-solving. Together, we can innovate faster and more effectively.&lt;br&gt;
&lt;strong&gt;Stay Cautious with Dependencies:&lt;/strong&gt; AI can automate many tasks, but it’s essential to understand when and where it’s appropriate to rely on it, ensuring we don’t become overly dependent on AI without understanding its limitations.&lt;/p&gt;

&lt;p&gt;The future of software engineering lies in collaboration—human expertise combined with AI-powered efficiency. By embracing this partnership thoughtfully, we can unlock new possibilities, drive innovation, and build software that is not only more efficient but also more impactful.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;PS: AI tools helped a lot in writing this post: making it sound more professional, using a passive tone, and generating the header image 😉&lt;/em&gt;&lt;/p&gt;

</description>
      <category>githubcopilot</category>
      <category>beginners</category>
      <category>softwareengineering</category>
      <category>ai</category>
    </item>
  </channel>
</rss>
