<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Gbenga Kusade</title>
    <description>The latest articles on DEV Community by Gbenga Kusade (@jagkush).</description>
    <link>https://dev.to/jagkush</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F666408%2Fd2ebf044-0223-42eb-a9df-4bd6316b4bca.jpeg</url>
      <title>DEV Community: Gbenga Kusade</title>
      <link>https://dev.to/jagkush</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/jagkush"/>
    <language>en</language>
    <item>
      <title>SRE in Action: Understanding How Real Teams Use SLOs, SLIs, and Error Budgets to Stay Reliable Through Case Studies - Part 1</title>
      <dc:creator>Gbenga Kusade</dc:creator>
      <pubDate>Sun, 16 Nov 2025 13:33:26 +0000</pubDate>
      <link>https://dev.to/jagkush/sre-in-action-understanding-how-real-teams-use-slos-slis-and-error-budgets-to-stay-reliable-27k6</link>
      <guid>https://dev.to/jagkush/sre-in-action-understanding-how-real-teams-use-slos-slis-and-error-budgets-to-stay-reliable-27k6</guid>
      <description>&lt;p&gt;When people talk about Site Reliability Engineering (SRE), they often share abstract principles about SLIs, SLOs, and error budgets. But here's the problem: understanding the concepts isn't the same as knowing how to apply them.&lt;/p&gt;

&lt;p&gt;The truth is, reliability challenges look radically different depending on where you sit. This article presents two SRE implementations from completely different perspectives, as a step-by-step walkthrough for beginners:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;For startups (CompanyA):&lt;/strong&gt; it's about moving fast without breaking everything as you scale.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;For enterprises (CompanyB):&lt;/strong&gt; it's about coordinating dozens of teams who can't agree on what "reliable" even means.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Both need SRE principles, but their implementations couldn't be more different.&lt;/p&gt;

&lt;p&gt;Let's dive in.&lt;/p&gt;




&lt;h2&gt;
  
  
  CASE 1: How a FinTech Startup Moved from Firefighting to Measurable Reliability
&lt;/h2&gt;




&lt;h2&gt;
  
  
  What You Will Learn
&lt;/h2&gt;

&lt;p&gt;By the end of this case study, you will understand:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How to identify what metrics actually matter to users (SLIs)&lt;/li&gt;
&lt;li&gt;How to set realistic reliability targets (SLOs)&lt;/li&gt;
&lt;li&gt;What an error budget is and why it is your secret weapon&lt;/li&gt;
&lt;li&gt;How to balance shipping features with maintaining reliability&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;No complex theory, just a startup's journey from chaos to confidence.&lt;/p&gt;




&lt;h2&gt;
  
  
  Meet CompanyA: A Growing Startup with Growing Pains
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;CompanyA&lt;/strong&gt; is a x-year-old fintech startup providing digital wallets and payment APIs to small businesses across Africa. They recently crossed 1 million users, exciting news! But with growth came pain.&lt;/p&gt;

&lt;h3&gt;
  
  
  CompanyA Tech Stack
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Frontend:&lt;/strong&gt; React web app&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Backend:&lt;/strong&gt; Containerized services on AWS ECS&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Database:&lt;/strong&gt; PostgreSQL on RDS&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Infrastructure:&lt;/strong&gt; API Gateway + CloudFront CDN&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  The Problem
&lt;/h3&gt;

&lt;p&gt;During high-traffic periods (Black Friday, salary week, etc.), things started breaking:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Payment success rates dropped to 96%&lt;/li&gt;
&lt;li&gt;Users complained: "Transfers hang for minutes!"&lt;/li&gt;
&lt;li&gt;Engineers were burning out from constant alerts&lt;/li&gt;
&lt;li&gt;Every issue felt equally urgent&lt;/li&gt;
&lt;li&gt;No one could agree on what "reliable" meant&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If this sounds familiar, let's walk through how CompanyA fixed it, step by step.&lt;/p&gt;




&lt;h2&gt;
  
  
  Step 1: Understanding SLIs (Service Level Indicators)
&lt;/h2&gt;

&lt;h3&gt;
  
  
  What is an SLI?
&lt;/h3&gt;

&lt;p&gt;Think of SLIs as the vital signs of your service. Just like a doctor checks your heart rate and blood pressure, SLIs tell you what your users are actually experiencing.&lt;/p&gt;

&lt;h3&gt;
  
  
  CompanyA's Journey: Finding What Matters
&lt;/h3&gt;

&lt;p&gt;The team started by asking: &lt;em&gt;What does success look like from a user's perspective?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;They mapped out the critical user journey:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdi7h6c7qbrnk8qwv9i3g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdi7h6c7qbrnk8qwv9i3g.png" alt="user journey" width="800" height="112"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For each step, they asked: &lt;em&gt;What metric shows if this step is working well?&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  The SLIs They Chose
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwlx0k9acl596zx3u4fo2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwlx0k9acl596zx3u4fo2.png" alt="chosen SLI" width="800" height="191"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Why 95th Percentile?
&lt;/h3&gt;

&lt;p&gt;Instead of looking at average response time (which hides problems), the 95th percentile shows: &lt;em&gt;95% of users experience this speed or better.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;For Example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Average latency: 1.5s (looks good!)&lt;/li&gt;
&lt;li&gt;95th percentile: 5s (problem! 5% of users wait too long)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The 95th percentile catches issues that averages hide.&lt;/p&gt;
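&lt;p&gt;This contrast is easy to reproduce. Below is a minimal Python sketch (using made-up latency samples and a simple nearest-rank percentile, not CompanyA's actual data or tooling) showing how a slow tail barely moves the average but dominates the 95th percentile:&lt;/p&gt;

```python
# Hypothetical latency samples in seconds: 94 fast requests, 6 slow ones.
latencies = [1.3] * 94 + [5.0] * 6

def average(samples):
    return sum(samples) / len(samples)

def percentile(samples, pct):
    # Nearest-rank percentile: the value at the pct-th position
    # of the sorted samples.
    ordered = sorted(samples)
    rank = max(0, int(round(pct / 100 * len(ordered))) - 1)
    return ordered[rank]

print(f"Average latency: {average(latencies):.2f}s")     # 1.52s (looks fine)
print(f"95th percentile: {percentile(latencies, 95)}s")  # 5.0s (the real story)
```

&lt;p&gt;In production you would pull these numbers from your metrics backend (for example, a Prometheus histogram) rather than computing them by hand, but the lesson is the same: report percentiles, not averages.&lt;/p&gt;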




&lt;h2&gt;
  
  
  Step 2: Setting SLOs (Service Level Objectives)
&lt;/h2&gt;

&lt;h3&gt;
  
  
  What's an SLO?
&lt;/h3&gt;

&lt;p&gt;An SLO is your reliability target: a specific, measurable goal you commit to internally. It answers: &lt;em&gt;How reliable should this service be?&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  CompanyA's Approach: Data-Driven Targets
&lt;/h3&gt;

&lt;p&gt;They did not guess. They analyzed 3 months of real data to see:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What reliability they were currently achieving&lt;/li&gt;
&lt;li&gt;Where users dropped off&lt;/li&gt;
&lt;li&gt;What was realistically achievable&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here is what they decided:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnivbnaf5tip6391gfr74.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnivbnaf5tip6391gfr74.png" alt="decided SLO" width="604" height="99"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Critical Decision: Why Not 100%?
&lt;/h3&gt;

&lt;p&gt;CompanyA learned that chasing 100% is a trap:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It is impossibly expensive&lt;/li&gt;
&lt;li&gt;It slows innovation to a crawl&lt;/li&gt;
&lt;li&gt;Real-world systems have dependencies that fail&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Instead, they accepted 0.1% failure (about 43 minutes of downtime per month). This is not giving up; it is being realistic.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Golden Question
&lt;/h3&gt;

&lt;p&gt;When setting SLOs, ask yourself:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What is the minimum reliability that keeps users happy and the business healthy?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Set the bar too low -&amp;gt; users leave. Set the bar too high -&amp;gt; you never ship features.&lt;/p&gt;




&lt;h2&gt;
  
  
  Step 3: Understanding Error Budgets
&lt;/h2&gt;

&lt;h3&gt;
  
  
  What is an Error Budget?
&lt;/h3&gt;

&lt;p&gt;This is the game-changing concept. Your error budget is the amount of unreliability you can afford before breaking your SLO.&lt;/p&gt;

&lt;p&gt;Think of it like this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;SLO says: "Be available 99.9% of the time"&lt;/li&gt;
&lt;li&gt;That means you can be down 0.1% of the time&lt;/li&gt;
&lt;li&gt;That 0.1% is your error budget&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  CompanyA's Error Budget Calculation
&lt;/h3&gt;

&lt;p&gt;For their 99.9% availability SLO over 30 days:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Total time in month = 30 days × 24 hours × 60 minutes = 43,200 minutes
Allowed downtime (0.1%) = 43.2 minutes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is their error budget: 43.2 minutes of downtime per month.&lt;/p&gt;
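&lt;p&gt;The same arithmetic generalizes to any availability target. Here is a small Python helper (an illustrative sketch, not CompanyA's actual code) that computes the monthly error budget for a given SLO:&lt;/p&gt;

```python
def error_budget_minutes(slo, days=30):
    """Allowed downtime in minutes for an availability SLO over a window."""
    total_minutes = days * 24 * 60          # e.g. 43,200 for a 30-day month
    return round(total_minutes * (1 - slo), 1)

print(error_budget_minutes(0.999))   # 43.2 -> CompanyA's budget
print(error_budget_minutes(0.9999))  # 4.3  -> one more nine, 10x less room
```

&lt;p&gt;Notice how each extra "nine" shrinks the budget by an order of magnitude; this is why targets beyond 99.9% get expensive so quickly.&lt;/p&gt;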

&lt;h3&gt;
  
  
  The Policy That Changed Everything
&lt;/h3&gt;

&lt;p&gt;CompanyA created a simple rule:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;If the Error budget &amp;gt; 50% remaining -&amp;gt; Ship new features confidently

If the Error budget is 25-50% remaining  -&amp;gt; Review what's burning the budget, slow down risky changes

If the Error budget is &amp;lt; 25% remaining -&amp;gt; FREEZE new features, focus only on reliability

If the Error budget is exhausted (100% used) -&amp;gt; Complete feature freeze until the budget recovers the next month
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
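&lt;p&gt;The policy above is simple enough to encode directly. This sketch is illustrative (the thresholds mirror the rule; the function and return values are hypothetical, not CompanyA's code) and shows how a release gate might consult the remaining budget:&lt;/p&gt;

```python
def release_policy(budget_remaining):
    """Map the remaining error budget (fraction, 0.0-1.0) to a release decision."""
    if budget_remaining > 0.50:
        return "ship"         # ship new features confidently
    if budget_remaining >= 0.25:
        return "slow down"    # review what's burning the budget
    if budget_remaining > 0.0:
        return "freeze"       # reliability work only
    return "full freeze"      # wait for the budget to recover next month

print(release_policy(0.8))  # ship
print(release_policy(0.3))  # slow down
print(release_policy(0.1))  # freeze
```

&lt;p&gt;A check like this could run in CI before a deploy, turning the policy from a document into an enforced gate.&lt;/p&gt;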



&lt;h3&gt;
  
  
  Why This Matters
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7f7vq7d4vm0rv2sb4h6y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7f7vq7d4vm0rv2sb4h6y.png" alt="why error budget matters" width="790" height="106"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The error budget gave engineers objective authority to say "not now" when reliability was at risk.&lt;/p&gt;




&lt;h2&gt;
  
  
  Step 4: Building Visibility with Dashboards
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Making It Real: The Dashboard
&lt;/h3&gt;

&lt;p&gt;CompanyA built a simple Grafana dashboard that everyone (engineers, product managers, executives) could understand:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxsb611hhrk2skjvdqk4z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxsb611hhrk2skjvdqk4z.png" alt="dashboard" width="509" height="373"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  The Architecture Behind It
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbhqbt3iygx6n6r468nhr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbhqbt3iygx6n6r468nhr.png" alt="arch behind" width="406" height="432"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  What Made This Dashboard Effective
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Single source of truth:&lt;/strong&gt; No more "it works on my machine"&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Real-time visibility:&lt;/strong&gt; See problems as they happen&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Clear status:&lt;/strong&gt; Green/Yellow/Red, no ambiguity&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Actionable alerts:&lt;/strong&gt; Only fires when SLOs are actually at risk&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  Step 5: Using Data to Drive Improvements
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Problem 1: Slow API Response Times
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;What the data showed:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;P95 latency: 3.2s (breaching the 2.5s SLO)&lt;/li&gt;
&lt;li&gt;Happened during peak traffic&lt;/li&gt;
&lt;li&gt;Correlated with database query spikes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Root cause investigation:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Database connection pool maxing out&lt;/li&gt;
&lt;li&gt;Repeated queries for the same data&lt;/li&gt;
&lt;li&gt;No caching layer&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Fixes implemented:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Increased connection pool size&lt;/li&gt;
&lt;li&gt;Added Redis caching for frequent queries&lt;/li&gt;
&lt;li&gt;Implemented query result pagination&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Result:&lt;/strong&gt; P95 latency dropped to 1.8s&lt;/p&gt;

&lt;h3&gt;
  
  
  Problem 2: Success Rate Drops During Promotions
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;What the data showed:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Success rate dropped to 96% during Black Friday&lt;/li&gt;
&lt;li&gt;Error budget consumed 40% in one week&lt;/li&gt;
&lt;li&gt;Alerts everywhere&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Root cause investigation:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;System couldn't handle 5x normal traffic&lt;/li&gt;
&lt;li&gt;No load testing before promotional launches&lt;/li&gt;
&lt;li&gt;Services crashed under load&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Fixes implemented:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Added load testing to the CI/CD pipeline&lt;/li&gt;
&lt;li&gt;Implemented auto-scaling based on request rate&lt;/li&gt;
&lt;li&gt;Added circuit breakers to prevent cascading failures&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Result:&lt;/strong&gt; The next promotion maintained a 99.9% success rate&lt;/p&gt;

&lt;h3&gt;
  
  
  Problem 3: Alert Fatigue
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;What the data showed:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;On-call engineers getting 50+ pages per week&lt;/li&gt;
&lt;li&gt;80% of alerts resolved themselves&lt;/li&gt;
&lt;li&gt;Team morale was suffering&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Root cause investigation:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Alerts triggered on any spike, not SLO breaches&lt;/li&gt;
&lt;li&gt;No distinction between warning and critical&lt;/li&gt;
&lt;li&gt;Noisy metrics creating false positives&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Fixes implemented:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Rewrote Prometheus alerting rules to focus on SLO breaches&lt;/li&gt;
&lt;li&gt;Added "sustained for 10 minutes" threshold&lt;/li&gt;
&lt;li&gt;Differentiated between "watch" and "act now" alerts&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Result:&lt;/strong&gt; Pages reduced by 65%, MTTR improved by 40% &lt;/p&gt;
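&lt;p&gt;The "sustained for 10 minutes" idea is worth spelling out. The sketch below uses plain Python rather than CompanyA's actual Prometheus rules (names and sample data are illustrative): it pages only when the success-rate SLI stays below the SLO for a full window, so one-minute blips stay quiet:&lt;/p&gt;

```python
def should_page(samples, slo=0.999, sustained_minutes=10):
    """Page only if every one of the last `sustained_minutes` one-minute
    success-rate samples is below the SLO, not on a single spike."""
    if len(samples) < sustained_minutes:
        return False
    return all(rate < slo for rate in samples[-sustained_minutes:])

blip = [0.9995] * 9 + [0.95] + [0.9995] * 5   # one bad minute
sustained = [0.998] * 12                       # twelve bad minutes

print(should_page(blip))       # False -> stay quiet
print(should_page(sustained))  # True  -> wake someone up
```

&lt;p&gt;In Prometheus the same effect typically comes from a `for:` clause on the alerting rule; the logic is identical.&lt;/p&gt;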




&lt;h2&gt;
  
  
  Step 6: Understanding SLAs (Service Level Agreements)
&lt;/h2&gt;

&lt;h3&gt;
  
  
  What's an SLA?
&lt;/h3&gt;

&lt;p&gt;An SLA is your external promise to customers, usually with financial consequences if you break it.&lt;/p&gt;

&lt;h3&gt;
  
  
  CompanyA's SLA Design
&lt;/h3&gt;

&lt;p&gt;They created customer-facing SLAs backed by internal SLOs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Internal SLO: 99.9% availability
Customer SLA: 99.5% availability (with buffer!)

Why the buffer?
- Protects against measurement differences  
- Gives room for maintenance windows
- SLO breach doesn't automatically mean SLA breach
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  The Customer Agreement
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;CompanyA Payment API - Service Level Agreement

Availability Guarantee: 99.5% uptime monthly
Sample Credit Structure:
&lt;span class="p"&gt;-&lt;/span&gt; 99.0-99.49% uptime → 10% credit
&lt;span class="p"&gt;-&lt;/span&gt; 98.0-98.99% uptime → 25% credit  
&lt;span class="p"&gt;-&lt;/span&gt; Below 98.0% uptime → 50% credit

Exclusions:
&lt;span class="p"&gt;-&lt;/span&gt; Scheduled maintenance (announced 48hrs ahead)
&lt;span class="p"&gt;-&lt;/span&gt; Customer's infrastructure issues
&lt;span class="p"&gt;-&lt;/span&gt; Force majeure events
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
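&lt;p&gt;The credit tiers translate directly into code. Here is a minimal sketch (the tier boundaries come from the sample agreement above; the function name and shape are hypothetical):&lt;/p&gt;

```python
def sla_credit(uptime_pct):
    """Service credit (%) owed under the sample SLA tiers."""
    if uptime_pct >= 99.5:
        return 0     # guarantee met, no credit owed
    if uptime_pct >= 99.0:
        return 10
    if uptime_pct >= 98.0:
        return 25
    return 50

print(sla_credit(99.7))  # 0  -> within the 99.5% guarantee
print(sla_credit(99.2))  # 10
print(sla_credit(97.5))  # 50
```

&lt;p&gt;Note that with a 99.9% internal SLO, CompanyA can miss its SLO by a wide margin before any credit is owed; that is the buffer doing its job.&lt;/p&gt;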



&lt;h3&gt;
  
  
  Key Lesson
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;Your SLAs should be less strict than your SLOs.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This buffer means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You have time to fix issues before customers are affected&lt;/li&gt;
&lt;li&gt;You're not paying out credits for every tiny breach&lt;/li&gt;
&lt;li&gt;You can meet customer expectations consistently&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  The Results: 6 Months Later
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Metrics
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fir7zyc3k3wsmcs8k4kvm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fir7zyc3k3wsmcs8k4kvm.png" alt="the metrics" width="490" height="145"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Cultural Wins
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmsj3li222para87yoywr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmsj3li222para87yoywr.png" alt="Cultural wins" width="697" height="124"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Key Takeaways for Your Team
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Start Small:&lt;/strong&gt; You don't need to instrument everything. Pick one critical service and one user journey.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use Real Data:&lt;/strong&gt; Don't guess at SLOs. Look at 2-3 months of actual performance and user behavior.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Your Error Budgets Are Power:&lt;/strong&gt; They transform reliability from a vague concept into something you can negotiate with data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dashboards Create Alignment:&lt;/strong&gt; When everyone sees the same numbers, conversations shift from blame to improvement.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SLAs Need Buffers:&lt;/strong&gt; Your internal targets (SLOs) should be stricter than your customer promises (SLAs).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Perfect Is the Enemy of Reliable:&lt;/strong&gt; Chasing 100% uptime kills innovation. Accept some failure, measure it, and stay within budget.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Want To Try This With Your Team?
&lt;/h2&gt;

&lt;p&gt;The template below can help:&lt;/p&gt;

&lt;h3&gt;
  
  
  Week 1: Define
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Pick one critical service&lt;/li&gt;
&lt;li&gt;Map the user journey&lt;/li&gt;
&lt;li&gt;Choose 2-3 key SLIs&lt;/li&gt;
&lt;li&gt;Pull historical data&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Week 2: Measure
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Set realistic SLOs based on data&lt;/li&gt;
&lt;li&gt;Calculate error budgets&lt;/li&gt;
&lt;li&gt;Set up basic monitoring&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Week 3: Monitor
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Build a simple dashboard&lt;/li&gt;
&lt;li&gt;Review daily: Are we within SLOs?&lt;/li&gt;
&lt;li&gt;Document what's consuming the budget&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Week 4: Improve
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Hold a team review&lt;/li&gt;
&lt;li&gt;Pick the top budget-burner&lt;/li&gt;
&lt;li&gt;Plan improvements&lt;/li&gt;
&lt;li&gt;Iterate!&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Resources to Go Deeper
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://sre.google/sre-book/table-of-contents/" rel="noopener noreferrer"&gt;Google SRE Book&lt;/a&gt; - Free, comprehensive&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://sre.google/workbook/table-of-contents/" rel="noopener noreferrer"&gt;Google SRE Workbook&lt;/a&gt; - Practical exercises&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://prometheus.io/docs/practices/naming/" rel="noopener noreferrer"&gt;Prometheus Best Practices&lt;/a&gt; - Metrics collection&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://grafana.com/grafana/plugins/grafana-slo-app/" rel="noopener noreferrer"&gt;Grafana SLO Plugin&lt;/a&gt; - Dashboard tooling&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Up Next!
&lt;/h2&gt;

&lt;p&gt;In the next (and final) case study, we will explore how &lt;strong&gt;CompanyB&lt;/strong&gt;, a large telecom organization with both legacy and modern systems, applied these same principles across multiple teams and vendors. &lt;/p&gt;

&lt;p&gt;Watch out for it! &lt;/p&gt;




&lt;p&gt;&lt;em&gt;For your questions, thoughts, additions, or suggestions, please share in the comments section!&lt;/em&gt; &lt;/p&gt;

</description>
      <category>sre</category>
      <category>devops</category>
      <category>observability</category>
      <category>fintech</category>
    </item>
    <item>
      <title>Breaking Things on Purpose: What I Learned from Netflix’s Chaos Monkey</title>
      <dc:creator>Gbenga Kusade</dc:creator>
      <pubDate>Mon, 06 Oct 2025 07:56:16 +0000</pubDate>
      <link>https://dev.to/jagkush/breaking-things-on-purpose-what-i-learned-from-netflixs-chaos-monkey-2f8p</link>
      <guid>https://dev.to/jagkush/breaking-things-on-purpose-what-i-learned-from-netflixs-chaos-monkey-2f8p</guid>
      <description>&lt;p&gt;When I first heard that Netflix built a tool designed to deliberately crash their own servers, I thought it was a joke. For most of us, system reliability means avoiding failures at all costs (patching bugs, adding monitoring, and building redundancy, etc.). But Netflix took a counterintuitive, almost radical approach: they built a tool that intentionally breaks their own systems.&lt;/p&gt;

&lt;p&gt;That tool is called &lt;strong&gt;&lt;a href="https://netflix.github.io/chaosmonkey/" rel="noopener noreferrer"&gt;Chaos Monkey&lt;/a&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdmha59s2srd26j97vng2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdmha59s2srd26j97vng2.png" alt=" " width="225" height="227"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Chaos Monkey?
&lt;/h2&gt;

&lt;p&gt;Chaos Monkey is part of Netflix’s "Simian Army," a suite of tools designed to test system resilience. Its job is deceptively simple: to randomly terminate production instances and virtual machines.&lt;/p&gt;

&lt;p&gt;Imagine running critical services in the cloud, and without warning, one of your servers vanishes. That is Chaos Monkey in action. It sounds brutal, and yes, it is! But here is the catch: if your system can survive a random server failure in the middle of a busy workday, that is a strong sign you are on the right track toward true resilience.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Would You Break Your Own System?
&lt;/h2&gt;

&lt;p&gt;In the real world, failures are inevitable. Servers crash, network cables get unplugged, and entire cloud regions can go dark. The absolute worst time to discover you are unprepared is during an actual crisis.&lt;/p&gt;

&lt;p&gt;By deliberately injecting failure, Netflix forced its engineers to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Design systems that inherently tolerate instance loss.&lt;/li&gt;
&lt;li&gt;Write and practice recovery playbooks.&lt;/li&gt;
&lt;li&gt;Build genuine confidence in their infrastructure.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In essence, Chaos Monkey transformed the fearful question, "What if it fails?" into a confident statement: "When it fails, we are ready."&lt;/p&gt;

&lt;h2&gt;
  
  
  The Core Lesson for System Reliability
&lt;/h2&gt;

&lt;p&gt;At the heart of reliability engineering is accepting that failure is not an "if," but a "when." The true measure of a system is not whether it never breaks, but how gracefully it responds when it does.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Chaos Monkey embodies this mindset by:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Testing Assumptions: Do we truly have redundancy, or just a diagram that says we do?&lt;/li&gt;
&lt;li&gt;Exposing Weak Spots: What happens when a critical dependency suddenly vanishes?&lt;/li&gt;
&lt;li&gt;Forcing Resilience by Design: Teams can no longer hope for the best; they must build for the worst.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It is one thing to claim your system is reliable. Chaos Monkey demands &lt;strong&gt;proof&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Should You Unleash the Monkey?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you are operating in the cloud, for example, the short answer is "&lt;strong&gt;not immediately&lt;/strong&gt;". You do not start with Chaos Monkey on day one. First, you need a solid foundation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Comprehensive monitoring and alerting.&lt;/li&gt;
&lt;li&gt;Automated scaling and recovery processes.&lt;/li&gt;
&lt;li&gt;Well-practiced incident response procedures.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once these fundamentals are in place, a tool like Chaos Monkey becomes the ultimate test, validating your resilience under real-world pressure.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;System reliability is not about building a fortress that never falls. It is about building a system that can take a hit, bounce back, and keep running. Netflix's Chaos Monkey is the ultimate expression of this philosophy.&lt;/p&gt;

&lt;p&gt;Instead of fearing failure, they embraced it, trained for it, and emerged stronger. It is a powerful lesson for any system we build.&lt;/p&gt;

&lt;p&gt;So, would you dare unleash Chaos Monkey on your production stack?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://netflix.github.io/chaosmonkey/" rel="noopener noreferrer"&gt;https://netflix.github.io/chaosmonkey/&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.oreilly.com/library/view/designing-data-intensive-applications/9781491903063/" rel="noopener noreferrer"&gt;https://www.oreilly.com/library/view/designing-data-intensive-applications/9781491903063/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>chaosengineering</category>
      <category>sre</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Jenkins - Auto-build your dockerised local Environment on code commit</title>
      <dc:creator>Gbenga Kusade</dc:creator>
      <pubDate>Sun, 31 Dec 2023 16:57:41 +0000</pubDate>
      <link>https://dev.to/jagkush/docker-setup-a-local-js-and-python-development-environment-part-2-bbh</link>
      <guid>https://dev.to/jagkush/docker-setup-a-local-js-and-python-development-environment-part-2-bbh</guid>
      <description>&lt;p&gt;Previously, we &lt;a href="https://dev.to/jagkush/docker-setup-a-local-js-and-python-development-environment-2ffc"&gt;set up a local JS and Python environment using docker and docker-compose&lt;/a&gt;. In this post, we will automate the build process in Docker using Jenkins. This will allow us focus more on our code while developing.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Please read the first part of this tutorial &lt;a href="https://dev.to/jagkush/docker-setup-a-local-js-and-python-development-environment-2ffc"&gt;here&lt;/a&gt; if you haven't.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Let's get started!&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Jenkins?
&lt;/h2&gt;

&lt;p&gt;Jenkins is an open-source automation server used to build, deploy, and automate almost any project. It is one of the most widely used continuous integration and continuous delivery (CI/CD) tools, and its ability to distribute work across several nodes is one of its most powerful features.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting up Jenkins
&lt;/h2&gt;

&lt;p&gt;To follow along, clone the repository:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/jagkt/local_dev_env.git
cd local_dev_env
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You should now have the following project structure to start with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;.
├── node
│   ├── index.js
│   └── package.json
├── py
│   ├── Dockerfile
│   ├── requirements.txt
│   └── main.py
├── LICENSE
├── Makefile
├── README.md
├── Jenkinsfile
└── docker-compose.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Here, a Jenkinsfile is introduced to the project directory; it is where we declare the declarative pipeline script that tells Jenkins how to run the build. The Jenkins service is also added to the docker-compose.yml file with the following code.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;jenkins&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;jenkins/jenkins:lts&lt;/span&gt;
    &lt;span class="na"&gt;privileged&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="na"&gt;user&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;root&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;8080:8080&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;50000:50000&lt;/span&gt;
    &lt;span class="na"&gt;container_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;jenkins&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;/home/${whoami}/jenkins:/var/jenkins_home&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;/var/run/docker.sock:/var/run/docker.sock&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;/usr/bin/docker:/usr/bin/docker&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, the Jenkins service is defined from the official LTS image, running as root with &lt;code&gt;privileged: true&lt;/code&gt; so it can drive the host Docker daemon. Ports 8080 (the web UI) and 50000 (inbound agents) in the container are published to the same ports on the host, and the container is named jenkins.&lt;br&gt;&lt;/p&gt;

&lt;p&gt;Finally, /home/${whoami}/jenkins on the host (which will hold the Jenkins configuration) is mapped to /var/jenkins_home in the container. Create this directory on your host first, and change /home/${whoami}/ to your own home directory or to whatever path you created. The host's /var/run/docker.sock and /usr/bin/docker are also mapped into the container at the same paths, which gives the Jenkins container access to the host Docker daemon.&lt;/p&gt;
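As a concrete sketch (the path is an example, substitute your own home directory), preparing the host-side Jenkins home looks like this:

```shell
# Create the directory that will be bind-mounted to /var/jenkins_home.
# "$HOME/jenkins" is an example path - match it to the volume entry in
# your docker-compose.yml.
mkdir -p "$HOME/jenkins"
ls -ld "$HOME/jenkins"
```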

&lt;h3&gt;
  
  
  Now run the Jenkins service
&lt;/h3&gt;

&lt;p&gt;Run docker-compose in the directory where you placed the docker-compose.yml file.&lt;br&gt;
&lt;code&gt;$ docker-compose up -d jenkins&lt;/code&gt;&lt;/p&gt;
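Once the service starts, you can confirm the container is running and peek at its startup log (the container name jenkins comes from the compose file above). This is a guarded sketch that degrades gracefully on machines without Docker:

```shell
# Check the jenkins container's status and tail its log.
if command -v docker >/dev/null 2>&1; then
  STATUS=$(docker ps --filter name=jenkins --format '{{.Status}}' 2>/dev/null)
  echo "jenkins container status: ${STATUS:-not running}"
  docker logs --tail 20 jenkins 2>/dev/null || true
else
  STATUS="docker not installed"
  echo "$STATUS"
fi
```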
&lt;h3&gt;
  
  
  Allow Jenkins to use docker-compose commands
&lt;/h3&gt;

&lt;p&gt;Because we will be using the docker-compose command in our Jenkins pipeline script, we need to ensure Jenkins can run the commands.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;On your host, exec into the running Jenkins container.
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker exec -it jenkins /bin/bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;Run the following commands inside the Jenkins container.
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;groupadd -g 997 docker
gpasswd -a jenkins docker
curl -L https://github.com/docker/compose/releases/download/1.29.2/docker-compose-`uname -s`-`uname -m` &amp;gt; /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose

#confirm the docker-compose is installed
docker-compose version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The output of the last command should be similar to this.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3wutxx75gbli902yq44f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3wutxx75gbli902yq44f.png" alt="docker compose installation check"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;h3&gt;
  
  
  Jenkins Configuration
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Now point a web browser at port 8080 on your host system and use the actual IP address or domain name for the server you are using Jenkins on. In our case &lt;code&gt;http://localhost:8080&lt;/code&gt;.
A page opens prompting you to Unlock Jenkins. Obtain the required administrator password in the next step.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6zo0marxitsrgemd2vy1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6zo0marxitsrgemd2vy1.png" alt="Jenkins unlock page"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Obtain the default Jenkins unlock password by opening the terminal and running the following command:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;docker exec jenkins cat /var/jenkins_home/secrets/initialAdminPassword&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Because Jenkins runs in a container here, the secret lives under the mapped volume, so on the host it is at /home/${whoami}/jenkins/secrets/initialAdminPassword. It is also printed in the container's startup log, which you can read with &lt;code&gt;docker logs jenkins&lt;/code&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The system returns an alphanumeric code. Enter that code in the Administrator password field and click Continue.&lt;/li&gt;
&lt;li&gt;The setup prompts to either install suggested plugins or Select plugins to install. It’s fine to simply install the suggested plugins.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyeprmh20qgea1jgfewhp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyeprmh20qgea1jgfewhp.png" alt="Jenkins plugins installation"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The next step is Create First Admin User. Enter the credentials you want to use for your Jenkins administrator, then click Save and Continue. Here, a user named jenkins is created.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F37k3f0i9nzdexo75elcn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F37k3f0i9nzdexo75elcn.png" alt="create Jenkins admin user"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;click “Save and Continue”&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpcj5prnxpmdpzziklogg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpcj5prnxpmdpzziklogg.png" alt="save and continue"&gt;&lt;/a&gt; &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;click “Start using Jenkins”&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv0jjurcb7eolx1992p2f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv0jjurcb7eolx1992p2f.png" alt="start using Jenkins"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Create a Jenkins project
&lt;/h3&gt;

&lt;p&gt;We need to create a pipeline that builds our images automatically. From the Jenkins dashboard:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;select "New Item", on the pop-up window, enter the project name in the "Item Name" textboard, then select Pipeline and click "OK"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2uwajxf1p6qkrozbzgs1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2uwajxf1p6qkrozbzgs1.png" alt="Dashboard view"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;On the project's configuration page, under the "General" tab, scroll down to "Build Triggers" and select "GitHub hook trigger for GITScm polling".&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxavxyfztz1sr3t8479fa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxavxyfztz1sr3t8479fa.png" alt="configuration page-build trigger"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Scroll down to the Pipeline section, under "Definition", select "Pipeline script from SCM", and under SCM, select "Git".&lt;/li&gt;
&lt;li&gt;fill in the "repository URL" and add your GitHub credential (you can generate a token on GitHub for authentication)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj3u2asj5qvwcwifodxh4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj3u2asj5qvwcwifodxh4.png" alt="Configuration page-pipeline"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Under "Branches to build", specify the branch name (in this case, the main branch).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fis4lcvj1a85diockoe6o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fis4lcvj1a85diockoe6o.png" alt="configuration page-branch"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Scroll to "Script Path" and ensure "Jenkinsfile" is specified.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg2qu70xwaw9zgz3ok653.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg2qu70xwaw9zgz3ok653.png" alt="configuration page-script path"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Finally, select "Manage Jenkins" on your dashboard and select "Plugins". Under "Available Plugins", search for "Docker Pipeline" and install it.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvxw6wji2uqbfqvabvl9v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvxw6wji2uqbfqvabvl9v.png" alt="manage Jenkins- install plugins"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Jenkinsfile
&lt;/h3&gt;

&lt;p&gt;The Jenkinsfile we added to the project directory contains the declarative pipeline script below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;pipeline {&lt;/span&gt;

    &lt;span class="s"&gt;agent any&lt;/span&gt;

    &lt;span class="s"&gt;stages {&lt;/span&gt;
        &lt;span class="s"&gt;stage('Checkout Source') {&lt;/span&gt;
            &lt;span class="s"&gt;steps {&lt;/span&gt;
                &lt;span class="s"&gt;git branch&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;main&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;'&lt;/span&gt;&lt;span class="err"&gt;,&lt;/span&gt; &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;https://github.com/jagkt/local_dev_env.git'&lt;/span&gt;
            &lt;span class="err"&gt;}&lt;/span&gt;
        &lt;span class="err"&gt;}&lt;/span&gt;

        &lt;span class="s"&gt;stage('Docker Compose Build image') {&lt;/span&gt;
            &lt;span class="s"&gt;steps {&lt;/span&gt;
                    &lt;span class="s"&gt;sh "docker-compose build"&lt;/span&gt;
                    &lt;span class="s"&gt;sh "docker-compose up -d py_app"&lt;/span&gt;
                    &lt;span class="s"&gt;sh "docker-compose up -d node_app"&lt;/span&gt;
            &lt;span class="s"&gt;}&lt;/span&gt;
        &lt;span class="s"&gt;}&lt;/span&gt;
    &lt;span class="s"&gt;}&lt;/span&gt; 
&lt;span class="err"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;pipeline&lt;/code&gt; is Declarative Pipeline-specific syntax that defines a "block" containing all content and instructions for executing the entire Pipeline.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;agent&lt;/code&gt; is Declarative Pipeline-specific syntax that instructs Jenkins to allocate an executor (on a node) and workspace for the entire Pipeline. Here, any agent is specified.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;stage&lt;/code&gt; is a syntax block that describes a stage of this Pipeline (&lt;code&gt;stage&lt;/code&gt; blocks are optional in Scripted Pipeline syntax). Here the build has two stages: source checkout and image build.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;steps&lt;/code&gt; is Declarative Pipeline-specific syntax that describes the steps to be run in this stage.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;sh&lt;/code&gt; is a Pipeline step (provided by the Pipeline: Nodes and Processes plugin) that executes the given shell command. We first build our images, then run their containers.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Read more about Jenkins' declarative pipeline &lt;a href="https://www.jenkins.io/doc/book/pipeline/" rel="noopener noreferrer"&gt;here&lt;/a&gt;. Also, be sure to set the checkout URL to your own GitHub project repository and specify which branch to build from.&lt;/p&gt;
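Before handing these steps to Jenkins, you can dry-run the same commands the pipeline's sh steps execute from the project directory to catch compose errors early. This assumes Docker and docker-compose are installed and that the compose file defines py_app and node_app, as in the example repository:

```shell
# Same commands the pipeline runs; guarded so the script still completes
# on machines without docker-compose.
if command -v docker-compose >/dev/null 2>&1; then
  docker-compose build || echo "build failed - is there a docker-compose.yml here?"
  docker-compose up -d py_app node_app || true
  docker-compose ps || true
else
  echo "docker-compose not installed; skipping dry run"
fi
DRYRUN_DONE=yes
```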

&lt;h3&gt;
  
  
  Automatically Run Build on Code Commit
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;First, we need the Jenkins hook URL. From the Dashboard, click "Manage Jenkins", then click "System".&lt;/li&gt;
&lt;li&gt;Scroll down to the "GitHub" section, click "Advanced" under the GitHub server, and check the box for "Specify another hook URL for GitHub configuration". Click "Save".&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frvjizhcwj2rqlephye7x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frvjizhcwj2rqlephye7x.png" alt="manage jenkins-system"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;As shown above, the URL is &lt;a href="http://localhost:8080/github-webhook/" rel="noopener noreferrer"&gt;http://localhost:8080/github-webhook/&lt;/a&gt;. This will be used to add a webhook to the GitHub project repository. The challenge is: how does GitHub reach the host where our Jenkins container is running?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A quick way to solve this is to make the Jenkins service publicly accessible. I recently wrote an &lt;a href="https://dev.to/jagkush/a-quick-way-to-access-your-local-server-on-the-internet-4kei"&gt;article&lt;/a&gt; on how to quickly set this up using localhost.run.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;Read the article &lt;a href="https://dev.to/jagkush/a-quick-way-to-access-your-local-server-on-the-internet-4kei"&gt;here &lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Obtain the domain you are given after running localhost.run on your host. Say, for example, we have &lt;a href="https://2d70a750dc2554.lhr.life" rel="noopener noreferrer"&gt;https://2d70a750dc2554.lhr.life&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Go to your project repository on GitHub, click "Settings", click "Webhooks", and click "Add webhook".&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Configure as shown below and click "Add Webhook".&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feurmc3lsnx8nlhywsud9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feurmc3lsnx8nlhywsud9.png" alt="Github webhook configuration"&gt;&lt;/a&gt;&lt;br&gt;
Here, we have replaced &lt;a href="http://localhost:8080/" rel="noopener noreferrer"&gt;http://localhost:8080/&lt;/a&gt; with &lt;a href="https://2d70a750dc2554.lhr.life/" rel="noopener noreferrer"&gt;https://2d70a750dc2554.lhr.life/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; a localhost.run domain only stays active for a short time, so you must update the webhook payload URL with the new domain each time for the auto-build to keep working. Alternatively, you can use a service with longer-lived domains.&lt;/p&gt;

&lt;p&gt;Now that the setup is complete, Jenkins will run the build whenever you push code to the main branch of your GitHub repository.&lt;/p&gt;
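GitHub delivers each push event as a POST to your public domain followed by /github-webhook/. As a hypothetical sanity check, you can probe the endpoint with curl before (or after) saving the webhook:

```shell
# Probe the Jenkins webhook endpoint. Any HTTP status code answer means
# the endpoint is alive; the fallback message covers an unreachable Jenkins.
curl -s -o /dev/null -w '%{http_code}\n' --max-time 3 \
  http://localhost:8080/github-webhook/ || echo "Jenkins not reachable"
```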

&lt;p&gt;On your Dashboard, you can see the pipeline you created; click on it to view the build history or run a build manually.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjo53h70iwes0uz8pep6z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjo53h70iwes0uz8pep6z.png" alt="Jenkins build history"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;I hope you now have a solid idea of how to set up Jenkins to automate your local environment build process. In summary, we did the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Created a Jenkins service and allowed it to access the host Docker daemon.&lt;/li&gt;
&lt;li&gt;Configured a Jenkins pipeline job to auto-build whenever we commit code changes to the GitHub repository.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;If you experience difficulty while going through this tutorial, you can drop the challenge(s) faced in the comment section.&lt;/em&gt;&lt;/p&gt;



&lt;h2&gt;
  
  
  Reference
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.cloudbees.com/blog/how-to-install-and-run-jenkins-with-docker-compose" rel="noopener noreferrer"&gt;https://www.cloudbees.com/blog/how-to-install-and-run-jenkins-with-docker-compose&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://devpress.csdn.net/cloudnative/6304d439c67703293080e206.html" rel="noopener noreferrer"&gt;https://devpress.csdn.net/cloudnative/6304d439c67703293080e206.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.edureka.co/community/49753/auto-build-job-jenkins-there-change-code-github-repository" rel="noopener noreferrer"&gt;https://www.edureka.co/community/49753/auto-build-job-jenkins-there-change-code-github-repository&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.liatrio.com/blog/building-with-docker-using-jenkins-pipelines" rel="noopener noreferrer"&gt;https://www.liatrio.com/blog/building-with-docker-using-jenkins-pipelines&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.jenkins.io/doc/pipeline/steps/docker-compose-build-step/" rel="noopener noreferrer"&gt;https://www.jenkins.io/doc/pipeline/steps/docker-compose-build-step/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.docker.com/reference/" rel="noopener noreferrer"&gt;https://docs.docker.com/reference/&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>jenkins</category>
      <category>automation</category>
      <category>docker</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>A quick way to access your local server on the internet</title>
      <dc:creator>Gbenga Kusade</dc:creator>
      <pubDate>Thu, 14 Dec 2023 14:26:42 +0000</pubDate>
      <link>https://dev.to/jagkush/a-quick-way-to-access-your-local-server-on-the-internet-4kei</link>
      <guid>https://dev.to/jagkush/a-quick-way-to-access-your-local-server-on-the-internet-4kei</guid>
      <description>&lt;p&gt;Sometimes you may need to test and market your locally hosted applications on the internet while developing in a local environment. Having a tool that allows you to create a tunnel between your local development environment and a remote server using some tunneling services that enables you to share your work, test and deploy hosted application without the need for public infrastructure comes in handy here.&lt;/p&gt;

&lt;p&gt;You'll see how to quickly access your local webserver over the internet using &lt;a href="https://localhost.run/" rel="noopener noreferrer"&gt;Localhost.run&lt;/a&gt; in this tutorial.&lt;/p&gt;

&lt;h2&gt;
  
  
  Localhost.run
&lt;/h2&gt;

&lt;p&gt;Using localhost.run is as simple as running the following SSH command in your terminal, which connects an internet domain to an application running locally on port 8080:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ssh -R 80:localhost:8080 ssh.localhost.run&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You will be presented with output similar to the following:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnbb9af1i4y49yfwxnacl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnbb9af1i4y49yfwxnacl.png" alt="localhost.run connection output"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Scan the QR code with your mobile phone and your local server will be accessible in your phone's browser.&lt;/p&gt;

&lt;p&gt;You may sometimes receive a permission-denied error due to an SSH key issue. To fix it, do the following:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1.&lt;/strong&gt; Change directory to the current user's home directory: &lt;br&gt;
&lt;code&gt;cd ~&lt;/code&gt;&lt;br&gt;
&lt;strong&gt;2.&lt;/strong&gt; Check whether an "id_rsa.pub" file exists in your ".ssh" folder.&lt;br&gt;
&lt;strong&gt;3.&lt;/strong&gt; If it exists, skip to step 5. If not, run the ssh-keygen command to generate a public SSH key:&lt;br&gt;
&lt;code&gt;ssh-keygen&lt;/code&gt;&lt;br&gt;
&lt;strong&gt;4.&lt;/strong&gt; Follow the on-screen prompts until the key is generated.&lt;br&gt;
&lt;strong&gt;5.&lt;/strong&gt; Add the generated public key to the authorized_keys file, which holds the public keys used for public-key authentication:&lt;br&gt;
&lt;code&gt;cat ~/.ssh/id_rsa.pub &amp;gt;&amp;gt; ~/.ssh/authorized_keys&lt;/code&gt;&lt;br&gt;
&lt;strong&gt;6.&lt;/strong&gt; Run the localhost.run command again:&lt;br&gt;
&lt;code&gt;ssh -R 80:localhost:8080 ssh.localhost.run&lt;/code&gt;&lt;/p&gt;
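The key-fix steps above can be consolidated into one defensive script. This is a sketch: it generates an RSA key non-interactively only when one is missing, which differs slightly from the interactive ssh-keygen run described above:

```shell
# Consolidated key generation and authorization.
cd ~
mkdir -p .ssh && chmod 700 .ssh
if [ ! -f .ssh/id_rsa.pub ]; then
  # -N "" gives an empty passphrase; drop it to be prompted instead.
  ssh-keygen -t rsa -N "" -f .ssh/id_rsa
fi
cat .ssh/id_rsa.pub >> .ssh/authorized_keys
chmod 600 .ssh/authorized_keys
```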

&lt;p&gt;While this approach is straightforward, the free version is time-limited and may not suit longer sessions; you will need to create an account to keep a tunnel up for longer.&lt;/p&gt;

&lt;p&gt;Other services providing similar function include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://ngrok.com" rel="noopener noreferrer"&gt;Ngrok&lt;/a&gt;: This provides about 2hours on the free account but requires account &lt;a href="https://dashboard.ngrok.com/signup" rel="noopener noreferrer"&gt;registration&lt;/a&gt; and adding your &lt;a href="https://dashboard.ngrok.com/get-started/your-authtoken" rel="noopener noreferrer"&gt;authtoken&lt;/a&gt;, and starting it is as simple as running &lt;code&gt;ngrok http 8080&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftxx0l8e9tnbb3m0pj75p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftxx0l8e9tnbb3m0pj75p.png" alt="ngrok connection output"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.cloudflare.com/products/tunnel/" rel="noopener noreferrer"&gt;Cloudflare Tunnel&lt;/a&gt;: requires account registration&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://serveo.net/" rel="noopener noreferrer"&gt;serveo.net&lt;/a&gt; &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;&lt;br&gt;
Exposing a local server to the internet can reduce security and privacy, so use these tunnels cautiously!&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://localhost.run/" rel="noopener noreferrer"&gt;https://localhost.run/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://ngrok.com" rel="noopener noreferrer"&gt;https://ngrok.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.cloudflare.com/products/tunnel/" rel="noopener noreferrer"&gt;https://www.cloudflare.com/products/tunnel/&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>webdev</category>
      <category>tutorial</category>
      <category>productivity</category>
      <category>testing</category>
    </item>
    <item>
      <title>Docker - Setup a local JS and Python Development environment</title>
      <dc:creator>Gbenga Kusade</dc:creator>
      <pubDate>Sat, 02 Dec 2023 12:20:27 +0000</pubDate>
      <link>https://dev.to/jagkush/docker-setup-a-local-js-and-python-development-environment-2ffc</link>
      <guid>https://dev.to/jagkush/docker-setup-a-local-js-and-python-development-environment-2ffc</guid>
      <description>&lt;p&gt;It is difficult to develop locally for modern systems because they typically incorporate various services. The process of setting up a local development environment has drawn concerns from certain developers. One major problem that affects developers is the "this works on my computer" problem, which arises when they develop an application that works well on their local computer but not at all when it is deployed to other environments. These situations make it more difficult to collaborate or deploy effectively.&lt;/p&gt;

&lt;p&gt;This post is for you if you encounter the above problem during the development process. We'll use Docker to set up a local development environment in this tutorial. You will know how to create and configure a local development environment for Node.js and Python by the time you finish reading this post.&lt;/p&gt;



&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Install the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.docker.com/get-docker/" rel="noopener noreferrer"&gt;Docker&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.docker.com/compose/install/" rel="noopener noreferrer"&gt;docker-compose&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://git-scm.com/book/en/v2/Getting-Started-Installing-Git" rel="noopener noreferrer"&gt;git&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Basic knowledge of setting up a NodeJS and Python project&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Example Architectural Design
&lt;/h2&gt;

&lt;p&gt;Suppose we have a set of services with the following architecture.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcxraatwibrmwb2y4ecn8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcxraatwibrmwb2y4ecn8.png" alt="sample architecture design"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As we can see from the diagram, we have:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Node&lt;/strong&gt; - A NodeJS service running on port 5000 &lt;br&gt;
&lt;strong&gt;Py&lt;/strong&gt; - A Python service running on port 8000 &lt;/p&gt;



&lt;h2&gt;
  
  
  Setting it up
&lt;/h2&gt;

&lt;p&gt;We'll establish a basic Python and Node.js service setup as described above. To follow along, clone the repository using the following commands.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

git clone https://github.com/jagkt/local_dev_env.git
&lt;span class="nb"&gt;cd &lt;/span&gt;local_dev_env


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now you should have the following project structure to start with:&lt;/p&gt;

&lt;p&gt;.&lt;br&gt;
├── LICENSE&lt;br&gt;
├── Makefile&lt;br&gt;
├── README.md&lt;br&gt;
├── docker-compose.yml&lt;br&gt;
├── node&lt;br&gt;
│   ├── index.js&lt;br&gt;
│   └── package.json&lt;br&gt;
└── py&lt;br&gt;
    ├── Dockerfile&lt;br&gt;
    ├── main.py&lt;br&gt;
    └── requirements.txt&lt;/p&gt;

&lt;p&gt;To spin up the services, run the script below from the project directory:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

make up  
make run_con #check the services are up and running


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You can now access both the py and node applications from the browser:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;py: http://localhost:8000&lt;br&gt;
node: http://localhost:5000&lt;/code&gt;&lt;/p&gt;
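With the containers up, a quick smoke test from the terminal looks like this (guarded so it still completes when the services are not running yet):

```shell
# Each request prints the service response, or a fallback message if the
# corresponding container is not up.
curl -s --max-time 2 http://localhost:8000/ || echo "py_app not reachable"
echo
curl -s --max-time 2 http://localhost:5000/ || echo "node_app not reachable"
echo
```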

&lt;p&gt;Now, let's dive into the project setup details.&lt;/p&gt;



&lt;h3&gt;
  
  
  Build a Docker Image for the Python Environment
&lt;/h3&gt;

&lt;p&gt;Here, we’ll build a Docker image for the Python environment from scratch, based on the official Python image, with a FastAPI application in it. First, let's create the package requirements file.&lt;/p&gt;

&lt;p&gt;Create a directory named “py” and, inside it, create a requirements.txt file with the dependencies below, which are needed to run the FastAPI application.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

fastapi
uvicorn[standard]


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Next, create a main.py with the FastAPI application code below:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;

&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;fastapi&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;FastAPI&lt;/span&gt;
&lt;span class="n"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;FastAPI&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="nd"&gt;@app.get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;home&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;message&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Hello from py_app!&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nd"&gt;@app.get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/hello/{user}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;greetings&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Hello&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
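&lt;p&gt;Before containerizing anything, note that the two route handlers above are ordinary async functions returning dicts, so you can sanity-check the JSON payloads directly with asyncio. A quick sketch (run outside Docker; the decorators are omitted here since we are not starting a server):&lt;/p&gt;

```python
import asyncio

# The handlers from main.py, minus the FastAPI decorators:
async def home():
    return {"message": "Hello from py_app!"}

async def greetings(user):
    return {"Hello": user}

# Preview the payloads the endpoints will serve once the container runs
print(asyncio.run(home()))
print(asyncio.run(greetings("James")))
```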
&lt;h3&gt;
  
  
  Containerize the Python Environment
&lt;/h3&gt;

&lt;p&gt;To generate a Docker image, we must first create a Dockerfile with the instructions needed to build it. The Docker builder then processes the Dockerfile and produces the Docker image. Finally, a simple docker run command creates a container from that image and launches the Python service.&lt;/p&gt;


&lt;h4&gt;
  
  
  Dockerfile
&lt;/h4&gt;

&lt;p&gt;One way to get our Python code running in a container is to pack it as a Docker image and then run a container based on it. The steps are sketched below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.docker.com%2Fwp-content%2Fuploads%2F2020%2F07%2Fcontainerized-python.png.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.docker.com%2Fwp-content%2Fuploads%2F2020%2F07%2Fcontainerized-python.png.webp" title="Logo Title Text 2" alt="docker build process"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;credit: docker.com&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Now in the same project directory create a Dockerfile file with the following code:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;


&lt;span class="c"&gt;# # Pull the official docker image&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; python:3.11.1-slim&lt;/span&gt;
&lt;span class="c"&gt;# # set work directory&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /home/py/app&lt;/span&gt;
&lt;span class="c"&gt;# # set env variables&lt;/span&gt;
&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; PYTHONDONTWRITEBYTECODE 1 \ &lt;/span&gt;
    PYTHONUNBUFFERED 1 \
    PIP_NO_CACHE_DIR=1
&lt;span class="c"&gt;# # install dependencies&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; ./requirements.txt .&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--upgrade&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; requirements.txt 
&lt;span class="c"&gt;# copy project&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; . /home/py/app&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Let's take a deeper dive into the Dockerfile.&lt;br&gt;
&lt;strong&gt;FROM:&lt;/strong&gt; specifies the slim variant of the Python 3.11 base image on which the Docker image is built&lt;br&gt;
&lt;strong&gt;WORKDIR:&lt;/strong&gt; sets the active directory (/home/py/app) in which all the following commands run.&lt;br&gt;
&lt;strong&gt;ENV&lt;/strong&gt; variables are set to optimize the behavior of Python and pip inside the Docker container.&lt;br&gt;
    -- &lt;strong&gt;PYTHONDONTWRITEBYTECODE=1&lt;/strong&gt; -- prevents Python from writing .pyc files into the image&lt;br&gt;
    -- &lt;strong&gt;PYTHONUNBUFFERED=1&lt;/strong&gt; -- allows statements and log messages to appear immediately&lt;br&gt;
    -- &lt;strong&gt;PIP_DISABLE_PIP_VERSION_CHECK=1&lt;/strong&gt; -- disables the pip version check to reduce run time &amp;amp; log spam&lt;br&gt;
    -- &lt;strong&gt;PIP_NO_CACHE_DIR=1&lt;/strong&gt; -- disables the pip cache to reduce the Docker image size&lt;br&gt;
&lt;strong&gt;COPY:&lt;/strong&gt; copies the requirements.txt file from the host into the container’s WORKDIR&lt;br&gt;
&lt;strong&gt;RUN:&lt;/strong&gt; installs the package dependencies listed in the requirements.txt file&lt;br&gt;
&lt;strong&gt;COPY:&lt;/strong&gt; copies the application source code from the host into the WORKDIR.&lt;/p&gt;
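&lt;p&gt;Note that this Dockerfile defines no default command, so the container relies on a command supplied at run time (as we do below, and as the Compose file does later). If you prefer an image that is runnable on its own, a sketch of an optional addition, assuming uvicorn's default port 8000:&lt;/p&gt;

```dockerfile
# Optional additions (a sketch, not part of the original Dockerfile):
# document the port the app listens on, and set a default start command.
EXPOSE 8000
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
```

&lt;p&gt;Any command passed to docker run, or set via Compose's command key, overrides this CMD.&lt;/p&gt;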

&lt;p&gt;Now, let’s create a container image, run the container, and test the Python application.&lt;br&gt;
To build the container image, switch to the py directory and run the following command: &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker build &lt;span class="nt"&gt;-t&lt;/span&gt; python-application:0.1.0 &lt;span class="nb"&gt;.&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Check your image &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker image &lt;span class="nb"&gt;ls&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Then, proceed to create a container. To access your application from your host machine, you need “port forwarding”, which forwards or proxies traffic from a specific port on the host machine to a port inside the container. Since our image defines no default command, we also pass the uvicorn command explicitly:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker run &lt;span class="nt"&gt;-p&lt;/span&gt; 4000:4000 python-application:0.1.0 uvicorn main:app &lt;span class="nt"&gt;--host&lt;/span&gt; 0.0.0.0 &lt;span class="nt"&gt;--port&lt;/span&gt; 4000


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Check your container&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker container &lt;span class="nb"&gt;ls&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Finally, access your application by running a curl command, testing in the browser, or using Postman.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

curl http://127.0.0.1:4000


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

curl http://127.0.0.1:4000/hello/James


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Build the Node Environment
&lt;/h3&gt;

&lt;p&gt;First, create a new directory named "node" in the "local_dev_env" directory, and create an index.js file in it with the code below.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;express&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;express&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;express&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;PORT&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;PORT&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="mi"&gt;5000&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;/&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Hello from node_app&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;listen&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;PORT&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Server running on port &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;PORT&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Also, create a new file named package.json in the node directory with the code below:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"node"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"1.0.0"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"description"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"A sample nodejs application"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"main"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"index.js"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"scripts"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"start"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"node index.js"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"author"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"admin@admin.com"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"license"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"MIT"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"engines"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"node"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"&amp;gt;=10.1.0"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"dependencies"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"express"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"^4.18.2"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;


&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now that we have our basic script to run the Node application, we'll set up its environment. This time we will not write a Dockerfile as we did for the Python environment; instead, we will pull an image directly from the Docker Hub registry.&lt;br&gt;
Because we have multi-container services, it's best to orchestrate them from a single file rather than building each service individually from its own Dockerfile, which could become a daunting task with many services. Spinning up our containers with &lt;a href="https://docs.docker.com/compose/" rel="noopener noreferrer"&gt;Docker Compose&lt;/a&gt; can be pretty handy in these situations. Note that Docker Compose does not replace the Dockerfile: a Dockerfile describes how to build a Docker image, while Compose orchestrates the containers started from such images.&lt;br&gt;
&lt;a href="https://docs.docker.com/compose/compose-file/compose-file-v3/" rel="noopener noreferrer"&gt;Docker Compose&lt;/a&gt; allows us to operate the Node app alongside any other services we need to spin up; in our case, alongside our py service.&lt;/p&gt;



&lt;h3&gt;
  
  
  Docker Compose
&lt;/h3&gt;

&lt;p&gt;In the "local_dev_env" directory, create a "docker-compose.yml" file with the code below:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;3'&lt;/span&gt;
&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;py_app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;./py&lt;/span&gt;
    &lt;span class="na"&gt;container_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;py&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;uvicorn main:app --host 0.0.0.0 --reload&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;FastAPI_ENV=development&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;PORT=8000&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;8000:8000'&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./py:/home/py/app&lt;/span&gt; 

  &lt;span class="na"&gt;node_app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;node:12.3-alpine&lt;/span&gt;
    &lt;span class="na"&gt;container_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;node&lt;/span&gt;
    &lt;span class="na"&gt;user&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;node"&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;NODE_ENV=development&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;PORT=5000&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sh -c "npm install &amp;amp;&amp;amp; npm start"&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;5000:5000'&lt;/span&gt;
    &lt;span class="na"&gt;working_dir&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/home/node/app&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./node:/home/node/app:cached&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;As seen from the compose file, two services are defined, &lt;strong&gt;py_app&lt;/strong&gt; and &lt;strong&gt;node_app&lt;/strong&gt;, with container names &lt;strong&gt;py&lt;/strong&gt; and &lt;strong&gt;node&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;You’ll see some parameters that we didn’t specify earlier in our Dockerfile. For example, the &lt;strong&gt;py_app&lt;/strong&gt; service builds its image from the Dockerfile in the py directory we created above, via the &lt;strong&gt;build&lt;/strong&gt; key. It is assigned a container name of "py" with the &lt;strong&gt;container_name&lt;/strong&gt; key, and upon starting, it runs the FastAPI web server via the &lt;strong&gt;command&lt;/strong&gt; key (uvicorn main:app --host 0.0.0.0 --reload). &lt;br&gt;
The &lt;strong&gt;volumes&lt;/strong&gt; key mounts the py directory on the host (./py) to the /home/py/app directory inside the container, allowing you to modify the code on the fly without having to rebuild the image. The &lt;strong&gt;environment&lt;/strong&gt; key sets the FastAPI_ENV and PORT environment variables inside the container, marking the service as a development deployment.&lt;br&gt;
The &lt;strong&gt;ports&lt;/strong&gt; key then binds port 8000 on the host to port 8000 in the container, the default port the uvicorn web server listens on.&lt;/p&gt;

&lt;p&gt;Similar to the py service declaration, the &lt;strong&gt;node&lt;/strong&gt; service uses most of the declared keys but instead of building its image from a Dockerfile, it uses a public node 12.3 alpine image pulled from the Docker Hub registry with the &lt;strong&gt;image&lt;/strong&gt; key. The &lt;strong&gt;user&lt;/strong&gt; key lets you run your container as an unprivileged user. This follows the principle of least privilege. The &lt;strong&gt;working_dir&lt;/strong&gt; key is used to set the working directory in the container to /home/node/app.&lt;/p&gt;

&lt;p&gt;Now, let us build and run our services.&lt;br&gt;
To start the py_app and node_app containers, build the app with the compose file from your project directory and run it:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

docker-compose up -d


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;To verify that the images for all services have been created, run the following command. The images for the py_app and node_app services should appear in the output.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

docker image ls 


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;To verify that all services are running, run the following command. This will display all existing containers.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

docker container ls --all


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Let us try one more thing: execute a bash command from the py container to list the contents of its working directory (/home/py/app).&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

docker exec -it py bash
ls -l  #list the directory in the py container


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Makefile
&lt;/h3&gt;

&lt;p&gt;Here, we take an additional step and create a Makefile, which simplifies working with these tools by letting us use shortcuts rather than typing out lengthy commands. Each command is defined as a target in the Makefile and invoked via the make command.&lt;/p&gt;

&lt;p&gt;We can spin up all the containers, execute commands from within the container, check logs, and spin down the containers as shown below.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

make up
make py &lt;span class="c"&gt;# open a shell in the py container&lt;/span&gt;
make py_log  &lt;span class="c"&gt;#output logs from the py container&lt;/span&gt;
make node_log &lt;span class="c"&gt;#output logs from the node container&lt;/span&gt;
make down  &lt;span class="c"&gt;#spin down our services, including the network created&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
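&lt;p&gt;The Makefile itself is not listed above; a minimal sketch that would back those shortcuts, assuming the docker-compose.yml and container names used earlier (the target names match the commands above, but the recipes are illustrative):&lt;/p&gt;

```makefile
# Recipes must be indented with tabs, not spaces.
.PHONY: up down py py_log node_log

up:        # build (if needed) and start all services in the background
	docker-compose up -d

py:        # open a shell inside the py container
	docker exec -it py bash

py_log:    # tail logs from the py container
	docker logs -f py

node_log:  # tail logs from the node container
	docker logs -f node

down:      # stop services and remove the containers and network
	docker-compose down
```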



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;I hope you now have a solid idea of how to set up your local development environment from this tutorial. In summary, we observed the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Building Docker images and running a container from a Dockerfile.&lt;/li&gt;
&lt;li&gt;Spinning up several Docker containers using docker-compose.&lt;/li&gt;
&lt;li&gt;Using a Makefile to simplify the execution of complicated commands.&lt;/li&gt;
&lt;/ul&gt;






&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;UP NEXT:&lt;/strong&gt; &lt;a href="https://dev.to/jagkush/docker-setup-a-local-js-and-python-development-environment-part-2-bbh"&gt;&lt;em&gt;Let's automate our build process&lt;/em&gt;&lt;/a&gt;. Click &lt;a href="https://dev.to/jagkush/docker-setup-a-local-js-and-python-development-environment-part-2-bbh"&gt;here&lt;/a&gt; to read.&lt;/p&gt;
&lt;/blockquote&gt;






&lt;h2&gt;
  
  
  Reference
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.startdataengineering.com/post/local-dev/#1-introduction" rel="noopener noreferrer"&gt;https://www.startdataengineering.com/post/local-dev/#1-introduction&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://middleware.io/blog/microservices-architecture-docker/" rel="noopener noreferrer"&gt;https://middleware.io/blog/microservices-architecture-docker/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://fastapi.tiangolo.com/deployment/dockera" rel="noopener noreferrer"&gt;https://fastapi.tiangolo.com/deployment/dockera&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://medium.com/@chaewonkong/beginners-guide-simple-node-js-application-with-docker-and-docker-compose-11e4e0297de9" rel="noopener noreferrer"&gt;https://medium.com/@chaewonkong/beginners-guide-simple-node-js-application-with-docker-and-docker-compose-11e4e0297de9&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.docker.com" rel="noopener noreferrer"&gt;https://docs.docker.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://training.play-with-docker.com" rel="noopener noreferrer"&gt;https://training.play-with-docker.com&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>docker</category>
      <category>containers</category>
      <category>python</category>
      <category>node</category>
    </item>
  </channel>
</rss>
