<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Sahan</title>
    <description>The latest articles on DEV Community by Sahan (@sahan).</description>
    <link>https://dev.to/sahan</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F222556%2F4502fb4c-7962-40f9-a1d2-f699e8b5a164.jpeg</url>
      <title>DEV Community: Sahan</title>
      <link>https://dev.to/sahan</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/sahan"/>
    <language>en</language>
    <item>
      <title>Introducing gh-weekly-updates - Automate Your Weekly GitHub Impact Summaries</title>
      <dc:creator>Sahan</dc:creator>
      <pubDate>Sat, 21 Mar 2026 22:22:00 +0000</pubDate>
      <link>https://dev.to/sahan/introducing-gh-weekly-updates-automate-your-weekly-github-impact-summaries-1f1c</link>
      <guid>https://dev.to/sahan/introducing-gh-weekly-updates-automate-your-weekly-github-impact-summaries-1f1c</guid>
      <description>&lt;p&gt;If you are anything like me, you’ve probably spent a Friday afternoon trying to remember everything you did that week. Maybe it’s for a standup, a 1:1 with your manager, or just to keep track of your own progress. You end up clicking through PRs, issues, and Slack threads, trying to piece together a coherent story. It’s tedious, and honestly, it’s time you could spend doing actual work.&lt;/p&gt;

&lt;p&gt;That’s why I built &lt;a href="https://github.com/sahansera/gh-weekly-updates" rel="noopener noreferrer"&gt;&lt;strong&gt;gh-weekly-updates&lt;/strong&gt;&lt;/a&gt; - a CLI tool that automatically collects your GitHub activity and generates a structured weekly summary using AI.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvc015gr9a7oohb9o2m8r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvc015gr9a7oohb9o2m8r.png" alt="pypi" width="800" height="179"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;GitHub repo&lt;/strong&gt; : &lt;a href="https://github.com/sahansera/gh-weekly-updates" rel="noopener noreferrer"&gt;github.com/sahansera/gh-weekly-updates&lt;/a&gt;. It’s open source and available on PyPI!&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  The Problem
&lt;/h2&gt;

&lt;p&gt;As engineers, we’re constantly shipping code, reviewing PRs, filing issues, and jumping into discussions. But when it comes time to reflect on the week, all that context is scattered across repos. I wanted something that could:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Pull all my GitHub activity into one place&lt;/li&gt;
&lt;li&gt;Summarise it in a way that highlights what actually matters&lt;/li&gt;
&lt;li&gt;Run on a schedule so I don’t have to think about it&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I couldn’t find anything that did exactly this, so I built it.&lt;/p&gt;

&lt;h2&gt;
  
  
  What It Does
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;gh-weekly-updates&lt;/code&gt; connects to the GitHub API, collects your activity for a given period, and sends it to an AI model (via &lt;a href="https://github.com/marketplace/models" rel="noopener noreferrer"&gt;GitHub Models&lt;/a&gt;) to produce a structured Markdown summary.&lt;/p&gt;

&lt;p&gt;Here’s what it picks up:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Pull requests&lt;/strong&gt; you authored (with merge status, additions/deletions, changed files)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pull requests&lt;/strong&gt; you reviewed&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Issues&lt;/strong&gt; you created&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Issue comments&lt;/strong&gt; you left&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Discussions&lt;/strong&gt; you started or participated in&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The output is grouped by project or theme and structured into sections like Wins, Challenges, and What’s Next. You can also customise the prompt to match whatever format your team uses.&lt;/p&gt;
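&lt;p&gt;To give you an idea of the shape (the exact content and wording depend on your activity and the model; the names below are made up), a generated summary might look something like this:&lt;/p&gt;

```markdown
# Weekly Summary: 2026-03-16 to 2026-03-22

## Wins
- Merged 3 PRs in api-service, including the new rate limiter (+450/-120 lines)
- Reviewed 5 PRs across web-app and api-service

## Challenges
- Flaky integration tests in web-app blocked two merges

## What's Next
- Follow up on the open discussion about pagination defaults
```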

&lt;h2&gt;
  
  
  Getting Started
&lt;/h2&gt;

&lt;p&gt;It’s a Python CLI tool, so you can install it with pip:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;gh-weekly-updates
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then just run it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# If you're already logged in with the GitHub CLI&lt;/span&gt;
gh auth login
gh-weekly-updates
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That’s it. It will auto-discover repos you contributed to in the past week and generate a summary.&lt;/p&gt;

&lt;p&gt;You can also point it at specific repos and date ranges:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gh-weekly-updates &lt;span class="nt"&gt;--since&lt;/span&gt; 2026-02-09 &lt;span class="nt"&gt;--until&lt;/span&gt; 2026-02-16 &lt;span class="nt"&gt;--repos&lt;/span&gt; my-org/my-repo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Configuration
&lt;/h2&gt;

&lt;p&gt;For more control, you can create a &lt;code&gt;config.yaml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;org&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-org&lt;/span&gt;

&lt;span class="na"&gt;repos&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;my-org/api-service&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;my-org/web-app&lt;/span&gt;

&lt;span class="na"&gt;model&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;openai/gpt-4.1&lt;/span&gt;

&lt;span class="c1"&gt;# Automatically push the summary to a repo&lt;/span&gt;
&lt;span class="na"&gt;push_repo&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-user/my-weekly-updates&lt;/span&gt;

&lt;span class="c1"&gt;# Customise the AI prompt&lt;/span&gt;
&lt;span class="na"&gt;prompt_file&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-prompt.txt&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The config supports everything from repo lists to custom prompts. You can even swap out the AI model if you have a preference.&lt;/p&gt;

&lt;h2&gt;
  
  
  Running on a Schedule with GitHub Actions
&lt;/h2&gt;

&lt;p&gt;This is where it gets really useful. You can set up a GitHub Actions workflow to run it every Monday morning and push the summary to a repo automatically:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Weekly Summary&lt;/span&gt;

&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;schedule&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;cron&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;0&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;9&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;1'&lt;/span&gt; &lt;span class="c1"&gt;# Every Monday at 9am UTC&lt;/span&gt;
  &lt;span class="na"&gt;workflow_dispatch&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;

&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;summarise&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v4&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/setup-python@v5&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;python-version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;3.12'&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pip install gh-weekly-updates&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Generate summary&lt;/span&gt;
        &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;GITHUB_TOKEN&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.GH_PAT }}&lt;/span&gt; &lt;span class="c1"&gt;# must be named GITHUB_TOKEN&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;gh-weekly-updates --config config.yaml&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now every Monday, you get a fresh summary committed to your repo. No manual effort required.&lt;/p&gt;

&lt;h2&gt;
  
  
  Custom Prompts
&lt;/h2&gt;

&lt;p&gt;The default prompt produces a summary with Wins, Challenges, and What’s Next sections. But you can tailor it to your needs. For example, if your team does impact-style updates, you might want sections like Strategic Influence or Next Steps.&lt;/p&gt;

&lt;p&gt;Just create a text file with your prompt and reference it in your config:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;prompt_file&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-custom-prompt.txt&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The prompt receives all your raw activity data as context, so you can shape the output however you like.&lt;/p&gt;
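&lt;p&gt;As a purely illustrative example (not the tool's built-in prompt), an impact-style prompt file might look like:&lt;/p&gt;

```text
You are writing a weekly impact update for an engineering audience.
Group the activity below by project and use these sections:
- Strategic Influence: reviews and discussions that shaped direction
- Delivery: shipped PRs, each with a one-line impact statement
- Next Steps: open threads and planned follow-ups
Keep it under 300 words and link to each PR or issue.
```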

&lt;h2&gt;
  
  
  Why Open Source?
&lt;/h2&gt;

&lt;p&gt;I initially built this for myself to automate my own weekly updates at work. But I figured other engineers probably have the same problem, so I cleaned it up and open-sourced it. The tool is intentionally simple - it does one thing and tries to do it well.&lt;/p&gt;

&lt;p&gt;If you find it useful, give it a ⭐ on &lt;a href="https://github.com/sahansera/gh-weekly-updates" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;. And if you have ideas for improvements, PRs and issues are always welcome!&lt;/p&gt;

&lt;h2&gt;
  
  
  What’s Next
&lt;/h2&gt;

&lt;p&gt;A few things I’m thinking about for future releases:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;More activity sources&lt;/strong&gt; : Picking up commit messages, release notes, and code review comments&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multiple output formats&lt;/strong&gt; : Slack messages, email digests, Notion pages&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Team summaries&lt;/strong&gt; : Aggregate activity across a whole team, not just one person&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If any of these sound interesting to you, feel free to open an issue or start a discussion on the repo.&lt;/p&gt;

&lt;h2&gt;
  
  
  Links
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;GitHub&lt;/strong&gt; : &lt;a href="https://github.com/sahansera/gh-weekly-updates" rel="noopener noreferrer"&gt;github.com/sahansera/gh-weekly-updates&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PyPI&lt;/strong&gt; : &lt;a href="https://pypi.org/project/gh-weekly-updates/" rel="noopener noreferrer"&gt;pypi.org/project/gh-weekly-updates&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GitHub Models&lt;/strong&gt; : &lt;a href="https://github.com/marketplace/models" rel="noopener noreferrer"&gt;github.com/marketplace/models&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Thanks for reading! If you have any questions, feel free to reach out on &lt;a href="https://twitter.com/_SahanSera" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt; or drop a comment below. 🤗&lt;/p&gt;

</description>
      <category>github</category>
      <category>python</category>
      <category>productivity</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Deploying GitHub Self-Hosted Runners on Your Home Kubernetes Cluster with ARC</title>
      <dc:creator>Sahan</dc:creator>
      <pubDate>Tue, 05 Aug 2025 12:28:00 +0000</pubDate>
      <link>https://dev.to/sahan/deploying-github-self-hosted-runners-on-your-home-kubernetes-cluster-with-arc-3gaj</link>
      <guid>https://dev.to/sahan/deploying-github-self-hosted-runners-on-your-home-kubernetes-cluster-with-arc-3gaj</guid>
      <description>&lt;p&gt;If you followed my last post, &lt;a href="https://sahansera.dev/building-home-lab-kubernetes-cluster-old-hardware-k3s/" rel="noopener noreferrer"&gt;Building a Home Lab Kubernetes Cluster with Old Hardware and k3s&lt;/a&gt;, you now have a proper x86 Kubernetes cluster humming away on your old laptops. So, what’s next? Time to put that cluster to work—let’s run GitHub Actions jobs on your own hardware!&lt;/p&gt;

&lt;p&gt;Why? Because if you’ve got decent hardware, self-hosting your runners can be faster, gives you full control (no GitHub-hosted minutes limit!), and lets you run bigger jobs (CI, builds, ML, you name it) on your home infra. And with &lt;a href="https://github.com/actions/actions-runner-controller" rel="noopener noreferrer"&gt;Actions Runner Controller (ARC)&lt;/a&gt;, managing runners at scale on Kubernetes is surprisingly easy.&lt;/p&gt;

&lt;p&gt;Here’s how to set it all up.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is ARC and Why Should You Care?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/actions/actions-runner-controller" rel="noopener noreferrer"&gt;ARC (Actions Runner Controller)&lt;/a&gt; is an open-source Kubernetes operator from GitHub. It spins up and manages GitHub Actions runners as Kubernetes pods—no more manually registering runners, no more pets, just cattle. Runners auto-scale up and down as jobs arrive. It’s perfect for CI/CD, especially on clusters you own.&lt;/p&gt;

&lt;p&gt;Here’s a high-level view of how it works under the hood:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feie9qn3jfwmtedxba37s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feie9qn3jfwmtedxba37s.png" alt="ARC Architecture" width="800" height="514"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Working Kubernetes cluster (see &lt;a href="https://sahansera.dev/building-home-lab-kubernetes-cluster-old-hardware-k3s/" rel="noopener noreferrer"&gt;previous post&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;kubectl&lt;/code&gt; and &lt;a href="https://helm.sh/" rel="noopener noreferrer"&gt;&lt;code&gt;helm&lt;/code&gt;&lt;/a&gt; installed on your machine&lt;/li&gt;
&lt;li&gt;A GitHub Personal Access Token (PAT) with &lt;code&gt;repo&lt;/code&gt; and &lt;code&gt;admin:org&lt;/code&gt; scopes&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  1️⃣ Pre-Setup: Quick Checks
&lt;/h2&gt;

&lt;p&gt;Make sure you have what you need:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;which helm
kubectl version &lt;span class="nt"&gt;--client&lt;/span&gt;
helm list &lt;span class="nt"&gt;-A&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If those commands work, you’re good to go.&lt;/p&gt;

&lt;h2&gt;
  
  
  2️⃣ Install ARC Controller
&lt;/h2&gt;

&lt;p&gt;Let’s install the ARC controller into your control plane namespace:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;NAMESPACE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"actions-runner-controller"&lt;/span&gt;
helm &lt;span class="nb"&gt;install &lt;/span&gt;arc &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--namespace&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;NAMESPACE&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--create-namespace&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set-controller
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This deploys the controller, which manages your runners.&lt;/p&gt;

&lt;h2&gt;
  
  
  3️⃣ Deploy a Runner Scale Set
&lt;/h2&gt;

&lt;p&gt;Time to create the runners that will actually do the work. Replace the example GitHub URL and PAT with your details:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;INSTALLATION_NAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"arc-runner-set"&lt;/span&gt;
&lt;span class="nv"&gt;NAMESPACE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"arc-runners"&lt;/span&gt;
&lt;span class="nv"&gt;GITHUB_CONFIG_URL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"https://github.com/youruser/yourrepo"&lt;/span&gt;
&lt;span class="nv"&gt;GITHUB_PAT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"ghp_123456..."&lt;/span&gt;

helm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;INSTALLATION_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--namespace&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;NAMESPACE&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--create-namespace&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--set&lt;/span&gt; &lt;span class="nv"&gt;githubConfigUrl&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;GITHUB_CONFIG_URL&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--set&lt;/span&gt; githubConfigSecret.github_token&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;GITHUB_PAT&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;GITHUB_CONFIG_URL&lt;/code&gt;: The repo or org you want to run jobs for.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;GITHUB_PAT&lt;/code&gt;: Your Personal Access Token.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  4️⃣ Check That It’s Working
&lt;/h2&gt;

&lt;p&gt;Verify the controller and runner pods are up:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Controller&lt;/span&gt;
kubectl get pods &lt;span class="nt"&gt;-n&lt;/span&gt; actions-runner-controller

&lt;span class="c"&gt;# Runners&lt;/span&gt;
kubectl get pods &lt;span class="nt"&gt;-n&lt;/span&gt; arc-runners

&lt;span class="c"&gt;# Runner set status&lt;/span&gt;
kubectl get AutoscalingRunnerSet &lt;span class="nt"&gt;-A&lt;/span&gt;
kubectl describe AutoscalingRunnerSet arc-runner-set &lt;span class="nt"&gt;-n&lt;/span&gt; arc-runners
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see your runners show up as pods. If you trigger a workflow in your GitHub repo, you’ll see a pod spin up, do the job, and then shut down—magic.&lt;/p&gt;

&lt;h2&gt;
  
  
  5️⃣ Testing It Out
&lt;/h2&gt;

&lt;p&gt;Here’s the fun part. Create a simple GitHub Actions workflow in your repo to test the runners:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Test ARC Runners&lt;/span&gt;
&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;push&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;

&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;arc-runners&lt;/span&gt; &lt;span class="c1"&gt;# This tells GitHub to use your self-hosted runners. Use the NAMESPACE name you defined in step 3.&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Checkout code&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v2&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Run a script&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;echo "Hello from ARC Runner!"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;List files&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ls -la&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here’s the CI workflow from my githubstats repo if you need a working example: &lt;a href="https://github.com/sahansera/githubstats/blob/main/.github/workflows/ci.yml" rel="noopener noreferrer"&gt;ci.yml&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Make a commit to trigger the workflow. You should see the runner pod spin up, execute the job, and then terminate.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvaaybcapar2anb3sr9a6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvaaybcapar2anb3sr9a6.png" alt="arc self hosted github runners k8s 1" width="456" height="392"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;center&gt;
Here's the workflow running on my ARC self-hosted runner
&lt;/center&gt;

&lt;h2&gt;
  
  
  6️⃣ Monitoring (Bonus: Grafana)
&lt;/h2&gt;

&lt;p&gt;Want to geek out and monitor your runners? If you’ve set up Prometheus/Grafana (see my upcoming post if not!), you can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Check pod CPU/memory usage&lt;/li&gt;
&lt;li&gt;Track how many runners are running&lt;/li&gt;
&lt;li&gt;See logs for each pod&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Handy queries for Grafana dashboards:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Number of ARC runner pods
sum(kube_pod_status_phase{namespace="arc-runners", phase="Running"})

# Pod CPU usage
rate(container_cpu_usage_seconds_total{namespace="arc-runners"}[5m])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here's mine:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjmeo8nezem3f4fvqiv15.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjmeo8nezem3f4fvqiv15.png" alt="arc self hosted github runners k8s 2" width="800" height="193"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;center&gt;
Here's my Grafana dashboard tracking the runner pods
&lt;/center&gt;

&lt;h2&gt;
  
  
  7️⃣ Useful Commands for Day-to-Day Ops
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Watch runner scaling in real-time&lt;/span&gt;
kubectl get AutoscalingRunnerSet &lt;span class="nt"&gt;-n&lt;/span&gt; arc-runners &lt;span class="nt"&gt;-w&lt;/span&gt;

&lt;span class="c"&gt;# See events and troubleshoot&lt;/span&gt;
kubectl get events &lt;span class="nt"&gt;-n&lt;/span&gt; arc-runners &lt;span class="nt"&gt;--sort-by&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;.metadata.creationTimestamp

&lt;span class="c"&gt;# Pod logs (for a specific runner)&lt;/span&gt;
kubectl logs &lt;span class="nt"&gt;-n&lt;/span&gt; arc-runners &amp;lt;pod-name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Troubleshooting Tips
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Runner not connecting?&lt;/strong&gt; Double-check your PAT and network access.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pods stuck or crash-looping?&lt;/strong&gt; Check logs for clues and make sure your cluster has enough resources.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Can’t see runners in GitHub?&lt;/strong&gt; Make sure the config URL matches your repo/org and the PAT has correct scopes.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  That’s It! You’re Running GitHub Actions on Your Own Cluster
&lt;/h2&gt;

&lt;p&gt;You now have GitHub Actions jobs running &lt;em&gt;at home&lt;/em&gt; on your cluster, scaling up and down automatically. No more slow or limited runners. Your home lab just levelled up—CI/CD, builds, ML, you name it.&lt;/p&gt;

&lt;p&gt;Stay tuned for my next post where I’ll show you how to get beautiful observability dashboards and set up alerting for your home cluster.&lt;br&gt;&lt;br&gt;
Happy automating!&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Questions? Want to show off your setup? Ping me on &lt;a href="https://twitter.com/_SahanSera" rel="noopener noreferrer"&gt;X (Twitter)&lt;/a&gt; or drop a comment below!&lt;/em&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>github</category>
      <category>devops</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Building a Home Lab Kubernetes Cluster with Old Hardware and k3s</title>
      <dc:creator>Sahan</dc:creator>
      <pubDate>Fri, 01 Aug 2025 10:30:00 +0000</pubDate>
      <link>https://dev.to/sahan/building-a-home-lab-kubernetes-cluster-with-old-hardware-and-k3s-f76</link>
      <guid>https://dev.to/sahan/building-a-home-lab-kubernetes-cluster-with-old-hardware-and-k3s-f76</guid>
      <description>&lt;p&gt;If you've read my previous &lt;a href="https://sahansera.dev/building-your-own-private-kubernetes-cluster-on-a-raspberry-pi-4-with-k3s/" rel="noopener noreferrer"&gt;post&lt;/a&gt; about building a Raspberry Pi k3s cluster, you know I'm a huge fan of home labs. There's something uniquely satisfying about getting distributed systems running on a bunch of hardware you already own. This time, though, I wanted something a bit more powerful-a cluster that could handle not just learning and tinkering, but also heavier dev, CI/CD, and even ML workloads.&lt;/p&gt;

&lt;p&gt;And as it turns out, there's a ton you can do with a handful of old laptops, a simple switch, and Ubuntu Server. If you're thinking about upgrading your home cluster or want to avoid vendor lock-in, this one's for you.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why Move From Raspberry Pi to x86?
&lt;/h2&gt;

&lt;p&gt;Don't get me wrong: RPi clusters are awesome for learning, hacking, and even some light home automation. But if you've ever tried running real dev pipelines, CI/CD, or any data-heavy ML stuff, you'll hit those limits &lt;em&gt;fast&lt;/em&gt;. Plus, WiFi can get a bit flaky when you're trying to keep nodes connected under load.&lt;/p&gt;

&lt;p&gt;I had a few spare laptops sitting around, and it made perfect sense to give them a second life and push them to their limits. Bonus: with Ethernet and a decent switch, you get much more reliable connectivity.&lt;/p&gt;

&lt;h2&gt;
  
  
  Network Setup: Keep It Simple (and Reliable)
&lt;/h2&gt;

&lt;p&gt;For this build, I kept things straightforward: all nodes are wired to a simple network switch. This gives much better performance and reliability compared to WiFi, but honestly, you could still pull this off over wireless if needed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8j0sn4wz9bupla03rrkg.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8j0sn4wz9bupla03rrkg.jpg" alt="Network Switch" width="800" height="342"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;center&gt;
   My network switch - how it all started 🛜
&lt;/center&gt;

&lt;p&gt;&lt;strong&gt;A few quick tips:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Reserve static IPs for each node using your router's DHCP reservations.
&lt;/li&gt;
&lt;li&gt;This makes node management, SSH, and Kubernetes networking so much easier.
&lt;/li&gt;
&lt;li&gt;Check your DHCP table and make sure every machine has a unique, predictable IP address.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Getting Ubuntu Server Set Up on Each Node
&lt;/h2&gt;

&lt;p&gt;Here's a high level diagram of what my setup looks like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd5y197omtv8e22ipk8m8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd5y197omtv8e22ipk8m8.png" alt="Home Lab Setup Diagram" width="800" height="338"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's talk about prepping each machine. My old laptops had a new lease on life running Ubuntu Server. I recommend Ubuntu Server LTS for its simplicity and compatibility. If you're using VMs, just make sure to set the NIC to bridged mode so each VM acts as a full member of your home LAN.&lt;/p&gt;

&lt;p&gt;Here's my go-to checklist for each node:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Update the system and install SSH:&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   &lt;span class="nb"&gt;sudo &lt;/span&gt;apt update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;sudo &lt;/span&gt;apt upgrade &lt;span class="nt"&gt;-y&lt;/span&gt;
   &lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install &lt;/span&gt;openssh-server &lt;span class="nt"&gt;-y&lt;/span&gt;
   &lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl &lt;span class="nb"&gt;enable &lt;/span&gt;ssh
   &lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl start ssh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;
&lt;strong&gt;Set a memorable hostname:&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   &lt;span class="nb"&gt;sudo &lt;/span&gt;hostnamectl set-hostname master   &lt;span class="c"&gt;# or worker-1, etc.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="3"&gt;
&lt;li&gt;
&lt;strong&gt;(Optional) Update /etc/hosts for local resolution:&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   &lt;span class="nb"&gt;sudo &lt;/span&gt;nano /etc/hosts
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Change any lines like &lt;code&gt;127.0.1.1 old-hostname&lt;/code&gt; to your new hostname.&lt;/p&gt;

&lt;ol start="4"&gt;
&lt;li&gt;
&lt;strong&gt;Find your node's IP address:&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   ip a
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Use this IP for your DHCP reservation so it always stays the same.&lt;/p&gt;

&lt;ol start="5"&gt;
&lt;li&gt;
&lt;strong&gt;Test SSH from another machine:&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   ssh &amp;lt;username&amp;gt;@&amp;lt;node-ip&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Installing k3s: Lightweight, Powerful, Familiar
&lt;/h2&gt;

&lt;p&gt;This process feels pretty magical the first time you see it all come together. I stuck with &lt;a href="https://k3s.io/" rel="noopener noreferrer"&gt;k3s&lt;/a&gt;: it's lightweight and perfect for home or edge clusters.&lt;/p&gt;

&lt;h3&gt;
  
  
  On the Control Plane Node ("master")
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Install k3s:&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   curl &lt;span class="nt"&gt;-sfL&lt;/span&gt; https://get.k3s.io | sh -
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;
&lt;strong&gt;Check that k3s is running:&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   &lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl status k3s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="3"&gt;
&lt;li&gt;
&lt;strong&gt;Get your cluster join token:&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   &lt;span class="nb"&gt;sudo cat&lt;/span&gt; /var/lib/rancher/k3s/server/node-token
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Copy this somewhere safe; you'll need it for your worker nodes.&lt;/p&gt;

&lt;ol start="4"&gt;
&lt;li&gt;
&lt;strong&gt;Get the node's LAN IP:&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   ip a
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's say your master node IP is &lt;code&gt;192.168.0.200&lt;/code&gt;.&lt;/p&gt;

&lt;ol start="5"&gt;
&lt;li&gt;
&lt;strong&gt;Check your cluster status:&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   kubectl get nodes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;If you ever want to reset/reinstall k3s, just run:&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo&lt;/span&gt; /usr/local/bin/k3s-uninstall.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  On Each Worker Node ("worker-1", "worker-2", etc.)
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Join the cluster:&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   curl &lt;span class="nt"&gt;-sfL&lt;/span&gt; https://get.k3s.io | &lt;span class="nv"&gt;K3S_URL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;https://192.168.0.200:6443 &lt;span class="nv"&gt;K3S_TOKEN&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&amp;lt;your-token-here&amp;gt; &lt;span class="nv"&gt;INSTALL_K3S_EXEC&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"--node-ip=&amp;lt;worker-ip&amp;gt;"&lt;/span&gt; sh -
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace &lt;code&gt;192.168.0.200&lt;/code&gt; with your control plane's IP.&lt;br&gt;&lt;br&gt;
   Replace &lt;code&gt;&amp;lt;your-token-here&amp;gt;&lt;/code&gt; with the token from your master node.&lt;br&gt;&lt;br&gt;
   Replace &lt;code&gt;&amp;lt;worker-ip&amp;gt;&lt;/code&gt; with this worker's IP, e.g., &lt;code&gt;192.168.0.201&lt;/code&gt;.&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;That's it!&lt;/strong&gt; The node will auto-register with your cluster. No manual kubeconfig needed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Check the cluster again from the master:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   kubectl get nodes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see both &lt;code&gt;master&lt;/code&gt; and your worker(s) as &lt;code&gt;Ready&lt;/code&gt;!&lt;/p&gt;




&lt;h2&gt;
  
  
  Troubleshooting and Gotchas
&lt;/h2&gt;

&lt;p&gt;As with any home lab project, there are always a few snags. Here's what I ran into and how to fix them:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Node not joining?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Double-check your token (no spaces, copy the whole thing).&lt;/li&gt;
&lt;li&gt;Make sure you can &lt;code&gt;ping&lt;/code&gt; the control plane from your worker node.&lt;/li&gt;
&lt;li&gt;Firewalls can get in the way. Ensure port 6443 is open between nodes.&lt;/li&gt;
&lt;li&gt;If you get hostname conflicts, just change the hostname and restart the k3s agent (&lt;code&gt;sudo systemctl restart k3s-agent&lt;/code&gt;).&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Cluster state looks weird after hostname change?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You might see both the old and new hostnames in &lt;code&gt;kubectl get nodes&lt;/code&gt;. Just delete the old one:
&lt;/li&gt;
&lt;/ul&gt;

&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl delete node &amp;lt;old-node-name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;




&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;IP address keeps changing?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Make a DHCP reservation for each node's MAC address in your router.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;WiFi unreliable?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ethernet is always the best bet for clusters, especially for heavy workloads.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
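&lt;p&gt;To take the guesswork out of the port 6443 case, here's a quick Python sketch you can run from a worker node. It simply attempts a TCP connection; the &lt;code&gt;192.168.0.200&lt;/code&gt; address is the example control plane IP from above, so substitute your own:&lt;/p&gt;

```python
import socket

def api_server_reachable(host: str, port: int = 6443, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to the k3s API server port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Run this from a worker node; swap in your control plane's IP.
print(api_server_reachable("192.168.0.200", timeout=1.0))
```

&lt;p&gt;If this prints &lt;code&gt;False&lt;/code&gt; while &lt;code&gt;ping&lt;/code&gt; works, a firewall rule blocking port 6443 is the likely culprit.&lt;/p&gt;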




&lt;h2&gt;
  
  
  What's Next?
&lt;/h2&gt;

&lt;p&gt;With the basics in place, you're ready to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Run and test real-world apps and dev environments&lt;/li&gt;
&lt;li&gt;Build your own CI/CD pipelines&lt;/li&gt;
&lt;li&gt;Experiment with ML workloads on dedicated nodes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This way, you get all the power of a real Kubernetes lab, with full control and no monthly surprises from a cloud provider.&lt;/p&gt;




&lt;h2&gt;
  
  
  Lessons Learned
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;RPi clusters are great for edge computing and learning,&lt;/strong&gt; but once you need real muscle, x86 hardware makes a &lt;em&gt;huge&lt;/em&gt; difference.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;WiFi is convenient, but Ethernet is still king&lt;/strong&gt; for stable clusters, especially if you're doing anything performance-sensitive.&lt;/li&gt;
&lt;li&gt;Old laptops/desktops are a goldmine for home lab builds. Don't let them collect dust!&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;If you have old machines and a bit of curiosity, you can build a cluster that's surprisingly capable. No Raspberry Pis required.&lt;br&gt;&lt;br&gt;
And if you've already done it with RPis, this is the perfect upgrade path.&lt;br&gt;&lt;br&gt;
Keep an eye out for my next post where I'll deep-dive into adding observability and tooling!&lt;/p&gt;

&lt;p&gt;Happy clustering! 🫡&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>tutorial</category>
      <category>distributedsystems</category>
      <category>linux</category>
    </item>
    <item>
      <title>How Does the Python Virtual Environment Work?</title>
      <dc:creator>Sahan</dc:creator>
      <pubDate>Mon, 28 Jul 2025 00:17:00 +0000</pubDate>
      <link>https://dev.to/sahan/how-does-the-python-virtual-environment-work-2e1l</link>
      <guid>https://dev.to/sahan/how-does-the-python-virtual-environment-work-2e1l</guid>
      <description>&lt;p&gt;When you start working with Python, one of the first recommendations you’ll hear is to use a “virtual environment.” But what exactly is a Python virtual environment, and how does it work under the hood?&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem: Dependency Hell
&lt;/h2&gt;

&lt;p&gt;Python projects often rely on third-party libraries. If you install packages globally, different projects can end up fighting over package versions. This is called “dependency hell.” For example, Project A might require &lt;code&gt;requests==2.25&lt;/code&gt;, while Project B needs &lt;code&gt;requests==2.31&lt;/code&gt;. Installing both globally can cause conflicts and break your projects.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Solution: Virtual Environments
&lt;/h2&gt;

&lt;p&gt;A virtual environment is an isolated workspace for your Python project. It lets you install packages locally, so each project can have its own dependencies, regardless of what’s installed elsewhere on your system.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Does It Work?
&lt;/h2&gt;

&lt;p&gt;When you create a virtual environment (using &lt;code&gt;python -m venv myenv&lt;/code&gt; or &lt;code&gt;virtualenv myenv&lt;/code&gt;), Python does the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Creates a Dedicated Directory Structure&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A bin/ or Scripts/ directory with a Python executable and activation scripts&lt;/li&gt;
&lt;li&gt;A lib/ directory containing the environment’s site-packages (the standard library itself is not copied; it stays with the base installation)&lt;/li&gt;
&lt;li&gt;A pyvenv.cfg config file for metadata
&lt;/li&gt;
&lt;/ul&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;myenv/
├── bin/                &lt;span class="c"&gt;# (Note: Scripts\ on Windows)&lt;/span&gt;
│   ├── activate        &lt;span class="c"&gt;# Shell script to activate the environment (Unix)&lt;/span&gt;
│   ├── activate.bat    &lt;span class="c"&gt;# Batch script (Windows CMD)&lt;/span&gt;
│   ├── Activate.ps1    &lt;span class="c"&gt;# PowerShell script (Windows PowerShell)&lt;/span&gt;
│   ├── pip             &lt;span class="c"&gt;# Environment-specific pip&lt;/span&gt;
│   └── python          &lt;span class="c"&gt;# Environment-specific Python interpreter&lt;/span&gt;
├── lib/
│   └── pythonX.Y/
│       └── site-packages/  &lt;span class="c"&gt;# Installed packages go here&lt;/span&gt;
├── pyvenv.cfg          &lt;span class="c"&gt;# Configuration file with environment metadata&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Configures a Standalone Python Interpreter:&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The environment includes its own Python executable (or a symlink to it), ensuring that all commands run from within the environment use the correct interpreter.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;On most systems, this is a &lt;strong&gt;symlink or copy&lt;/strong&gt; of the base Python binary&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;This interpreter respects only the packages installed within the environment&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;which python
/path/to/myenv/bin/python
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
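&lt;p&gt;You can run the same check from inside Python itself. This is just the standard &lt;code&gt;sys&lt;/code&gt; module, nothing venv-specific:&lt;/p&gt;

```python
import sys

# The interpreter records its own location; inside an activated
# environment this points at <env>/bin/python (Scripts\python.exe on Windows).
print(sys.executable)
```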



&lt;ol start="3"&gt;
&lt;li&gt;&lt;strong&gt;Sets Up Local Package Management:&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Each environment gets its own site-packages directory:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;When you run pip install, packages go here instead of the global location&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;This isolation prevents version conflicts and makes dependency management predictable&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
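&lt;p&gt;To see exactly where the active interpreter installs packages, the standard &lt;code&gt;sysconfig&lt;/code&gt; module will tell you (a quick sketch):&lt;/p&gt;

```python
import sysconfig

# "purelib" is the site-packages directory the active interpreter
# installs into; inside a venv it lives under the environment root.
print(sysconfig.get_path("purelib"))
```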

&lt;ol start="4"&gt;
&lt;li&gt;&lt;strong&gt;Creates Activation Scripts:&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Activation scripts help you &lt;em&gt;enter&lt;/em&gt; the environment by:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Modifying your &lt;code&gt;$PATH&lt;/code&gt; so that python and pip point to the virtual environment&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Optionally updating your shell prompt (e.g., showing (myenv))&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Ensuring commands are scoped to the environment&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These scripts are OS-specific:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Unix/macOS: &lt;code&gt;source myenv/bin/activate&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Windows CMD: &lt;code&gt;myenv\Scripts\activate.bat&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;PowerShell: &lt;code&gt;myenv\Scripts\Activate.ps1&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;ol start="5"&gt;
&lt;li&gt;&lt;strong&gt;Includes a Configuration File:&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The &lt;code&gt;pyvenv.cfg&lt;/code&gt; file records metadata about the environment.&lt;/p&gt;

&lt;p&gt;This file stores:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The Python version used&lt;/li&gt;
&lt;li&gt;The path to the base interpreter&lt;/li&gt;
&lt;li&gt;Whether system site packages are accessible (default: no)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This metadata is used when running the environment to preserve consistent behavior.&lt;/p&gt;
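&lt;p&gt;For reference, a typical &lt;code&gt;pyvenv.cfg&lt;/code&gt; looks roughly like this (exact keys and paths vary by platform and Python version; the paths here are illustrative):&lt;/p&gt;

```
home = /usr/bin
include-system-site-packages = false
version = 3.12.3
```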

&lt;h2&gt;
  
  
  Command Resolution
&lt;/h2&gt;

&lt;p&gt;So what happens when you are in a venv, as opposed to running the &lt;code&gt;python&lt;/code&gt; command globally? The diagram below illustrates how Python and pip commands are resolved with and without a virtual environment. When no virtual environment is active, your system’s PATH directs these commands to the globally installed Python interpreter and packages.&lt;/p&gt;

&lt;p&gt;However, once a virtual environment is activated, the PATH is modified to point to the environment’s own executables. This ensures that all Python commands and package installations stay isolated within the virtual environment, avoiding conflicts with system-wide installations.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyheyolmuaa7jq0zfioq6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyheyolmuaa7jq0zfioq6.png" alt="Command Resolution" width="800" height="635"&gt;&lt;/a&gt;&lt;/p&gt;
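&lt;p&gt;The PATH trick is easy to demonstrate from Python alone. This sketch (Unix-like systems only; the fake interpreter is made up for illustration) mimics what activation does by prepending a directory to the search path:&lt;/p&gt;

```python
import os
import shutil
import tempfile

# Activation scripts simply prepend the environment's bin/ directory
# to PATH; command lookup then finds that python first.
with tempfile.TemporaryDirectory() as fake_venv_bin:
    fake_python = os.path.join(fake_venv_bin, "python")
    with open(fake_python, "w") as f:
        f.write("#!/bin/sh\n")
    os.chmod(fake_python, 0o755)

    search_path = fake_venv_bin + os.pathsep + os.environ.get("PATH", "")
    # With the fake bin dir first on the path, it shadows any system python.
    print(shutil.which("python", path=search_path))
```

&lt;p&gt;Deactivating an environment is just the reverse: the &lt;code&gt;bin/&lt;/code&gt; entry is removed from PATH and resolution falls back to the system interpreter.&lt;/p&gt;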

&lt;h2&gt;
  
  
  Why Is This Powerful?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Isolation:&lt;/strong&gt; Each project gets its own dependencies and versions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reproducibility:&lt;/strong&gt; You can lock dependencies with a &lt;code&gt;requirements.txt&lt;/code&gt; or &lt;code&gt;pyproject.toml&lt;/code&gt; file, making it easy for others (or yourself in the future) to recreate the environment.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No Admin Rights Needed:&lt;/strong&gt; You don’t need system-wide permissions to install packages.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Advanced Use Cases
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Multiple Python Versions:&lt;/strong&gt; Use virtual environments to test your code against different Python versions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Custom Activation Scripts:&lt;/strong&gt; Modify the activation script to set environment variables specific to your project.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Integration with CI/CD:&lt;/strong&gt; Virtual environments are essential for setting up isolated builds in CI/CD pipelines.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Under the Hood: What’s Really Happening?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;The virtual environment is just a directory with a specific structure.&lt;/li&gt;
&lt;li&gt;No containers or VMs are involved, just clever manipulation of paths and environment variables.&lt;/li&gt;
&lt;li&gt;Deleting the virtual environment directory removes all installed packages for that project.&lt;/li&gt;
&lt;/ul&gt;
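&lt;p&gt;Because it’s all just paths, detecting whether you’re inside a venv is a one-liner. A small sketch using the interpreter’s own attributes (per PEP 405):&lt;/p&gt;

```python
import sys

def in_virtualenv() -> bool:
    # sys.base_prefix always points at the base installation;
    # sys.prefix is redirected to the environment directory when
    # the interpreter is started from a venv.
    return sys.prefix != sys.base_prefix

print(in_virtualenv())
```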

&lt;h3&gt;
  
  
  Debugging Tips
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;If activation doesn’t work, check your shell configuration.&lt;/li&gt;
&lt;li&gt;Use &lt;code&gt;python -m site&lt;/code&gt; to inspect the site-packages directory.&lt;/li&gt;
&lt;li&gt;Verify the &lt;code&gt;pyvenv.cfg&lt;/code&gt; file for any misconfigurations.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Alternative Tools:&lt;/strong&gt; &lt;code&gt;venv&lt;/code&gt; is standard for Python 3.3+, but tools like &lt;code&gt;virtualenv&lt;/code&gt;, &lt;code&gt;conda&lt;/code&gt;, or &lt;code&gt;pipenv&lt;/code&gt; exist for advanced use cases or older Python versions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Python virtual environments are a foundational tool for modern Python development. They solve the problem of dependency conflicts, make projects more portable, and keep your system clean. Whether you’re building a quick script or a large application, understanding how virtual environments work will save you countless headaches down the road.&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.python.org/3/library/venv.html" rel="noopener noreferrer"&gt;Official Python venv Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://peps.python.org/pep-0405/" rel="noopener noreferrer"&gt;PEP 405: Python Virtual Environments&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>python</category>
      <category>programming</category>
      <category>fundamentals</category>
    </item>
    <item>
      <title>Deep Dive - How Chunked Transfer Encoding Works</title>
      <dc:creator>Sahan</dc:creator>
      <pubDate>Fri, 04 Apr 2025 05:51:00 +0000</pubDate>
      <link>https://dev.to/sahan/deep-dive-how-chunked-transfer-encoding-works-4o9n</link>
      <guid>https://dev.to/sahan/deep-dive-how-chunked-transfer-encoding-works-4o9n</guid>
      <description>&lt;p&gt;Chunked transfer encoding is a key HTTP/1.1 feature that allows servers to stream data incrementally without knowing the total size of the response upfront. It’s particularly useful in streaming APIs, live updates, and large or dynamically-generated responses.&lt;/p&gt;

&lt;p&gt;In this post, we’ll practically explore how chunked transfer encoding works using the backend we developed in my previous blog post on &lt;a href="https://sahansera.dev/streaming-apis-python-nextjs-part2/" rel="noopener noreferrer"&gt;Streaming APIs with FastAPI and Next.js — Part 2&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  🤔 What is Chunked Transfer Encoding?
&lt;/h2&gt;

&lt;p&gt;Chunked transfer encoding splits an HTTP response into a series of &lt;strong&gt;chunks&lt;/strong&gt;, each prefixed with its size in bytes. It allows servers to start sending response data immediately, without having to calculate the full content length beforehand.&lt;/p&gt;

&lt;p&gt;When &lt;code&gt;Transfer-Encoding: chunked&lt;/code&gt; is present, the client receives data &lt;strong&gt;incrementally&lt;/strong&gt; and knows the response has ended when a &lt;strong&gt;zero-length&lt;/strong&gt; chunk appears.&lt;/p&gt;
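&lt;p&gt;The wire format is simple enough to reproduce by hand. Here’s a minimal Python sketch of a chunked-body encoder (illustrative only, not the FastAPI backend’s code):&lt;/p&gt;

```python
def chunk_encode(parts):
    """Encode byte strings as an HTTP/1.1 chunked message body."""
    out = b""
    for part in parts:
        # each chunk: <size in hex>\r\n<data>\r\n
        out += f"{len(part):x}".encode() + b"\r\n" + part + b"\r\n"
    # a zero-length chunk marks the end of the body
    return out + b"0\r\n\r\n"

print(chunk_encode([b"Hello, ", b"world!"]))
# b'7\r\nHello, \r\n6\r\nworld!\r\n0\r\n\r\n'
```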

&lt;h2&gt;
  
  
  💻 Hands-on Example
&lt;/h2&gt;

&lt;p&gt;Let’s use the &lt;a href="https://github.com/sahansera/streaming-apis/tree/main/backend" rel="noopener noreferrer"&gt;FastAPI backend&lt;/a&gt; we built.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;make start-backend
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuzpb8h76qu1vujzibmqr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuzpb8h76qu1vujzibmqr.png" alt="understanding chunked transfer encoding 1" width="800" height="288"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s hit the &lt;code&gt;/stream&lt;/code&gt; endpoint with &lt;code&gt;curl&lt;/code&gt; to see how chunked transfer encoding works in practice.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="nt"&gt;--raw&lt;/span&gt; http://localhost:8000/stream
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F34i8vtmna38udcolr4e2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F34i8vtmna38udcolr4e2.png" alt="understanding chunked transfer encoding-2" width="800" height="571"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;-i&lt;/code&gt;: Include headers in output.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;--raw&lt;/code&gt;: Disable curl’s automatic decoding, revealing raw chunked encoding.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Expected output:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="nt"&gt;--raw&lt;/span&gt; localhost:8000/stream
HTTP/1.1 200 OK
&lt;span class="nb"&gt;date&lt;/span&gt;: Mon, 31 Mar 2025 09:51:47 GMT
server: uvicorn
content-type: text/plain&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nv"&gt;charset&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;utf-8
Transfer-Encoding: chunked

1f
Waiting &lt;span class="k"&gt;for &lt;/span&gt;new log entries...

1f
Waiting &lt;span class="k"&gt;for &lt;/span&gt;new log entries...

1f
Waiting &lt;span class="k"&gt;for &lt;/span&gt;new log entries...

1f
Waiting &lt;span class="k"&gt;for &lt;/span&gt;new log entries...

1f
Waiting &lt;span class="k"&gt;for &lt;/span&gt;new log entries...

30
Simulated log entry at Mon Mar 31 20:21:53 2025

0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here’s a diagram to help visualize the chunked transfer encoding:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzuokatgjbcg4qrp3p8ep.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzuokatgjbcg4qrp3p8ep.png" alt="understanding chunked transfer encoding 3" width="800" height="599"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here’s what’s happening:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Each chunk starts with its length in hexadecimal (&lt;code&gt;1f&lt;/code&gt; = 31 bytes).&lt;/li&gt;
&lt;li&gt;The data follows the length, and the next chunk starts after a newline.&lt;/li&gt;
&lt;li&gt;The chunk prefixed with &lt;code&gt;30&lt;/code&gt; carries a simulated log entry (hex &lt;code&gt;30&lt;/code&gt; = 48 bytes).&lt;/li&gt;
&lt;li&gt;The response ends with a zero-length chunk (&lt;code&gt;0&lt;/code&gt;).&lt;/li&gt;
&lt;/ul&gt;
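&lt;p&gt;You can verify the &lt;code&gt;1f&lt;/code&gt; prefix yourself with a quick sanity check in Python:&lt;/p&gt;

```python
line = "Waiting for new log entries...\n"
print(len(line.encode()))               # 31 bytes
print(format(len(line.encode()), "x"))  # '1f', the size prefix curl showed
```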

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Note:&lt;/em&gt; This aligns with the techniques demonstrated in my previous blog series &lt;a href="https://sahansera.dev/streaming-apis-python-nextjs-part1/" rel="noopener noreferrer"&gt;Streaming APIs with FastAPI and Next.js (Part 1)&lt;/a&gt; and &lt;a href="https://sahansera.dev/streaming-apis-python-nextjs-part2/" rel="noopener noreferrer"&gt;Part 2&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  🛠️ &lt;strong&gt;Step-by-Step Breakdown of Chunking&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;We’ll be working from &lt;a href="https://github.com/sahansera/streaming-apis/blob/main/backend/api/index.py" rel="noopener noreferrer"&gt;index.py&lt;/a&gt;. Here’s exactly what’s happening under the hood:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Generator (&lt;code&gt;yield&lt;/code&gt;)&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Every time &lt;code&gt;yield&lt;/code&gt; is executed, the Starlette framework (used internally by FastAPI) receives a new piece of data to stream to the client.&lt;/li&gt;
&lt;li&gt;Each yielded data segment corresponds &lt;strong&gt;directly to one HTTP chunk&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For example, the &lt;a href="https://github.com/sahansera/streaming-apis/blob/main/backend/api/index.py#L34" rel="noopener noreferrer"&gt;yielded line&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;yield&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Waiting for new log entries...&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;is packaged into one HTTP chunk.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Starlette’s StreamingResponse Handling&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;code&gt;StreamingResponse&lt;/code&gt; from Starlette wraps the async generator.&lt;/li&gt;
&lt;li&gt;Starlette doesn’t wait until the generator finishes (which might be infinite). Instead, it immediately pushes each yielded chunk to the underlying ASGI server, typically Uvicorn.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;3. Uvicorn’s Chunk Formatting&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;Uvicorn (the ASGI server you’re using) receives the yielded chunk from Starlette and &lt;strong&gt;formats it according to the HTTP/1.1 chunked transfer encoding specification&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;Each chunk is transmitted as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&amp;lt;chunk-size &lt;span class="k"&gt;in &lt;/span&gt;hexadecimal&amp;gt;&lt;span class="se"&gt;\r\n&lt;/span&gt;
&amp;lt;chunk-data&amp;gt;&lt;span class="se"&gt;\r\n&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here’s how one of your actual data chunks might look:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;1f&lt;span class="se"&gt;\r\n&lt;/span&gt;
Waiting &lt;span class="k"&gt;for &lt;/span&gt;new log entries...&lt;span class="se"&gt;\n\r\n&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;1f&lt;/code&gt; = 31 bytes, the exact length of &lt;code&gt;"Waiting for new log entries...\n"&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;4. Continuous Chunk Transmission&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Uvicorn immediately sends each formatted chunk down the TCP connection.&lt;/li&gt;
&lt;li&gt;Your client (like &lt;code&gt;curl&lt;/code&gt;) receives each chunk as soon as it’s sent, which allows incremental processing.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;5. Ending the Stream&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If your generator ever completes (or if the server shuts down the connection), Uvicorn sends a special &lt;strong&gt;zero-length chunk&lt;/strong&gt; (&lt;code&gt;0\r\n\r\n&lt;/code&gt;) to indicate that transmission has ended.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Example final chunk:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;0&lt;span class="se"&gt;\r\n&lt;/span&gt;
&lt;span class="se"&gt;\r\n&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
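&lt;p&gt;Going the other way, a toy decoder makes the terminating zero-length chunk concrete (a sketch that ignores chunk extensions and trailers):&lt;/p&gt;

```python
def chunk_decode(raw: bytes) -> bytes:
    """Reassemble the payload of a chunked body (no trailers/extensions)."""
    data, i = b"", 0
    while True:
        j = raw.index(b"\r\n", i)
        size = int(raw[i:j], 16)        # chunk size is hexadecimal
        if size == 0:                   # zero-length chunk: end of body
            return data
        data += raw[j + 2 : j + 2 + size]
        i = j + 2 + size + 2            # skip the data and its trailing CRLF

print(chunk_decode(b"7\r\nHello, \r\n6\r\nworld!\r\n0\r\n\r\n"))
# b'Hello, world!'
```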



&lt;h2&gt;
  
  
  🙋 What about HTTP/2 and HTTP/3?
&lt;/h2&gt;

&lt;p&gt;The short answer: HTTP/2+ does not use chunked encoding at all. In fact, the HTTP/2 specification explicitly forbids the use of the &lt;code&gt;Transfer-Encoding: chunked&lt;/code&gt; header; if a client incorrectly tries to send it, it’s considered a protocol error.&lt;/p&gt;

&lt;p&gt;Instead, HTTP/2 uses a more efficient binary framing layer that allows multiplexing multiple streams over a single connection. This means that chunked transfer encoding is not necessary in HTTP/2 and HTTP/3, as the protocol itself handles streaming more efficiently.&lt;/p&gt;




&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;p&gt;Through these practical examples, you’ve seen firsthand how chunked transfer encoding enables incremental streaming of data:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Responses are sent as a series of chunks, each with a defined size.&lt;/li&gt;
&lt;li&gt;The end of data transmission is indicated by a zero-length chunk.&lt;/li&gt;
&lt;li&gt;Tools like &lt;code&gt;curl&lt;/code&gt;, Python frameworks like FastAPI, and browser developer tools help visualize and debug chunked encoding.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Understanding this helps you build better streaming APIs and debug complex HTTP interactions effectively.&lt;/p&gt;

&lt;p&gt;Happy Streaming! 🚀&lt;/p&gt;




&lt;h2&gt;
  
  
  Further Reading
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://datatracker.ietf.org/doc/html/rfc9112#section-7.1" rel="noopener noreferrer"&gt;RFC 9112 – HTTP/1.1 Specification&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Transfer-Encoding" rel="noopener noreferrer"&gt;MDN Web Docs: Transfer-Encoding&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://sahansera.dev/streaming-apis-python-nextjs-part1/" rel="noopener noreferrer"&gt;Streaming APIs with FastAPI and Next.js (Part 1)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://sahansera.dev/streaming-apis-python-nextjs-part2/" rel="noopener noreferrer"&gt;Streaming APIs with FastAPI and Next.js (Part 2)&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>http</category>
      <category>python</category>
      <category>fundamentals</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Upgrading sahansera.dev to Gatsby 5</title>
      <dc:creator>Sahan</dc:creator>
      <pubDate>Wed, 02 Apr 2025 08:10:00 +0000</pubDate>
      <link>https://dev.to/sahan/upgrading-sahanseradev-to-gatsby-5-3p99</link>
      <guid>https://dev.to/sahan/upgrading-sahanseradev-to-gatsby-5-3p99</guid>
      <description>&lt;p&gt;The time is now 1:45 AM and I’m finally done upgrading my blog to Gatsby V5. It’s been one heck of a ride, but it was worth it.&lt;/p&gt;

&lt;p&gt;This blog post is a reflection of my experience upgrading my blog to Gatsby V5. I’ll cover the challenges I faced, the solutions I found, and the lessons learned along the way.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Upgrade Journey
&lt;/h2&gt;

&lt;p&gt;After a two-year hiatus from blogging, I decided to dust off my old blog and give it a much-needed facelift. When I tried to run &lt;code&gt;gatsby develop&lt;/code&gt;, I was greeted with a slew of warnings and errors. It was clear that my blog was long overdue for an upgrade.&lt;/p&gt;

&lt;p&gt;I took a crack at it a few months ago, but I quickly got overwhelmed by the number of breaking changes and decided to put it on hold. This time was different: I was determined to put LLMs to work to my advantage.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1kivd29wcna5ip0iw0e5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1kivd29wcna5ip0iw0e5.png" alt="upgrading gatsby 5 1" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Sneak peek of my last 3 commits&lt;/em&gt; 🤣&lt;/p&gt;

&lt;p&gt;Since weekends are filled with family time and chores, I decided to timebox the work to just a few hours. I thought, “How hard can it be? Just a few package updates and some code tweaks, right?” Little did I know that this would turn into a full-blown adventure.&lt;/p&gt;

&lt;h3&gt;
  
  
  Dependency Overhauls
&lt;/h3&gt;

&lt;p&gt;One of the first tasks was to ensure all my plugins were updated to their latest non-breaking minor versions. I used &lt;a href="https://www.npmjs.com/package/npm-check-updates" rel="noopener noreferrer"&gt;&lt;code&gt;npm-check-updates&lt;/code&gt;&lt;/a&gt; to help with this.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ncu &lt;span class="nt"&gt;--interactive&lt;/span&gt; &lt;span class="nt"&gt;--format&lt;/span&gt; group
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy6m9vy8d1v93lk9qvp7t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy6m9vy8d1v93lk9qvp7t.png" alt="Example of ncu CLI tool in action" width="800" height="764"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Example of ncu CLI tool in action&lt;/em&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 Tip: Always work on a separate branch and commit after each change while doing updates.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;There were a bunch of warnings, but nothing major. Next, I upgraded React and ReactDOM to v18. This was a bit tricky because I had to ensure that all my dependencies were compatible with React 18. I also had to update my Babel configuration to support the new JSX transform.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install &lt;/span&gt;react@^18.0.0 react-dom@^18.0.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After that, I bumped up the Gatsby version to &lt;code&gt;^5.0.0&lt;/code&gt; and ran &lt;code&gt;npm install&lt;/code&gt;. This is where the fun really began 😅&lt;/p&gt;

&lt;h3&gt;
  
  
  Plugin Upgrades
&lt;/h3&gt;

&lt;p&gt;Many of my plugins were still on versions designed for Gatsby V4. I had to update or replace:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;gatsby-plugin-catch-links, gatsby-plugin-feed, gatsby-plugin-google-gtag, gatsby-plugin-layout, gatsby-plugin-manifest, gatsby-plugin-offline, gatsby-plugin-react-helmet, gatsby-plugin-sharp, gatsby-transformer-sharp, and others&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Each of these required an upgrade to versions like &lt;code&gt;^5.14.0&lt;/code&gt; or &lt;code&gt;^6.14.0&lt;/code&gt; to resolve the peer dependency conflicts with Gatsby V5.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Image Handling:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
I migrated from the deprecated &lt;code&gt;gatsby-image&lt;/code&gt; to the modern &lt;a href="https://www.gatsbyjs.com/docs/reference/release-notes/image-migration/" rel="noopener noreferrer"&gt;&lt;code&gt;gatsby-plugin-image&lt;/code&gt;&lt;/a&gt;. This required updating my GraphQL queries from using &lt;code&gt;fluid&lt;/code&gt; and &lt;code&gt;fixed&lt;/code&gt; fragments to using the &lt;code&gt;gatsbyImageData&lt;/code&gt; field. I updated imports, replaced &lt;code&gt;&amp;lt;Img&amp;gt;&lt;/code&gt; with &lt;code&gt;&amp;lt;GatsbyImage&amp;gt;&lt;/code&gt;, and used &lt;code&gt;getImage()&lt;/code&gt; to extract image data.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Much of this is covered in the official migration &lt;a href="https://www.gatsbyjs.com/docs/reference/release-notes/migrating-from-v4-to-v5/" rel="noopener noreferrer"&gt;guide&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Then I ran into the infamous TypeComposer error.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;Cannot create as TypeComposer the following value:
  GraphQLScalarType&lt;span class="o"&gt;({&lt;/span&gt; name: &lt;span class="s2"&gt;"Date"&lt;/span&gt;, description: &lt;span class="s2"&gt;"A date string, such as 2007-12-03, compliant with the
 ISO 8601 standard for representation of dates and times using the Gregorian calendar."&lt;/span&gt;,
specifiedByURL: undefined, serialize: &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="k"&gt;function &lt;/span&gt;String], parseValue: &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="k"&gt;function &lt;/span&gt;String], parseLiteral:
&lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="k"&gt;function &lt;/span&gt;parseLiteral], extensions: &lt;span class="o"&gt;{&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt;, astNode: undefined, extensionASTNodes: &lt;span class="o"&gt;[]&lt;/span&gt; &lt;span class="o"&gt;})&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;💡 The tl;dr here is that multiple versions of &lt;code&gt;graphql&lt;/code&gt; were being pulled in by other dependencies, and I had to force them all onto the same version. I did this by adding a &lt;code&gt;resolutions&lt;/code&gt; field to my &lt;code&gt;package.json&lt;/code&gt; (note that &lt;code&gt;resolutions&lt;/code&gt; is honored by Yarn; with npm the equivalent is the &lt;code&gt;overrides&lt;/code&gt; field):&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"resolutions"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"graphql"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"^16.6.0"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then I ran &lt;code&gt;rm -rf node_modules package-lock.json &amp;amp;&amp;amp; npm install&lt;/code&gt; again to ensure all dependencies were using the same version of &lt;code&gt;graphql&lt;/code&gt;. The TypeComposer error was gone, but I still had a few warnings.&lt;/p&gt;

&lt;p&gt;Next, I transformed my old GraphQL queries to the new format using the Gatsby codemod.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx gatsby-codemods@latest sort-and-aggr-graphql &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This worked like a charm and fixed most of the query-related issues. For everything else, I asked ChatGPT to help. I had to tweak a few queries here and there, but nothing major.&lt;/p&gt;

&lt;h3&gt;
  
  
  Styling Challenges
&lt;/h3&gt;

&lt;p&gt;I also relied heavily on &lt;a href="https://github.com/vercel/styled-jsx" rel="noopener noreferrer"&gt;styled-jsx&lt;/a&gt; for component-scoped CSS. However, Gatsby’s official plugin, &lt;code&gt;gatsby-plugin-styled-jsx&lt;/code&gt;, only supports styled-jsx v3—and I needed styled-jsx v5 for React 18 compatibility. After some consideration, I decided to remove the plugin entirely and instead configured Babel to handle styled-jsx directly.&lt;/p&gt;

&lt;p&gt;This is where ChatGPT struggled the most. It ran into a diamond dependency resolution problem and got stuck in a loop, so I had to step in and guide it through the process manually.&lt;/p&gt;

&lt;h4&gt;
  
  
  Enabling Nested CSS with styled-jsx
&lt;/h4&gt;

&lt;p&gt;Even after removing the old plugin, I ran into issues when my &lt;code&gt;&amp;lt;style jsx&amp;gt;&lt;/code&gt; blocks used nested CSS rules. By default, styled-jsx doesn’t support nesting. I resolved this by integrating &lt;a href="https://github.com/vercel/styled-jsx-plugin-postcss" rel="noopener noreferrer"&gt;styled-jsx-plugin-postcss&lt;/a&gt; along with the PostCSS Nested plugin:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Installed the Packages:&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Created a Configuration File:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
I added a &lt;code&gt;styled-jsx.config.js&lt;/code&gt; at the project root:&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Updated Babel Config:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
In my &lt;code&gt;.babelrc&lt;/code&gt;, I ensured the styled-jsx plugin was configured to use the PostCSS plugin:&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
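&lt;p&gt;The code for these three steps didn’t survive into this feed. Assuming the two installed packages were &lt;code&gt;styled-jsx-plugin-postcss&lt;/code&gt; and &lt;code&gt;postcss-nested&lt;/code&gt;, the wiring would look roughly like this (check each plugin’s README for the exact shape):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;// styled-jsx.config.js (project root), assumed shape
module.exports = {
  plugins: ["styled-jsx-plugin-postcss"],
};

// .babelrc then points styled-jsx at the plugin, roughly:
// {
//   "plugins": [["styled-jsx/babel", { "plugins": ["styled-jsx-plugin-postcss"] }]]
// }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;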

&lt;p&gt;This allowed my nested CSS in components like the header, footer, and blog items to compile correctly. I would say this is the part where the LLM helped me the most! 💪&lt;/p&gt;

&lt;h3&gt;
  
  
  PostCSS Configuration Adjustments
&lt;/h3&gt;

&lt;p&gt;During the upgrade, I also encountered warnings about duplicate autoprefixer instances. My initial &lt;code&gt;postcss.config.js&lt;/code&gt; was using &lt;code&gt;postcss-cssnext&lt;/code&gt;, which is now deprecated. After some experiments and research, I updated the config to use &lt;code&gt;postcss-preset-env&lt;/code&gt;—a modern alternative that handles vendor prefixes efficiently.&lt;/p&gt;
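&lt;p&gt;For reference, the replacement is tiny; a minimal sketch of the updated config, assuming default options:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;// postcss.config.js: postcss-preset-env replaces the deprecated postcss-cssnext
module.exports = {
  plugins: [require("postcss-preset-env")()],
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;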




&lt;h2&gt;
  
  
  Lessons Learned
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Upgrade in Small Steps:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Tackling dependency conflicts one by one and verifying functionality helps isolate problems.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Read Plugin Changelogs:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Understanding what’s changed in plugin APIs (like the migration from &lt;code&gt;fluid&lt;/code&gt;/&lt;code&gt;fixed&lt;/code&gt; to &lt;code&gt;gatsbyImageData&lt;/code&gt;) is crucial.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Be Ready to Reconfigure:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Sometimes it’s necessary to remove old plugins (like &lt;code&gt;gatsby-plugin-styled-jsx&lt;/code&gt;) and configure Babel or PostCSS directly to maintain compatibility with the latest React and Gatsby versions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Test Thoroughly:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Both in development mode and via production builds (&lt;code&gt;gatsby build&lt;/code&gt; and &lt;code&gt;gatsby serve&lt;/code&gt;), to ensure SSR and dynamic behaviors work as expected.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;Upgrading to Gatsby V5 not only improved performance and introduced new features; it also forced me to re-examine and modernize the entire toolchain—from image handling and styling to PostCSS configurations.&lt;/p&gt;

&lt;p&gt;If you’re planning a similar upgrade, take your time, tackle one dependency at a time, and don’t hesitate to experiment with configurations until everything clicks.&lt;/p&gt;

&lt;p&gt;Prepare to do some A/B testing by having a develop branch and a production branch. This way, you can compare the performance and functionality of your old setup with the new one.&lt;/p&gt;

&lt;p&gt;Happy coding, and enjoy your faster, modernized Gatsby blog! 💜&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.gatsbyjs.com/docs/reference/release-notes/migrating-from-v4-to-v5/" rel="noopener noreferrer"&gt;Gatsby V5 Migration Guide&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;Feel free to leave a comment if you have any questions or if you’d like to share your upgrade experiences!&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>gatsby</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Streaming APIs with FastAPI and Next.js — Part 2</title>
      <dc:creator>Sahan</dc:creator>
      <pubDate>Mon, 31 Mar 2025 08:10:00 +0000</pubDate>
      <link>https://dev.to/sahan/streaming-apis-with-fastapi-and-nextjs-part-2-2jof</link>
      <guid>https://dev.to/sahan/streaming-apis-with-fastapi-and-nextjs-part-2-2jof</guid>
      <description>&lt;p&gt;In &lt;a href="https://sahansera.dev/streaming-apis-python-nextjs-part1" rel="noopener noreferrer"&gt;Part 1&lt;/a&gt;, we explored how to stream data into a React component using modern browser APIs. Now it’s time to build the other half: the &lt;strong&gt;FastAPI backend&lt;/strong&gt; that makes it all work.&lt;/p&gt;

&lt;p&gt;In this post, we’ll walk through setting up a streaming endpoint using FastAPI and discuss how chunked transfer encoding works on the server side.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;Code Repository&lt;/strong&gt; : The complete code is available on &lt;a href="https://github.com/sahansera/streaming-apis" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;. You can clone it and run it locally to follow along.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  🛠️ Building the Streaming Backend with FastAPI
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwibei3s0difnbdlijti3.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwibei3s0difnbdlijti3.jpg" alt="How the data will flow from Backend to the Frontend" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;FastAPI makes it really easy to return a streaming response using its &lt;a href="https://fastapi.tiangolo.com/advanced/custom-response/#streamingresponse" rel="noopener noreferrer"&gt;&lt;code&gt;StreamingResponse&lt;/code&gt;&lt;/a&gt; class from &lt;code&gt;starlette.responses&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Let’s build a &lt;code&gt;/stream&lt;/code&gt; endpoint in &lt;a href="https://github.com/sahansera/streaming-apis/blob/main/backend/api/index.py" rel="noopener noreferrer"&gt;&lt;code&gt;index.py&lt;/code&gt;&lt;/a&gt; that simulates real-time data like server logs or chat messages.&lt;/p&gt;

&lt;h3&gt;
  
  
  🚀 Simulating a Real-Time Log Stream
&lt;/h3&gt;

&lt;p&gt;Here’s a minimal FastAPI app with a streaming endpoint:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# backend/index.py
&lt;/span&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;typing&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Any&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Generator&lt;/span&gt;

&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;fastapi&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;FastAPI&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;fastapi.responses&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;StreamingResponse&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;fastapi.middleware.cors&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;CORSMiddleware&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;time&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;uvicorn&lt;/span&gt; &lt;span class="c1"&gt;# Import uvicorn for running the server
&lt;/span&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;threading&lt;/span&gt;

&lt;span class="n"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;FastAPI&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="c1"&gt;# Add CORS middleware to allow cross-origin requests
# ...
&lt;/span&gt;
&lt;span class="n"&gt;LOG_FILE_PATH&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;logs/server.log&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;log_stream&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;log_file_path&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;Generator&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt;
    &lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nf"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;log_file_path&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;r&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;log_file&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="c1"&gt;# Move to the end of the file
&lt;/span&gt;            &lt;span class="n"&gt;_&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;log_file&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;seek&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;SEEK_END&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="k"&gt;while&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
                &lt;span class="n"&gt;line&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;log_file&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;readline&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
                &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;line&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
                    &lt;span class="k"&gt;yield&lt;/span&gt; &lt;span class="n"&gt;line&lt;/span&gt;
                &lt;span class="k"&gt;else&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
                    &lt;span class="k"&gt;yield&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Waiting for new log entries...&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="c1"&gt;# Heartbeat message
&lt;/span&gt;                    &lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sleep&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;# Wait for new lines to be written
&lt;/span&gt;    &lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="nb"&gt;FileNotFoundError&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;yield&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Log file not found.&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="nb"&gt;Exception&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;yield&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Error reading log file: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nf"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;simulate_log_generation&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Simulate log entries being written to the log file.&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="k"&gt;while&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nf"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;LOG_FILE_PATH&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;a&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;log_file&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;log_file&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;write&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Simulated log entry at &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ctime&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sleep&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;# Write a new log entry every 5 seconds
&lt;/span&gt;
&lt;span class="nd"&gt;@app.on_event&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;startup&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;start_log_simulation&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Start the log simulation in a background thread.&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="n"&gt;threading&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Thread&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;target&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;simulate_log_generation&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;daemon&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;start&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="nd"&gt;@app.get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/stream&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;stream&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nc"&gt;StreamingResponse&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;log_stream&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;LOG_FILE_PATH&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="n"&gt;media_type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;text/plain&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  🧠 How It Works
&lt;/h2&gt;

&lt;p&gt;Let’s break it down:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. &lt;strong&gt;Synchronous Generator&lt;/strong&gt;
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;log_stream&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;log_file_path&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;Generator&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt;
    &lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nf"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;log_file_path&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;r&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;log_file&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;_&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;log_file&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;seek&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;SEEK_END&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="k"&gt;while&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
                &lt;span class="n"&gt;line&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;log_file&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;readline&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
                &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;line&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
                    &lt;span class="k"&gt;yield&lt;/span&gt; &lt;span class="n"&gt;line&lt;/span&gt;
                &lt;span class="k"&gt;else&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
                    &lt;span class="k"&gt;yield&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Waiting for new log entries...&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
                    &lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sleep&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="nb"&gt;FileNotFoundError&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;yield&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Log file not found.&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="nb"&gt;Exception&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;yield&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Error reading log file: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nf"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We define a synchronous generator function that yields data one chunk at a time. Each &lt;code&gt;yield&lt;/code&gt; becomes a chunk sent to the client. The &lt;code&gt;time.sleep(1)&lt;/code&gt; simulates delay between events (like logs being written).&lt;/p&gt;

&lt;h3&gt;
  
  
  2. &lt;strong&gt;StreamingResponse&lt;/strong&gt;
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="nc"&gt;StreamingResponse&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;log_stream&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;LOG_FILE_PATH&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="n"&gt;media_type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;text/plain&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;StreamingResponse&lt;/code&gt; tells FastAPI to send the data as it becomes available, rather than waiting for the entire response to be generated. The &lt;code&gt;media_type&lt;/code&gt; is optional but helps inform the browser how to handle the content.&lt;/p&gt;
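&lt;p&gt;One design note: &lt;code&gt;StreamingResponse&lt;/code&gt; also accepts async generators, which let the event loop serve other requests between chunks instead of blocking in &lt;code&gt;time.sleep&lt;/code&gt;. A standalone sketch (not part of the repo’s code):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import asyncio

async def async_log_stream():
    # Async variant of log_stream: await instead of blocking the worker
    for i in range(3):
        yield f"entry {i}\n"
        await asyncio.sleep(0)  # hand control back to the event loop

# Inside the app this would be returned the same way:
# return StreamingResponse(async_log_stream(), media_type="text/plain")

async def main():
    async for chunk in async_log_stream():
        print(chunk, end="")

asyncio.run(main())
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;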

&lt;h2&gt;
  
  
  ⚙️ Running the Server
&lt;/h2&gt;

&lt;p&gt;To run the server:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;make start-backend
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then hit:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;http://localhost:8000/stream
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.sahansera.dev%2Fstatic%2Fe8560d80474db7fc252b8f2e7cd3d05f%2F5a190%2Fstreaming-apis-python-nextjs-part2-2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.sahansera.dev%2Fstatic%2Fe8560d80474db7fc252b8f2e7cd3d05f%2F5a190%2Fstreaming-apis-python-nextjs-part2-2.png" title="streaming-apis-python-nextjs-part2-2.png" alt="streaming-apis-python-nextjs-part2-2.png" width="800" height="411"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You should see logs appear one line at a time in the terminal or browser, depending on how you call the endpoint.&lt;/p&gt;




&lt;h2&gt;
  
  
  💡 Tips for Production
&lt;/h2&gt;

&lt;h3&gt;
  
  
  ✅ &lt;strong&gt;Keep the Stream Alive&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;In a real app, your data stream might be long-running, with idle stretches where no new data arrives. Our example already keeps the connection alive with a heartbeat:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;line&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="k"&gt;yield&lt;/span&gt; &lt;span class="n"&gt;line&lt;/span&gt;
&lt;span class="k"&gt;else&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="k"&gt;yield&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Waiting for new log entries...&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="c1"&gt;# Heartbeat message
&lt;/span&gt;    &lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sleep&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;# Wait for new lines to be written
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This ensures the client knows the connection is still alive even when there’s no new data.&lt;/p&gt;

&lt;h3&gt;
  
  
  🧹 &lt;strong&gt;Handle Disconnects Gracefully&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;If the client disconnects, your generator should stop yielding. Starlette handles this internally, but you can wrap the stream in a &lt;code&gt;try/except&lt;/code&gt; block to catch cancellations if needed.&lt;/p&gt;
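One way to sketch that: when the client goes away, Starlette closes the generator, which raises `GeneratorExit` at the suspended `yield`. Catching it (and re-raising) gives you a hook for cleanup. The file path and the log message here are illustrative:

```python
import time


def log_stream(path: str):
    f = open(path, "r")
    try:
        while True:
            line = f.readline()
            if line:
                yield line
            else:
                yield "Waiting for new log entries...\n"  # heartbeat
                time.sleep(1)
    except GeneratorExit:
        # Raised at the suspended yield when the stream is closed,
        # e.g. after a client disconnect. Re-raise after any logging.
        print("client disconnected, stopping stream")
        raise
    finally:
        f.close()  # always release the file handle
```

The `finally` block runs regardless of whether the stream ended normally or was cancelled, so resources are released either way.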

&lt;h3&gt;
  
  
  🔐 &lt;strong&gt;Secure Your Stream&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Add authentication if you’re streaming sensitive data.&lt;/li&gt;
&lt;li&gt;Rate-limit the endpoint to avoid abuse.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  🧪 Testing with Curl
&lt;/h2&gt;

&lt;p&gt;You can test the streaming endpoint with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl http://localhost:8000/stream
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see each message appear line by line, every second. If the output looks buffered, add curl’s &lt;code&gt;-N&lt;/code&gt; (&lt;code&gt;--no-buffer&lt;/code&gt;) flag to disable output buffering.&lt;/p&gt;




&lt;h2&gt;
  
  
  🔗 Hooking it up with the Frontend
&lt;/h2&gt;

&lt;p&gt;Now that our backend is streaming correctly, the frontend from Part 1 will handle it smoothly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;http://localhost:8000/stream&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;reader&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getReader&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="c1"&gt;// ...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As each &lt;code&gt;yield&lt;/code&gt; in the backend emits data, your React UI will update in near real-time. ✨&lt;/p&gt;

&lt;h2&gt;
  
  
  🧠 Recap
&lt;/h2&gt;

&lt;p&gt;In this post, we built a real-time streaming API using FastAPI with just a few lines of code. Here’s what we did:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Built a synchronous generator to simulate streaming logs&lt;/li&gt;
&lt;li&gt;Used &lt;code&gt;StreamingResponse&lt;/code&gt; to stream text over HTTP&lt;/li&gt;
&lt;li&gt;Connected it to our frontend from Part 1&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Together, the backend and frontend make a simple but powerful full-stack streaming system.&lt;/p&gt;

&lt;h2&gt;
  
  
  🧱 Next Steps
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;🔌 Add dynamic data (e.g. logs from a file or DB)&lt;/li&gt;
&lt;li&gt;📦 Stream structured data (like JSON Lines)&lt;/li&gt;
&lt;li&gt;📈 Use this setup for real-time dashboards or log viewers&lt;/li&gt;
&lt;/ul&gt;
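For the structured-data idea above, one common pattern is JSON Lines: each chunk is a complete JSON object followed by a newline, so the client can split on <code>\n</code> and parse line by line as chunks arrive. A minimal sketch of such a generator (the event fields are made up):

```python
import json
import time


def event_stream():
    # Each yielded chunk is one complete JSON object plus a newline,
    # so a partial parse is never needed on the client side.
    for i in range(3):
        event = {"id": i, "level": "info", "message": f"event {i}"}
        yield json.dumps(event) + "\n"
        time.sleep(0.1)


# In FastAPI this could be returned as:
# StreamingResponse(event_stream(), media_type="application/x-ndjson")
```

The `application/x-ndjson` media type is a common convention for newline-delimited JSON streams.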

&lt;h2&gt;
  
  
  🛠️ Useful Links
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://fastapi.tiangolo.com/advanced/custom-response/#streamingresponse" rel="noopener noreferrer"&gt;FastAPI: StreamingResponse&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.python.org/3/glossary.html#term-generator" rel="noopener noreferrer"&gt;Python Generators&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.uvicorn.org/" rel="noopener noreferrer"&gt;Uvicorn Docs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/sahansera/streaming-apis" rel="noopener noreferrer"&gt;GitHub Repo&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;If you enjoyed this series, feel free to star the repo ⭐ or share it with a friend. Got ideas or feedback? Hit me up on &lt;a href="https://twitter.com/_sahansera" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt; or drop an issue in the GitHub repo.&lt;/p&gt;

&lt;p&gt;Happy streaming! 🚀&lt;/p&gt;

</description>
      <category>python</category>
      <category>backend</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Streaming APIs with FastAPI and Next.js — Part 1</title>
      <dc:creator>Sahan</dc:creator>
      <pubDate>Sun, 30 Mar 2025 09:00:00 +0000</pubDate>
      <link>https://dev.to/sahan/streaming-apis-with-fastapi-and-nextjs-part-1-3ndj</link>
      <guid>https://dev.to/sahan/streaming-apis-with-fastapi-and-nextjs-part-1-3ndj</guid>
      <description>&lt;p&gt;Streaming data in the browser is one of those things that feels magical the first time you see it: data appears live — no need to wait for the full response to load. In this two-part series, we’ll walk through building a small full-stack app that uses &lt;strong&gt;FastAPI&lt;/strong&gt; to stream data and a &lt;strong&gt;Next.js&lt;/strong&gt; frontend to consume and render it in real-time.&lt;/p&gt;

&lt;p&gt;There are many ways to stream data in the browser, from WebSockets to Server-Sent Events (SSE). But in this post, we’ll focus on a lesser-known method: &lt;a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/Transfer-Encoding" rel="noopener noreferrer"&gt;chunked transfer encoding&lt;/a&gt;. This technique allows the server to send data in small, manageable chunks, which the browser can process as they arrive.&lt;/p&gt;

&lt;p&gt;This post focuses on the &lt;strong&gt;frontend&lt;/strong&gt; bit. In Part 2, we’ll dive into building the &lt;strong&gt;FastAPI backend&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  🔧 The Setup
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fox66rxuf1tmjli33aipy.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fox66rxuf1tmjli33aipy.jpg" alt="How the data will flow from Backend to the Frontend" width="800" height="493"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s say you have a streaming API running locally at:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;http://localhost:8000/stream
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This endpoint sends back a stream of text data — think server logs, chat messages, or real-time updates. The goal is to connect to this endpoint from a React component and display data &lt;strong&gt;as it arrives&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Here’s the React component (&lt;code&gt;stream.tsx&lt;/code&gt;) that handles this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;useEffect&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;useState&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;react&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;StreamPage&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;dataChunks&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;setDataChunks&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;useState&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;([]);&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;isLoading&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;setIsLoading&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;useState&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;setError&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;useState&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="kr"&gt;string&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="nf"&gt;useEffect&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;fetchStream&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;http://localhost:8000/stream&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

        &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ok&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`HTTP error! Status: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;

        &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Response body is null&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;

        &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;reader&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getReader&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
        &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;decoder&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;TextDecoder&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;utf-8&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

        &lt;span class="k"&gt;while &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;value&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;done&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;reader&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;read&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
          &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;done&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;break&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

          &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;chunk&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;decoder&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;decode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;value&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;stream&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
          &lt;span class="nf"&gt;setDataChunks&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;prev&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;[...&lt;/span&gt;&lt;span class="nx"&gt;prev&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;chunk&lt;/span&gt;&lt;span class="p"&gt;]);&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;

        &lt;span class="nf"&gt;setIsLoading&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nf"&gt;setError&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt; &lt;span class="k"&gt;instanceof&lt;/span&gt; &lt;span class="nb"&gt;Error&lt;/span&gt; &lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
        &lt;span class="nf"&gt;setIsLoading&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="nf"&gt;fetchStream&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="p"&gt;[]);&lt;/span&gt;

  &lt;span class="k"&gt;return &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;div&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;h1&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="nx"&gt;Streaming&lt;/span&gt; &lt;span class="nx"&gt;Data&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/h1&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;      &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;p&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="nb"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/p&amp;gt;&lt;/span&gt;&lt;span class="err"&gt;}
&lt;/span&gt;      &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;isLoading&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nx"&gt;dataChunks&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;p&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="nx"&gt;Loading&lt;/span&gt; &lt;span class="nx"&gt;stream&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/p&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;      &lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;pre&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;dataChunks&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;""&lt;/span&gt;&lt;span class="p"&gt;)}&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/pre&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;      &lt;span class="p"&gt;)}&lt;/span&gt;
    &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/div&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;  &lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  🧠 What’s Really Happening?
&lt;/h2&gt;

&lt;p&gt;Let’s break this down and understand the key concepts behind streaming in the browser:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. &lt;strong&gt;Fetching the Stream&lt;/strong&gt;
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;http://localhost:8000/stream&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Unlike regular &lt;code&gt;fetch()&lt;/code&gt; calls that wait for the entire response before handing it over, this gives you access to the &lt;strong&gt;ReadableStream&lt;/strong&gt; — letting you process data chunk-by-chunk.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. &lt;strong&gt;Getting a Reader&lt;/strong&gt;
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;reader&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getReader&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This gives us a &lt;code&gt;ReadableStreamDefaultReader&lt;/code&gt;, which lets us manually pull chunks of data from the response. This is part of the &lt;a href="https://developer.mozilla.org/en-US/docs/Web/API/Streams_API" rel="noopener noreferrer"&gt;Streams API&lt;/a&gt;, now supported in all major browsers.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. &lt;strong&gt;Reading Chunks&lt;/strong&gt;
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;value&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;done&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;reader&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;read&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;value&lt;/code&gt;: a &lt;code&gt;Uint8Array&lt;/code&gt; representing a chunk of binary data.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;done&lt;/code&gt;: &lt;code&gt;true&lt;/code&gt; when the stream is finished.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We loop until &lt;code&gt;done&lt;/code&gt; becomes &lt;code&gt;true&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. &lt;strong&gt;Decoding the Text&lt;/strong&gt;
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;decoder&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;TextDecoder&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;utf-8&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;chunk&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;decoder&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;decode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;value&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;stream&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Streaming responses may split characters across chunks, especially for multi-byte encodings like UTF-8. &lt;code&gt;TextDecoder&lt;/code&gt; handles this for us, making sure our text is correctly reconstructed.&lt;/p&gt;
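The same splitting problem can be reproduced on the backend side with Python's incremental UTF-8 decoder, which mirrors what `TextDecoder`'s `stream: true` does in the browser. This is an illustration of the concept, not part of the app:

```python
import codecs

# "✅" is three bytes in UTF-8; a stream may split it across two chunks.
data = "log ✅\n".encode("utf-8")
chunk1, chunk2 = data[:5], data[5:]  # split mid-character

# A naive chunk1.decode("utf-8") would raise UnicodeDecodeError here.
decoder = codecs.getincrementaldecoder("utf-8")()
part1 = decoder.decode(chunk1)              # holds back the incomplete bytes
part2 = decoder.decode(chunk2, final=True)  # completes the character
assert part1 + part2 == "log ✅\n"
```

The incremental decoder buffers the trailing incomplete byte sequence until the next chunk arrives, exactly as the browser's streaming decoder does.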

&lt;h3&gt;
  
  
  5. &lt;strong&gt;Updating the UI&lt;/strong&gt;
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="nf"&gt;setDataChunks&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;prev&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;[...&lt;/span&gt;&lt;span class="nx"&gt;prev&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;chunk&lt;/span&gt;&lt;span class="p"&gt;]);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We append each new chunk to our state array. React re-renders the component with every update, giving us that real-time feel.&lt;/p&gt;

&lt;h3&gt;
  
  
  6. &lt;strong&gt;One-Time Effect&lt;/strong&gt;
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="nf"&gt;useEffect&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="p"&gt;...&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="p"&gt;[]);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The fetch logic runs once when the component mounts. Perfect for one-time side effects like opening a connection.&lt;/p&gt;




&lt;h2&gt;
  
  
  ✅ Recap
&lt;/h2&gt;

&lt;p&gt;With just a few lines of code, we’ve created a streaming experience in the browser using modern Web APIs. The key things were:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;fetch()&lt;/code&gt; with a streaming response&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ReadableStream&lt;/code&gt; + &lt;code&gt;TextDecoder&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Updating React state to progressively display data&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In Part 2, we’ll build the &lt;strong&gt;FastAPI backend&lt;/strong&gt; to power this stream — including how to set up a streaming route and control flush timing for chunked responses.&lt;/p&gt;




&lt;h2&gt;
  
  
  💡 Gotchas to Watch Out For
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;CORS&lt;/strong&gt; : Make sure your FastAPI server has CORS enabled if frontend and backend are on different ports/domains.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Buffering&lt;/strong&gt; : Some servers (or even browsers like Safari) buffer streaming responses — so test in Chrome/Edge for best results.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cleanup&lt;/strong&gt; : If you’re adding WebSocket or long-running fetches, remember to cancel or clean them up in &lt;code&gt;useEffect&lt;/code&gt; cleanup.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  📚 References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://developer.mozilla.org/en-US/docs/Web/API/Streams_API?utm_source=sahansera.dev" rel="noopener noreferrer"&gt;Streams API&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://developer.mozilla.org/en-US/docs/Web/API/TextDecoder?utm_source=sahansera.dev" rel="noopener noreferrer"&gt;TextDecoder&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://developer.mozilla.org/en-US/docs/Web/API/ReadableStream?utm_source=sahansera.dev" rel="noopener noreferrer"&gt;ReadableStream&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://fastapi.tiangolo.com/advanced/custom-response/#streamingresponse?utm_source=sahansera.dev" rel="noopener noreferrer"&gt;Streaming Response&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>nextjs</category>
      <category>tutorial</category>
      <category>python</category>
    </item>
    <item>
      <title>Basics of Apache Kafka - An Overview</title>
      <dc:creator>Sahan</dc:creator>
      <pubDate>Tue, 03 Jan 2023 16:01:00 +0000</pubDate>
      <link>https://dev.to/sahan/basics-of-apache-kafka-an-overview-25b</link>
      <guid>https://dev.to/sahan/basics-of-apache-kafka-an-overview-25b</guid>
      <description>&lt;p&gt;A few months ago I wrote an article on &lt;a href="https://sahansera.dev/setting-up-kafka-locally-for-testing/"&gt;setting up a local Apache Kafka instance&lt;/a&gt;. In this article, I will briefly introduce Apache Kafka and some of its core concepts. We will look at the these concepts from a high level and understand how they are put together.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Kafka?
&lt;/h2&gt;

&lt;p&gt;Apache Kafka is an open-source distributed event streaming platform for building real-time data pipelines and streaming applications. It is designed to handle high volumes of data and enable real-time processing of data streams.&lt;/p&gt;

&lt;p&gt;Kafka is typically used in scenarios where data needs to be processed in real-time, such as in real-time analytics, event-driven architectures, and other streaming applications. It can be integrated with various systems, such as databases, storage systems, and stream processing frameworks, to support a wide range of use cases.&lt;/p&gt;

&lt;h2&gt;
  
  
  Event Streaming vs Message Queues
&lt;/h2&gt;

&lt;p&gt;I have seen people use Kafka interchangeably with message queues. However, they are two different things: Kafka is a distributed event streaming platform, whereas RabbitMQ, Amazon SQS, and Azure Queue Storage are message queuing systems.&lt;/p&gt;

&lt;p&gt;I suppose the confusion stems mainly from the fact that the feature sets of these systems increasingly overlap, so the line between them is getting blurred. Still, it is advisable to understand the trade-offs among these systems and pick the one that suits you best without overcomplicating the overall solution.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Kafka?
&lt;/h2&gt;

&lt;p&gt;There are several reasons why Apache Kafka is a popular choice for building real-time data pipelines and streaming applications:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Scalability: Kafka is designed to handle high volumes of data and to scale horizontally, making it suitable for handling large amounts of data.&lt;/li&gt;
&lt;li&gt;Real-time processing: Kafka enables real-time processing of data streams, allowing applications to react to new data as soon as it arrives.&lt;/li&gt;
&lt;li&gt;Fault-tolerant: Kafka is fault-tolerant and highly available, meaning it can continue operating even if some of its components fail.&lt;/li&gt;
&lt;li&gt;Flexibility: Kafka can be integrated with many systems and frameworks, making it a versatile platform for building streaming applications.&lt;/li&gt;
&lt;li&gt;High performance: Kafka has been designed to provide high throughput and low latency, making it suitable for applications that require fast processing of data streams.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In a nutshell, imagine you have the following tightly coupled spaghetti of systems (deployed internally within your organization or as a public-facing app):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Ou5K1nfk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://sahansera.dev/static/0e8d501da99f5781f4298dcea9c16834/4b190/basics-of-apache-kafka-an-overview-1.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Ou5K1nfk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://sahansera.dev/static/0e8d501da99f5781f4298dcea9c16834/4b190/basics-of-apache-kafka-an-overview-1.jpg" alt="basics-of-apache-kafka-an-overview-1" title="basics-of-apache-kafka-an-overview-1" width="800" height="310"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For instance, you could imagine these services as separate microservices, or even monoliths that need to defer their workloads to other systems. As we can see, services 1 - 5 each talk to services A - E. This becomes extremely difficult to maintain and leads to massive tech debt.&lt;/p&gt;

&lt;p&gt;Following is a refactored communication flow utilizing Kafka. Your line-of-business applications or microservices can be added or removed as you go, and Kafka provides a consistent API for both producers and consumers. But could this become a single point of failure? What if Kafka goes down? Can it even go down? The upcoming sections explain why that’s not the case.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LzE2XL3t--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://sahansera.dev/static/566369b9507dbd2d5dec218b28ac4c22/4b190/basics-of-apache-kafka-an-overview-2.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LzE2XL3t--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://sahansera.dev/static/566369b9507dbd2d5dec218b28ac4c22/4b190/basics-of-apache-kafka-an-overview-2.jpg" alt="basics-of-apache-kafka-an-overview-2.jpg" title="basics-of-apache-kafka-an-overview-2.jpg" width="800" height="323"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Putting everything together
&lt;/h2&gt;

&lt;p&gt;Below is a diagram that I’ve created to show you a typical request flow in Kafka. The numbers correspond to the subheadings below, where I will describe each concept in more detail. In each section, I will zoom in on a particular concept to provide the relevant context.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6YviQv3U--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://sahansera.dev/static/944a6b539b82f36993d8a14d04886d20/4b190/basics-of-apache-kafka-an-overview-3.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6YviQv3U--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://sahansera.dev/static/944a6b539b82f36993d8a14d04886d20/4b190/basics-of-apache-kafka-an-overview-3.jpg" alt="basics-of-apache-kafka-an-overview-3.jpg" title="basics-of-apache-kafka-an-overview-3.jpg" width="800" height="381"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Before we jump in, I need to mention that Kafka exposes client APIs for writing and reading messages. You can use your programming language of choice to interact with the cluster through these APIs.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Producers
&lt;/h2&gt;

&lt;p&gt;Producers write data (or events) to topics, and consumers read data from topics. A producer could be anything: an IoT sensor, an activity tracking system, a frontend server sending metrics, and so on. You can specify a key/ID in your message to make sure messages with that key always end up in the same partition. More on these in the Events and Topics and Partitions sections below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--te1tfaTD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://sahansera.dev/static/3239f05bb5137045d21af46069eab3d0/4b190/basics-of-apache-kafka-an-overview-4.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--te1tfaTD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://sahansera.dev/static/3239f05bb5137045d21af46069eab3d0/4b190/basics-of-apache-kafka-an-overview-4.jpg" alt="basics-of-apache-kafka-an-overview-4.jpg" title="basics-of-apache-kafka-an-overview-4.jpg" width="800" height="377"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Other than that, it’s worth noting that producers can choose between three send methods (fire-and-forget, synchronous send &amp;amp; asynchronous send) and can also wait for acknowledgements (none, leader only, or all replicas).&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Events
&lt;/h2&gt;

&lt;p&gt;An event in Apache Kafka is a record of an occurrence or state change in a system. Sometimes it’s also called a &lt;strong&gt;record&lt;/strong&gt; or a &lt;strong&gt;message&lt;/strong&gt;. This could be a user interaction, a change in data, or a notification from another system. Events are stored in Kafka topics and are typically published by producers and consumed by consumers. Events in Kafka are &lt;strong&gt;immutable&lt;/strong&gt;, meaning they cannot be changed once they have been published.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Data formats&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Kafka sees messages as just byte arrays. However, we humans need a way to serialize and deserialize this data. Therefore, you could use a &lt;strong&gt;schema&lt;/strong&gt; for it, such as JSON, Protocol Buffers, or Apache Avro (widely used in the Hadoop ecosystem).&lt;/p&gt;

&lt;p&gt;What’s important is that you use a consistent data format across your applications.&lt;/p&gt;

&lt;p&gt;Here is an example of a Kafka event with a key and value that are both strings:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Key: "user_id"
Value: "12345"
Timestamp: "20 Dec 2022"
Headers: {}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  3. Topics and Partitions
&lt;/h2&gt;

&lt;p&gt;In Kafka, a &lt;strong&gt;topic&lt;/strong&gt; is a named category that separates the messages coming in from different systems. Each topic is partitioned, meaning that the data in a topic is split into multiple &lt;strong&gt;partitions&lt;/strong&gt; distributed across the Kafka cluster. This is what allows Kafka to scale, and replicating partitions (as we’ll see) provides redundancy.&lt;/p&gt;

&lt;p&gt;If a given topic is partitioned across multiple nodes (i.e. servers), how does Kafka know which message goes to which partition while preserving the order? This is where the &lt;strong&gt;key&lt;/strong&gt; of a message comes in handy. Kafka runs the key through a hash function and mods (%) the result by the number of partitions, so it knows where to write that message.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XI-kFyRc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://sahansera.dev/static/a23099ad33bcabace5c63b8058975177/4b190/basics-of-apache-kafka-an-overview-5.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XI-kFyRc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://sahansera.dev/static/a23099ad33bcabace5c63b8058975177/4b190/basics-of-apache-kafka-an-overview-5.jpg" alt="basics-of-apache-kafka-an-overview-5.jpg" title="basics-of-apache-kafka-an-overview-5.jpg" width="800" height="718"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What if a message doesn’t have a key (key being NULL)? Well, in that case, Kafka defaults to a Round-Robin approach to write the messages to the partitions.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 Messages with the same key go to the same partition thus guaranteeing the order.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;It’s also worth mentioning that a partition is not strictly tied to a single server. They can also scale out horizontally depending on your requirements.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Brokers&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you start working with Kafka, you’ll come across the term brokers. A broker is just a computer, server, container, or cloud instance that’s running a &lt;strong&gt;Kafka broker process&lt;/strong&gt;. So what do they do? They are responsible for managing the partitions that we talked about in the previous section, and they handle read/write requests.&lt;/p&gt;

&lt;p&gt;Remember I mentioned that a partition can be replicated across multiple nodes? Brokers manage that replication too. But there has to be some way to orchestrate this replication, right? Kafka follows a leader-follower approach: generally speaking, reads and writes happen at the leader, and then the leader and followers coordinate among themselves to get the replication done.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Consumers &amp;amp; Consumer Groups
&lt;/h2&gt;

&lt;p&gt;The “consumer” is a more involved concept in Kafka. Consumers are applications that &lt;strong&gt;subscribe&lt;/strong&gt; to a set of Kafka topics and read from topic partitions. Consumers track their offsets as they progress through topic partitions and periodically &lt;strong&gt;commit offsets&lt;/strong&gt; to Kafka, which ensures that they can resume from the last processed offset after a restart. These offsets are incremental, so the brokers know which messages to send when the consumer requests more data.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--wWLVXKmO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://sahansera.dev/static/330c7c4ed2dcc03c5a58c14eb146ef97/4b190/basics-of-apache-kafka-an-overview-6.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wWLVXKmO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://sahansera.dev/static/330c7c4ed2dcc03c5a58c14eb146ef97/4b190/basics-of-apache-kafka-an-overview-6.jpg" alt="basics-of-apache-kafka-an-overview-6.jpg" title="basics-of-apache-kafka-an-overview-6.jpg" width="800" height="805"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As shown above, the same consumer can also listen to multiple topics.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 What sets Kafka apart from traditional Message Queues is that reading a message does not destroy it.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Why is Kafka fast?
&lt;/h2&gt;

&lt;p&gt;Apache Kafka achieves its performance through a combination of sequential disk I/O and aggressive use of memory. Messages are appended sequentially to a log structure, and Kafka leans heavily on the operating system’s page cache, so recently written data is usually served straight from memory.&lt;/p&gt;

&lt;p&gt;As the amount of data in a Kafka cluster grows beyond what fits in memory, older data is read from disk, and Kafka can scale horizontally by adding more brokers to the cluster, each contributing additional memory and disk capacity. By combining sequential disk I/O with the page cache, Kafka achieves high scalability while still providing fast access to data streams.&lt;/p&gt;

&lt;h2&gt;
  
  
  How do I build a Kafka cluster?
&lt;/h2&gt;

&lt;p&gt;If your organization needs to deploy its own cluster in a data center, you can do that; Kafka’s deployment targets range from bare-metal servers to cloud providers. If you are rolling out your own, be aware that it’s not easy to set up and it &lt;strong&gt;will&lt;/strong&gt; take time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Locally (Development)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Using Docker is the easiest I’ve found. Follow this &lt;a href="https://sahansera.dev/setting-up-kafka-locally-for-testing/"&gt;guide&lt;/a&gt; to understand how you can do that.&lt;/li&gt;
&lt;li&gt;Or you could YOLO and &lt;a href="https://kafka.apache.org/quickstart"&gt;install the binaries&lt;/a&gt; yourself 😀&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cloud (Production)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For your production needs, your best bet would be a managed offering:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.confluent.io/confluent-cloud/?utm_medium=sem&amp;amp;utm_source=google&amp;amp;utm_campaign=ch.sem_br.brand_tp.prs_tgt.confluent-brand_mt.mbm_rgn.apac_lng.eng_dv.all_con.confluent-cloud&amp;amp;utm_term=%2Bconfluent%20%2Bcloud&amp;amp;creative=&amp;amp;device=c&amp;amp;placement=&amp;amp;gclid=CjwKCAiAkfucBhBBEiwAFjbkr2K1kTxef5V7TbtFMCskzax9yIBTsKrmZkSWlc0gT-uEH61Tf7RWRhoCriAQAvD_BwE"&gt;Confluent Cloud&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://learn.microsoft.com/en-us/azure/hdinsight/kafka/apache-kafka-introduction"&gt;Kafka on Azure&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/msk/"&gt;Amazon Managed Streaming for Kafka (MSK)&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Different cloud providers might charge you differently. It’s always wise to consider the pricing options before locking yourself into a specific cloud provider.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this article we looked at Apache Kafka, an open-source distributed event streaming platform for building real-time data pipelines and streaming applications. We went over the main concepts, which should be enough to get you started. I’m planning to write an article series on implementing producers and consumers in .NET, Go &amp;amp; Python. So stay tuned! Until next time 👋&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://kafka.apache.org/documentation/#intro_topics"&gt;Intro Topics&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://kafka.apache.org/33/documentation/streams/core-concepts"&gt;Core Concepts&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>kafka</category>
      <category>tutorial</category>
      <category>eventdriven</category>
    </item>
    <item>
      <title>Serverless Go with Azure Functions and GitHub Actions</title>
      <dc:creator>Sahan</dc:creator>
      <pubDate>Sun, 11 Sep 2022 14:55:00 +0000</pubDate>
      <link>https://dev.to/sahan/serverless-go-with-azure-functions-and-github-actions-4j9k</link>
      <guid>https://dev.to/sahan/serverless-go-with-azure-functions-and-github-actions-4j9k</guid>
      <description>&lt;p&gt;In this article, we are going to learn how to host a serverless Go web service on Azure Functions, leveraging custom handlers. We will also automate the deployments by using GitHub Actions 🚀 You can even apply the same concepts to other languages, such as Rust, as long as you can produce a self-contained binary.&lt;/p&gt;

&lt;h2&gt;
  
  
  Walkthrough video
&lt;/h2&gt;

&lt;p&gt;If you prefer to watch a video, I have created a walkthrough video on my YouTube channel 😊&lt;/p&gt;



&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/0FqD8LTjHbg"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Azure Functions is Microsoft’s offering for serverless computing that enables you to run code on-demand without having to explicitly provision or manage infrastructure. Azure Functions is a great way to run your code in response to events, such as HTTP requests, timers, or messages from Azure services.&lt;/p&gt;

&lt;p&gt;As of today, they support multiple &lt;a href="https://docs.microsoft.com/en-us/azure/azure-functions/supported-languages"&gt;runtimes&lt;/a&gt; such as .NET, Java, JavaScript, Python, TypeScript. But what if we want to write our app in Go? Well, now you can do that too - by using Custom Handlers.&lt;/p&gt;

&lt;p&gt;Before we move on: what really is a custom handler and how does it work? Custom handlers let your Function app accept events (e.g. HTTP requests) from the &lt;a href="https://github.com/Azure/azure-functions-host"&gt;Function host&lt;/a&gt; (the process that powers your Function apps), as long as your chosen language supports HTTP primitives.&lt;/p&gt;

&lt;p&gt;Here’s a great overview from Microsoft on how this is achieved.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tE2acCFs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://sahansera.dev/static/ea7a2923719d4c72709b548b69167ecf/5a190/serverless-go-with-azure-functions-github-actions-1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tE2acCFs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://sahansera.dev/static/ea7a2923719d4c72709b548b69167ecf/5a190/serverless-go-with-azure-functions-github-actions-1.png" alt="serverless-go-with-azure-functions-github-actions-1" title="serverless-go-with-azure-functions-github-actions-1" width="800" height="538"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Source: &lt;a href="https://docs.microsoft.com/en-us/azure/azure-functions/functions-custom-handlers"&gt;Microsoft Docs&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;So, in our case, we are going to wrap a Go binary as a Function app and deploy it to Azure. Sounds good? Let’s jump right into it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Make sure you have the following setup locally.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://go.dev/dl/"&gt;Go SDK&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/azure-functions/functions-run-local?tabs=v4%2Cmacos%2Ccsharp%2Cportal%2Cbash#install-the-azure-functions-core-tools"&gt;Azure Functions Core Tools&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Preferably, the Azure Functions &lt;a href="https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions"&gt;VS Code extension&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The plan
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Clone the repo or create one&lt;/li&gt;
&lt;li&gt;Create the Azure function resources&lt;/li&gt;
&lt;li&gt;Test out the app&lt;/li&gt;
&lt;li&gt;Deploy to Azure&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  1. Clone the repo or create one
&lt;/h2&gt;

&lt;p&gt;Our code is going to be pretty simple. All it does is return a random programming quote whenever we make a request.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 You can clone the repo I have created from &lt;a href="https://github.com/sahansera/go-azure-functions"&gt;here&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;package&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;

&lt;span class="c"&gt;// Removed for brevity&lt;/span&gt;

&lt;span class="k"&gt;var&lt;/span&gt; &lt;span class="n"&gt;quotes&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;&lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="s"&gt;"Talk is cheap. Show me the code."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="s"&gt;"First, solve the problem. Then, write the code."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="s"&gt;"Experience is the name everyone gives to their mistakes."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="s"&gt;"Any fool can write code that a computer can understand. Good programmers write code that humans can understand."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;quotesHandler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;w&lt;/span&gt; &lt;span class="n"&gt;http&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ResponseWriter&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;r&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;http&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Request&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c"&gt;// Get a random quote&lt;/span&gt;
    &lt;span class="n"&gt;message&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;quotes&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;rand&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Intn&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;quotes&lt;/span&gt;&lt;span class="p"&gt;))]&lt;/span&gt;

    &lt;span class="c"&gt;// Write the response&lt;/span&gt;
    &lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Fprint&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;w&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;listenAddr&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="s"&gt;":8080"&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;val&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ok&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;LookupEnv&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"FUNCTIONS_CUSTOMHANDLER_PORT"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="n"&gt;ok&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;listenAddr&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;":"&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;val&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="n"&gt;http&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;HandleFunc&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"/api/GetQuotes"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;quotesHandler&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Printf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"About to listen on %s. Go to https://127.0.0.1%s/"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;listenAddr&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;listenAddr&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Fatal&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;http&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ListenAndServe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;listenAddr&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Explanation&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We set port &lt;code&gt;8080&lt;/code&gt; as the default port for our app. When running in Azure Functions, however, a different port is assigned, so we check whether the &lt;code&gt;FUNCTIONS_CUSTOMHANDLER_PORT&lt;/code&gt; environment variable is set and, if it is, use that port instead.&lt;/li&gt;
&lt;li&gt;We are registering a handler for the &lt;code&gt;/api/GetQuotes&lt;/code&gt; endpoint. This is the endpoint that we will be using to make requests to our app.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;quotesHandler&lt;/code&gt; is a simple function that returns a random quote from the &lt;code&gt;quotes&lt;/code&gt; array.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  2. Create the Azure function resources
&lt;/h2&gt;

&lt;p&gt;You can create your own Azure Function from &lt;a href="http://portal.azure.com"&gt;portal.azure.com&lt;/a&gt; or using ARM templates. Below is the configuration I chose.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Publish: Code&lt;/li&gt;
&lt;li&gt;Runtime Stack: Custom Handler&lt;/li&gt;
&lt;li&gt;Operating System: Linux&lt;/li&gt;
&lt;li&gt;Plan type: Consumption (Serverless)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  3. Test out the app
&lt;/h2&gt;

&lt;p&gt;If you have cloned the project you should see a folder structure similar to what’s shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--iRQVBVQR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://sahansera.dev/static/e359e32c161b4ba7a9ff08f5e10a60e1/9f21b/serverless-go-with-azure-functions-github-actions-2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--iRQVBVQR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://sahansera.dev/static/e359e32c161b4ba7a9ff08f5e10a60e1/9f21b/serverless-go-with-azure-functions-github-actions-2.png" alt="serverless-go-with-azure-functions-github-actions-2.png" title="serverless-go-with-azure-functions-github-actions-2.png" width="706" height="522"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At the root of the project folder let’s run the following commands.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;go build handler.go &lt;span class="c"&gt;# To build a binary&lt;/span&gt;
func start &lt;span class="c"&gt;# Start the Function app service&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see output like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--T1QvQ0uQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://sahansera.dev/static/ff84465da4d2aa34c75ab1f599764e54/5a190/serverless-go-with-azure-functions-github-actions-3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--T1QvQ0uQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://sahansera.dev/static/ff84465da4d2aa34c75ab1f599764e54/5a190/serverless-go-with-azure-functions-github-actions-3.png" alt="serverless-go-with-azure-functions-github-actions-3.png" title="serverless-go-with-azure-functions-github-actions-3.png" width="800" height="598"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s run through each file now.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/sahansera/go-azure-functions/blob/main/GetQuotes/function.json"&gt;GetQuotes/function.json&lt;/a&gt;: This file defines what happens when a request comes in and what should go out from the function. These are known as &lt;strong&gt;&lt;em&gt;bindings.&lt;/em&gt;&lt;/strong&gt; Our function is triggered by HTTP requests and we will return a response&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/sahansera/go-azure-functions/blob/main/handler.go"&gt;handler.go&lt;/a&gt;: This is our Go web service where our main logic lives in. We listen on port 8080 and expose an HTTP endpoint called &lt;code&gt;/api/GetQuotes&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/sahansera/go-azure-functions/blob/main/host.json"&gt;host.json&lt;/a&gt;: Take note under &lt;code&gt;customHandler.description.defaultExecutablePath&lt;/code&gt; is set to handler which says where to find the compiled binary of our Go app and &lt;code&gt;enableForwardingHttpRequest&lt;/code&gt; where we tell the function host to forward our traffic&lt;/li&gt;
&lt;/ul&gt;
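&lt;p&gt;For reference, a minimal &lt;code&gt;host.json&lt;/code&gt; for a custom handler looks along these lines (check the repo for the exact file):&lt;/p&gt;

```json
{
  "version": "2.0",
  "customHandler": {
    "description": {
      "defaultExecutablePath": "handler"
    },
    "enableForwardingHttpRequest": true
  }
}
```

With &lt;code&gt;enableForwardingHttpRequest&lt;/code&gt; set to &lt;code&gt;true&lt;/code&gt;, the host forwards the raw HTTP request to the handler, which is why the plain &lt;code&gt;net/http&lt;/code&gt; server above works unchanged.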

&lt;h2&gt;
  
  
  4. Deploy to Azure with GitHub Actions
&lt;/h2&gt;

&lt;p&gt;Now that we have everything ready to go let’s deploy this to Azure! 🚀  To get this done, we are going to use the &lt;a href="https://github.com/marketplace/actions/azure-functions-action"&gt;Azure Functions Action&lt;/a&gt; from the GH Actions Marketplace.&lt;/p&gt;

&lt;p&gt;From a high-level this is what we need to do in order to deploy this.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Authenticate with Azure&lt;/li&gt;
&lt;li&gt;Build the project&lt;/li&gt;
&lt;li&gt;Deploy&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Authenticate with Azure&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Since our app is written in Go, which is not supported out of the box, we won’t be able to use the publish profile method for this. Instead, we are going to use an Azure service principal for RBAC.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 Remember to follow the steps according to &lt;a href="https://github.com/marketplace/actions/azure-functions-action#using-azure-service-principal-for-rbac-as-deployment-credential"&gt;this guide&lt;/a&gt; to create an SP.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ol&gt;
&lt;li&gt;Once you have created the service principal, add it as a secret in your repo so that it can be used to authenticate with Azure Resource Manager during the deployment step. Head over to your repo → Settings → Secrets → Actions&lt;/li&gt;
&lt;li&gt;Create a secret called &lt;code&gt;AZURE_RBAC_CREDENTIALS&lt;/code&gt; and set its content to the output you got when you created the service principal.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Now that the credentials are in place, we can refer to them from the GitHub Actions workflow file.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Build the project&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This step is pretty straightforward as we are using the Go SDK to build the project with &lt;code&gt;GOOS=linux GOARCH=amd64&lt;/code&gt; config.&lt;/p&gt;

&lt;p&gt;This is what the final workflow file should look like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/sahansera/go-azure-functions/blob/main/.github/workflows/deploy.yml"&gt;deploy.yml&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;CI/CD&lt;/span&gt;

&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;push&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;branches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;main"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
  &lt;span class="na"&gt;pull_request&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;branches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;main"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;

&lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;AZURE_FUNCTIONAPP_NAME&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;azgofuncapp&lt;/span&gt; &lt;span class="c1"&gt;# set this to your application's name&lt;/span&gt;

&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;

  &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v3&lt;/span&gt;

    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Login&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;via&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Azure&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;CLI'&lt;/span&gt;
      &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;azure/login@v1&lt;/span&gt;
      &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;creds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.AZURE_RBAC_CREDENTIALS }}&lt;/span&gt;

    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Set&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;up&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Go'&lt;/span&gt;
      &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/setup-go@v3&lt;/span&gt;
      &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;go-version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1.18&lt;/span&gt;

    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Build&lt;/span&gt;
      &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;GOOS=linux GOARCH=amd64 go build handler.go&lt;/span&gt;

    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Deploy&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;to&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Azure'&lt;/span&gt;
      &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Azure/functions-action@v1&lt;/span&gt;
      &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app-name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ env.AZURE_FUNCTIONAPP_NAME }}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once everything is done, head over to the following URL and refresh a couple of times to see our programming quotes! 😀&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;https://&amp;lt;your_app_URL&amp;gt;.azurewebsites.net/api/GetQuotes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here’s an example.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kT_1uRL6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://sahansera.dev/static/3319e08116dcac501fe741acd7da9027/5a190/serverless-go-with-azure-functions-github-actions-4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kT_1uRL6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://sahansera.dev/static/3319e08116dcac501fe741acd7da9027/5a190/serverless-go-with-azure-functions-github-actions-4.png" alt="serverless-go-with-azure-functions-github-actions-4.png" title="serverless-go-with-azure-functions-github-actions-4.png" width="800" height="230"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Well, that’s it, folks! Today we built a small Go web service, wrapped it in an Azure Function, and deployed it to Azure using GitHub Actions!&lt;/p&gt;

&lt;h2&gt;
  
  
  Troubleshooting
&lt;/h2&gt;

&lt;p&gt;I ran into a couple of issues while creating this project.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Are you getting the &lt;code&gt;Value cannot be null. (Parameter 'provider')&lt;/code&gt; error? &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I was able to resolve it by using exactly the following &lt;code&gt;host.json&lt;/code&gt; config.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2.0"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"logging"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="err"&gt;...&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"customHandler"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"description"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"defaultExecutablePath"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"handler"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"workingDirectory"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;""&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"arguments"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"enableForwardingHttpRequest"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;Are you getting the &lt;code&gt;Azure Functions Runtime is unreachable&lt;/code&gt; error?&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For me, this went away when I did the first deployment. If it still doesn’t go away, check out this &lt;a href="https://docs.microsoft.com/en-us/azure/azure-functions/functions-recover-storage-account"&gt;link&lt;/a&gt; for more info.&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/azure-functions/create-first-function-vs-code-other?tabs=go%2Cmacos"&gt;https://docs.microsoft.com/en-us/azure/azure-functions/create-first-function-vs-code-other?tabs=go%2Cmacos&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/azure-functions/event-driven-scaling"&gt;https://docs.microsoft.com/en-us/azure/azure-functions/event-driven-scaling&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/Azure/azure-functions-dotnet-worker/issues/810"&gt;https://github.com/Azure/azure-functions-dotnet-worker/issues/810&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/marketplace/actions/azure-functions-action#using-azure-service-principal-for-rbac-as-deployment-credential"&gt;https://github.com/marketplace/actions/azure-functions-action#using-azure-service-principal-for-rbac-as-deployment-credential&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>azure</category>
      <category>go</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Deploying a .NET gRPC Server on Azure App Service</title>
      <dc:creator>Sahan</dc:creator>
      <pubDate>Mon, 05 Sep 2022 03:59:00 +0000</pubDate>
      <link>https://dev.to/sahan/deploying-a-net-grpc-server-on-azure-app-service-3877</link>
      <guid>https://dev.to/sahan/deploying-a-net-grpc-server-on-azure-app-service-3877</guid>
<description>&lt;p&gt;In this article we are going to deploy a gRPC web service written in .NET to an Azure App Service. We will then interact with it, and also automate the deployment of the Azure resources and the build.&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;A few months ago I wrote a blog series on creating a simple gRPC server and client locally. This time we are going to take the next step by deploying it to the cloud.&lt;/p&gt;

&lt;p&gt;If you missed the series you can get yourself up to speed with the below links 🚀&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://sahansera.dev/introduction-to-grpc/" rel="noopener noreferrer"&gt;Introduction to gRPC&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://sahansera.dev/building-grpc-server-dotnet/" rel="noopener noreferrer"&gt;Building a gRPC Server in .NET&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://sahansera.dev/building-grpc-client-dotnet" rel="noopener noreferrer"&gt;Building a gRPC Client in .NET&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Walkthrough video
&lt;/h2&gt;

&lt;p&gt;If you’d like to watch a video walkthrough instead of reading this article, you can follow along on my YouTube channel too 😊&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/jm6aeRmTKCU"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  The Plan
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Clone the repo.&lt;/li&gt;
&lt;li&gt;Create an App Service instance on Azure and deploy it (automated).&lt;/li&gt;
&lt;li&gt;Get ready for deployment.&lt;/li&gt;
&lt;li&gt;Deploy the BookshopServer code.&lt;/li&gt;
&lt;li&gt;Test the web service.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  1. Clone the repo (or build one)
&lt;/h2&gt;

&lt;p&gt;Since we already have an app handy, we are going to reuse it. More specifically, we are going to use the exact code we looked at in the &lt;a href="https://sahansera.dev/building-grpc-server-dotnet/" rel="noopener noreferrer"&gt;Building a gRPC Server in .NET&lt;/a&gt; blog post.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 The code can be found under &lt;a href="https://github.com/sahansera/dotnet-grpc/tree/main/BookshopServer" rel="noopener noreferrer"&gt;this repo&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;We are going to be using the &lt;code&gt;BookshopServer&lt;/code&gt; as our web service and &lt;code&gt;BookshopClient&lt;/code&gt; as the client to interact with it.&lt;/p&gt;

&lt;p&gt;Once you have it cloned, you can switch to the &lt;code&gt;app-service&lt;/code&gt; branch with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git checkout app-service
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  2. Create an App Service instance in Azure
&lt;/h2&gt;

&lt;p&gt;You can head over to &lt;a href="http://portal.azure.com" rel="noopener noreferrer"&gt;portal.azure.com&lt;/a&gt; and create the App Service instance manually, or you can use the ARM template below.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 To save you some time and trouble, I’d stay away from the F1 tier - it was unreliable for me, and I had a hard time deploying my code on it. I went ahead with B1 (you get 30 days free on Linux). It costs a little, but it’s heaps more reliable for dev/testing work.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  ARM template for easy deployment
&lt;/h3&gt;

&lt;p&gt;If you haven’t got the &lt;code&gt;az&lt;/code&gt; CLI, head over &lt;a href="https://docs.microsoft.com/en-us/cli/azure/install-azure-cli" rel="noopener noreferrer"&gt;here&lt;/a&gt; to get it set up. Once installed, run &lt;code&gt;az login&lt;/code&gt; and select the account you want to use for deployments.&lt;/p&gt;

&lt;p&gt;I have created a shell script to automate all the steps. Here’s how to invoke it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;BookshopServer/Infrastructure/
./deploy.sh net6-grpc australiasoutheast template.json parameters.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The parameters to pass in are as follows:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Resource group name (eg: &lt;code&gt;net6-grpc&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Region you want to create the resources in (eg: &lt;code&gt;australiasoutheast&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Name of the ARM template file (eg: &lt;code&gt;template.json&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Name of the parameters file (eg: &lt;code&gt;parameters.json&lt;/code&gt;)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If you are unsure, you can see what’s what just by running &lt;code&gt;./deploy.sh&lt;/code&gt; with no arguments. Once deployed, you should see something like the below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsahansera.dev%2Fstatic%2F4dc6c05a87143db4ea8494b0adac1220%2F5a190%2Fdeploying-dotnet-grpc-service-azure-app-services.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsahansera.dev%2Fstatic%2F4dc6c05a87143db4ea8494b0adac1220%2F5a190%2Fdeploying-dotnet-grpc-service-azure-app-services.png" title="deploying-dotnet-grpc-service-azure-app-services" alt="deploying-dotnet-grpc-service-azure-app-services"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  App Service configuration
&lt;/h3&gt;

&lt;p&gt;The ARM template takes care of most of the configuration for us. However, there’s one more setting we need to enable, called &lt;code&gt;HTTP 2.0 Proxy&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;This is what your configuration should look like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsahansera.dev%2Fstatic%2F4151149b65a157ba10003618ae74e0a5%2F5a190%2Fdeploying-dotnet-grpc-service-azure-app-services-2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsahansera.dev%2Fstatic%2F4151149b65a157ba10003618ae74e0a5%2F5a190%2Fdeploying-dotnet-grpc-service-azure-app-services-2.png" title="deploying-dotnet-grpc-service-azure-app-services-2" alt="deploying-dotnet-grpc-service-azure-app-services-2"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Getting ready for deployment
&lt;/h2&gt;

&lt;p&gt;Before we do any deployment, there are a couple of things we need to change in our code. These are already done in my repo, but for the curious:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/sahansera/dotnet-grpc/blob/d931d19cbe5a91e12cc8380e0b93f646afe7ce64/BookshopServer/Program.cs#L7" rel="noopener noreferrer"&gt;BookshopServer/Program.cs&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Add/update the configuration to tell Kestrel to listen on all IPs on port &lt;code&gt;8080&lt;/code&gt;, and also on port &lt;code&gt;8585&lt;/code&gt; with HTTP/2 only.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="c1"&gt;// ...&lt;/span&gt;
&lt;span class="n"&gt;builder&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;WebHost&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ConfigureKestrel&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;options&lt;/span&gt; &lt;span class="p"&gt;=&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="n"&gt;options&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ListenAnyIP&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;8080&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="n"&gt;options&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ListenAnyIP&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;8585&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;listenOptions&lt;/span&gt; &lt;span class="p"&gt;=&amp;gt;&lt;/span&gt; 
  &lt;span class="p"&gt;{&lt;/span&gt; 
      &lt;span class="n"&gt;listenOptions&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Protocols&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;Microsoft&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;AspNetCore&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Server&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Kestrel&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Core&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;HttpProtocols&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Http2&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; 
  &lt;span class="p"&gt;});&lt;/span&gt; 
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="c1"&gt;// ...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that we have this config, we no longer need the Kestrel section in &lt;code&gt;appsettings.json&lt;/code&gt;. Let’s remove the following lines.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/sahansera/dotnet-grpc/blob/91df33edc33b8bc32cfe22338a6cdf337f6896c4/BookshopServer/appsettings.json#L9" rel="noopener noreferrer"&gt;BookshopServer/appSettings.json&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="err"&gt;...&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Delete&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;the&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;following&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;lines&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="nl"&gt;"Kestrel"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"EndpointDefaults"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Protocols"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Http2"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;...&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  4. Deploying the BookshopServer code
&lt;/h2&gt;

&lt;p&gt;Let’s go ahead and deploy our code! 🚀 There are many ways to do this. If you are setting it up for testing, you can simply deploy from your local Git repo. To keep the scope of this blog post small, I’m not going to cover &lt;a href="https://docs.microsoft.com/en-us/azure/app-service/deploy-continuous-deployment?tabs=github" rel="noopener noreferrer"&gt;other continuous deployment methods&lt;/a&gt;. Let me know if you’d like me to cover them too!&lt;/p&gt;

&lt;p&gt;You can follow &lt;a href="https://docs.microsoft.com/en-us/azure/app-service/deploy-local-git?tabs=cli#deploy-the-web-app" rel="noopener noreferrer"&gt;these steps&lt;/a&gt; from the official docs to quickly deploy from your local machine. There is some configuration you need to do first.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Go to your &lt;strong&gt;&lt;em&gt;App Service&lt;/em&gt;&lt;/strong&gt; instance &amp;gt; click &lt;strong&gt;&lt;em&gt;Deployment Center&lt;/em&gt;&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Select Local Git from the &lt;strong&gt;&lt;em&gt;Source&lt;/em&gt;&lt;/strong&gt; dropdown&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;&lt;em&gt;Local Git/FTPS Credentials&lt;/em&gt;&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Set a username and a password under &lt;strong&gt;&lt;em&gt;User Scope&lt;/em&gt;&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Hit &lt;strong&gt;&lt;em&gt;Save&lt;/em&gt;&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;(Important) Go back to the app service &lt;strong&gt;&lt;em&gt;Overview&lt;/em&gt;&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Copy the URL under &lt;strong&gt;&lt;em&gt;Git Clone URL&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Go to your &lt;strong&gt;&lt;em&gt;local repo&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Do a &lt;code&gt;git remote add azure &amp;lt;Git URL you copied before&amp;gt;&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Deploy by pushing your local branch: &lt;code&gt;git push azure &amp;lt;local branch name&amp;gt;:master&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This might take a couple of minutes for the initial deployment. After the last step you should be able to see a log like so:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsahansera.dev%2Fstatic%2Fc02f5886c75a167c394229ab831c19a5%2F5a190%2Fdeploying-dotnet-grpc-service-azure-app-services-4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsahansera.dev%2Fstatic%2Fc02f5886c75a167c394229ab831c19a5%2F5a190%2Fdeploying-dotnet-grpc-service-azure-app-services-4.png" title="deploying-dotnet-grpc-service-azure-app-services-4.png" alt="deploying-dotnet-grpc-service-azure-app-services-4.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Although it may seem a little daunting at first, this is a one-off configuration. For any subsequent code changes, you just commit to your local branch and do a &lt;code&gt;git push&lt;/code&gt; like we saw in step 10 above.&lt;/p&gt;

&lt;h2&gt;
  
  
  5. Testing the web service
&lt;/h2&gt;

&lt;p&gt;Now to the interesting part. Let’s see how we can interact with our web service 🤔&lt;/p&gt;

&lt;p&gt;For some reason I couldn’t get &lt;code&gt;grpcurl&lt;/code&gt; to work. Thankfully, we already have the BookshopClient project - so let’s use that!&lt;/p&gt;

&lt;p&gt;When you open up the project, make sure to update the URL to point to your App Service - and that’s all you need to do!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/sahansera/dotnet-grpc/blob/91df33edc33b8bc32cfe22338a6cdf337f6896c4/BookshopClient/Program.cs#L5" rel="noopener noreferrer"&gt;BookshopClient/Program.cs&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="k"&gt;using&lt;/span&gt; &lt;span class="nn"&gt;var&lt;/span&gt; &lt;span class="n"&gt;channel&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;GrpcChannel&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ForAddress&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"https://your_app_service.azurewebsites.net/"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;(Make sure you add the trailing &lt;code&gt;/&lt;/code&gt; at the end of the URL.)&lt;/p&gt;

&lt;p&gt;Finally, let’s run the project.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dotnet run --project BookshopClient
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That should return back the list of books as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsahansera.dev%2Fstatic%2Fab62dde7919f63e6f612d1f07a7f437e%2F5a190%2Fdeploying-dotnet-grpc-service-azure-app-services-5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsahansera.dev%2Fstatic%2Fab62dde7919f63e6f612d1f07a7f437e%2F5a190%2Fdeploying-dotnet-grpc-service-azure-app-services-5.png" title="deploying-dotnet-grpc-service-azure-app-services-5.png" alt="deploying-dotnet-grpc-service-azure-app-services-5.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That’s it! Now you know how to deploy a gRPC .NET web service on Azure 🎉&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this article we looked at how we can leverage Azure App Service to deploy a gRPC server and interact with it. Until next time 👋&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://github.com/Azure/app-service-linux-docs/blob/master/HowTo/gRPC/use_gRPC_with_dotnet.md" rel="noopener noreferrer"&gt;https://github.com/Azure/app-service-linux-docs/blob/master/HowTo/gRPC/use_gRPC_with_dotnet.md&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/app-service/deploy-local-git?tabs=cli" rel="noopener noreferrer"&gt;https://docs.microsoft.com/en-us/azure/app-service/deploy-local-git?tabs=cli&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>dotnet</category>
      <category>azure</category>
      <category>grpc</category>
    </item>
    <item>
      <title>Setting up a local Apache Kafka instance for testing</title>
      <dc:creator>Sahan</dc:creator>
      <pubDate>Thu, 01 Sep 2022 00:00:00 +0000</pubDate>
      <link>https://dev.to/sahan/setting-up-a-local-apache-kafka-instance-for-testing-ibf</link>
      <guid>https://dev.to/sahan/setting-up-a-local-apache-kafka-instance-for-testing-ibf</guid>
<description>&lt;p&gt;I recently started digging into Apache Kafka for a project I’m working on at work. The more I looked into it, the more I realized I needed a local Kafka cluster to play around with. This article walks you through that process and hopefully saves you some time 😄.&lt;/p&gt;

&lt;p&gt;If you don’t know what Kafka is, &lt;a href="https://www.youtube.com/watch?v=FKgi3n-FyNU&amp;amp;ab_channel=Confluent" rel="noopener noreferrer"&gt;here’s a great video&lt;/a&gt; from Confluent.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 You can follow along by cloning the repo I have created over &lt;a href="https://github.com/sahansera/kafka-docker" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Installing the binaries
&lt;/h2&gt;

&lt;p&gt;There’s a quickstart &lt;a href="https://kafka.apache.org/quickstart" rel="noopener noreferrer"&gt;guide&lt;/a&gt; over at the official docs. To be honest, I didn’t want to install anything locally, in order to keep the Kafka instance fairly separate from my dev environment, so I went with the Docker approach, which you will find below.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Docker way
&lt;/h2&gt;

&lt;p&gt;This is by far the easiest approach I have found, and it saves you a ton of time. Once you have cloned the repo, just spin it up!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker-compose up &lt;span class="nt"&gt;-d&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To see what’s going on here, let’s take a look at the &lt;code&gt;docker-compose.yaml&lt;/code&gt; file.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/sahansera/kafka-docker/blob/main/docker-compose.yaml" rel="noopener noreferrer"&gt;docker-compose.yaml&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;3'&lt;/span&gt;
&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;zookeeper&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;confluentinc/cp-zookeeper:7.0.1&lt;/span&gt;
    &lt;span class="na"&gt;container_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;zookeeper&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;ZOOKEEPER_CLIENT_PORT&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2181&lt;/span&gt;
      &lt;span class="na"&gt;ZOOKEEPER_TICK_TIME&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2000&lt;/span&gt;

  &lt;span class="na"&gt;broker&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;confluentinc/cp-kafka:7.0.1&lt;/span&gt;
    &lt;span class="na"&gt;container_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;broker&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="c1"&gt;# To learn about configuring Kafka for access across networks see&lt;/span&gt;
    &lt;span class="c1"&gt;# https://www.confluent.io/blog/kafka-client-cannot-connect-to-broker-on-aws-on-docker-etc/&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;9092:9092"&lt;/span&gt;
    &lt;span class="na"&gt;depends_on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;zookeeper&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;KAFKA_BROKER_ID&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
      &lt;span class="na"&gt;KAFKA_ZOOKEEPER_CONNECT&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;zookeeper:2181'&lt;/span&gt;
      &lt;span class="na"&gt;KAFKA_LISTENER_SECURITY_PROTOCOL_MAP&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PLAINTEXT:PLAINTEXT,PLAINTEXT_INTERNAL:PLAINTEXT&lt;/span&gt;
      &lt;span class="na"&gt;KAFKA_ADVERTISED_LISTENERS&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PLAINTEXT://localhost:9092,PLAINTEXT_INTERNAL://broker:29092&lt;/span&gt;
      &lt;span class="na"&gt;KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
      &lt;span class="na"&gt;KAFKA_TRANSACTION_STATE_LOG_MIN_ISR&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
      &lt;span class="na"&gt;KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;We spin up a &lt;a href="https://zookeeper.apache.org/" rel="noopener noreferrer"&gt;Zookeeper&lt;/a&gt; instance at port &lt;code&gt;2181&lt;/code&gt; internally within the Docker network (i.e. not accessible from the host)&lt;/li&gt;
&lt;li&gt;Next, we spin up the broker, i.e. the Kafka instance itself, which depends on Zookeeper. It is exposed at port &lt;code&gt;9092&lt;/code&gt; and accessible from the host.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Everything else is set to the defaults. If you want to learn what each environment variable does, head over &lt;a href="https://docs.confluent.io/platform/current/kafka/multi-node.html" rel="noopener noreferrer"&gt;here&lt;/a&gt;. If you want, you can add multiple brokers (Kafka instances) to the same docker-compose file.&lt;/p&gt;
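
&lt;p&gt;As a sketch, a second broker is just another service entry in the same file. The values below are illustrative rather than tested: each broker needs a unique &lt;code&gt;KAFKA_BROKER_ID&lt;/code&gt; and its own host port and advertised listeners, and depending on the image you may also need to set &lt;code&gt;KAFKA_LISTENERS&lt;/code&gt; so the broker binds the new port:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;  broker2:
    image: confluentinc/cp-kafka   # use the same image/tag as the first broker
    ports:
      - "9093:9093"
    depends_on:
      - zookeeper
    environment:
      KAFKA_BROKER_ID: 2           # must be unique within the cluster
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_INTERNAL:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9093,PLAINTEXT_INTERNAL://broker2:29093
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;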

&lt;p&gt;This approach is quite handy when you want to stand up a throwaway instance for quick testing, or even in a CI environment.&lt;/p&gt;
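
&lt;p&gt;To bring the whole stack up, save the file as &lt;code&gt;docker-compose.yml&lt;/code&gt; (the filename here is just the convention) and run the following from the same directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Start Zookeeper and the broker in the background
# (older installs use `docker-compose` instead of `docker compose`)
docker compose up -d

# Check that both containers are running
docker compose ps
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;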

&lt;h2&gt;
  
  
  Interacting with the cluster
&lt;/h2&gt;

&lt;p&gt;Before we publish anything, let’s spin up a consumer so that we know our setup is working. To do that, run the following command from the CLI.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;--interactive&lt;/span&gt; &lt;span class="nt"&gt;--tty&lt;/span&gt; broker &lt;span class="se"&gt;\&lt;/span&gt;
kafka-console-consumer &lt;span class="nt"&gt;--bootstrap-server&lt;/span&gt; localhost:9092 &lt;span class="se"&gt;\&lt;/span&gt;
                       &lt;span class="nt"&gt;--topic&lt;/span&gt; example-topic &lt;span class="se"&gt;\&lt;/span&gt;
                       &lt;span class="nt"&gt;--from-beginning&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now let’s create some messages and see them in action!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker exec --interactive --tty broker \
kafka-console-producer --bootstrap-server localhost:9092 \
                       --topic example-topic
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is what it looks like from the terminal: the &lt;code&gt;producer&lt;/code&gt; is on the left and the &lt;code&gt;consumer&lt;/code&gt; is on the right.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsahansera.dev%2Fstatic%2F018146d435429a3497576c80172e22e3%2F5a190%2Fsetting-up-kafka-locally-for-testing-1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsahansera.dev%2Fstatic%2F018146d435429a3497576c80172e22e3%2F5a190%2Fsetting-up-kafka-locally-for-testing-1.png" title="setting-up-kafka-locally-for-testing-1.png" alt="setting-up-kafka-locally-for-testing-1.png"&gt;&lt;/a&gt;&lt;/p&gt;
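
&lt;p&gt;You can also sanity-check the broker itself from the CLI. The Confluent images ship with the standard Kafka tools, so listing topics and describing ours looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# List all topics on the broker
docker exec --interactive --tty broker \
kafka-topics --bootstrap-server localhost:9092 --list

# Show partitions and replicas for our topic
docker exec --interactive --tty broker \
kafka-topics --bootstrap-server localhost:9092 \
             --describe --topic example-topic
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;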

&lt;h2&gt;
  
  
  A GUI perhaps?
&lt;/h2&gt;

&lt;p&gt;Not a fan of the CLI? There are many tools out there, but I wanted a nice, lightweight, open-source solution. I decided to go with &lt;a href="https://marketplace.visualstudio.com/items?itemName=jeppeandersen.vscode-kafka" rel="noopener noreferrer"&gt;Tools for Apache Kafka for VS Code&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Once you have it installed, you can interact with your cluster from the sidebar; look for the new icon with Kafka’s logo on it!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsahansera.dev%2Fstatic%2F5d851500362a638d86358d395c19a699%2F5a190%2Fsetting-up-kafka-locally-for-testing-2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsahansera.dev%2Fstatic%2F5d851500362a638d86358d395c19a699%2F5a190%2Fsetting-up-kafka-locally-for-testing-2.png" title="setting-up-kafka-locally-for-testing-2.png" alt="setting-up-kafka-locally-for-testing-2.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here’s what it looks like when it finds your instance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsahansera.dev%2Fstatic%2F1de368b211cb45a6fe43f6591f4dedd7%2F084e2%2Fsetting-up-kafka-locally-for-testing-3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsahansera.dev%2Fstatic%2F1de368b211cb45a6fe43f6591f4dedd7%2F084e2%2Fsetting-up-kafka-locally-for-testing-3.png" title="setting-up-kafka-locally-for-testing-3.png" alt="setting-up-kafka-locally-for-testing-3.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/sahansera/kafka-docker/blob/main/producer.kafka" rel="noopener noreferrer"&gt;producer.kafka&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="err"&gt;PRODUCER&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;keyed-message&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;topic:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;example-topic&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;key:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;mykeyq&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;record&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;content&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="err"&gt;###&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="err"&gt;PRODUCER&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;non-keyed-json-message&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;topic:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;example-topic&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"my_test_event-{{random.number}}"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://github.com/sahansera/kafka-docker/blob/main/consumer.kafka" rel="noopener noreferrer"&gt;consumer.kafka&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="err"&gt;CONSUMER&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;consumer-group-id&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;topic:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;example-topic&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;partitions:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;from:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can invoke the producers and consumers right within VS Code, which is pretty easy and cool!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsahansera.dev%2Fstatic%2Fe4fd10918691b2d4db95f2f18502b26f%2F5a190%2Fsetting-up-kafka-locally-for-testing-4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsahansera.dev%2Fstatic%2Fe4fd10918691b2d4db95f2f18502b26f%2F5a190%2Fsetting-up-kafka-locally-for-testing-4.png" title="setting-up-kafka-locally-for-testing-4" alt="setting-up-kafka-locally-for-testing-4"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Well, that’s it! Standing up a Docker container certainly saved me some time for testing purposes. I hope you enjoyed this post, and see you next time 👋&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://developer.confluent.io/quickstart/kafka-docker/" rel="noopener noreferrer"&gt;https://developer.confluent.io/quickstart/kafka-docker/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://marketplace.visualstudio.com/items?itemName=jeppeandersen.vscode-kafka" rel="noopener noreferrer"&gt;https://marketplace.visualstudio.com/items?itemName=jeppeandersen.vscode-kafka&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.confluent.io/platform/current/kafka/multi-node.html" rel="noopener noreferrer"&gt;https://docs.confluent.io/platform/current/kafka/multi-node.html&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>tutorial</category>
      <category>kafka</category>
      <category>docker</category>
    </item>
  </channel>
</rss>
