<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Buun ch.</title>
    <description>The latest articles on DEV Community by Buun ch. (@buun-ch).</description>
    <link>https://dev.to/buun-ch</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3444055%2F2958cb20-6782-4f90-87fb-643c15570c93.png</url>
      <title>DEV Community: Buun ch.</title>
      <link>https://dev.to/buun-ch</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/buun-ch"/>
    <language>en</language>
    <item>
      <title>JupyterHub on Kubernetes: Secure Notebook Secrets with Vault</title>
      <dc:creator>Buun ch.</dc:creator>
      <pubDate>Mon, 08 Sep 2025 11:05:58 +0000</pubDate>
      <link>https://dev.to/buun-ch/jupyterhub-on-kubernetes-secure-notebook-secrets-with-vault-kk5</link>
      <guid>https://dev.to/buun-ch/jupyterhub-on-kubernetes-secure-notebook-secrets-with-vault-kk5</guid>
      <description>&lt;p&gt;In this article, we set up a multi‑user JupyterHub on a Kubernetes home lab and make it practical for day‑to‑day work. We’ll install the chart with Helm (wrapped in Just recipes), enable user profiles and custom images, connect notebooks to in‑cluster services like PostgreSQL, and manage API keys directly from notebooks using Vault with a tiny Python helper. The end result is a self‑hosted notebook platform with single sign‑on, sensible defaults, and a clean developer experience.&lt;/p&gt;

&lt;p&gt;If you’ve followed the earlier posts in this series, you already have a k3s cluster, Keycloak for OIDC, Vault, and (optionally) Longhorn running. We’ll build on top of that foundation here.&lt;/p&gt;

&lt;p&gt;Repository: &lt;a href="https://github.com/buun-ch/buun-stack" rel="noopener noreferrer"&gt;https://github.com/buun-ch/buun-stack&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://zenn.dev/buun_ch/articles/a4ae089442e6a2" rel="noopener noreferrer"&gt;Japanese(日本語)&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/Iww53qTl9Ns"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

&lt;h2&gt;What JupyterHub is&lt;/h2&gt;

&lt;p&gt;JupyterHub is a multi‑user gateway for Jupyter. On Kubernetes, the hub spawns one pod per user, mounts a persistent volume for their files, and proxies traffic to each user server. Authentication is pluggable; in this setup we authenticate with Keycloak over OIDC. This model gives each person an isolated, reproducible environment while keeping administration centralized.&lt;/p&gt;

&lt;p&gt;The benefits for a small team or home lab are straightforward: there’s one place to sign in and manage access; users choose an environment that fits their work; and you keep data local with predictable performance and cost.&lt;/p&gt;

&lt;h2&gt;Installing JupyterHub with Helm&lt;/h2&gt;

&lt;p&gt;This repository ships a set of Just recipes that automate the JupyterHub installation.&lt;/p&gt;

&lt;p&gt;Clone the repo and enter the workspace:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
git clone https://github.com/buun-ch/buun-stack
&lt;span class="nb"&gt;cd &lt;/span&gt;buun-stack
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Make sure you have a &lt;code&gt;.env.local&lt;/code&gt; file with your Keycloak and Vault settings (see README.md), then run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Interactive install (prompts for host, optional NFS, Vault integration)&lt;/span&gt;
just jupyterhub::install
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;During installation you’ll be asked to confirm:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;JupyterHub host (FQDN) used for OAuth callbacks&lt;/li&gt;
&lt;li&gt;Whether to enable NFS PV (requires Longhorn); if yes, supply NFS IP and path&lt;/li&gt;
&lt;li&gt;Whether to enable Vault integration for notebook secrets&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you publish JupyterHub through Cloudflare Tunnel, add a public hostname entry.&lt;/p&gt;

&lt;p&gt;For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Subdomain: &lt;code&gt;jupyter&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Domain: &lt;code&gt;example.com&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Service: &lt;code&gt;https://localhost:443&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Advanced options: disable TLS verification&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Open the URL in your browser, sign in with Keycloak, and start a server to verify everything works.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feuhh2rl5pzsrb20j4wnk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feuhh2rl5pzsrb20j4wnk.png" alt="JupyterHub 1"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxbi4mbfgjomxx8dcxn0l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxbi4mbfgjomxx8dcxn0l.png" alt="JupyterHub 2"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0zbu5nfrjvn3i02n3v5j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0zbu5nfrjvn3i02n3v5j.png" alt="JupyterHub 3"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frm8wre6i7dr6ykmp4o5g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frm8wre6i7dr6ykmp4o5g.png" alt="JupyterHub 4"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Customization&lt;/h2&gt;

&lt;p&gt;Profiles let users pick an image and resource shape when starting their server. They’re defined in the Helm values and applied by KubeSpawner. By default, only the official “Jupyter Notebook Data Science Stack” is available, so the image puller can finish quickly. You can also enable a custom “Buun‑stack” image that includes additional libraries and the Vault integration used below.&lt;/p&gt;
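&lt;p&gt;Behind the scenes, the chart’s &lt;code&gt;singleuser.profileList&lt;/code&gt; values become KubeSpawner’s &lt;code&gt;profile_list&lt;/code&gt;. A minimal sketch of the equivalent Python configuration (the display names, images, and GPU override here are illustrative assumptions, not the repo’s actual values):&lt;/p&gt;

```python
# Hypothetical sketch of a KubeSpawner profile list, as it might appear
# in a plain jupyterhub_config.py (all values are illustrative).
profile_list = [
    {
        "display_name": "Jupyter Notebook Data Science Stack",
        "description": "Official datascience-notebook image",
        "default": True,
        "kubespawner_override": {
            "image": "quay.io/jupyter/datascience-notebook:latest",
        },
    },
    {
        "display_name": "Buun-stack (CUDA)",
        "description": "Custom kernel image with the Vault helper and GPU support",
        "kubespawner_override": {
            "image": "localhost:30500/buunstack-kernel:python-3.12-28",
            "extra_resource_limits": {"nvidia.com/gpu": "1"},
        },
    },
]

# Applied with: c.KubeSpawner.profile_list = profile_list
```

&lt;p&gt;The Helm chart renders this structure from values, so you normally edit &lt;code&gt;.env.local&lt;/code&gt; and the chart values rather than Python directly.&lt;/p&gt;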

&lt;p&gt;To turn on the Buun‑stack profile, enable it and optionally the CUDA variant in &lt;code&gt;.env.local&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Enable profiles (export or put these in .env.local)&lt;/span&gt;
&lt;span class="nv"&gt;JUPYTER_PROFILE_BUUN_STACK_ENABLED&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt;
&lt;span class="c"&gt;# Optional GPU profile&lt;/span&gt;
&lt;span class="nv"&gt;JUPYTER_PROFILE_BUUN_STACK_CUDA_ENABLED&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt;
&lt;span class="c"&gt;# Optional: turn off the default datascience image&lt;/span&gt;
&lt;span class="nv"&gt;JUPYTER_PROFILE_DATASCIENCE_ENABLED&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;false&lt;/span&gt;

&lt;span class="c"&gt;# Configure image registry and tag in .env.local as needed&lt;/span&gt;
&lt;span class="nv"&gt;IMAGE_REGISTRY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;localhost:30500
&lt;span class="nv"&gt;JUPYTER_PYTHON_KERNEL_TAG&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;python-3.12-28
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then build and push the images:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Build and push kernel images&lt;/span&gt;
just jupyterhub::build-kernel-images
just jupyterhub::push-kernel-images
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Buun‑stack Dockerfile bundles the Python components needed for Vault along with common data/ML packages, so users can start coding right away.&lt;/p&gt;

&lt;h3&gt;Optional: NFS‑backed storage with Longhorn&lt;/h3&gt;

&lt;p&gt;If you work with large or shared datasets, enable an NFS‑backed PersistentVolume via Longhorn and mount it into user servers. This keeps data local to your environment, reduces egress, and makes backups straightforward. You can preconfigure NFS settings and let the installer provision the PV/PVC:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;JUPYTERHUB_NFS_PV_ENABLED&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true
export &lt;/span&gt;&lt;span class="nv"&gt;JUPYTER_NFS_IP&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;192.168.10.1
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;JUPYTER_NFS_PATH&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/volume1/drive1/jupyter
just jupyterhub::install
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;Service Integration&lt;/h2&gt;

&lt;p&gt;Because JupyterHub runs inside the cluster, notebooks can reach services over Kubernetes DNS without port‑forwarding. For example, a PostgreSQL URL might be &lt;code&gt;postgresql://user:password@postgres-cluster-rw.postgres:5432/mydb&lt;/code&gt;. The environment variables &lt;code&gt;POSTGRES_HOST&lt;/code&gt; and &lt;code&gt;POSTGRES_PORT&lt;/code&gt; will be injected into the user server, allowing your notebook code to construct these URLs at runtime like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;

&lt;span class="n"&gt;pg_host&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getenv&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;POSTGRES_HOST&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;pg_port&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getenv&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;POSTGRES_PORT&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;pg_url&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;postgresql://user:password@&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;pg_host&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;:&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;pg_port&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;/mydb&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With DuckDB, you can attach Postgres and move data in a few lines of SQL, which is handy for ad‑hoc imports and quick queries.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="o"&gt;&amp;gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;duckdb&lt;/span&gt;

&lt;span class="o"&gt;&amp;gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;setup_duckdb_postgres&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
&lt;span class="p"&gt;...&lt;/span&gt;    &lt;span class="n"&gt;con&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;duckdb&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;connect&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="p"&gt;...&lt;/span&gt;    &lt;span class="n"&gt;con&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;execute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;INSTALL postgres&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;...&lt;/span&gt;    &lt;span class="n"&gt;con&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;execute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;LOAD postgres&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;...&lt;/span&gt;    &lt;span class="n"&gt;con&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;execute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ATTACH &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;pg_url&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt; AS pg (TYPE POSTGRES)&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;...&lt;/span&gt;    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;con&lt;/span&gt;

&lt;span class="o"&gt;&amp;gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;con&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;setup_duckdb_postgres&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="o"&gt;&amp;gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;con&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;execute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;
&lt;/span&gt;&lt;span class="gp"&gt;...&lt;/span&gt;     &lt;span class="n"&gt;CREATE&lt;/span&gt; &lt;span class="n"&gt;OR&lt;/span&gt; &lt;span class="n"&gt;REPLACE&lt;/span&gt; &lt;span class="n"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;pg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;athlete_events&lt;/span&gt; &lt;span class="n"&gt;AS&lt;/span&gt; 
&lt;span class="p"&gt;...&lt;/span&gt;     &lt;span class="n"&gt;SELECT&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;FROM&lt;/span&gt; &lt;span class="nf"&gt;read_csv_auto&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;{data_dir}/athlete_events.csv&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;...&lt;/span&gt; &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="o"&gt;&amp;gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;con&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;execute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;
&lt;/span&gt;&lt;span class="gp"&gt;...&lt;/span&gt;     &lt;span class="n"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;Sport&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;COUNT&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;count&lt;/span&gt;
&lt;span class="p"&gt;...&lt;/span&gt;     &lt;span class="n"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;pg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;athlete_events&lt;/span&gt; 
&lt;span class="p"&gt;...&lt;/span&gt;     &lt;span class="n"&gt;GROUP&lt;/span&gt; &lt;span class="n"&gt;BY&lt;/span&gt; &lt;span class="n"&gt;Sport&lt;/span&gt; 
&lt;span class="p"&gt;...&lt;/span&gt;     &lt;span class="n"&gt;ORDER&lt;/span&gt; &lt;span class="n"&gt;BY&lt;/span&gt; &lt;span class="n"&gt;count&lt;/span&gt; &lt;span class="n"&gt;DESC&lt;/span&gt; 
&lt;span class="p"&gt;...&lt;/span&gt;     &lt;span class="n"&gt;LIMIT&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt;
&lt;span class="p"&gt;...&lt;/span&gt; &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;df&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="n"&gt;Sport&lt;/span&gt;   &lt;span class="n"&gt;count&lt;/span&gt;
&lt;span class="o"&gt;--------------------&lt;/span&gt;
&lt;span class="mi"&gt;0&lt;/span&gt;   &lt;span class="n"&gt;Athletics&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;  &lt;span class="mi"&gt;38624&lt;/span&gt;
&lt;span class="mi"&gt;1&lt;/span&gt;   &lt;span class="n"&gt;Gymnastics&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt; &lt;span class="mi"&gt;26707&lt;/span&gt;
&lt;span class="mi"&gt;2&lt;/span&gt;   &lt;span class="n"&gt;Swimming&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;   &lt;span class="mi"&gt;23195&lt;/span&gt;
&lt;span class="mi"&gt;3&lt;/span&gt;   &lt;span class="n"&gt;Shooting&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;   &lt;span class="mi"&gt;11448&lt;/span&gt;
&lt;span class="mi"&gt;4&lt;/span&gt;   &lt;span class="n"&gt;Cycling&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;    &lt;span class="mi"&gt;10859&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Pairing JupyterHub with Kubernetes-internal services keeps latency low and configuration clean. The same pattern extends to other in‑cluster services: object storage, analytics APIs, or anything else you’ve deployed.&lt;/p&gt;

&lt;h2&gt;Secrets with Vault&lt;/h2&gt;

&lt;p&gt;Cloud notebooks like Colab provide a simple way to fetch secrets in code. On Google Colab, you can store and retrieve secrets with a built‑in helper:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Google Colab (example)
&lt;/span&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;google.colab&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;userdata&lt;/span&gt;
&lt;span class="n"&gt;openai_api_key&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;userdata&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;OPENAI_API_KEY&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With plain Jupyter, the common pattern is to paste secrets at runtime using getpass, which is manual and error‑prone:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Plain Jupyter (manual paste)
&lt;/span&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;getpass&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;getpass&lt;/span&gt;
&lt;span class="n"&gt;openai_api_key&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;getpass&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;OpenAI API key: &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We recreate the Colab‑style ergonomics in a self‑hosted way using Vault and a small Python class named &lt;code&gt;SecretStore&lt;/code&gt;, included in the Buun‑stack image. When your server starts, JupyterHub creates a per‑user policy and token in Vault, then injects &lt;code&gt;NOTEBOOK_VAULT_TOKEN&lt;/code&gt; and &lt;code&gt;VAULT_ADDR&lt;/code&gt; as environment variables. &lt;code&gt;SecretStore&lt;/code&gt; uses them under the hood and renews the token as needed during long sessions.&lt;/p&gt;

&lt;p&gt;Here’s what it looks like with this stack:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;buunstack&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;SecretStore&lt;/span&gt;

&lt;span class="n"&gt;secrets&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;SecretStore&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;secrets&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;put&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;api-keys&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;openai&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;sk-...&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;openai_api_key&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;secrets&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;api-keys&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;field&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;openai&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There’s no copy‑paste of tokens into cells, and each user has an isolated namespace in Vault with a full audit trail. To use this, enable Vault during install and choose one of the Buun‑stack kernels.&lt;/p&gt;

&lt;p&gt;You can enable Vault before installation by adding to &lt;code&gt;.env.local&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;JUPYTERHUB_VAULT_INTEGRATION_ENABLED&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run the installer:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;just jupyterhub::install
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or set up after installation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;just jupyterhub::setup-vault-jwt-auth
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;Implementation details (how SecretStore works)&lt;/h3&gt;

&lt;p&gt;Under the hood, the integration mirrors what you’d expect from a managed notebook platform, but with components you control:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Admin token supply: the hub uses a renewable Vault admin token stored at a well‑known path in Vault and fetched into the Hub via an ExternalSecret. A tiny sidecar container renews that token automatically at TTL/2 intervals so it never expires during normal operation.&lt;/li&gt;
&lt;li&gt;Pre‑spawn user isolation: when a user starts a server, a pre‑spawn hook creates a user‑specific Vault policy and an orphan token bound to that policy. Orphan tokens aren’t limited by a parent token’s policy, which avoids inheritance issues. The token is injected into the notebook container as &lt;code&gt;NOTEBOOK_VAULT_TOKEN&lt;/code&gt; along with &lt;code&gt;VAULT_ADDR&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Per‑user namespaces: each policy constrains access to that user’s own secret namespace. Vault’s audit logs capture every access.&lt;/li&gt;
&lt;li&gt;In‑notebook helper: &lt;code&gt;SecretStore&lt;/code&gt; (backed by the &lt;code&gt;buunstack&lt;/code&gt; Python package) reads the injected token and calls Vault. Before each operation it checks whether the token is valid and renewable; if the TTL is low it renews the token so long‑running sessions keep working.&lt;/li&gt;
&lt;/ul&gt;
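&lt;p&gt;The pre‑spawn flow above can be sketched roughly like this. This is a hedged illustration shaped after the &lt;code&gt;hvac&lt;/code&gt; client API, with the client assumed to be authenticated with the hub’s admin token; the policy path, naming scheme, and TTL are assumptions, and the actual implementation lives in the repo (see &lt;code&gt;docs/jupyterhub.md&lt;/code&gt;):&lt;/p&gt;

```python
# Hedged sketch of the pre-spawn hook described above. The "client"
# argument is assumed to be an hvac.Client authenticated with the hub's
# admin token; paths, names, and TTLs are illustrative, not the repo's
# actual values.

def user_policy(username):
    """HCL policy limiting a user to their own KV v2 namespace (assumed layout)."""
    return (
        f'path "secret/data/jupyter/users/{username}/*" {{\n'
        '  capabilities = ["create", "read", "update", "delete", "list"]\n'
        "}\n"
    )

def pre_spawn_hook(spawner, client):
    """Create a per-user policy and orphan token, then inject env vars."""
    username = spawner.user.name
    policy_name = f"jupyter-user-{username}"
    # Write (or update) the user's policy in Vault.
    client.sys.create_or_update_policy(name=policy_name, policy=user_policy(username))
    # Orphan tokens have no parent, so they aren't constrained by, or
    # revoked along with, the hub's admin token.
    token = client.auth.token.create_orphan(policies=[policy_name], ttl="8h")
    spawner.environment["NOTEBOOK_VAULT_TOKEN"] = token["auth"]["client_token"]
    spawner.environment["VAULT_ADDR"] = client.url
```

&lt;p&gt;Because the token is an orphan, revoking the hub’s admin token does not cascade into running user sessions.&lt;/p&gt;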

&lt;p&gt;User‑token renewal is implemented inside &lt;code&gt;SecretStore._ensure_authenticated()&lt;/code&gt;: it checks the current token with &lt;code&gt;lookup_self&lt;/code&gt;, renews it when the TTL is low and the token is renewable, and raises an error if the token is no longer valid so the user can restart their server. Admin‑token renewal is handled separately by the Hub sidecar and does not involve notebook code.&lt;/p&gt;
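&lt;p&gt;That renewal check can be sketched like this: a simplified stand‑in for &lt;code&gt;SecretStore._ensure_authenticated()&lt;/code&gt;, where the threshold is an assumed value and the API shape follows &lt;code&gt;hvac&lt;/code&gt; conventions rather than the package’s actual internals:&lt;/p&gt;

```python
# Simplified sketch of the token-renewal logic described above.
RENEW_THRESHOLD_SECONDS = 300  # assumed: renew when under 5 minutes remain

def needs_renewal(lookup):
    """Decide from a lookup-self response whether the token should be renewed."""
    data = lookup["data"]
    # Only renewable tokens with a low remaining TTL are renewed.
    return bool(data.get("renewable")) and RENEW_THRESHOLD_SECONDS > data["ttl"]

def ensure_authenticated(client):
    """Check the injected token and renew it before it expires."""
    # lookup_self raises if the token is no longer valid; the caller can
    # then ask the user to restart their server.
    lookup = client.auth.token.lookup_self()
    if needs_renewal(lookup):
        client.auth.token.renew_self()
```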

&lt;p&gt;This design keeps the user experience simple (set/get in code) while providing strong boundaries between users and durable sessions without manual secret pasting. For deeper operational details—policies, ExternalSecret setup, user‑policy scope, orphan tokens, and renewal behavior—see &lt;code&gt;docs/jupyterhub.md&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;Advantages of self‑hosting JupyterHub&lt;/h2&gt;

&lt;p&gt;Running JupyterHub on your own Kubernetes cluster gives you control over the entire notebook experience while keeping data close to where it’s produced and used. You decide which images and libraries are available, how resources are allocated, and how authentication and secrets are handled.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Control and customization: curate images and profiles that match how your team works; adjust spawner settings, resources, and storage to your needs.&lt;/li&gt;
&lt;li&gt;Data locality and performance: keep data on your network for lower latency and simpler compliance; tune storage, CPU/MEM, and even GPUs.&lt;/li&gt;
&lt;li&gt;Team productivity: preinstalled tools reduce setup time; each user gets an isolated, reproducible server; services inside the cluster are reachable by simple DNS names.&lt;/li&gt;
&lt;li&gt;Operations and security: SSO via Keycloak, per‑user isolation, Vault‑backed secrets with an audit trail, and backups/monitoring you own. For steady workloads, costs are predictable and often lower than equivalent cloud notebooks.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Wrap‑up&lt;/h2&gt;

&lt;p&gt;We built a practical, secure, multi‑user JupyterHub on Kubernetes and showed how to use it day to day:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Installed with Helm using Just recipes, including optional NFS storage and Vault integration&lt;/li&gt;
&lt;li&gt;Enabled profiles and custom kernel images (Buun‑stack) for consistent environments&lt;/li&gt;
&lt;li&gt;Connected notebooks to in‑cluster services (e.g., PostgreSQL) using Kubernetes DNS and env vars&lt;/li&gt;
&lt;li&gt;Managed secrets directly from notebooks with Vault via the &lt;code&gt;SecretStore&lt;/code&gt; helper&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The result is a self‑hosted notebook platform with single sign‑on, in‑cluster integrations, and safe, ergonomic secret management—ideal for small teams, learning environments, and home labs.&lt;/p&gt;

&lt;h2&gt;Resources&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Repository: &lt;a href="https://github.com/buun-ch/buun-stack" rel="noopener noreferrer"&gt;https://github.com/buun-ch/buun-stack&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Documentation: &lt;code&gt;docs/jupyterhub.md&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Previous articles:

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://dev.to/buun-ch/building-a-remote-accessible-kubernetes-home-lab-with-k3s-5g05"&gt;Building a Remote-Accessible Kubernetes Home Lab with k3s&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/buun-ch/faster-kubernetes-dev-loop-with-tilt-and-telepresence-303p"&gt;Faster Kubernetes Dev Loop with Tilt and Telepresence&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

</description>
      <category>kubernetes</category>
      <category>jupyter</category>
      <category>vault</category>
      <category>selfhosted</category>
    </item>
    <item>
      <title>Faster Kubernetes Dev Loop with Tilt and Telepresence</title>
      <dc:creator>Buun ch.</dc:creator>
      <pubDate>Mon, 01 Sep 2025 05:41:01 +0000</pubDate>
      <link>https://dev.to/buun-ch/faster-kubernetes-dev-loop-with-tilt-and-telepresence-303p</link>
      <guid>https://dev.to/buun-ch/faster-kubernetes-dev-loop-with-tilt-and-telepresence-303p</guid>
      <description>&lt;p&gt;In &lt;a href="https://dev.to/buun-ch/building-a-remote-accessible-kubernetes-home-lab-with-k3s-5g05"&gt;the previous article&lt;/a&gt; we set up a Kubernetes home lab accessible over the internet with k3s. In this article, we’ll build a fast inner development loop on top of that environment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://zenn.dev/buun_ch/articles/001000b9f445c9" rel="noopener noreferrer"&gt;Japanese(日本語)&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/kT_m2ZtkApk"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

&lt;h2&gt;Prerequisites&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Working Kubernetes environment (see previous setup)&lt;/li&gt;
&lt;li&gt;SSH access to the home server&lt;/li&gt;
&lt;li&gt;Basic kubectl and Helm knowledge&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Container registry for development&lt;/h2&gt;

&lt;p&gt;First, let’s set up our environment to build and push container images. Assume a k3s cluster is running on a home server with a private container registry inside the cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjf7rb90vjaos76hzdfk4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjf7rb90vjaos76hzdfk4.png" alt="Docker push failure"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When you’re outside your home network, Cloudflare Tunnel can expose most services, but it isn’t a good fit for a container registry: the tunnel restricts large uploads, so pushing sizable image layers into your home network fails. For public distribution you’d use a hosted registry, but for development we can still push to the private in‑cluster registry by setting up a remote Docker context.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9y2m7sz3q78xw0x2e9ty.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9y2m7sz3q78xw0x2e9ty.png" alt="Docker push over SSH"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In &lt;a href="https://dev.to/buun-ch/building-a-remote-accessible-kubernetes-home-lab-with-k3s-5g05"&gt;the previous article&lt;/a&gt;, we set up SSH access to the home network. We’ll use it to talk to the Docker daemon on the home server. Example &lt;code&gt;~/.ssh/config&lt;/code&gt; (replace &lt;code&gt;buun.dev&lt;/code&gt; with your hostname):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Host buun.dev
  Hostname ssh.buun.dev
  ProxyCommand /opt/homebrew/bin/cloudflared access ssh --hostname %h
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Set &lt;code&gt;DOCKER_HOST=ssh://buun.dev&lt;/code&gt; so the Docker CLI talks to the remote engine over SSH instead of your local daemon.&lt;/p&gt;

&lt;p&gt;Behind the scenes, the CLI opens an SSH session to &lt;code&gt;buun.dev&lt;/code&gt; and tunnels the Docker Engine API through it. From the CLI’s point of view, &lt;code&gt;docker build&lt;/code&gt;, &lt;code&gt;docker push&lt;/code&gt;, and &lt;code&gt;docker run&lt;/code&gt; all hit the remote engine. Bonus: offloading builds saves laptop battery.&lt;/p&gt;
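&lt;p&gt;To confirm the CLI is really talking to the remote engine, you can query it directly (a quick sketch; &lt;code&gt;buun.dev&lt;/code&gt; is the example host from the SSH config above):&lt;/p&gt;

```shell
# Route all Docker CLI calls through SSH to the home server
export DOCKER_HOST=ssh://buun.dev

# These answers come from the remote daemon, not your laptop
docker version --format '{{.Server.Os}}/{{.Server.Arch}}'
docker info --format '{{.Name}}'
```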

&lt;p&gt;Tag images with the prefix &lt;code&gt;localhost:30500/&lt;/code&gt;. That address works on the home server both inside and outside the cluster. Pushes happen from the remote host’s network perspective; if that host can reach the registry, &lt;code&gt;docker push&lt;/code&gt; succeeds even when your laptop can’t reach it directly.&lt;/p&gt;
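&lt;p&gt;You can sanity‑check what the registry holds from the home server’s perspective via the registry HTTP API v2 (the &lt;code&gt;/v2/_catalog&lt;/code&gt; endpoint lists pushed repositories):&lt;/p&gt;

```shell
# Run curl on the home server, where localhost:30500 resolves to the registry
ssh buun.dev curl -s http://localhost:30500/v2/_catalog
```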

&lt;h3&gt;
  
  
  Build, Push, and Deploy a Sample App
&lt;/h3&gt;

&lt;p&gt;Let’s try it with a &lt;a href="https://github.com/buun-ch/sample-web-app" rel="noopener noreferrer"&gt;sample‑web‑app&lt;/a&gt;, a simple Next.js app with Drizzle ORM connecting to PostgreSQL.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/buun-ch/sample-web-app
&lt;span class="nb"&gt;cd &lt;/span&gt;sample-web-app
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;DOCKER_HOST&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;ssh://buun.dev
docker build &lt;span class="nt"&gt;-t&lt;/span&gt; localhost:30500/buun-ch/sample-web-app:latest &lt;span class="nb"&gt;.&lt;/span&gt;
docker push localhost:30500/buun-ch/sample-web-app:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These commands build the Docker image and push it to the private registry. As mentioned earlier, the &lt;code&gt;localhost:30500/...&lt;/code&gt; tag is reachable inside and outside the cluster.&lt;/p&gt;

&lt;p&gt;Next, create the database and user. If you’re using &lt;a href="https://github.com/buun-ch/buun-stack" rel="noopener noreferrer"&gt;buun‑stack&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; /path/to/buun-stack
just postgres::create-user-and-db
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create &lt;code&gt;values.yaml&lt;/code&gt; for development:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;imageRegistry&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;localhost:30500/buun-ch&lt;/span&gt;
  &lt;span class="na"&gt;repository&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sample-web-app&lt;/span&gt;
  &lt;span class="na"&gt;tag&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;latest&lt;/span&gt;
  &lt;span class="na"&gt;pullPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Always&lt;/span&gt;

&lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;DATABASE_URL&lt;/span&gt;
    &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgresql://todo:todopass@postgres-cluster-rw.postgres:5432/todo&lt;/span&gt;

&lt;span class="na"&gt;migration&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;DATABASE_URL&lt;/span&gt;
      &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgresql://todo:todopass@postgres-cluster-rw.postgres:5432/todo&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then deploy the Helm chart:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create namespace sample
helm upgrade &lt;span class="nt"&gt;--install&lt;/span&gt; sample-web-app ./charts/sample-web-app &lt;span class="nt"&gt;-n&lt;/span&gt; sample &lt;span class="nt"&gt;--wait&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; values.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Check the release status:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get all &lt;span class="nt"&gt;-n&lt;/span&gt; sample
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
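&lt;p&gt;To smoke‑test the release without any extra tooling, a temporary port‑forward works (a sketch; the service name and port are assumptions based on the chart defaults):&lt;/p&gt;

```shell
# Forward the service locally in the background
kubectl port-forward -n sample svc/sample-web-app 3000:3000 &
PF_PID=$!
sleep 2

# Expect an HTTP 200 from the app
curl -fsS -o /dev/null -w '%{http_code}\n' http://localhost:3000/

kill $PF_PID
```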



&lt;h2&gt;
  
  
  Telepresence: in‑cluster DNS and routing without port‑forwarding
&lt;/h2&gt;

&lt;p&gt;You could do a quick &lt;code&gt;kubectl port-forward&lt;/code&gt; and open the app, but redoing that after every deploy is tedious, and it doesn’t give you in‑cluster DNS. Telepresence lets a local process join the cluster network as if it were running inside Kubernetes. Once connected, your laptop resolves service names and routes traffic through Telepresence, so you can hit services by DNS name directly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbutifqqhny6ta0x4hwvw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbutifqqhny6ta0x4hwvw.png" alt="Telepresence connection"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;First‑time setup: install the Traffic Manager:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;telepresence helm &lt;span class="nb"&gt;install&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Connect:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;telepresence connect
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now you can reach services by DNS (e.g., &lt;code&gt;&amp;lt;service&amp;gt;.&amp;lt;namespace&amp;gt;&lt;/code&gt;) directly from your laptop, and GUI tools like &lt;a href="https://dbgate.org/" rel="noopener noreferrer"&gt;DbGate&lt;/a&gt; work out of the box. When you’re done, disconnect:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;telepresence quit
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
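&lt;p&gt;To make this concrete: while connected, any local tool can use in‑cluster DNS names directly. For example (service names and credentials taken from earlier sections):&lt;/p&gt;

```shell
# Hit the sample app by its service DNS name
curl http://sample-web-app.sample:3000/

# Connect psql straight to the in-cluster PostgreSQL
psql postgresql://todo:todopass@postgres-cluster-rw.postgres:5432/todo -c '\conninfo'
```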



&lt;h2&gt;
  
  
  Dev orchestrators: Skaffold, Tilt, DevSpace
&lt;/h2&gt;

&lt;p&gt;Now that Telepresence takes care of networking and DNS, the next bottleneck is the inner dev loop: build, tag, push, and roll out changes—over and over again across multiple services.&lt;/p&gt;

&lt;p&gt;What we want in an inner dev loop:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fast incremental rebuilds: watch source and rebuild only what changed&lt;/li&gt;
&lt;li&gt;Instant live sync: edit locally and update containers immediately&lt;/li&gt;
&lt;li&gt;Unified feedback: logs and (optional) forwards in one place&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://skaffold.dev/" rel="noopener noreferrer"&gt;Skaffold&lt;/a&gt;, &lt;a href="https://tilt.dev/" rel="noopener noreferrer"&gt;Tilt&lt;/a&gt;, and &lt;a href="https://www.devspace.sh/" rel="noopener noreferrer"&gt;DevSpace&lt;/a&gt; all deliver this. Here are minimal (illustrative) configs:&lt;/p&gt;

&lt;h3&gt;
  
  
  Skaffold
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;skaffold/v4beta6&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Config&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sample-web-app&lt;/span&gt;
&lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;artifacts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;localhost:30500/sample-web-app&lt;/span&gt;
      &lt;span class="na"&gt;context&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;./apps/sample-web-app&lt;/span&gt;
&lt;span class="na"&gt;deploy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;helm&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;releases&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sample-web-app&lt;/span&gt;
        &lt;span class="na"&gt;chartPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;charts/sample-web-app&lt;/span&gt;
        &lt;span class="na"&gt;valuesFiles&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;values.dev.yaml&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
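&lt;p&gt;With a config like this, Skaffold runs the whole loop under one command (illustrative):&lt;/p&gt;

```shell
# Watch sources, rebuild, push, and redeploy on every change
skaffold dev --default-repo localhost:30500
```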



&lt;h3&gt;
  
  
  Tilt
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Tiltfile (Starlark)
&lt;/span&gt;&lt;span class="nf"&gt;docker_build&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;localhost:30500/sample-web-app&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;./apps/sample-web-app&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Optional but fast: sync edits and run commands in the container
&lt;/span&gt;&lt;span class="nf"&gt;live_update&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;localhost:30500/sample-web-app&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="nf"&gt;sync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;./apps/sample-web-app&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;/app&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;npm install&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;trigger&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;package.json&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;package-lock.json&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
    &lt;span class="p"&gt;],&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  DevSpace
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v2beta1&lt;/span&gt;
&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sample-web-app&lt;/span&gt;
&lt;span class="na"&gt;images&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;localhost:30500/sample-web-app&lt;/span&gt;
    &lt;span class="na"&gt;context&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;./apps/sample-web-app&lt;/span&gt;
&lt;span class="na"&gt;deployments&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;helm&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;chart&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;charts/sample-web-app&lt;/span&gt;
      &lt;span class="na"&gt;values&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;repository&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;localhost:30500/sample-web-app&lt;/span&gt;
          &lt;span class="na"&gt;tag&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dev&lt;/span&gt;
&lt;span class="na"&gt;dev&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;imageSelector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;localhost:30500/sample-web-app&lt;/span&gt;
    &lt;span class="na"&gt;sync&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;./apps/sample-web-app:/app&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3000&lt;/span&gt;
        &lt;span class="na"&gt;forward&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3000&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
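&lt;p&gt;Similarly for DevSpace, the dev loop (build, deploy, sync, port‑forward) starts with a single command:&lt;/p&gt;

```shell
# Start the dev loop defined in devspace.yaml
devspace dev -n sample
```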



&lt;blockquote&gt;
&lt;p&gt;Note: These snippets are examples only — not runnable as‑is.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Here is the GitHub star history of these tools:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fubqkf4rh7vln943r9rky.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fubqkf4rh7vln943r9rky.png" alt="GitHub star history"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Each tool takes a different approach and has its own strengths, so the best fit depends on your workflow. I recommend Tilt for its flexibility and ease of use.&lt;/p&gt;

&lt;h2&gt;
  
  
  Tiltfile overview (sample web app)
&lt;/h2&gt;

&lt;p&gt;Here’s the Tiltfile for sample-web-app:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="nf"&gt;allow_k8s_contexts&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;k8s_context&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;

&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;define_string&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;registry&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;define_bool&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;port-forward&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;define_string&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;extra-values-file&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;define_bool&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;enable-health-logs&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;cfg&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;parse&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="n"&gt;registry&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;cfg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;registry&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;localhost:30500&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;default_registry&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;registry&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nf"&gt;docker_build&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;sample-web-app-dev&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;.&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;dockerfile&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Dockerfile.dev&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;live_update&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="nf"&gt;sync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;.&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;/app&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;pnpm install&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;trigger&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;./package.json&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;./pnpm-lock.yaml&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]),&lt;/span&gt;
    &lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;values_files&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;./charts/sample-web-app/values-dev.yaml&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="n"&gt;extra_values_file&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;cfg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;extra-values-file&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;''&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;extra_values_file&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;values_files&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;extra_values_file&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;📝 Using extra values file: &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;extra_values_file&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;helm_set_values&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;
&lt;span class="n"&gt;enable_health_logs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;cfg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;enable-health-logs&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="bp"&gt;False&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;enable_health_logs&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;helm_set_values&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;logging.health_request=true&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;📵 Health check request logs enabled&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;helm_release&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;helm&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;./charts/sample-web-app&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;sample-web-app&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;values&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;values_files&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="nb"&gt;set&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;helm_set_values&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;k8s_yaml&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;helm_release&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;enable_port_forwards&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;cfg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;port-forward&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="bp"&gt;False&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;k8s_resource&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;sample-web-app&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;port_forwards&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;13000:3000&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;enable_port_forwards&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="p"&gt;[],&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;enable_port_forwards&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;🚀 Access your application at: http://localhost:13000&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Define a few custom command‑line args (e.g., default container registry).&lt;/li&gt;
&lt;li&gt;Build the image.&lt;/li&gt;
&lt;li&gt;Deploy with Helm. Values differ by environment and are driven by the args defined above.&lt;/li&gt;
&lt;/ul&gt;
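&lt;p&gt;Custom args defined with &lt;code&gt;config.define_*&lt;/code&gt; are passed on the command line after a &lt;code&gt;--&lt;/code&gt; separator, for example:&lt;/p&gt;

```shell
# Enable the local port-forward and override the registry
tilt up -- --port-forward --registry=localhost:30500
```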

&lt;h2&gt;
  
  
  Tilt in action
&lt;/h2&gt;

&lt;p&gt;Let’s run &lt;code&gt;tilt up&lt;/code&gt; and see it in action. The CLI prompts you to press space to open Tilt’s web UI in the browser; it shows the status of Kubernetes resources.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb8g5jqddt16jtlhnj7q9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb8g5jqddt16jtlhnj7q9.png" alt="Tilt GUI"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Select the &lt;code&gt;sample-web-app&lt;/code&gt; resource. The UI shows logs for &lt;code&gt;docker build&lt;/code&gt;, &lt;code&gt;docker push&lt;/code&gt;, and the Helm release.&lt;/li&gt;
&lt;li&gt;In this demo, &lt;code&gt;docker build&lt;/code&gt; completes instantly because no code changed since the last build.&lt;/li&gt;
&lt;li&gt;The app deploys. Open it in the browser to verify it’s reachable.&lt;/li&gt;
&lt;li&gt;Live Update: change the title to “ToDo App Test” and save. The browser updates immediately—no manual steps.&lt;/li&gt;
&lt;li&gt;Modify the Dockerfile. Tilt detects the change and rebuilds only the invalidated layer. When the push finishes, Tilt redeploys automatically.&lt;/li&gt;
&lt;li&gt;Update Helm values. Tilt detects the change, re‑renders the manifests, and reapplies them. When the rollout finishes, the new settings are live.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Kubernetes CLI productivity tips
&lt;/h2&gt;

&lt;p&gt;These tips aim to reduce keystrokes, make output easier to scan, and centralize feedback while you iterate.&lt;/p&gt;

&lt;h3&gt;
  
  
  Autocompletion
&lt;/h3&gt;

&lt;p&gt;Tab completion speeds up navigation and prevents typos for resource kinds and names. If you are using zsh, add the following to your shell init so it’s always available:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;autoload &lt;span class="nt"&gt;-Uz&lt;/span&gt; compinit
compinit

&lt;span class="nb"&gt;eval&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;kubectl completion zsh&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Usage: start typing a command and press Tab to complete names, e.g., &lt;code&gt;kubectl get pod &amp;lt;TAB&amp;gt;&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  kubecolor + aliases
&lt;/h3&gt;

&lt;p&gt;Colorized output makes table scanning faster. Alias &lt;code&gt;kubectl&lt;/code&gt; to &lt;code&gt;kubecolor&lt;/code&gt;, and remap completion so Tab still works.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;alias &lt;/span&gt;&lt;span class="nv"&gt;kubectl&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;kubecolor
compdef &lt;span class="nv"&gt;kubecolor&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;kubectl
&lt;span class="nb"&gt;alias &lt;/span&gt;&lt;span class="nv"&gt;k&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;kubecolor
compdef &lt;span class="nv"&gt;kubecolor&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;kubectl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The short alias &lt;code&gt;k&lt;/code&gt; reduces keystrokes for frequent use.&lt;/p&gt;

&lt;h3&gt;
  
  
  zsh Global aliases for output
&lt;/h3&gt;

&lt;p&gt;zsh Global aliases expand anywhere on the line and improve readability of kubectl output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Pretty YAML via bat&lt;/span&gt;
&lt;span class="nb"&gt;alias&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt; &lt;span class="nv"&gt;Y&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'-o yaml | bat -l yaml'&lt;/span&gt;

&lt;span class="c"&gt;# Wide output&lt;/span&gt;
&lt;span class="nb"&gt;alias&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt; &lt;span class="nv"&gt;W&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'-o wide'&lt;/span&gt;

&lt;span class="c"&gt;# YAML into a read‑only Neovim buffer&lt;/span&gt;
&lt;span class="nb"&gt;alias&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt; &lt;span class="nv"&gt;YE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'-o yaml | nvim -c ":set ft=yaml" -R'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Normal (without alias)&lt;/span&gt;
kubectl get deploy sample-web-app &lt;span class="nt"&gt;-o&lt;/span&gt; yaml | bat &lt;span class="nt"&gt;-l&lt;/span&gt; yaml

&lt;span class="c"&gt;# Print manifest YAML with syntax highlight&lt;/span&gt;
kubectl get deploy sample-web-app Y

&lt;span class="c"&gt;# Wide columns&lt;/span&gt;
kubectl get pods W
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Watch with viddy
&lt;/h3&gt;

&lt;p&gt;Dashboards like &lt;a href="https://kdash.cli.rs/" rel="noopener noreferrer"&gt;KDash&lt;/a&gt; and &lt;a href="https://k9scli.io/" rel="noopener noreferrer"&gt;k9s&lt;/a&gt; are great, but I generally stick to kubectl plus &lt;a href="https://github.com/sachaos/viddy" rel="noopener noreferrer"&gt;viddy&lt;/a&gt; for quick loops. &lt;code&gt;viddy&lt;/code&gt; re‑runs a command, highlights changes in the output, and lets you choose a past timestamp to view that run’s output.&lt;/p&gt;

&lt;p&gt;I use this alias to pair &lt;code&gt;viddy&lt;/code&gt; with kubectl:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;alias &lt;/span&gt;&lt;span class="nv"&gt;vk&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'viddy -dtw kubectl'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
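&lt;p&gt;With the alias in place, any kubectl command becomes a live watch loop (the deployment name below is just an example):&lt;/p&gt;

```shell
# Re-run `kubectl get pods` periodically, highlighting changes between runs
vk get pods -n default

# Works with any kubectl subcommand
vk get deploy -n default
```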



&lt;h3&gt;
  
  
  Context/namespace helpers
&lt;/h3&gt;

&lt;p&gt;Use &lt;a href="https://github.com/ahmetb/kubectx/" rel="noopener noreferrer"&gt;kubectx and kubens&lt;/a&gt; to switch contexts and namespaces quickly.&lt;/p&gt;
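&lt;p&gt;Typical usage looks like this (the context and namespace names are illustrative):&lt;/p&gt;

```shell
# List contexts, then switch to one
kubectx
kubectx minipc1-oidc

# Jump back to the previous context
kubectx -

# List namespaces, then change the current context's default namespace
kubens
kubens default
```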

&lt;h3&gt;
  
  
  Multi‑pod logs with stern
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://github.com/stern/stern" rel="noopener noreferrer"&gt;stern&lt;/a&gt; is a Kubernetes log tailer that streams logs from multiple pods (and containers) at once. It supports label selectors and regex filters, works across namespaces, and color‑codes each stream so you can follow deployments and incidents in real time.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Tail by label in a namespace&lt;/span&gt;
stern &lt;span class="nt"&gt;-n&lt;/span&gt; default &lt;span class="nt"&gt;-l&lt;/span&gt; &lt;span class="nv"&gt;app&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;sample-web-app

&lt;span class="c"&gt;# Tail multiple apps with a regex and include timestamps&lt;/span&gt;
stern &lt;span class="nt"&gt;-n&lt;/span&gt; default &lt;span class="s1"&gt;'web|api'&lt;/span&gt; &lt;span class="nt"&gt;-t&lt;/span&gt;

&lt;span class="c"&gt;# Tail the last 5 minutes, raw lines&lt;/span&gt;
stern &lt;span class="nt"&gt;-n&lt;/span&gt; default &lt;span class="nt"&gt;-l&lt;/span&gt; &lt;span class="nv"&gt;app&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;sample-web-app &lt;span class="nt"&gt;--since&lt;/span&gt; 5m &lt;span class="nt"&gt;-o&lt;/span&gt; raw

&lt;span class="c"&gt;# Focus on a specific container within each pod&lt;/span&gt;
stern &lt;span class="nt"&gt;-n&lt;/span&gt; default &lt;span class="nt"&gt;-l&lt;/span&gt; &lt;span class="nv"&gt;app&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;sample-web-app &lt;span class="nt"&gt;-c&lt;/span&gt; web
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Wrap‑up
&lt;/h2&gt;

&lt;p&gt;We built a smooth inner dev loop for a Kubernetes development environment:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Remote Docker over SSH to push to the private registry&lt;/li&gt;
&lt;li&gt;Telepresence for DNS and routing without port‑forwarding&lt;/li&gt;
&lt;li&gt;Tilt for watching, incremental rebuilds, and live sync with clear feedback&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These practices make Kubernetes development faster and less error‑prone.&lt;/p&gt;

&lt;h2&gt;
  
  
  Resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;GitHub repos:

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/buun-ch/buun-stack" rel="noopener noreferrer"&gt;https://github.com/buun-ch/buun-stack&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/buun-ch/sample-web-app" rel="noopener noreferrer"&gt;https://github.com/buun-ch/sample-web-app&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

</description>
      <category>kubernetes</category>
      <category>devenv</category>
      <category>selfhosted</category>
    </item>
    <item>
      <title>Building a Remote-Accessible Kubernetes Home Lab with k3s</title>
      <dc:creator>Buun ch.</dc:creator>
      <pubDate>Wed, 20 Aug 2025 02:22:53 +0000</pubDate>
      <link>https://dev.to/buun-ch/building-a-remote-accessible-kubernetes-home-lab-with-k3s-5g05</link>
      <guid>https://dev.to/buun-ch/building-a-remote-accessible-kubernetes-home-lab-with-k3s-5g05</guid>
      <description>&lt;p&gt;Turn a mini PC into your personal Kubernetes development environment accessible from anywhere in the world!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://zenn.dev/buun_ch/articles/f8d8512d3b78e4" rel="noopener noreferrer"&gt;Japanese (日本語)&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/AOaLYLsB0Gs"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Developers today face a common dilemma: the need for a persistent Kubernetes environment without the high costs of cloud services or the battery drain of running containers locally.&lt;/p&gt;

&lt;p&gt;Kubernetes has become essential for orchestrating multiple services running in containers. However, cloud services like AWS, Azure, or GCP can be prohibitively expensive for personal projects or learning environments. Meanwhile, running Docker and Kubernetes on a development laptop quickly drains the battery, particularly when working remotely.&lt;/p&gt;

&lt;p&gt;This guide demonstrates how to build a Kubernetes cluster on a mini PC at home or in your office, creating a development environment accessible from anywhere via the internet.&lt;/p&gt;

&lt;h2&gt;
  
  
  Choosing the Right Kubernetes Distribution
&lt;/h2&gt;

&lt;p&gt;Several Kubernetes distributions are available for local development:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;minikube&lt;/strong&gt; and &lt;strong&gt;kind&lt;/strong&gt;: These tools excel at quickly launching clusters for clean testing environments but lack the stability required for long-term development or production use&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;MicroK8s&lt;/strong&gt;: Built on Snap package management, MicroK8s is designed specifically for development environments with comprehensive tooling support&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;k3s&lt;/strong&gt;: Features a single-binary installer optimized for resource-constrained environments&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;While both MicroK8s and k3s suit development environments well, k3s has experienced rapid growth in community adoption. GitHub star history shows k3s's exceptional popularity trajectory, validating its selection for this home lab setup.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5996afsueor6kddb0xwx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5996afsueor6kddb0xwx.png" alt="GitHub star history"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Required Cloud Services
&lt;/h2&gt;

&lt;p&gt;A self-hosted Kubernetes cluster still requires certain cloud services:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Domain Registrar&lt;/strong&gt;: Essential for registering and managing domain names&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tunneling Service&lt;/strong&gt;: Enables secure internet access to the cluster (Cloudflare Tunnel serves this purpose)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Container Registry&lt;/strong&gt;: Necessary for storing container images, as pushing large Docker images through Cloudflare Tunnel from home networks presents data size limitations&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Setting Up the Linux Machine
&lt;/h2&gt;

&lt;p&gt;The setup begins with preparing a Linux machine accessible via SSH from the development workstation. The initial configuration involves:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Installing Linux with Docker support&lt;/li&gt;
&lt;li&gt;Configuring SSH daemon for remote access&lt;/li&gt;
&lt;li&gt;Setting up passwordless sudo execution (a requirement for k3sup)&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Arch Linux
&lt;/h3&gt;

&lt;p&gt;Arch Linux users must configure sshd to support keyboard-interactive authentication with PAM. Create the following file.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;/etc/ssh/sshd_config.d/10-pamauth.conf&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;KbdInteractiveAuthentication yes
AuthenticationMethods publickey keyboard-interactive:pam
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After creating this file, restart the sshd service to apply the changes.&lt;/p&gt;
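&lt;p&gt;For example:&lt;/p&gt;

```shell
# Apply the new sshd configuration
sudo systemctl restart sshd
```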

&lt;p&gt;Create the &lt;code&gt;sudoers&lt;/code&gt; file for your account. For example, if your account name is &lt;code&gt;buun&lt;/code&gt;, create the following file.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;/etc/sudoers.d/buun&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;buun ALL=(ALL:ALL) NOPASSWD: ALL
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Installing Required Tools
&lt;/h2&gt;

&lt;p&gt;The local development machine requires specific tooling for cluster management. Begin by cloning the repository:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/buun-ch/buun-stack
&lt;span class="nb"&gt;cd &lt;/span&gt;buun-stack
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The project uses &lt;a href="https://mise.jdx.dev/" rel="noopener noreferrer"&gt;mise&lt;/a&gt; for tool version management. Follow the &lt;a href="https://mise.jdx.dev/getting-started.html" rel="noopener noreferrer"&gt;Getting Started&lt;/a&gt; guide to install it.&lt;/p&gt;

&lt;p&gt;After installing mise, install all required tools:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;mise &lt;span class="nb"&gt;install
&lt;/span&gt;mise &lt;span class="nb"&gt;ls&lt;/span&gt; &lt;span class="nt"&gt;-l&lt;/span&gt;  &lt;span class="c"&gt;# Verify installed tools&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The toolchain includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;gomplate&lt;/strong&gt;: Template engine for generating configuration files&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;gum&lt;/strong&gt;: Interactive CLI for user input collection&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;helm&lt;/strong&gt;: Kubernetes package manager&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;just&lt;/strong&gt;: Task runner organizing installation commands as recipes&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;kubelogin&lt;/strong&gt;: kubectl authentication plugin&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;vault&lt;/strong&gt;: HashiCorp Vault CLI client&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Creating the Kubernetes Cluster
&lt;/h2&gt;

&lt;p&gt;With the toolchain ready, generate the configuration file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;just &lt;span class="nb"&gt;env&lt;/span&gt;::setup  &lt;span class="c"&gt;# Creates .env.local with your configuration&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This interactive command collects necessary information and generates the &lt;code&gt;.env.local&lt;/code&gt; file containing environment variables for subsequent operations.&lt;/p&gt;

&lt;p&gt;Deploy the k3s cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;just k8s::install
kubectl get nodes  &lt;span class="c"&gt;# Verify cluster status&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The installation leverages k3sup to deploy k3s on the remote machine while automatically creating or updating the kubeconfig (&lt;code&gt;~/.kube/config&lt;/code&gt;) on your local machine.&lt;/p&gt;
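&lt;p&gt;You can confirm which contexts were created and which one is currently active:&lt;/p&gt;

```shell
kubectl config get-contexts
kubectl config current-context
```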

&lt;h2&gt;
  
  
  Configuring Cloudflare Tunnel
&lt;/h2&gt;

&lt;p&gt;Cloudflare Tunnel provides secure internet access to the cluster. This example assumes a domain with DNS managed by Cloudflare.&lt;/p&gt;

&lt;p&gt;In the Cloudflare dashboard:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to Zero Trust &amp;gt; Network &amp;gt; Tunnels&lt;/li&gt;
&lt;li&gt;Click "+ Create a tunnel"&lt;/li&gt;
&lt;li&gt;Click "Select Cloudflared"&lt;/li&gt;
&lt;li&gt;Enter the name of your tunnel&lt;/li&gt;
&lt;li&gt;Click "Save tunnel"&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If your Linux distribution is Debian- or Red Hat-based, follow the instructions displayed on the page.&lt;/p&gt;

&lt;p&gt;If you are using Arch Linux, install cloudflared with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;paru &lt;span class="nt"&gt;-S&lt;/span&gt; cloudflared
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and create the systemd unit file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl edit &lt;span class="nt"&gt;--force&lt;/span&gt; &lt;span class="nt"&gt;--full&lt;/span&gt; cloudflared.service
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Copy and paste the systemd unit file content from the page &lt;a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/configure-tunnels/cloudflared-parameters/" rel="noopener noreferrer"&gt;Configure cloudflared parameters · Cloudflare Zero Trust docs&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[Unit]
Description=Cloudflare Tunnel
After=network.target

[Service]
TimeoutStartSec=0
Type=notify
ExecStart=/usr/bin/cloudflared tunnel --loglevel debug --logfile /var/log/cloudflared/cloudflared.log run --token &amp;lt;TOKEN VALUE&amp;gt;
Restart=on-failure
RestartSec=5s

[Install]
WantedBy=multi-user.target
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Edit the path &lt;code&gt;/usr/local/bin&lt;/code&gt; -&amp;gt; &lt;code&gt;/usr/bin&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Replace &lt;code&gt;&amp;lt;TOKEN VALUE&amp;gt;&lt;/code&gt; with your token

&lt;ul&gt;
&lt;li&gt;The token is shown in the tunnel overview page&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h3&gt;
  
  
  Public host names
&lt;/h3&gt;

&lt;p&gt;Configure the following public hostnames:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;ssh.yourdomain.com&lt;/code&gt; → SSH localhost:22&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;vault.yourdomain.com&lt;/code&gt; → HTTPS localhost:443 (No TLS Verify)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;auth.yourdomain.com&lt;/code&gt; → HTTPS localhost:443 (No TLS Verify)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;k8s.yourdomain.com&lt;/code&gt; → HTTPS localhost:6443 (No TLS Verify)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Unless you are building a zero-trust network, you can enable "No TLS Verify" because only Cloudflare can reach your local machine.&lt;/p&gt;

&lt;h3&gt;
  
  
  SSH
&lt;/h3&gt;

&lt;p&gt;Here is an example of SSH configuration for macOS.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;brew &lt;span class="nb"&gt;install &lt;/span&gt;cloudflared
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create &lt;code&gt;~/.ssh/config&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Host yourdomain
  Hostname ssh.yourdomain.com
  ProxyCommand /opt/homebrew/bin/cloudflared access ssh --hostname %h
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Connect to the server:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ssh yourdomain
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Installing Core Components
&lt;/h2&gt;

&lt;p&gt;Before setting up Kubernetes remote access, install the following components.&lt;/p&gt;

&lt;h3&gt;
  
  
  Longhorn - Distributed Storage
&lt;/h3&gt;

&lt;p&gt;Longhorn provides distributed block storage for Kubernetes. Development environments benefit from its ability to create PersistentVolumes backed by NFS exports, enabling work with large datasets stored on network-attached storage.&lt;/p&gt;

&lt;p&gt;Prerequisites: Install open-iscsi on the Linux machine and ensure the iscsid service is running.&lt;/p&gt;

&lt;p&gt;For example, if you are using Arch Linux:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ssh your-linux-machine

&lt;span class="nb"&gt;sudo &lt;/span&gt;pacman &lt;span class="nt"&gt;-S&lt;/span&gt; open-iscsi
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl &lt;span class="nb"&gt;enable &lt;/span&gt;iscsid
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl start iscsid
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In your local machine, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;just longhorn::install
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  HashiCorp Vault - Secrets Management
&lt;/h3&gt;

&lt;p&gt;Vault serves as the central secrets management system, handling encryption keys and sensitive data across all applications in the cluster.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;just vault::install
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Store the displayed root token securely.&lt;/p&gt;

&lt;h3&gt;
  
  
  PostgreSQL - Database Cluster
&lt;/h3&gt;

&lt;p&gt;PostgreSQL provides relational database services for Keycloak and application data storage:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;just postgres::install
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Keycloak - Identity Management
&lt;/h3&gt;

&lt;p&gt;Keycloak delivers comprehensive identity and access management, providing authentication and single sign-on capabilities for both applications and the Kubernetes API:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;just keycloak::install
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Configuring OIDC Authentication
&lt;/h2&gt;

&lt;p&gt;Create the Keycloak realm:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;just keycloak::create-realm
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The default realm name is &lt;code&gt;buunstack&lt;/code&gt;. You can change it by editing &lt;code&gt;.env.local&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;KEYCLOAK_REALM&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;your-realm
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Configure Vault OIDC integration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;just vault::setup-oidc-auth
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create the initial user account:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;just keycloak::create-user
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Enable Kubernetes OIDC authentication:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;just k8s::setup-oidc-auth
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This creates a new kubectl context with OIDC-based authentication. If the original context is named &lt;code&gt;minipc1&lt;/code&gt;, the OIDC context is created as &lt;code&gt;minipc1-oidc&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Testing the Setup
&lt;/h2&gt;

&lt;p&gt;Validate the OIDC authentication configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl config use-context minipc1-oidc
kubectl get nodes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The cluster is now accessible from anywhere via the internet. &lt;/p&gt;

&lt;p&gt;Verify full functionality by testing pod operations. Create a Pod and Service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; debug/debug-pod.yaml
kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; debug/debug-svc.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run &lt;code&gt;kubectl exec&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl &lt;span class="nb"&gt;exec &lt;/span&gt;debug-pod &lt;span class="nt"&gt;-it&lt;/span&gt; &lt;span class="nt"&gt;--&lt;/span&gt; sh
/ &lt;span class="c"&gt;# uname -a&lt;/span&gt;
Linux debug-pod 6.12.41-1-lts &lt;span class="c"&gt;#1 SMP PREEMPT_DYNAMIC Fri, 01 Aug 2025 20:42:03 +0000 x86_64 GNU/Linux&lt;/span&gt;
/ &lt;span class="c"&gt;# ps x&lt;/span&gt;
PID   USER     TIME  COMMAND
    1 root      0:00 sh &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&amp;lt;h1&amp;gt;Debug Pod Web Server&amp;lt;/h1&amp;gt;&amp;lt;p&amp;gt;Hostname: &lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;hostname&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;&amp;lt;/p&amp;gt;&amp;lt;p&amp;gt;Time: &lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;date&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;&amp;lt;/p&amp;gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /t
    9 root      0:00 httpd &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; 8080 &lt;span class="nt"&gt;-h&lt;/span&gt; /tmp
   17 root      0:00 sh
   24 root      0:00 ps x
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run &lt;code&gt;kubectl port-forward&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl port-forward svc/debug-service 8080:8080
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Connect to the service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;curl localhost:8080
&amp;lt;h1&amp;gt;Debug Pod Web Server&amp;lt;/h1&amp;gt;&amp;lt;p&amp;gt;Hostname: debug-pod&amp;lt;/p&amp;gt;&amp;lt;p&amp;gt;Time: Wed Aug 20 02:15:00 UTC 2025&amp;lt;/p&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Test Vault OIDC integration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;VAULT_ADDR&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;https://vault.yourdomain.com
vault login &lt;span class="nt"&gt;-method&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;oidc
vault kv get &lt;span class="nt"&gt;-mount&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;secret &lt;span class="nt"&gt;-field&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;password postgres/admin
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This guide has demonstrated how to build an internet-accessible Kubernetes home lab secured with Cloudflare Tunnel and OIDC authentication. The resulting infrastructure provides a cost-effective, remotely accessible cluster suitable for both development and learning purposes.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Benefits of This Setup
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cost Efficiency&lt;/strong&gt;: Eliminates expensive cloud service fees while maintaining professional-grade capabilities&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Remote Accessibility&lt;/strong&gt;: Full cluster access from any location via secure internet connection&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enterprise Security&lt;/strong&gt;: OIDC authentication ensures robust access control&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Infrastructure as Code&lt;/strong&gt;: Automated deployment reduces complexity and ensures reproducibility&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Learning Platform&lt;/strong&gt;: Ideal environment for Kubernetes experimentation and skill development&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;GitHub Repository&lt;/strong&gt;: &lt;a href="https://github.com/buun-ch/buun-stack" rel="noopener noreferrer"&gt;buun-ch/buun-stack&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;k3s&lt;/strong&gt;: &lt;a href="https://k3s.io" rel="noopener noreferrer"&gt;k3s.io&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;mise&lt;/strong&gt;: &lt;a href="https://mise.jdx.dev" rel="noopener noreferrer"&gt;mise.jdx.dev&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cloudflare Tunnel&lt;/strong&gt;: &lt;a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/" rel="noopener noreferrer"&gt;Cloudflare Tunnel · Cloudflare Zero Trust docs&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>kubernetes</category>
      <category>devenv</category>
      <category>selfhosted</category>
    </item>
  </channel>
</rss>
