<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Pavel</title>
    <description>The latest articles on DEV Community by Pavel (@pavel-hostim).</description>
    <link>https://dev.to/pavel-hostim</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3411990%2Fe76abcc2-c7fa-4249-925b-8bf480881642.jpeg</url>
      <title>DEV Community: Pavel</title>
      <link>https://dev.to/pavel-hostim</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/pavel-hostim"/>
    <language>en</language>
    <item>
      <title>Let's Encrypt Wildcard Certs in Kubernetes: cert-manager + DNS-01 (and When We Skipped It)</title>
      <dc:creator>Pavel</dc:creator>
      <pubDate>Tue, 05 May 2026 16:52:23 +0000</pubDate>
      <link>https://dev.to/pavel-hostim/lets-encrypt-wildcard-certs-in-kubernetes-cert-manager-dns-01-and-when-we-skipped-it-5bi8</link>
      <guid>https://dev.to/pavel-hostim/lets-encrypt-wildcard-certs-in-kubernetes-cert-manager-dns-01-and-when-we-skipped-it-5bi8</guid>
      <description>&lt;p&gt;If you run Kubernetes and want a wildcard TLS cert from Let's Encrypt — say &lt;code&gt;*.example.com&lt;/code&gt; — you need a DNS-01 challenge. HTTP-01 cannot prove control over a wildcard. That single fact rules out the easy path most tutorials show.&lt;/p&gt;

&lt;p&gt;This post is what we actually run at &lt;a href="https://hostim.dev/" rel="noopener noreferrer"&gt;Hostim.dev&lt;/a&gt; for our shared &lt;code&gt;*.region.hostim.dev&lt;/code&gt; wildcard. We use &lt;strong&gt;cert-manager for per-app certs&lt;/strong&gt; and a &lt;strong&gt;plain &lt;code&gt;certbot&lt;/code&gt; Ansible playbook for the wildcard&lt;/strong&gt;. Two different tools for two different jobs. We will explain why, then show the code for both.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why two tools for one cluster?
&lt;/h2&gt;

&lt;p&gt;You can do everything with cert-manager. It supports DNS-01 with a long list of providers. So why are we running a second tool?&lt;/p&gt;

&lt;p&gt;Three reasons:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Our DNS provider (Namecheap) does not have a stable cert-manager webhook.&lt;/strong&gt; There are community webhooks, but they break on upgrades. Maintaining one for a single cert is more work than running certbot once a quarter.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The wildcard cert covers our shared ingress, not user apps.&lt;/strong&gt; It rotates rarely, lives in one namespace, and is read by every ingress as a TLS secret. cert-manager is built for the opposite case: many short-lived certs per Ingress.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;A failed cert-manager renewal at 3 a.m. is hard to debug.&lt;/strong&gt; A failed Ansible run on our laptop is a stack trace we can read.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For per-app domains (&lt;code&gt;my-app.user.tld&lt;/code&gt; with cert-manager + HTTP-01), the controller-driven model wins. For the one shared wildcard, the manual model wins. Use the right tool.&lt;/p&gt;

&lt;h2&gt;
  
  
  Path A: cert-manager + HTTP-01 (per-app domains)
&lt;/h2&gt;

&lt;p&gt;This is the standard path. Most apps want a cert for one or two hostnames. HTTP-01 is the simplest challenge: cert-manager spins up a temporary pod, the ACME server hits &lt;code&gt;http://app.example.com/.well-known/acme-challenge/...&lt;/code&gt;, the pod responds, the cert is issued.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Install cert-manager
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; https://github.com/cert-manager/cert-manager/releases/download/v1.16.1/cert-manager.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Wait for the three pods (&lt;code&gt;cert-manager&lt;/code&gt;, &lt;code&gt;cert-manager-webhook&lt;/code&gt;, &lt;code&gt;cert-manager-cainjector&lt;/code&gt;) to be ready.&lt;/p&gt;
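&lt;p&gt;A quick way to wait for all three at once (the namespace assumes the default manifest install):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl -n cert-manager wait deployment --all --for=condition=Available --timeout=120s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;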

&lt;h3&gt;
  
  
  2. Create a ClusterIssuer
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cert-manager.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterIssuer&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;letsencrypt-prod&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;acme&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;server&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://acme-v02.api.letsencrypt.org/directory&lt;/span&gt;
    &lt;span class="na"&gt;email&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;you@example.com&lt;/span&gt;
    &lt;span class="na"&gt;privateKeySecretRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;letsencrypt-prod-account&lt;/span&gt;
    &lt;span class="na"&gt;solvers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;http01&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;ingress&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;class&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply it. cert-manager will register an ACME account on first use.&lt;/p&gt;
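&lt;p&gt;To confirm registration worked, check the issuer's Ready condition:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get clusterissuer letsencrypt-prod        # READY column should be True
kubectl describe clusterissuer letsencrypt-prod   # shows the ACME registration status
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;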

&lt;h3&gt;
  
  
  3. Annotate your Ingress
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;networking.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Ingress&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-app&lt;/span&gt;
  &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;cert-manager.io/cluster-issuer&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;letsencrypt-prod&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;tls&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;app.example.com"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;secretName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app-example-com-tls&lt;/span&gt;
  &lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app.example.com&lt;/span&gt;
      &lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;paths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/&lt;/span&gt;
            &lt;span class="na"&gt;pathType&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Prefix&lt;/span&gt;
            &lt;span class="na"&gt;backend&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-app&lt;/span&gt;
                &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                  &lt;span class="na"&gt;number&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That is it. cert-manager sees the annotation, requests the cert, solves the HTTP-01 challenge, writes the cert into the &lt;code&gt;app-example-com-tls&lt;/code&gt; secret. Renewal is automatic.&lt;/p&gt;

&lt;p&gt;This works for any number of distinct hostnames, within Let's Encrypt's rate limits. We do this exact thing for every user app on hostim.dev.&lt;/p&gt;
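&lt;p&gt;To see what cert-manager produced (the &lt;code&gt;Certificate&lt;/code&gt; created from the ingress annotation is named after the secret):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get certificate app-example-com-tls        # READY should be True
kubectl get secret app-example-com-tls -o jsonpath='{.data.tls\.crt}' \
  | base64 -d | openssl x509 -noout -subject -dates
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;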

&lt;h2&gt;
  
  
  Path B: certbot + DNS-01 (the wildcard)
&lt;/h2&gt;

&lt;p&gt;For &lt;code&gt;*.region.hostim.dev&lt;/code&gt;, HTTP-01 cannot work: Let's Encrypt issues wildcard certificates only via DNS-01, since no HTTP response can prove control of every possible subdomain. We need DNS-01: prove control over the parent domain by adding a TXT record.&lt;/p&gt;

&lt;p&gt;You can do this with cert-manager and a DNS-01 webhook for your provider. We chose not to. Here is the Ansible playbook we run instead.&lt;/p&gt;

&lt;h3&gt;
  
  
  The flow
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Ansible writes two scripts: an auth hook (creates the TXT record) and a cleanup hook (deletes it).&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;certbot --manual --preferred-challenges dns&lt;/code&gt; runs the auth hook, waits for DNS to propagate, lets ACME verify, then runs the cleanup hook.&lt;/li&gt;
&lt;li&gt;The resulting &lt;code&gt;fullchain.pem&lt;/code&gt; and &lt;code&gt;privkey.pem&lt;/code&gt; get loaded into a Kubernetes Secret of type &lt;code&gt;kubernetes.io/tls&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Every ingress in the shared namespace references that secret.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  The playbook (trimmed)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Issue and upload wildcard TLS certificate&lt;/span&gt;
  &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;localhost&lt;/span&gt;
  &lt;span class="na"&gt;vars&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;sld&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;example"&lt;/span&gt;
    &lt;span class="na"&gt;tld&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;com"&lt;/span&gt;
    &lt;span class="na"&gt;region&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;eu-center"&lt;/span&gt;
    &lt;span class="na"&gt;wildcard_domain&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;*.{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;region&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}.{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;sld&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}.{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;tld&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
    &lt;span class="na"&gt;local_tmp&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/tmp/wildcard-{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;region&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
    &lt;span class="na"&gt;k8s_namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ingress-nginx"&lt;/span&gt;
    &lt;span class="na"&gt;k8s_secret_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;wildcard-{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;region&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}-tls"&lt;/span&gt;

  &lt;span class="na"&gt;tasks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Create certbot auth hook (creates the TXT record)&lt;/span&gt;
      &lt;span class="na"&gt;copy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;dest&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/tmp/certbot-auth-{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;region&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}.sh"&lt;/span&gt;
        &lt;span class="na"&gt;mode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;0755"&lt;/span&gt;
        &lt;span class="na"&gt;content&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;#!/bin/bash&lt;/span&gt;
          &lt;span class="s"&gt;set -e&lt;/span&gt;
          &lt;span class="s"&gt;namecheap-cli setone \&lt;/span&gt;
            &lt;span class="s"&gt;--sld {{ sld }} --tld {{ tld }} \&lt;/span&gt;
            &lt;span class="s"&gt;--type TXT --name "_acme-challenge.{{ region }}" \&lt;/span&gt;
            &lt;span class="s"&gt;--address "${CERTBOT_VALIDATION}" --ttl 60&lt;/span&gt;
          &lt;span class="s"&gt;# Wait for DNS to propagate&lt;/span&gt;
          &lt;span class="s"&gt;for i in {1..30}; do&lt;/span&gt;
            &lt;span class="s"&gt;val=$(dig TXT _acme-challenge.{{ region }}.{{ sld }}.{{ tld }} @1.1.1.1 +short | tr -d '"')&lt;/span&gt;
            &lt;span class="s"&gt;[[ "$val" == "${CERTBOT_VALIDATION}" ]] &amp;amp;&amp;amp; break&lt;/span&gt;
            &lt;span class="s"&gt;sleep 10&lt;/span&gt;
          &lt;span class="s"&gt;done&lt;/span&gt;
          &lt;span class="s"&gt;sleep 30  # belt and suspenders&lt;/span&gt;

    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Issue wildcard certificate&lt;/span&gt;
      &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;&amp;gt;&lt;/span&gt;
        &lt;span class="s"&gt;certbot certonly --manual --preferred-challenges dns&lt;/span&gt;
        &lt;span class="s"&gt;--manual-auth-hook /tmp/certbot-auth-{{ region }}.sh&lt;/span&gt;
        &lt;span class="s"&gt;--manual-cleanup-hook /tmp/certbot-cleanup-{{ region }}.sh&lt;/span&gt;
        &lt;span class="s"&gt;--agree-tos -m you@example.com&lt;/span&gt;
        &lt;span class="s"&gt;--server https://acme-v02.api.letsencrypt.org/directory&lt;/span&gt;
        &lt;span class="s"&gt;-d "{{ wildcard_domain }}"&lt;/span&gt;
        &lt;span class="s"&gt;--work-dir {{ local_tmp }} --config-dir {{ local_tmp }}&lt;/span&gt;
        &lt;span class="s"&gt;--logs-dir {{ local_tmp }} --non-interactive&lt;/span&gt;

    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Create or update TLS Secret&lt;/span&gt;
      &lt;span class="na"&gt;kubernetes.core.k8s&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;state&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;present&lt;/span&gt;
        &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;k8s_namespace&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
        &lt;span class="na"&gt;definition&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
          &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Secret&lt;/span&gt;
          &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;k8s_secret_name&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
          &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubernetes.io/tls&lt;/span&gt;
          &lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;tls.crt&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;lookup('file',&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;local_tmp&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;+&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;'/live/.../fullchain.pem')&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;b64encode&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
            &lt;span class="na"&gt;tls.key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;lookup('file',&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;local_tmp&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;+&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;'/live/.../privkey.pem')&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;b64encode&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
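&lt;p&gt;The cleanup hook the certbot task references is the mirror image of the auth hook. A sketch of that task, reusing the same &lt;code&gt;namecheap-cli&lt;/code&gt; wrapper (the &lt;code&gt;delete&lt;/code&gt; subcommand here is illustrative; use whatever removal call your DNS tooling provides):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;    - name: Create certbot cleanup hook (deletes the TXT record)
      copy:
        dest: "/tmp/certbot-cleanup-{{ region }}.sh"
        mode: "0755"
        content: |
          #!/bin/bash
          set -e
          # 'delete' is illustrative; substitute your wrapper's removal command
          namecheap-cli delete \
            --sld {{ sld }} --tld {{ tld }} \
            --type TXT --name "_acme-challenge.{{ region }}"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;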



&lt;h3&gt;
  
  
  Reference the secret in your Ingress
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;tls&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;*.region.example.com"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;secretName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;wildcard-region-tls&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
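&lt;p&gt;To check what an ingress actually serves (hostname is illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;echo | openssl s_client -connect app.region.example.com:443 \
    -servername app.region.example.com 2&amp;gt;/dev/null \
  | openssl x509 -noout -subject -enddate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;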



&lt;h3&gt;
  
  
  When does it run?
&lt;/h3&gt;

&lt;p&gt;We run the playbook every 60 days. Let's Encrypt certs are valid for 90 days, so 60 leaves a 30-day buffer. A calendar reminder is enough, though a simple cron on a bastion host works too. The cost of a run every two months is lower than the cost of debugging a webhook.&lt;/p&gt;
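&lt;p&gt;If you do want the cron, note that cron cannot express "every 60 days" directly; the first day of every second month is close enough (path is illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# 03:00 on the 1st of every odd month (~61-day cadence)
0 3 1 */2 * ansible-playbook /opt/playbooks/wildcard-cert.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;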

&lt;h2&gt;
  
  
  "cannot import name 'appengine'" — a real gotcha we hit
&lt;/h2&gt;

&lt;p&gt;If you copy this playbook and your &lt;code&gt;certbot&lt;/code&gt; is from your distro's package manager, you may hit:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ImportError: cannot import name 'appengine' from 'urllib3.contrib'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is a Python env collision. System certbot (often 1.21) wants old &lt;code&gt;urllib3&lt;/code&gt;; you have a newer one in &lt;code&gt;~/.local/lib/python3.10/site-packages&lt;/code&gt;. The newer version dropped &lt;code&gt;appengine&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Quick fix — add &lt;code&gt;PYTHONNOUSERSITE: "1"&lt;/code&gt; to the certbot task's &lt;code&gt;environment&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Issue wildcard certificate&lt;/span&gt;
  &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;PYTHONNOUSERSITE&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1"&lt;/span&gt;
  &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;&amp;gt;&lt;/span&gt;
    &lt;span class="s"&gt;certbot certonly --manual ...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Long-term fix — install certbot via snap or pipx so it has its own Python env.&lt;/p&gt;
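&lt;p&gt;Either route gives certbot an isolated environment:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;sudo snap install certbot --classic
sudo ln -s /snap/bin/certbot /usr/bin/certbot   # so plain 'certbot' resolves to the snap
# or, isolated via pipx:
pipx install certbot
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;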

&lt;h2&gt;
  
  
  Should you do it this way?
&lt;/h2&gt;

&lt;p&gt;Probably not. If your DNS provider has a stable cert-manager webhook (Cloudflare, Route53, DigitalOcean, Google Cloud DNS), use cert-manager for both per-app &lt;strong&gt;and&lt;/strong&gt; wildcard certs. It is simpler and renews automatically.&lt;/p&gt;

&lt;p&gt;The hybrid model only makes sense when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your DNS provider has no first-party or stable cert-manager support&lt;/li&gt;
&lt;li&gt;You have one wildcard, not many&lt;/li&gt;
&lt;li&gt;You would rather audit a 30-line shell script than a webhook deployment&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For us those three are all true. For most teams, only the first might be — and even then, switching DNS provider is often easier than maintaining a webhook.&lt;/p&gt;

&lt;h2&gt;
  
  
  TL;DR
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Per-app domains&lt;/strong&gt; → cert-manager + HTTP-01 + ClusterIssuer. One annotation per Ingress, automatic renewals.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Wildcards&lt;/strong&gt; → DNS-01 is mandatory. Use cert-manager with your DNS provider's webhook if it exists. Otherwise, run &lt;code&gt;certbot --manual&lt;/code&gt; from Ansible every 60 days and load the result into a TLS Secret.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Two tools is fine.&lt;/strong&gt; Don't force one model onto two different problems.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Want to skip TLS entirely?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://hostim.dev/" rel="noopener noreferrer"&gt;Hostim.dev&lt;/a&gt; does this for you. Bring a Docker image or a git repo, get a cert and a domain.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
    </item>
    <item>
      <title>Should Small Teams Even Bother with Kubernetes?</title>
      <dc:creator>Pavel</dc:creator>
      <pubDate>Sun, 26 Apr 2026 10:43:15 +0000</pubDate>
      <link>https://dev.to/pavel-hostim/should-small-teams-even-bother-with-kubernetes-occ</link>
      <guid>https://dev.to/pavel-hostim/should-small-teams-even-bother-with-kubernetes-occ</guid>
      <description>&lt;p&gt;Most small teams hit the same question at some point: should we move to Kubernetes? The honest answer for the majority of them is no, but that answer alone is not very helpful. So here is the longer version, with real prices and a clear line where the answer flips to yes.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Kubernetes gives you
&lt;/h2&gt;

&lt;p&gt;Underneath the marketing, Kubernetes is four practical things:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Declarative scheduling&lt;/strong&gt; – you describe the desired state and a controller keeps the cluster in that state&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Self-healing&lt;/strong&gt; – crashed pods restart, dead nodes are drained, replicas come back automatically&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Bin-packing&lt;/strong&gt; – many workloads share the same nodes with CPU and memory limits&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;A standard API&lt;/strong&gt; – Deployments, Services, Ingress, Jobs, Secrets, all the same on any cluster&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are real benefits. The catch is that you pay for them in money, time, or both.&lt;/p&gt;




&lt;h2&gt;
  
  
  What it actually costs
&lt;/h2&gt;

&lt;p&gt;A realistic small-team setup looks like this: 3 services (API, worker, frontend), one Postgres, one Redis, around 50GB of storage. Here is what the same workload costs in three common shapes, all in eu-central-1 / Frankfurt.&lt;/p&gt;

&lt;h3&gt;
  
  
  Managed Kubernetes (AWS EKS)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;EKS control plane, 0.10 USD per hour: &lt;strong&gt;~67 €/mo&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;3× t3.medium nodes (2 vCPU / 4GB): &lt;strong&gt;~92 €/mo&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;RDS Postgres &lt;code&gt;db.t3.small&lt;/code&gt;, Single-AZ: &lt;strong&gt;~27 €/mo&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;ElastiCache Redis &lt;code&gt;cache.t3.micro&lt;/code&gt;: &lt;strong&gt;~13 €/mo&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;ALB base plus LCU: &lt;strong&gt;~15 €/mo&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;50GB EBS gp3 plus ~200GB egress: &lt;strong&gt;~14 €/mo&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Total: around 228 €/mo&lt;/strong&gt;, before backups, observability, or any of your time.&lt;/p&gt;
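&lt;p&gt;The control-plane line is just the hourly price scaled to a month; the EUR conversion (~0.92 EUR/USD) is an assumption for illustration:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# 0.10 USD/h * 730 h/mo, converted at an assumed 0.92 EUR/USD
awk 'BEGIN { printf "control plane: ~%.0f EUR/mo\n", 0.10 * 730 * 0.92 }'
echo "total: $((67 + 92 + 27 + 13 + 15 + 14)) EUR/mo"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;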

&lt;p&gt;GKE used to give you the first cluster for free. That is gone now: control plane is 0.10 USD per hour, and you get a 74.40 USD monthly billing-account credit that offsets one zonal cluster. Regional clusters pay full price.&lt;/p&gt;

&lt;h3&gt;
  
  
  Single Hetzner box + Docker Compose
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;AX42 dedicated (8-core Ryzen 7 PRO, 64GB DDR5, NVMe): &lt;strong&gt;from 57 €/mo&lt;/strong&gt; (April 2026 pricing)&lt;/li&gt;
&lt;li&gt;Postgres, Redis, app – all containers on the same machine, isolated by Compose&lt;/li&gt;
&lt;li&gt;nginx and Let's Encrypt for HTTPS&lt;/li&gt;
&lt;li&gt;Storage Box BX11 (1TB) for backups: &lt;strong&gt;~4 €/mo&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Total: around 61 €/mo.&lt;/strong&gt; That box has enough headroom to run several more projects alongside the main one.&lt;/p&gt;

&lt;h3&gt;
  
  
  Self-hosted Kubernetes (k3s on Hetzner Cloud)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;3× CCX13 (2 dedicated vCPU / 8GB / 80GB SSD): &lt;strong&gt;~48 €/mo&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;You run the control plane, etcd, ingress controller, cert-manager, backups and monitoring yourself&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Compute is around 48 €/mo, but the real cost is the hours you put into the cluster every week.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  The ops cost nobody prices in
&lt;/h2&gt;

&lt;p&gt;Hosting is the cheap part. Kubernetes adds work that simply does not exist with Compose:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cluster upgrades.&lt;/strong&gt; A new minor lands every four months. If you skip a few, the upgrade path becomes painful.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ingress and cert-manager.&lt;/strong&gt; Works fine until cert-manager hits a CRD migration or your ingress controller deprecates an annotation you depend on.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CNI debugging.&lt;/strong&gt; A misbehaving Calico or Cilium pod can take half a day to track down.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;RBAC and ServiceAccounts.&lt;/strong&gt; Required even for trivial things like letting one pod read one secret.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PVCs and storage classes.&lt;/strong&gt; A reboot at the wrong moment can leave a volume stuck in &lt;code&gt;Terminating&lt;/code&gt;, leaving you to dig through the controller logs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;etcd.&lt;/strong&gt; Quiet most of the time, then your cluster is suddenly read-only at 2am and you are restoring from a snapshot.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Realistic estimate: 2 to 5 hours a week of cluster maintenance for a self-hosted setup. Managed clusters cost less time but more money, as the numbers above show. For a 3-person team, 2-5 hours a week is 5-12% of one engineer's time spent on infrastructure that does not ship features.&lt;/p&gt;




&lt;h2&gt;
  
  
  When Kubernetes is the right call
&lt;/h2&gt;

&lt;p&gt;There are real cases where the cost is justified:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You run more than 20 services that need consistent deploys, secrets and networking&lt;/li&gt;
&lt;li&gt;Multi-region or multi-tenant with hard isolation per customer&lt;/li&gt;
&lt;li&gt;Compliance work (SOC 2, HIPAA) where audited RBAC and NetworkPolicies save weeks of paperwork&lt;/li&gt;
&lt;li&gt;Your team already knows Kubernetes well and Compose would slow them down&lt;/li&gt;
&lt;li&gt;Bursty workloads that genuinely benefit from horizontal autoscaling on shared nodes&lt;/li&gt;
&lt;li&gt;You are building a platform where the Kubernetes API itself is the product (operators, CRDs)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If two or more of those apply, Kubernetes earns its keep. If none do, you are paying for capabilities you will not use.&lt;/p&gt;




&lt;h2&gt;
  
  
  When it is not
&lt;/h2&gt;

&lt;p&gt;Most small teams have a workload that looks like this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;1 to 5 services&lt;/li&gt;
&lt;li&gt;One Postgres, maybe a Redis&lt;/li&gt;
&lt;li&gt;A single region&lt;/li&gt;
&lt;li&gt;Fewer than 5 deploys a day&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This fits comfortably on one Hetzner box with Docker Compose, or on a PaaS. No Kubernetes needed, much less money spent, and far less time on ops.&lt;/p&gt;

&lt;p&gt;The "we will need it eventually" argument is mostly survivorship bias. Most projects never reach the scale where Kubernetes is actually the cheapest option, and migrating later is easier than people claim. A &lt;code&gt;docker-compose.yml&lt;/code&gt; maps almost line-for-line to Deployments and Services when the day comes.&lt;/p&gt;




&lt;h2&gt;
  
  
  Quick decision table
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Situation&lt;/th&gt;
&lt;th&gt;Use&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Solo dev, 1-3 services&lt;/td&gt;
&lt;td&gt;Docker Compose on a VPS&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Small team, up to ~10 services, one region&lt;/td&gt;
&lt;td&gt;PaaS or Compose + Ansible&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Multi-tenant SaaS with isolation needs&lt;/td&gt;
&lt;td&gt;Kubernetes (managed)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Compliance-heavy, audited infrastructure&lt;/td&gt;
&lt;td&gt;Kubernetes (managed)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Building a platform or operator&lt;/td&gt;
&lt;td&gt;Kubernetes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;"Everyone else uses it"&lt;/td&gt;
&lt;td&gt;Not a real reason&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  The middle ground
&lt;/h2&gt;

&lt;p&gt;A PaaS exists exactly for this gap. You get the useful parts of Kubernetes – self-healing, declarative deploys, automatic HTTPS, isolated namespaces – without running the cluster yourself.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://hostim.dev" rel="noopener noreferrer"&gt;Hostim.dev&lt;/a&gt; runs Kubernetes underneath, on bare metal in Germany, so you do not have to. You paste a &lt;code&gt;docker-compose.yml&lt;/code&gt; and get a deployed app with HTTPS, Postgres, Redis, volumes, metrics and logs.&lt;/p&gt;

&lt;p&gt;The same stack priced on Hostim:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;3× shared App (2 vCPU / 2GB): &lt;strong&gt;13.50 €&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Postgres (10GB): &lt;strong&gt;10 €&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Redis (2.5GB): &lt;strong&gt;5 €&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;50GB volume: &lt;strong&gt;10 €&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Total: 38.50 €/mo&lt;/strong&gt;, with HTTPS, metrics, logs and backups included.&lt;/p&gt;

&lt;p&gt;If you actually need Kubernetes, run Kubernetes. If you are reaching for it because it is the default answer, a PaaS or a single Hetzner box will probably serve you better, for less money and less weekend work.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://hostim.dev" rel="noopener noreferrer"&gt;👉 Try Hostim.dev&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>docker</category>
      <category>startup</category>
      <category>devops</category>
    </item>
    <item>
      <title>Which Database Should You Self-Host? SQLite vs MySQL vs PostgreSQL vs Redis</title>
      <dc:creator>Pavel</dc:creator>
      <pubDate>Wed, 08 Apr 2026 16:38:44 +0000</pubDate>
      <link>https://dev.to/pavel-hostim/which-database-should-you-self-host-sqlite-vs-mysql-vs-postgresql-vs-redis-7j0</link>
      <guid>https://dev.to/pavel-hostim/which-database-should-you-self-host-sqlite-vs-mysql-vs-postgresql-vs-redis-7j0</guid>
      <description>&lt;p&gt;When you're deploying your own app, the database choice matters more than most people think. It affects performance, ops complexity, backups, and how much memory your server needs.&lt;/p&gt;

&lt;p&gt;There are four options you'll run into most often: &lt;strong&gt;SQLite, MySQL, PostgreSQL, and Redis&lt;/strong&gt;. They're not all the same kind of database – and that's the point. Here's when each one makes sense.&lt;/p&gt;

&lt;h2&gt;
  
  
  SQLite – the zero-ops embedded database
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Best for:&lt;/strong&gt; small apps, prototypes, CLIs, single-user tools, edge deployments&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Strengths:&lt;/strong&gt; no server process, single file, zero config, instant setup&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Weaknesses:&lt;/strong&gt; no concurrent writes, no replication, hard to scale past one instance&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;SQLite is not a server – it's a library that reads and writes a single file. That makes it perfect for apps where simplicity matters more than scale. If your app has one process writing to the database and modest traffic, SQLite will outperform anything else because there's no network round-trip. The moment you need concurrent writes or multiple app replicas, you've outgrown it.&lt;/p&gt;
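&lt;p&gt;The "library, not server" point is easy to see in code – a minimal Python sketch with nothing to install or start (swap &lt;code&gt;:memory:&lt;/code&gt; for a file path like &lt;code&gt;app.db&lt;/code&gt; to get a persistent single-file database):&lt;/p&gt;

```python
import sqlite3

def demo(db_path: str = ":memory:"):
    # No server process: connect() opens (or creates) the database directly.
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))
    conn.commit()
    rows = conn.execute("SELECT name FROM users ORDER BY id").fetchall()
    conn.close()
    return rows
```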




&lt;h2&gt;
  
  
  MySQL – the reliable workhorse
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Best for:&lt;/strong&gt; web apps, CMS platforms, CRUD-heavy workloads, WordPress/Laravel stacks&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Strengths:&lt;/strong&gt; fast reads, mature replication, huge ecosystem, low memory footprint&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Weaknesses:&lt;/strong&gt; weaker JSON support, less strict by default, fewer advanced types&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;MySQL powers a massive chunk of the internet. It's battle-tested, well-documented, and runs well even on small VPS instances. If you're running a standard web app with mostly reads and simple queries, MySQL will serve you well without hogging resources. Just be aware that its defaults are more lenient than PostgreSQL's – silent truncations and implicit type casts can bite you.&lt;/p&gt;




&lt;h2&gt;
  
  
  PostgreSQL – the feature-rich powerhouse
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Best for:&lt;/strong&gt; complex queries, data integrity, JSON workloads, GIS, analytics&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Strengths:&lt;/strong&gt; advanced types (JSONB, arrays, hstore), strong standards compliance, extensions ecosystem&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Weaknesses:&lt;/strong&gt; higher memory usage, more tuning needed, steeper learning curve for ops&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;PostgreSQL is the database you pick when correctness and flexibility matter. It handles complex joins, window functions, CTEs, and full-text search natively. The extension ecosystem (PostGIS, pg_cron, pgvector) makes it a Swiss army knife. The trade-off: it's hungrier on resources and rewards careful tuning of &lt;code&gt;shared_buffers&lt;/code&gt;, &lt;code&gt;work_mem&lt;/code&gt;, and connection pooling.&lt;/p&gt;
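&lt;p&gt;A few illustrative starting points for that tuning – rules of thumb for a small VPS with ~4 GB of RAM, not recommendations for your workload:&lt;/p&gt;

```conf
# postgresql.conf – illustrative starting points, tune per workload
shared_buffers = 1GB        # ~25% of RAM is a common rule of thumb
work_mem = 16MB             # per sort/hash, per connection – keep modest
effective_cache_size = 3GB  # planner hint, not an allocation
max_connections = 100       # prefer a pooler (e.g. PgBouncer) over raising this
```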




&lt;h2&gt;
  
  
  Redis – the in-memory speed layer
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Best for:&lt;/strong&gt; caching, sessions, rate limiting, queues, pub/sub, leaderboards&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Strengths:&lt;/strong&gt; sub-millisecond reads, rich data structures (lists, sets, sorted sets, streams), built-in TTL&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Weaknesses:&lt;/strong&gt; data must fit in RAM, persistence is optional and lossy, not a primary data store&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Redis isn't a replacement for a relational database – it's a complement. Use it for things that need to be fast and can tolerate occasional data loss: session tokens, cache layers, job queues. Redis Streams can even replace simple message brokers. Just don't store your source of truth here – if the server restarts between RDB snapshots, recent writes are gone.&lt;/p&gt;
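&lt;p&gt;What "optional and lossy" looks like in practice – the relevant &lt;code&gt;redis.conf&lt;/code&gt; directives:&lt;/p&gt;

```conf
# redis.conf – persistence is opt-in
save 900 1            # RDB: snapshot if at least 1 change in 15 minutes
save 300 10           # writes after the last snapshot are lost on restart
appendonly yes        # AOF: log every write for much smaller loss windows
appendfsync everysec  # fsync once per second – bounded ~1s loss window
```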




&lt;h2&gt;
  
  
  Quick comparison
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;SQLite&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;MySQL&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;PostgreSQL&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Redis&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Type&lt;/td&gt;
&lt;td&gt;Embedded&lt;/td&gt;
&lt;td&gt;Relational server&lt;/td&gt;
&lt;td&gt;Relational server&lt;/td&gt;
&lt;td&gt;In-memory store&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Ease of setup&lt;/td&gt;
&lt;td&gt;⭐⭐⭐⭐⭐&lt;/td&gt;
&lt;td&gt;⭐⭐⭐⭐&lt;/td&gt;
&lt;td&gt;⭐⭐⭐&lt;/td&gt;
&lt;td&gt;⭐⭐⭐⭐&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Concurrent writes&lt;/td&gt;
&lt;td&gt;❌ Single-writer&lt;/td&gt;
&lt;td&gt;✅ Good&lt;/td&gt;
&lt;td&gt;✅ Excellent&lt;/td&gt;
&lt;td&gt;✅ Very fast&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Complex queries&lt;/td&gt;
&lt;td&gt;Basic&lt;/td&gt;
&lt;td&gt;Good&lt;/td&gt;
&lt;td&gt;Excellent&lt;/td&gt;
&lt;td&gt;N/A (key-value)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Memory usage&lt;/td&gt;
&lt;td&gt;Minimal&lt;/td&gt;
&lt;td&gt;Low–moderate&lt;/td&gt;
&lt;td&gt;Moderate–high&lt;/td&gt;
&lt;td&gt;High (all data in RAM)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Replication&lt;/td&gt;
&lt;td&gt;None built-in&lt;/td&gt;
&lt;td&gt;Mature&lt;/td&gt;
&lt;td&gt;Mature&lt;/td&gt;
&lt;td&gt;Built-in&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Best self-host size&lt;/td&gt;
&lt;td&gt;Single instance&lt;/td&gt;
&lt;td&gt;Small–large&lt;/td&gt;
&lt;td&gt;Medium–large&lt;/td&gt;
&lt;td&gt;Any (as cache layer)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Persistence&lt;/td&gt;
&lt;td&gt;Always (file)&lt;/td&gt;
&lt;td&gt;Always (disk)&lt;/td&gt;
&lt;td&gt;Always (disk)&lt;/td&gt;
&lt;td&gt;Optional (RDB/AOF)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  So which one should you choose?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Building a prototype or CLI tool?&lt;/strong&gt; → &lt;strong&gt;SQLite&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Running a standard web app?&lt;/strong&gt; → &lt;strong&gt;MySQL&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Need complex queries, JSONB, or extensions?&lt;/strong&gt; → &lt;strong&gt;PostgreSQL&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Need a fast cache, session store, or queue?&lt;/strong&gt; → &lt;strong&gt;Redis&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Most real-world apps end up using &lt;strong&gt;two&lt;/strong&gt;: a relational database (MySQL or PostgreSQL) for your data, and Redis for caching and sessions. That's not overkill – it's the right tool for each job.&lt;/p&gt;




&lt;h2&gt;
  
  
  Self-hosting these databases
&lt;/h2&gt;

&lt;p&gt;Running databases on a VPS means managing backups, updates, and disk space yourself. It's doable, but it's one more thing to maintain.&lt;/p&gt;

&lt;p&gt;On &lt;a href="https://hostim.dev" rel="noopener noreferrer"&gt;Hostim.dev&lt;/a&gt;, MySQL, PostgreSQL, and Redis are built in – provisioned alongside your app with metrics and no extra config. Paste a &lt;code&gt;docker-compose.yml&lt;/code&gt; and your database is ready.&lt;/p&gt;

</description>
      <category>postgres</category>
      <category>mysql</category>
      <category>sqlite</category>
      <category>redis</category>
    </item>
    <item>
      <title>Bastion Host &amp; GitHub Actions on Hostim.dev</title>
      <dc:creator>Pavel</dc:creator>
      <pubDate>Thu, 08 Jan 2026 19:00:09 +0000</pubDate>
      <link>https://dev.to/pavel-hostim/bastion-host-github-actions-on-hostimdev-g9j</link>
      <guid>https://dev.to/pavel-hostim/bastion-host-github-actions-on-hostimdev-g9j</guid>
      <description>&lt;p&gt;I haven't posted updates for a while, but several core features landed on Hostim.dev recently.&lt;/p&gt;

&lt;p&gt;Instead of shipping from a fixed roadmap, I'm following &lt;strong&gt;support-driven (customer-driven) development&lt;/strong&gt;: features move to the top of the queue once users actively need them.&lt;/p&gt;

&lt;p&gt;Over the past month, this resulted in three practical additions around &lt;strong&gt;Docker CI/CD&lt;/strong&gt;, &lt;strong&gt;GitHub Actions deployment&lt;/strong&gt;, and &lt;strong&gt;secure bastion host access&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  GitHub Actions deploy for Docker apps
&lt;/h2&gt;

&lt;p&gt;Hostim.dev now supports &lt;a href="https://hostim.dev/docs/apps/github-actions" rel="noopener noreferrer"&gt;&lt;strong&gt;GitHub Actions deployments&lt;/strong&gt;&lt;/a&gt; out of the box.&lt;/p&gt;

&lt;p&gt;You can trigger a deploy directly from GitHub Actions using a simple API call. This works well for common &lt;strong&gt;Docker CI/CD&lt;/strong&gt; setups:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Build and deploy on merge to &lt;code&gt;main&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Restart an app after pushing a new Docker image&lt;/li&gt;
&lt;li&gt;Manual deploys via &lt;code&gt;workflow_dispatch&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There's no OAuth and no hidden logic. Your workflow controls everything — branches, conditions, environments. Hostim only executes the requested action.&lt;/p&gt;

&lt;p&gt;This is especially useful if you already run &lt;strong&gt;CI/CD with Docker&lt;/strong&gt; and just want a clean deployment target.&lt;/p&gt;
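&lt;p&gt;A sketch of such a workflow – the deploy URL and secret names below are placeholders, not Hostim's actual API; take the real values from the Hostim.dev docs for your app:&lt;/p&gt;

```yaml
# .github/workflows/deploy.yml – deploy on merge to main, or manually
name: deploy
on:
  push:
    branches: [main]
  workflow_dispatch:
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Trigger Hostim deploy
        run: curl -fsS -X POST "$DEPLOY_URL" -H "Authorization: Bearer $DEPLOY_TOKEN"
        env:
          DEPLOY_URL: ${{ secrets.HOSTIM_DEPLOY_URL }}       # placeholder secret
          DEPLOY_TOKEN: ${{ secrets.HOSTIM_DEPLOY_TOKEN }}   # placeholder secret
```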




&lt;h2&gt;
  
  
  Bastion host for secure shell access to containers
&lt;/h2&gt;

&lt;p&gt;Each project now includes a built-in &lt;a href="https://hostim.dev/docs/services/bastion" rel="noopener noreferrer"&gt;&lt;strong&gt;SSH bastion host&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you're unfamiliar: &lt;strong&gt;a bastion host is a hardened entry point&lt;/strong&gt; used to access private infrastructure without exposing services to the public internet.&lt;/p&gt;

&lt;p&gt;On Hostim.dev, the bastion host allows you to open a shell into running apps:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;shell my-app
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This answers common questions like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;em&gt;What is a bastion host used for?&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;How do I securely SSH into containers?&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;How can I debug a production Docker app without public access?&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Typical use cases:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Debugging production issues&lt;/li&gt;
&lt;li&gt;Running database migrations&lt;/li&gt;
&lt;li&gt;Inspecting environment variables&lt;/li&gt;
&lt;li&gt;Accessing internal services safely&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The bastion host is private, key-based, and isolated per project.&lt;/p&gt;




&lt;h2&gt;
  
  
  Custom commands for Docker apps
&lt;/h2&gt;

&lt;p&gt;Apps can now override the container command.&lt;/p&gt;

&lt;p&gt;This enables common Docker patterns such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;One image, multiple roles (web + worker)&lt;/li&gt;
&lt;li&gt;Background jobs using the same Docker image&lt;/li&gt;
&lt;li&gt;CI/CD pipelines that reuse images across environments&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This pairs naturally with &lt;strong&gt;Docker CI/CD pipelines&lt;/strong&gt;, where images are built once and reused consistently.&lt;/p&gt;
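&lt;p&gt;In Compose terms, the web + worker pattern looks like this (names and commands are illustrative):&lt;/p&gt;

```yaml
# One image, two roles – the worker overrides the container command
services:
  web:
    image: myapp:1.2                    # default CMD starts the HTTP server
  worker:
    image: myapp:1.2
    command: ["python", "worker.py"]    # same image, different entry point
```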




&lt;h2&gt;
  
  
  Why this approach
&lt;/h2&gt;

&lt;p&gt;Many platforms ship features based on assumptions.&lt;/p&gt;

&lt;p&gt;Instead, these changes came directly from:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"How do I deploy with GitHub Actions?"&lt;/li&gt;
&lt;li&gt;"How do I get shell access without exposing ports?"&lt;/li&gt;
&lt;li&gt;"How do I run workers with the same image?"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Support questions shape the roadmap.&lt;/p&gt;




&lt;h2&gt;
  
  
  What's next
&lt;/h2&gt;

&lt;p&gt;More items are planned, but user feedback decides the order.&lt;/p&gt;

&lt;p&gt;If something feels missing, it's probably already on the list.&lt;/p&gt;

&lt;p&gt;👉 &lt;a href="https://hostim.dev" rel="noopener noreferrer"&gt;https://hostim.dev&lt;/a&gt;&lt;/p&gt;

</description>
      <category>docker</category>
      <category>githubactions</category>
      <category>cicd</category>
      <category>ci</category>
    </item>
    <item>
      <title>A Better Umami Dashboard with Grafana</title>
      <dc:creator>Pavel</dc:creator>
      <pubDate>Thu, 20 Nov 2025 17:24:38 +0000</pubDate>
      <link>https://dev.to/pavel-hostim/a-better-umami-dashboard-with-grafana-1mfj</link>
      <guid>https://dev.to/pavel-hostim/a-better-umami-dashboard-with-grafana-1mfj</guid>
      <description>&lt;p&gt;Umami is great. Lightweight, privacy-friendly, no cookies, no tracking drama.&lt;br&gt;
We use it ourselves on Hostim.dev, and we ship a &lt;strong&gt;one-click Umami template&lt;/strong&gt; for anyone who wants simple, privacy-focused analytics.&lt;/p&gt;

&lt;p&gt;But once you start relying on analytics to make actual decisions, you hit the limits pretty quickly.&lt;/p&gt;


&lt;h2&gt;
  
  
  Why the default Umami dashboard wasn't enough
&lt;/h2&gt;

&lt;p&gt;Umami intentionally keeps things minimal, but some gaps become obvious:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No clear view of &lt;strong&gt;when&lt;/strong&gt; visitors peak during the day&lt;/li&gt;
&lt;li&gt;Hard to isolate &lt;strong&gt;bots&lt;/strong&gt; from real traffic&lt;/li&gt;
&lt;li&gt;No &lt;strong&gt;moving averages&lt;/strong&gt; or trend smoothing&lt;/li&gt;
&lt;li&gt;No grouped referrers (e.g. "search", "LLM", "other")&lt;/li&gt;
&lt;li&gt;Limited visibility into relationships between sessions and custom events&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;None of this is criticism — Umami is intentionally simple.&lt;br&gt;
But sometimes you want more resolution.&lt;/p&gt;

&lt;p&gt;So I took the quickest path:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deploy Grafana → connect it to Umami's PostgreSQL → build a custom dashboard.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This took maybe ten minutes and unlocked:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Daily heatmap&lt;/strong&gt; showing real traffic peaks&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;7-day moving averages&lt;/strong&gt; for referrers&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Qualified sessions&lt;/strong&gt; (≥2 pageviews) to filter out most bots&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Selectable custom events&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Raw stats&lt;/strong&gt; for the selected period&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Suddenly Umami became "actionable" instead of just "nice".&lt;/p&gt;
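&lt;p&gt;As one example, the "qualified sessions" panel boils down to a query along these lines. Table and column names follow Umami v2's Postgres schema and &lt;code&gt;$__timeFilter&lt;/code&gt; is Grafana's SQL time-range macro – verify both against your versions:&lt;/p&gt;

```sql
-- Sessions with at least 2 pageviews in the selected period
SELECT count(*) AS qualified_sessions
FROM (
  SELECT session_id
  FROM website_event
  WHERE event_type = 1              -- 1 = pageview in Umami v2
    AND $__timeFilter(created_at)   -- Grafana dashboard time range
  GROUP BY session_id
  HAVING count(*) >= 2
) s;
```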


&lt;h2&gt;
  
  
  How to connect Grafana to Umami's PostgreSQL
&lt;/h2&gt;

&lt;p&gt;Inside Grafana:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Configuration → Data sources → Add data source → PostgreSQL&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Fill in the credentials from your Umami database and save.&lt;/p&gt;

&lt;p&gt;You can now import our dashboard:&lt;/p&gt;

&lt;p&gt;👉 &lt;a href="https://grafana.com/grafana/dashboards/24431" rel="noopener noreferrer"&gt;&lt;strong&gt;Grafana.com Dashboard&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;


&lt;h2&gt;
  
  
  Try it yourself
&lt;/h2&gt;

&lt;p&gt;If you prefer to self-host on your own VPS, here is a complete Docker Compose stack for Umami, PostgreSQL, and Grafana.&lt;/p&gt;
&lt;h3&gt;
  
  
  Full Docker Compose stack
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;postgres&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres:15&lt;/span&gt;
    &lt;span class="na"&gt;restart&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;always&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;POSTGRES_USER&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;umami&lt;/span&gt;
      &lt;span class="na"&gt;POSTGRES_PASSWORD&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;umami_pass&lt;/span&gt;
      &lt;span class="na"&gt;POSTGRES_DB&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;umami&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;postgres_data:/var/lib/postgresql/data&lt;/span&gt;

  &lt;span class="na"&gt;umami&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ghcr.io/umami-software/umami:postgres-latest&lt;/span&gt;
    &lt;span class="na"&gt;restart&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;always&lt;/span&gt;
    &lt;span class="na"&gt;depends_on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;postgres&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;DATABASE_URL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres://umami:umami_pass@postgres:5432/umami&lt;/span&gt;
      &lt;span class="na"&gt;DATABASE_TYPE&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgresql&lt;/span&gt;
      &lt;span class="na"&gt;APP_SECRET&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;replace_this_with_a_random_secret"&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;127.0.0.1:3000:3000"&lt;/span&gt;

  &lt;span class="na"&gt;grafana&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;grafana/grafana:latest&lt;/span&gt;
    &lt;span class="na"&gt;restart&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;always&lt;/span&gt;
    &lt;span class="na"&gt;depends_on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;postgres&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;GF_SERVER_DOMAIN=grafana.example.com&lt;/span&gt;
      &lt;span class="s"&gt;GF_SERVER_ROOT_URL=https://grafana.example.com&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;127.0.0.1:3001:3000"&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;grafana_data:/var/lib/grafana&lt;/span&gt;

&lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;postgres_data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;grafana_data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Start everything:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose up &lt;span class="nt"&gt;-d&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Umami → &lt;a href="http://localhost:3000" rel="noopener noreferrer"&gt;http://localhost:3000&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Grafana → &lt;a href="http://localhost:3001" rel="noopener noreferrer"&gt;http://localhost:3001&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In Grafana, configure a PostgreSQL data source:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Host: postgres
Port: 5432
User: umami
Password: umami_pass
Database: umami
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Import the dashboard using the ID from Grafana.com.&lt;/p&gt;




&lt;h2&gt;
  
  
  Don't want to run a server?
&lt;/h2&gt;

&lt;p&gt;If you don't want to manage Docker, OS maintenance, or networking, you can deploy the &lt;strong&gt;same stack&lt;/strong&gt; on Hostim.dev by simply pasting the Compose file above when creating a new project.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Choose &lt;strong&gt;Paste Docker Compose&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Use the YAML from this section&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Hostim.dev will handle HTTPS, internal networking, logs, metrics, and persistence for all three services.&lt;/p&gt;

&lt;p&gt;👉 &lt;a href="https://hostim.dev" rel="noopener noreferrer"&gt;&lt;strong&gt;Try Hostim.dev — deploy the full stack without touching SSH&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Already running Umami on Hostim.dev?
&lt;/h2&gt;

&lt;p&gt;If you deployed Umami using the one-click template, you don't need a new project. Just:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a &lt;strong&gt;separate Grafana App&lt;/strong&gt; in the &lt;strong&gt;same project&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Use the &lt;code&gt;grafana/grafana:latest&lt;/code&gt; image&lt;/li&gt;
&lt;li&gt;Add the existing &lt;strong&gt;Umami PostgreSQL&lt;/strong&gt; as a Grafana data source&lt;/li&gt;
&lt;li&gt;Import the dashboard JSON&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Both apps run on the same private project network, so they can communicate without exposing ports or adjusting firewall rules.&lt;/p&gt;

</description>
      <category>umami</category>
      <category>grafana</category>
      <category>analytics</category>
      <category>docker</category>
    </item>
    <item>
      <title>MetalLB on Hetzner Dedicated with vSwitch</title>
      <dc:creator>Pavel</dc:creator>
      <pubDate>Tue, 28 Oct 2025 17:00:41 +0000</pubDate>
      <link>https://dev.to/pavel-hostim/metallb-on-hetzner-dedicated-with-vswitch-1f9e</link>
      <guid>https://dev.to/pavel-hostim/metallb-on-hetzner-dedicated-with-vswitch-1f9e</guid>
      <description>&lt;p&gt;When running Kubernetes on Hetzner Dedicated, there is no cloud load balancer. But you &lt;em&gt;can&lt;/em&gt; provide public LoadBalancer IPs by attaching a routed IP range to a vSwitch and letting MetalLB announce addresses over L2.&lt;/p&gt;

&lt;p&gt;Our setup:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Calico (VXLAN + WireGuard)&lt;/li&gt;
&lt;li&gt;kube-proxy IPVS with strictARP&lt;/li&gt;
&lt;li&gt;ingress-nginx for ingress traffic&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  1. Assign a public subnet to your vSwitch
&lt;/h2&gt;

&lt;p&gt;Example routed block Hetzner provides:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Subnet:     123.45.67.32/29
Gateway:    123.45.67.33
Usable:     123.45.67.34–38
Broadcast:  123.45.67.39
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Attach your dedicated servers to the vSwitch (VLAN ID e.g. 4000).&lt;/p&gt;




&lt;h2&gt;
  
  
  2. Configure vSwitch VLAN on each node
&lt;/h2&gt;

&lt;p&gt;Each node gets a &lt;strong&gt;/32&lt;/strong&gt; from the subnet – Hetzner routes the whole /29 to your server.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Important note on routing table IDs&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This guide uses routing table &lt;strong&gt;200&lt;/strong&gt; as an example.&lt;/p&gt;

&lt;p&gt;If you are running &lt;strong&gt;Cilium&lt;/strong&gt;, avoid table &lt;code&gt;200&lt;/code&gt;: Cilium currently flushes all routes in table 200 on startup, which breaks vSwitch routing.&lt;/p&gt;

&lt;p&gt;For Cilium-based installations, &lt;strong&gt;any other unused routing table ID works&lt;/strong&gt; (for example &lt;code&gt;201&lt;/code&gt;, &lt;code&gt;300&lt;/code&gt;, or &lt;code&gt;1001&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;Reference: &lt;a href="https://github.com/cilium/cilium/issues/38531" rel="noopener noreferrer"&gt;https://github.com/cilium/cilium/issues/38531&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Create &lt;code&gt;/etc/netplan/10-vlan-4000.yaml&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;network&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
  &lt;span class="na"&gt;renderer&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;networkd&lt;/span&gt;
  &lt;span class="na"&gt;vlans&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;vlan4000&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;4000&lt;/span&gt;
      &lt;span class="na"&gt;link&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;eno1&lt;/span&gt;
      &lt;span class="na"&gt;mtu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1400&lt;/span&gt;
      &lt;span class="na"&gt;addresses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;123.45.67.38/32&lt;/span&gt; &lt;span class="c1"&gt;# node-specific&lt;/span&gt;
      &lt;span class="na"&gt;routes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;to&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;0.0.0.0/0&lt;/span&gt;
          &lt;span class="na"&gt;via&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;123.45.67.33&lt;/span&gt;
          &lt;span class="na"&gt;on-link&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
          &lt;span class="na"&gt;table&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;200&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;to&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;123.45.67.32/29&lt;/span&gt;
          &lt;span class="na"&gt;scope&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;link&lt;/span&gt;
          &lt;span class="na"&gt;table&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;200&lt;/span&gt;
      &lt;span class="na"&gt;routing-policy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;from&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;123.45.67.32/29&lt;/span&gt;
          &lt;span class="na"&gt;table&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;200&lt;/span&gt;
          &lt;span class="na"&gt;priority&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;10&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;to&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;123.45.67.32/29&lt;/span&gt;
          &lt;span class="na"&gt;table&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;200&lt;/span&gt;
          &lt;span class="na"&gt;priority&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;10&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;from&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;123.45.67.32/29&lt;/span&gt;
          &lt;span class="na"&gt;to&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;10.233.0.0/18&lt;/span&gt;
          &lt;span class="na"&gt;table&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;254&lt;/span&gt;
          &lt;span class="na"&gt;priority&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;from&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;123.45.67.32/29&lt;/span&gt;
          &lt;span class="na"&gt;to&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;10.233.64.0/18&lt;/span&gt;
          &lt;span class="na"&gt;table&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;254&lt;/span&gt;
          &lt;span class="na"&gt;priority&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;netplan apply
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  3. Required sysctl settings
&lt;/h2&gt;

&lt;p&gt;Create &lt;code&gt;/etc/sysctl.d/999-metallb.conf&lt;/code&gt; and load it with &lt;code&gt;sysctl --system&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;net.ipv4.conf.all.arp_ignore=1
net.ipv4.conf.all.arp_announce=2
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Why:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Setting&lt;/th&gt;
&lt;th&gt;Purpose&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;arp_ignore=1&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Only reply to ARP queries for an IP &lt;strong&gt;on the correct interface&lt;/strong&gt; – prevents conflicting replies from Calico/VXLAN.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;arp_announce=2&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Send ARP only from the &lt;strong&gt;interface that owns the VIP&lt;/strong&gt;, required when MetalLB moves VIPs between nodes.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;rp_filter=0&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Disable strict reverse-path filtering – otherwise nodes drop return traffic sourced from VIPs or remote pods.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
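&lt;p&gt;Apply the settings without a reboot:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;sysctl --system
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;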




&lt;h2&gt;
  
  
  4. kube-proxy + Calico adjustments
&lt;/h2&gt;

&lt;p&gt;Enable strictARP in kube-proxy (IPVS mode):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubeproxy.config.k8s.io/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;KubeProxyConfiguration&lt;/span&gt;
&lt;span class="na"&gt;ipvs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;strictARP&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
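&lt;p&gt;If your cluster manages kube-proxy through its ConfigMap, you can flip the flag in place (this is the approach the MetalLB docs suggest; adjust if kube-proxy is configured differently in your setup):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get configmap kube-proxy -n kube-system -o yaml | \
  sed -e "s/strictARP: false/strictARP: true/" | \
  kubectl apply -f - -n kube-system
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;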



&lt;p&gt;MTU must account for VXLAN + vSwitch + WireGuard overhead:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Calico MTU: 1280 (consistent across nodes)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
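&lt;p&gt;How you set this depends on how Calico was installed. With the Tigera operator it's a one-line patch (manifest-based installs set &lt;code&gt;veth_mtu&lt;/code&gt; in the &lt;code&gt;calico-config&lt;/code&gt; ConfigMap instead):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl patch installation default --type merge -p '{"spec":{"calicoNetwork":{"mtu":1280}}}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;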






&lt;h2&gt;
  
  
  5. Deploy MetalLB
&lt;/h2&gt;
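&lt;p&gt;If MetalLB isn't installed yet, apply the official manifest first (v0.14.8 was current when writing – check the MetalLB releases page for the latest version):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.14.8/config/manifests/metallb-native.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;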



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;metallb.io/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;IPAddressPool&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;vswitch&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;metallb-system&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;addresses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;123.45.67.34-123.45.67.36&lt;/span&gt; &lt;span class="c1"&gt;# free VIPs&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;metallb.io/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;L2Advertisement&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;l2&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;metallb-system&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;ipAddressPools&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;vswitch"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
  &lt;span class="na"&gt;interfaces&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;vlan4000"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Restart MetalLB speakers to pick up interface binding.&lt;/p&gt;
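&lt;p&gt;With the native manifests the speaker DaemonSet is named &lt;code&gt;speaker&lt;/code&gt; (the Helm chart calls it &lt;code&gt;metallb-speaker&lt;/code&gt;), so the restart looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl rollout restart daemonset -n metallb-system speaker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;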




&lt;h2&gt;
  
  
  6. Ingress service configuration
&lt;/h2&gt;

&lt;p&gt;For ingress-nginx:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;externalTrafficPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Local&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Pros:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Preserves client IP&lt;/li&gt;
&lt;li&gt;Prevents traffic hairpin across nodes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Tradeoff:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;All traffic for the VIP enters through one node at a time, and the announcing node must run an ingress pod (acceptable for ingress)&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  7. Verification
&lt;/h2&gt;

&lt;p&gt;Confirm that your ingress-nginx Service received a public VIP:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get svc &lt;span class="nt"&gt;-n&lt;/span&gt; ingress-nginx ingress-nginx-controller
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Expected output (example):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME                       TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)                      AGE
ingress-nginx-controller   LoadBalancer   10.233.53.156   123.45.67.35   80:30440/TCP,443:30477/TCP   17d
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Inspect the Service events to see which node currently advertises the VIP:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl describe svc &lt;span class="nt"&gt;-n&lt;/span&gt; ingress-nginx ingress-nginx-controller
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Look for:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Events:
  Normal  nodeAssigned  ...  metallb-speaker  announcing from node "control-plane-1" with protocol "layer2"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then verify reachability from &lt;strong&gt;outside&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-I&lt;/span&gt; http://123.45.67.35
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Failover test
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Identify the active announcer from the above events&lt;/li&gt;
&lt;li&gt;Shut that node down abruptly:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;poweroff
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="3"&gt;
&lt;li&gt;Re-run:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-I&lt;/span&gt; http://123.45.67.35
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Expected: traffic continues within ~1–2 seconds as another node picks up the VIP.&lt;/p&gt;

&lt;p&gt;➡️ Note: VIPs &lt;strong&gt;do not appear&lt;/strong&gt; in &lt;code&gt;ip addr&lt;/code&gt; on any physical interface; with kube-proxy in IPVS mode they live on the &lt;code&gt;kube-ipvs0&lt;/code&gt; dummy device and are advertised via ARP. That is normal.&lt;/p&gt;
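&lt;p&gt;If you want to double-check on the announcing node, inspect the IPVS table instead (assumes &lt;code&gt;ipvsadm&lt;/code&gt; is installed):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;sudo ipvsadm -Ln | grep -A 3 123.45.67.35
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;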




&lt;blockquote&gt;
&lt;p&gt;Acknowledgment&lt;/p&gt;

&lt;p&gt;Thanks to Oleksandr Vorona (DevOps at Dysnix) for reporting a Cilium routing table conflict and helping improve this guide.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dysnix.com" rel="noopener noreferrer"&gt;https://dysnix.com&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Wrapping up
&lt;/h2&gt;

&lt;p&gt;This setup gives you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Public LoadBalancer IPs&lt;/li&gt;
&lt;li&gt;Fast failover (~1-2s)&lt;/li&gt;
&lt;li&gt;Clean separation: pod networking via VXLAN/WireGuard, external via vSwitch&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Alternative approaches:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Hetzner Cloud Load Balancer (simpler, works with Dedicated too)&lt;/li&gt;
&lt;li&gt;Cilium with L2 announcements&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We run hosting infrastructure, so controlling ingress networking ourselves matters – partly on principle. Hetzner still operates the vSwitch underneath, but this setup is more independent than relying on the cloud load balancer.&lt;/p&gt;

&lt;p&gt;And if you'd rather not handle any of this yourself – &lt;b&gt;&lt;a href="https://hostim.dev" rel="noopener noreferrer"&gt;Hostim.dev&lt;/a&gt;&lt;/b&gt; is now live. You can deploy your Docker or Compose apps with built-in databases, volumes, HTTPS, and logs – all in one place, ready in minutes.&lt;/p&gt;

</description>
      <category>metallb</category>
      <category>kubernetes</category>
      <category>hetzner</category>
      <category>networking</category>
    </item>
    <item>
      <title>How to Self-Host n8n with Docker Compose</title>
      <dc:creator>Pavel</dc:creator>
      <pubDate>Sat, 04 Oct 2025 12:23:08 +0000</pubDate>
      <link>https://dev.to/pavel-hostim/how-to-self-host-n8n-with-docker-compose-mjl</link>
      <guid>https://dev.to/pavel-hostim/how-to-self-host-n8n-with-docker-compose-mjl</guid>
      <description>&lt;p&gt;&lt;a href="https://n8n.io/" rel="noopener noreferrer"&gt;n8n&lt;/a&gt; is a popular open-source automation tool – like Zapier, but self-hosted.&lt;br&gt;
Here's how to run it on your own VPS using Docker Compose, and expose it securely over HTTPS with &lt;strong&gt;Caddy&lt;/strong&gt; as a reverse proxy.&lt;/p&gt;

&lt;p&gt;If you want a broader primer, check out &lt;a href="https://dev.to/pavel-hostim/how-to-self-host-a-docker-compose-app-3b4p"&gt;How to Self-Host a Docker Compose App&lt;/a&gt;. But you don't need to read it first – this guide is self-contained.&lt;/p&gt;


&lt;h2&gt;
  
  
  1. Install Docker on your VPS
&lt;/h2&gt;

&lt;p&gt;Update the system and install Docker + Compose plugin:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;apt update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; apt upgrade &lt;span class="nt"&gt;-y&lt;/span&gt;
apt-get &lt;span class="nb"&gt;install &lt;/span&gt;ca-certificates curl &lt;span class="nt"&gt;-y&lt;/span&gt;
&lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-m&lt;/span&gt; 0755 &lt;span class="nt"&gt;-d&lt;/span&gt; /etc/apt/keyrings
curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://download.docker.com/linux/ubuntu/gpg &lt;span class="nt"&gt;-o&lt;/span&gt; /etc/apt/keyrings/docker.asc
&lt;span class="nb"&gt;chmod &lt;/span&gt;a+r /etc/apt/keyrings/docker.asc

&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="s2"&gt;"deb [arch=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;dpkg &lt;span class="nt"&gt;--print-architecture&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt; signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu &lt;/span&gt;&lt;span class="se"&gt;\&lt;/span&gt;&lt;span class="s2"&gt;
  &lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt; /etc/os-release &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;UBUNTU_CODENAME&lt;/span&gt;&lt;span class="k"&gt;:-&lt;/span&gt;&lt;span class="nv"&gt;$VERSION_CODENAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt; stable"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  | &lt;span class="nb"&gt;tee&lt;/span&gt; /etc/apt/sources.list.d/docker.list &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /dev/null

apt-get update
apt-get &lt;span class="nb"&gt;install &lt;/span&gt;docker-ce docker-ce-cli containerd.io docker-compose-plugin &lt;span class="nt"&gt;-y&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
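&lt;p&gt;A quick sanity check that both the engine and the Compose plugin are in place:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker --version
docker compose version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;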






&lt;h2&gt;
  
  
  2. Write a Docker Compose file
&lt;/h2&gt;

&lt;p&gt;Create a new directory for n8n and add &lt;code&gt;docker-compose.yml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;n8n&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;n8nio/n8n:latest&lt;/span&gt;
    &lt;span class="na"&gt;restart&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;always&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;127.0.0.1:5678:5678"&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;N8N_HOST=n8n.example.com&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;N8N_PORT=5678&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;N8N_PROTOCOL=https&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;n8n_data:/home/node/.n8n&lt;/span&gt;

&lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;n8n_data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;💡 Note: This guide assumes you already have a domain (like &lt;code&gt;n8n.example.com&lt;/code&gt;) pointing to your VPS's IP address. If not, set that up with your DNS provider before continuing.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Notice we bind to &lt;code&gt;127.0.0.1:5678&lt;/code&gt; – so it's only accessible locally. Caddy will handle public access.&lt;/p&gt;

&lt;p&gt;Start it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose up &lt;span class="nt"&gt;-d&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
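&lt;p&gt;Before wiring up the proxy, confirm n8n is answering locally (give it a few seconds to start):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl -I http://127.0.0.1:5678
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;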






&lt;h2&gt;
  
  
  3. Make it survive reboots
&lt;/h2&gt;

&lt;p&gt;Create a systemd service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ini"&gt;&lt;code&gt;&lt;span class="c"&gt;# /etc/systemd/system/n8n.service
&lt;/span&gt;&lt;span class="nn"&gt;[Unit]&lt;/span&gt;
&lt;span class="py"&gt;Description&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;n8n workflow automation (Docker Compose)&lt;/span&gt;
&lt;span class="py"&gt;After&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;network.target&lt;/span&gt;

&lt;span class="nn"&gt;[Service]&lt;/span&gt;
&lt;span class="py"&gt;Type&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;oneshot&lt;/span&gt;
&lt;span class="py"&gt;WorkingDirectory&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;/root/n8n&lt;/span&gt;
&lt;span class="py"&gt;ExecStart&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;/usr/bin/docker compose up -d&lt;/span&gt;
&lt;span class="py"&gt;ExecStop&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;/usr/bin/docker compose down&lt;/span&gt;
&lt;span class="py"&gt;RemainAfterExit&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;yes&lt;/span&gt;

&lt;span class="nn"&gt;[Install]&lt;/span&gt;
&lt;span class="py"&gt;WantedBy&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;multi-user.target&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Enable it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;systemctl &lt;span class="nb"&gt;enable &lt;/span&gt;n8n
systemctl start n8n
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now n8n restarts automatically after reboots.&lt;/p&gt;




&lt;h2&gt;
  
  
  4. Install and configure Caddy
&lt;/h2&gt;

&lt;p&gt;Caddy is a modern reverse proxy with &lt;strong&gt;automatic HTTPS&lt;/strong&gt;. Perfect for small setups.&lt;/p&gt;

&lt;p&gt;If you're curious about how Caddy compares to Nginx, HAProxy, or Traefik, see &lt;a href="https://dev.to/pavel-hostim/the-reverse-proxy-showdown-nginx-vs-haproxy-vs-caddy-vs-traefik-3a1b"&gt;The Reverse Proxy Showdown&lt;/a&gt;. For n8n, Caddy is the simplest choice.&lt;/p&gt;

&lt;p&gt;Make sure your domain (e.g. &lt;code&gt;n8n.example.com&lt;/code&gt;) already resolves to your VPS before you configure Caddy. Otherwise, Let's Encrypt won't be able to issue a certificate.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;apt &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; debian-keyring debian-archive-keyring apt-transport-https
curl &lt;span class="nt"&gt;-1sLf&lt;/span&gt; &lt;span class="s1"&gt;'https://dl.cloudsmith.io/public/caddy/stable/gpg.key'&lt;/span&gt; | &lt;span class="nb"&gt;tee&lt;/span&gt; /etc/apt/trusted.gpg.d/caddy-stable.asc
curl &lt;span class="nt"&gt;-1sLf&lt;/span&gt; &lt;span class="s1"&gt;'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt'&lt;/span&gt; | &lt;span class="nb"&gt;tee&lt;/span&gt; /etc/apt/sources.list.d/caddy-stable.list
apt update
apt &lt;span class="nb"&gt;install &lt;/span&gt;caddy &lt;span class="nt"&gt;-y&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Edit the Caddyfile:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;nano /etc/caddy/Caddyfile
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;n8n.example.com {
    reverse_proxy localhost:5678
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Reload Caddy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;systemctl reload caddy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it – Caddy requests and renews Let's Encrypt certificates automatically.&lt;/p&gt;




&lt;h2&gt;
  
  
  5. Secure access
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Use strong credentials when creating the first user.&lt;/li&gt;
&lt;li&gt;Keep the &lt;code&gt;docker-compose.yml&lt;/code&gt; volume so your workflows persist.&lt;/li&gt;
&lt;li&gt;Optionally, restrict access to your IP range using Caddy if it's just for personal use.&lt;/li&gt;
&lt;/ul&gt;
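&lt;p&gt;For the last point, here's a sketch of an IP allow-list in the Caddyfile – &lt;code&gt;203.0.113.0/24&lt;/code&gt; is a placeholder for your own range:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;n8n.example.com {
    @blocked not remote_ip 203.0.113.0/24
    respond @blocked 403
    reverse_proxy localhost:5678
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;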




&lt;h2&gt;
  
  
  Wrapping Up
&lt;/h2&gt;

&lt;p&gt;You now have a self-hosted &lt;strong&gt;n8n&lt;/strong&gt; instance:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Running in Docker Compose&lt;/li&gt;
&lt;li&gt;Restarting automatically after reboots&lt;/li&gt;
&lt;li&gt;Exposed via Caddy with HTTPS&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you want to avoid managing servers altogether, platforms like &lt;a href="https://dev.to/pavel-hostim/from-vps-to-paas-why-i-stopped-managing-servers-5a7m"&gt;Hostim.dev&lt;/a&gt; let you paste a Compose file and get HTTPS, metrics, and persistence without touching SSH.&lt;/p&gt;

&lt;p&gt;But if you prefer DIY – this setup will take you far.&lt;/p&gt;

</description>
      <category>n8n</category>
      <category>docker</category>
      <category>selfhosting</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>The Reverse Proxy Showdown: Nginx vs HAProxy vs Caddy vs Traefik</title>
      <dc:creator>Pavel</dc:creator>
      <pubDate>Tue, 16 Sep 2025 13:26:48 +0000</pubDate>
      <link>https://dev.to/pavel-hostim/the-reverse-proxy-showdown-nginx-vs-haproxy-vs-caddy-vs-traefik-3a1b</link>
      <guid>https://dev.to/pavel-hostim/the-reverse-proxy-showdown-nginx-vs-haproxy-vs-caddy-vs-traefik-3a1b</guid>
      <description>&lt;p&gt;Reverse proxies are the unsung heroes of modern infrastructure. They terminate TLS, route traffic, balance loads, and keep your apps reachable. But which one should you choose?&lt;br&gt;
There are four popular options worth comparing head-to-head: &lt;strong&gt;Nginx, HAProxy, Caddy, and Traefik&lt;/strong&gt;. Each comes with its own strengths, trade-offs, and ideal use cases.&lt;/p&gt;

&lt;h2&gt;
  
  
  Nginx – the classic all-rounder
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Best for:&lt;/strong&gt; general-purpose web serving, static content, simple reverse proxy setups&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Strengths:&lt;/strong&gt; battle-tested, massive ecosystem, tons of tutorials, easy Certbot integration&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Weaknesses:&lt;/strong&gt; verbose configs, not as dynamic as newer tools&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Nginx is often the default choice. It's powerful, stable, and widely documented. If you're setting up a straightforward proxy or serving static files alongside your app, Nginx will feel familiar and reliable. Just be prepared to manage slightly more configuration boilerplate.&lt;/p&gt;

&lt;p&gt;👉 &lt;a href="https://hostim.dev/learn/proxies/nginx" rel="noopener noreferrer"&gt;Full Nginx reverse proxy guide →&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  HAProxy – the performance beast
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Best for:&lt;/strong&gt; high-traffic sites, low-latency routing, advanced load balancing&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Strengths:&lt;/strong&gt; blazing fast, robust observability, flexible ACL system&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Weaknesses:&lt;/strong&gt; steeper learning curve, TLS setup can be fiddly&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;HAProxy is famous for performance. It's a favorite in environments where uptime and throughput matter most. Think enterprise setups or any case where you need fine-grained control over routing logic and health checks. It's less beginner-friendly, but extremely powerful once mastered.&lt;/p&gt;

&lt;p&gt;👉 &lt;a href="https://hostim.dev/learn/proxies/haproxy" rel="noopener noreferrer"&gt;Full HAProxy reverse proxy guide →&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Caddy – the modern “batteries included” choice
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Best for:&lt;/strong&gt; minimal config, automatic HTTPS, developer-friendly defaults&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Strengths:&lt;/strong&gt; one-line proxy configs, TLS handled automatically, sane defaults&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Weaknesses:&lt;/strong&gt; smaller ecosystem, fewer advanced knobs for complex routing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Caddy made waves by taking the pain out of HTTPS. With a simple &lt;code&gt;Caddyfile&lt;/code&gt;, you get automatic TLS, redirects, and reverse proxying. It's ideal for small projects or developers who want secure, working defaults without fiddling with extra tooling.&lt;/p&gt;
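&lt;p&gt;A complete reverse-proxy config really is this short (hostname and port are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;app.example.com {
    reverse_proxy localhost:3000
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;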

&lt;p&gt;👉 &lt;a href="https://hostim.dev/learn/proxies/caddy" rel="noopener noreferrer"&gt;Full Caddy reverse proxy guide →&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Traefik – the container-native router
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Best for:&lt;/strong&gt; Docker and Kubernetes workloads, dynamic environments&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Strengths:&lt;/strong&gt; integrates with container labels, dynamic service discovery, built-in metrics&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Weaknesses:&lt;/strong&gt; YAML configs can get verbose, less popular outside containerized setups&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Traefik was built with cloud-native apps in mind. Instead of editing config files, you annotate containers with labels and Traefik routes traffic automatically. It shines in environments where services come and go frequently, making it a natural fit for orchestrators like Kubernetes.&lt;/p&gt;
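&lt;p&gt;As a sketch, a Compose service may only need labels like these – the image, router name, hostname, and cert-resolver name here are illustrative:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;services:
  app:
    image: myapp:latest        # placeholder image
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.app.rule=Host(`app.example.com`)"
      - "traefik.http.routers.app.entrypoints=websecure"
      - "traefik.http.routers.app.tls.certresolver=le"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;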

&lt;p&gt;👉 &lt;a href="https://hostim.dev/learn/proxies/traefik" rel="noopener noreferrer"&gt;Full Traefik reverse proxy guide →&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Quick comparison
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;Looking for full setup walkthroughs? Check out our guides for &lt;a href="https://hostim.dev/learn/proxies/nginx" rel="noopener noreferrer"&gt;Nginx&lt;/a&gt;, &lt;a href="https://hostim.dev/learn/proxies/haproxy" rel="noopener noreferrer"&gt;HAProxy&lt;/a&gt;, &lt;a href="https://hostim.dev/learn/proxies/caddy" rel="noopener noreferrer"&gt;Caddy&lt;/a&gt;, and &lt;a href="https://hostim.dev/learn/proxies/traefik" rel="noopener noreferrer"&gt;Traefik&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Nginx&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;HAProxy&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Caddy&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Traefik&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Ease of setup&lt;/td&gt;
&lt;td&gt;⭐⭐&lt;/td&gt;
&lt;td&gt;⭐⭐&lt;/td&gt;
&lt;td&gt;⭐⭐⭐⭐⭐&lt;/td&gt;
&lt;td&gt;⭐⭐⭐⭐&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Performance&lt;/td&gt;
&lt;td&gt;⭐⭐⭐⭐&lt;/td&gt;
&lt;td&gt;⭐⭐⭐⭐⭐&lt;/td&gt;
&lt;td&gt;⭐⭐⭐&lt;/td&gt;
&lt;td&gt;⭐⭐⭐⭐&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Auto HTTPS&lt;/td&gt;
&lt;td&gt;Needs Certbot&lt;/td&gt;
&lt;td&gt;Manual + hooks&lt;/td&gt;
&lt;td&gt;Built-in&lt;/td&gt;
&lt;td&gt;Built-in&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Container native&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Somewhat&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Ecosystem/docs&lt;/td&gt;
&lt;td&gt;Huge&lt;/td&gt;
&lt;td&gt;Mature ops-focused&lt;/td&gt;
&lt;td&gt;Growing dev-focused&lt;/td&gt;
&lt;td&gt;Strong in Docker/K8s space&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  So which one should you choose?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Just learning or running a blog?&lt;/strong&gt; → &lt;strong&gt;Nginx&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Handling big traffic or need reliability?&lt;/strong&gt; → &lt;strong&gt;HAProxy&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Want HTTPS with zero config?&lt;/strong&gt; → &lt;strong&gt;Caddy&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Running Docker/Kubernetes?&lt;/strong&gt; → &lt;strong&gt;Traefik&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Which reverse proxy do you use in your projects and why?&lt;/p&gt;

</description>
      <category>nginx</category>
      <category>haproxy</category>
      <category>caddy</category>
      <category>traefik</category>
    </item>
    <item>
      <title>Netlify's New Credit Pricing: When Cloud Rent Comes Due</title>
      <dc:creator>Pavel</dc:creator>
      <pubDate>Mon, 08 Sep 2025 08:33:10 +0000</pubDate>
      <link>https://dev.to/pavel-hostim/netlifys-new-credit-pricing-when-cloud-rent-comes-due-dod</link>
      <guid>https://dev.to/pavel-hostim/netlifys-new-credit-pricing-when-cloud-rent-comes-due-dod</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;📌 &lt;em&gt;Last week, I wrote about &lt;a href="https://dev.to/pavel-hostim/cloud-rent-in-action-how-eu50-turns-into-eu200-53jg"&gt;Cloud Rent in Action&lt;/a&gt; – how layers of middlemen drive up the cost of running a simple SaaS stack. Netlify's new pricing update feels like the same story, playing out live.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  What Changed at Netlify
&lt;/h2&gt;

&lt;p&gt;Netlify &lt;a href="https://www.netlify.com/blog/new-pricing-credits/" rel="noopener noreferrer"&gt;just rolled out&lt;/a&gt; a &lt;strong&gt;credit-based pricing model&lt;/strong&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;New accounts are now required to buy credits.&lt;/li&gt;
&lt;li&gt;Every deploy, function, or gigabyte of bandwidth consumes those credits.&lt;/li&gt;
&lt;li&gt;When the credits run out, your projects pause until you top up.&lt;/li&gt;
&lt;li&gt;Legacy users can stay on old plans for now, but the future is clear: credits are the new normal.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;On paper, this looks like a simplification. In reality, it's the next stage of &lt;strong&gt;cloud rent&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why Netlify Had to Change
&lt;/h2&gt;

&lt;p&gt;For years, companies like Netlify grew fast thanks to &lt;strong&gt;venture capital money&lt;/strong&gt;. Investors subsidized growth: cheap plans, generous free tiers, and aggressive marketing. The mission was simple – capture the market at any cost.&lt;/p&gt;

&lt;p&gt;That was the &lt;strong&gt;market expansion phase&lt;/strong&gt;.&lt;br&gt;&lt;br&gt;
VCs were happy to foot the bill as long as user numbers climbed.&lt;/p&gt;

&lt;p&gt;Now we're in the &lt;strong&gt;market exploitation (or sustainability) phase&lt;/strong&gt;. Investors want returns. And that means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Free tiers shrink&lt;/li&gt;
&lt;li&gt;Simple flat plans get replaced with credit systems&lt;/li&gt;
&lt;li&gt;Costs shift from VC wallets to developer wallets&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It's not that Netlify suddenly became greedy – it's that the VC playbook &lt;em&gt;always&lt;/em&gt; ends this way. Rent has to be collected. And developers end up paying it.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Problem With Credit Pricing
&lt;/h2&gt;

&lt;p&gt;Credits sound neat – one bucket, one metric. But for most developers, they create more problems than they solve:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Mental overhead&lt;/strong&gt; – you're forced to budget not just money, but deploys and requests.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Unpredictable bills&lt;/strong&gt; – a sudden spike in traffic can drain credits overnight.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Complexity creep&lt;/strong&gt; – hosting a static site shouldn't require a calculator.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is exactly the dynamic I wrote about in my &lt;strong&gt;Cloud Rent&lt;/strong&gt; post: when platforms optimize for investor returns instead of developer trust, pricing drifts away from simplicity and fairness.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why Hostim.dev Is Different
&lt;/h2&gt;

&lt;p&gt;I'm building &lt;a href="https://hostim.dev" rel="noopener noreferrer"&gt;Hostim.dev&lt;/a&gt; with a completely different philosophy:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Bootstrapped, not VC-funded.&lt;/strong&gt; No investors. No pressure to flip pricing later. No "growth at all costs" phase.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Lean team.&lt;/strong&gt; Right now it's just me – the founder – building and running the platform. That means lower overhead and no bloated payroll to pass on to you.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Fair pricing from the beginning.&lt;/strong&gt; Plans are simple, predictable, and surge-safe. No credits, no hidden meters, no surprise bills.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Built for developers, not investors.&lt;/strong&gt; The focus is on usability and transparency. You don't need to rewire your workflow to fit a platform's billing quirks.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Cloud Rent vs. Developer Trust
&lt;/h2&gt;

&lt;p&gt;So if last week's post was the theory, this week is the proof:&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Cloud rent always comes due.&lt;/strong&gt; Netlify's credits are just the latest example.&lt;/p&gt;

&lt;p&gt;At Hostim.dev, I'm building the opposite:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Flat, predictable plans
&lt;/li&gt;
&lt;li&gt;No surprise charges
&lt;/li&gt;
&lt;li&gt;Databases, volumes, and apps as first-class citizens
&lt;/li&gt;
&lt;li&gt;A platform you can trust, built for developers, not for VCs&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;💬 Curious to hear your thoughts:&lt;br&gt;&lt;br&gt;
Do you prefer credit-based pricing models (like Netlify's) or flat, predictable plans?  &lt;/p&gt;

&lt;p&gt;👉 You can check out Hostim.dev if you're interested – &lt;a href="https://hostim.dev" rel="noopener noreferrer"&gt;your first project is free for 5 days&lt;/a&gt;, no credit card required.&lt;/p&gt;

</description>
      <category>netlify</category>
      <category>cloud</category>
      <category>devops</category>
      <category>startup</category>
    </item>
    <item>
      <title>Cloud Rent in Action: How €50 Turns Into €200+</title>
      <dc:creator>Pavel</dc:creator>
      <pubDate>Tue, 02 Sep 2025 10:22:16 +0000</pubDate>
      <link>https://dev.to/pavel-hostim/cloud-rent-in-action-how-eu50-turns-into-eu200-53jg</link>
      <guid>https://dev.to/pavel-hostim/cloud-rent-in-action-how-eu50-turns-into-eu200-53jg</guid>
      <description>&lt;p&gt;When you pay for cloud hosting, you're not just paying for compute.&lt;br&gt;&lt;br&gt;
You're paying &lt;strong&gt;rent&lt;/strong&gt;. And it adds up fast.&lt;/p&gt;




&lt;h2&gt;
  
  
  🏗️ What You Think You're Paying For
&lt;/h2&gt;

&lt;p&gt;Let's say you need a small SaaS backend:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;2 apps (API + worker)
&lt;/li&gt;
&lt;li&gt;1 Postgres database
&lt;/li&gt;
&lt;li&gt;1 Redis for caching
&lt;/li&gt;
&lt;li&gt;A few gigs of storage
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Pretty standard stack.&lt;/p&gt;
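&lt;p&gt;In Compose terms, that's only a few lines – a rough sketch with placeholder image names, not a production config:&lt;/p&gt;

```yaml
# Minimal sketch of the stack above; image names are placeholders.
services:
  api:
    image: ghcr.io/example/api:latest
    depends_on: [db, cache]
  worker:
    image: ghcr.io/example/worker:latest
    depends_on: [db, cache]
  db:
    image: postgres:16
    volumes:
      - pgdata:/var/lib/postgresql/data
  cache:
    image: redis:7
volumes:
  pgdata:
```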




&lt;h2&gt;
  
  
  💸 What It Costs on AWS
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;EC2 (2× t3.medium)&lt;/strong&gt; → €50 / mo
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;RDS Postgres (db.t3.small, 10GB)&lt;/strong&gt; → €22 / mo
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ElastiCache Redis (cache.t3.micro)&lt;/strong&gt; → €10 / mo
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;EBS storage (100GB)&lt;/strong&gt; → €10 / mo
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data transfer (200GB egress)&lt;/strong&gt; → €9 / mo
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Total: ~€101 / mo&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;That’s without backups, monitoring, or any extras.&lt;br&gt;&lt;br&gt;
And without any “friendly” PaaS markup on top.&lt;/p&gt;




&lt;h2&gt;
  
  
  🏠 What It Costs on Bare Metal
&lt;/h2&gt;

&lt;p&gt;Hetzner: 12 threads, 64GB RAM, 1TB SSD → &lt;strong&gt;€44 / mo&lt;/strong&gt;.  &lt;/p&gt;

&lt;p&gt;You could run &lt;em&gt;dozens&lt;/em&gt; of those same apps + databases on one machine.&lt;br&gt;&lt;br&gt;
But if you don’t want to babysit it, you go through AWS — and suddenly you’re paying &lt;strong&gt;2× more&lt;/strong&gt; for the same outcome.&lt;/p&gt;

&lt;p&gt;Of course, bare metal has risks too (datacenter failures, ops work).&lt;br&gt;&lt;br&gt;
But the price gap is still very real.&lt;/p&gt;




&lt;h2&gt;
  
  
  🧃 Add a Middleman
&lt;/h2&gt;

&lt;p&gt;Now add a VC-backed PaaS that just resells AWS.&lt;br&gt;&lt;br&gt;
Nice UI, Heroku-like DX… but you’re paying &lt;strong&gt;another ×2 markup&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Your ~€100 stack just became &lt;strong&gt;€200–250 / mo.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;That’s &lt;strong&gt;cloud rent&lt;/strong&gt;: the difference between the infra you’re actually using and the layers of middlemen you’re forced to pay.&lt;/p&gt;




&lt;h2&gt;
  
  
  🌱 A Fairer Alternative
&lt;/h2&gt;

&lt;p&gt;Here’s the same stack — 2 apps, Postgres, Redis, and a 50GB volume — running on my own PaaS, &lt;a href="https://hostim.dev" rel="noopener noreferrer"&gt;Hostim.dev&lt;/a&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;€34 / mo.&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Includes HTTPS, metrics, and logs baked in.
&lt;/li&gt;
&lt;li&gt;No metering, no hidden costs.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You still get the convenience of a PaaS.&lt;br&gt;&lt;br&gt;
But without subsidizing investors, shareholders, or cloud landlords.&lt;br&gt;&lt;br&gt;
Just me, your humble wannabe hoster.&lt;/p&gt;




&lt;h2&gt;
  
  
  🚀 Try It Yourself
&lt;/h2&gt;

&lt;p&gt;I recently shipped &lt;strong&gt;authless trials&lt;/strong&gt;:&lt;br&gt;&lt;br&gt;
paste a &lt;code&gt;docker-compose.yml&lt;/code&gt;, and you’ll see your app running in seconds — no signup needed.&lt;/p&gt;

&lt;p&gt;👉 &lt;a href="https://hostim.dev/dashboard?preview=1&amp;amp;modal=1&amp;amp;compose=1" rel="noopener noreferrer"&gt;Try Hostim.dev for free&lt;/a&gt;&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>devops</category>
      <category>aws</category>
      <category>heroku</category>
    </item>
    <item>
      <title>How We Built a PaaS with Go, Kubernetes, and React</title>
      <dc:creator>Pavel</dc:creator>
      <pubDate>Tue, 26 Aug 2025 13:07:09 +0000</pubDate>
      <link>https://dev.to/pavel-hostim/how-we-built-a-paas-with-go-kubernetes-and-react-2edl</link>
      <guid>https://dev.to/pavel-hostim/how-we-built-a-paas-with-go-kubernetes-and-react-2edl</guid>
      <description>&lt;p&gt;Building a PaaS as a solo founder means making choices. Some deliberate, some accidental, all of them tradeoffs.  &lt;/p&gt;

&lt;p&gt;Every tool has tradeoffs, and the deciding factor is usually the most expensive resource of all: &lt;strong&gt;time&lt;/strong&gt;.  &lt;/p&gt;

&lt;p&gt;If I can get the job done with something I already know, I'll take that path. I'll learn new tools when the project pays for it. Until then, it's all about moving forward with what works.  &lt;/p&gt;

&lt;p&gt;Here's how Hostim.dev is put together today – the stack that runs every app, database, and service behind the scenes.&lt;/p&gt;




&lt;h2&gt;
  
  
  🖥 Infra: Ansible + Kubespray
&lt;/h2&gt;

&lt;p&gt;Before Kubernetes even comes into the picture, there's infrastructure to manage.&lt;/p&gt;

&lt;p&gt;I've spent six years working with &lt;strong&gt;Ansible&lt;/strong&gt;, so it was my first pick. Hostim.dev runs on &lt;strong&gt;bare metal servers&lt;/strong&gt; – and the Kubernetes clusters on top of them are provisioned with &lt;a href="https://github.com/kubernetes-sigs/kubespray" rel="noopener noreferrer"&gt;Kubespray&lt;/a&gt;, which itself is a set of Ansible playbooks.&lt;/p&gt;

&lt;p&gt;That means everything integrates nicely:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;My own playbooks handle &lt;strong&gt;server lifecycle&lt;/strong&gt; (deploy new users, rotate keys, manage credentials).&lt;/li&gt;
&lt;li&gt;Kubespray handles &lt;strong&gt;cluster lifecycle&lt;/strong&gt; (deploy, upgrade, or scale clusters).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Could Terraform do this job too? Maybe. But I'd spend more time learning it than deploying clusters. That's the tradeoff.&lt;/p&gt;

&lt;p&gt;The upside: &lt;strong&gt;flexibility&lt;/strong&gt;. I can run only the parts I need, when I need them.&lt;br&gt;&lt;br&gt;
The downside: it's not centralized – I run playbooks from my own machine. If two people apply different changes at the same time, you could hit conflicts. For now, that's a human problem, and we'll solve it in a human way.&lt;/p&gt;
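&lt;p&gt;To make that split concrete, here's a rough sketch – the play and role names are illustrative, not my actual repo:&lt;/p&gt;

```yaml
# Hypothetical layout: my own playbooks own the server lifecycle...
- name: Server lifecycle
  hosts: all
  roles:
    - users        # deploy admin users and SSH keys
    - credentials  # rotate keys and manage secrets

# ...while cluster lifecycle is delegated to Kubespray's playbooks, e.g.:
#   ansible-playbook -i inventory/mycluster/hosts.yaml cluster.yml
```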




&lt;h2&gt;
  
  
  ⚙️ The Kubernetes Operator
&lt;/h2&gt;

&lt;p&gt;The heart of the platform is a &lt;strong&gt;custom operator&lt;/strong&gt; written in Go with &lt;a href="https://github.com/kubernetes-sigs/kubebuilder" rel="noopener noreferrer"&gt;Kubebuilder&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you're not familiar: an operator is basically a program that runs in the cluster and ensures the "desired state" matches the "actual state."&lt;/p&gt;

&lt;p&gt;Examples:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Update an app's environment variables → operator notices, triggers a restart.&lt;/li&gt;
&lt;li&gt;Create a new database → operator picks a server, provisions it, updates permissions, applies migrations.&lt;/li&gt;
&lt;li&gt;Scale an app → operator reconciles replicas until the cluster matches your request.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It also emits the &lt;strong&gt;events&lt;/strong&gt; you see in the dashboard. The backend subscribes to them, stores them, and triggers UI updates so what you see is always fresh.&lt;/p&gt;

&lt;p&gt;This piece does a lot – from managing app lifecycles to handling Redis and Postgres placements – and is probably the best candidate for open-sourcing later on. No fixed timeline yet, but it's on my mind.&lt;/p&gt;
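&lt;p&gt;As a toy illustration (not Hostim's actual operator code), the reconcile idea boils down to a loop like this – the replica-count example and all names are invented:&lt;/p&gt;

```go
package main

import "fmt"

// reconcile is a toy version of what an operator's control loop does:
// compare desired state with actual state and emit the actions needed
// to converge. The real operator does this for apps, databases, Redis, etc.
func reconcile(desired, actual map[string]int) []string {
	var actions []string
	for app, want := range desired {
		have := actual[app]
		if have == want {
			continue // already in sync, nothing to do
		}
		actions = append(actions, fmt.Sprintf("scale %s: %d -> %d replicas", app, have, want))
		actual[app] = want // record the new actual state
	}
	return actions
}

func main() {
	desired := map[string]int{"api": 3, "worker": 1}
	actual := map[string]int{"api": 1, "worker": 1}
	for _, action := range reconcile(desired, actual) {
		fmt.Println(action)
	}
}
```

&lt;p&gt;A real operator runs this loop continuously against the Kubernetes API instead of an in-memory map, but the shape is the same: observe, diff, act.&lt;/p&gt;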




&lt;h2&gt;
  
  
  🔗 Backend API: Schema First
&lt;/h2&gt;

&lt;p&gt;All the business logic sits in the backend. It's written in &lt;strong&gt;Go&lt;/strong&gt;, with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/gin-gonic/gin" rel="noopener noreferrer"&gt;Gin&lt;/a&gt; for the HTTP server&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://entgo.io/" rel="noopener noreferrer"&gt;Ent&lt;/a&gt; as the ORM&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/deepmap/oapi-codegen" rel="noopener noreferrer"&gt;oapi-codegen&lt;/a&gt; to generate code from an &lt;strong&gt;OpenAPI schema&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The workflow looks like this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Write the OpenAPI schema first.&lt;/li&gt;
&lt;li&gt;Generate the server interfaces with &lt;code&gt;oapi-codegen&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Implement the interfaces manually.&lt;/li&gt;
&lt;li&gt;Wire them up with Ent models and Kubernetes operator objects.&lt;/li&gt;
&lt;/ol&gt;
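&lt;p&gt;Step 1 looks something like this – a minimal, hypothetical fragment of the kind of spec &lt;code&gt;oapi-codegen&lt;/code&gt; consumes; the endpoint and names are invented for illustration:&lt;/p&gt;

```yaml
openapi: 3.0.3
info: {title: Example API, version: 1.0.0}
paths:
  /apps/{id}:
    get:
      operationId: getApp   # becomes a method on the generated server interface
      parameters:
        - {name: id, in: path, required: true, schema: {type: string}}
      responses:
        "200":
          description: The requested app
```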

&lt;p&gt;It's not 100% smooth (Ent and oapi-codegen don't integrate perfectly, so there's some type conversion glue). But overall, it means less boilerplate and more consistency.&lt;/p&gt;

&lt;p&gt;At the end of the day, three parts come together:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;K8s objects&lt;/strong&gt; (via the operator's Go package)
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Database code&lt;/strong&gt; (via Ent)
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;HTTP API&lt;/strong&gt; (via oapi-codegen)
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;My job is to glue them together and add business logic – which is exactly what Hostim.dev runs on today.&lt;/p&gt;




&lt;h2&gt;
  
  
  🎨 Frontend: React + Ant Design
&lt;/h2&gt;

&lt;p&gt;This is where I had the least experience. A good friend helped me bootstrap the project, and I leaned on LLMs for some of the early decisions.&lt;/p&gt;

&lt;p&gt;Framework of choice: &lt;strong&gt;React + TypeScript&lt;/strong&gt;.&lt;br&gt;&lt;br&gt;
UI library: &lt;strong&gt;Ant Design (Antd)&lt;/strong&gt; – suggested by an LLM, picked mostly on a gut call. It does the job.&lt;/p&gt;

&lt;p&gt;Could I have picked Svelte or Vue or "framework XYZ" instead? Sure. But I had a friend to guide me through React, not those other frameworks. And that meant I could start shipping right away. That's the tradeoff.&lt;/p&gt;

&lt;p&gt;What I'm happy about is how code generation carries through to the frontend. The OpenAPI client is autogenerated, so the frontend just calls strongly typed functions.&lt;/p&gt;

&lt;p&gt;If I change an object in the backend and re-generate, any breaking changes are immediately visible in the IDE. That feedback loop saved me a ton of time and bugs.&lt;/p&gt;




&lt;h2&gt;
  
  
  Wrapping Up
&lt;/h2&gt;

&lt;p&gt;So that's the stack:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Ansible + Kubespray&lt;/strong&gt; for infra
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Go + Kubebuilder&lt;/strong&gt; operator for apps, DBs, Redis, volumes
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Go + Gin + Ent + OpenAPI&lt;/strong&gt; for backend
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;React + Ant Design&lt;/strong&gt; for frontend
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It's not perfect. Every layer has its tradeoffs. Some tools might be "hotter" or "easier," but experience and context matter more. In the end, the real problem to solve isn't "which framework is coolest" – it's time.&lt;/p&gt;

&lt;p&gt;That's true for me building the platform, and it's true for anyone using Hostim.dev instead of wiring up VPS configs or AWS bills.&lt;/p&gt;

&lt;p&gt;Think of it like this: you can choose Terraform vs. Ansible, React vs. Vue… and you can also choose "Do I spend Saturday night fixing SSL, or do I just deploy and move on?" 😅&lt;/p&gt;

&lt;p&gt;👉 Curious? You can &lt;a href="https://hostim.dev" rel="noopener noreferrer"&gt;join the beta here&lt;/a&gt;.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;What about you?&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
What tradeoffs have you made recently – stuck with a familiar tool, or jumped to something shiny?&lt;/p&gt;

</description>
      <category>devops</category>
      <category>kubernetes</category>
      <category>go</category>
      <category>react</category>
    </item>
    <item>
      <title>From VPS to PaaS: Why I Stopped Managing Servers</title>
      <dc:creator>Pavel</dc:creator>
      <pubDate>Tue, 19 Aug 2025 12:39:06 +0000</pubDate>
      <link>https://dev.to/pavel-hostim/from-vps-to-paas-why-i-stopped-managing-servers-5a7m</link>
      <guid>https://dev.to/pavel-hostim/from-vps-to-paas-why-i-stopped-managing-servers-5a7m</guid>
      <description>&lt;p&gt;Most side projects start the same way.&lt;br&gt;&lt;br&gt;
You grab a VPS from Hetzner or DigitalOcean, install Docker, run &lt;code&gt;docker compose up&lt;/code&gt;, and boom – you’re live.  &lt;/p&gt;

&lt;p&gt;It feels cheap. It feels simple.&lt;br&gt;&lt;br&gt;
Until it isn’t.  &lt;/p&gt;




&lt;h2&gt;
  
  
  The VPS Path: The Default Way
&lt;/h2&gt;

&lt;p&gt;Here’s the typical journey I went through (and many devs still do):  &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Rent a VPS for €5–€10/month
&lt;/li&gt;
&lt;li&gt;Install Docker + Docker Compose
&lt;/li&gt;
&lt;li&gt;Run the app
&lt;/li&gt;
&lt;li&gt;Add Nginx and Let’s Encrypt for HTTPS
&lt;/li&gt;
&lt;li&gt;Hack together a systemd unit so it restarts after reboot
&lt;/li&gt;
&lt;li&gt;Manually configure backups, logs, and monitoring
&lt;/li&gt;
&lt;/ol&gt;
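&lt;p&gt;Step 5 alone usually means a unit file like this – paths and names are illustrative, not a hardened setup:&lt;/p&gt;

```ini
# /etc/systemd/system/myapp.service – illustrative example
[Unit]
Description=Side project via docker compose
After=docker.service network-online.target
Requires=docker.service

[Service]
WorkingDirectory=/opt/myapp
ExecStart=/usr/bin/docker compose up
ExecStop=/usr/bin/docker compose down
Restart=always

[Install]
WantedBy=multi-user.target
```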

&lt;p&gt;It works. But every new project means repeating the same steps.&lt;br&gt;&lt;br&gt;
And every time, something goes wrong – ports left open, SSL renewal fails, or a config breaks after an update.  &lt;/p&gt;

&lt;p&gt;Well, unless you properly automate it with something like &lt;strong&gt;Ansible&lt;/strong&gt; or &lt;strong&gt;Terraform&lt;/strong&gt;.&lt;br&gt;&lt;br&gt;
But let’s be honest: do you really want to learn and maintain infra-as-code pipelines… just for side projects?  &lt;/p&gt;




&lt;h2&gt;
  
  
  The Hidden Costs of “Cheap” VPS Hosting
&lt;/h2&gt;

&lt;p&gt;At first glance, VPS looks cheap.&lt;br&gt;&lt;br&gt;
But the costs sneak up on you:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Backups&lt;/strong&gt;: €2–€5/month
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Monitoring/logs&lt;/strong&gt;: another €5–€10/month or DIY time
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Downtime&lt;/strong&gt;: hours spent debugging instead of coding
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security&lt;/strong&gt;: one misconfigured firewall can expose your database
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The real cost isn’t just money.&lt;br&gt;&lt;br&gt;
It’s &lt;strong&gt;time lost&lt;/strong&gt; repeating setup, patching servers, and fixing mistakes.&lt;br&gt;&lt;br&gt;
And if you’re billing clients? That “cheap” VPS suddenly isn’t cheap anymore.  &lt;/p&gt;




&lt;h2&gt;
  
  
  The PaaS Alternative
&lt;/h2&gt;

&lt;p&gt;A PaaS (Platform-as-a-Service) takes that whole messy checklist and bakes it in:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deploy directly from &lt;strong&gt;Docker, Git, or Compose&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automatic HTTPS&lt;/strong&gt; and domain management
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Built-in databases&lt;/strong&gt; like Postgres, MySQL, and Redis
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Volumes&lt;/strong&gt; that survive restarts and redeploys
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Metrics and logs&lt;/strong&gt; out of the box
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Per-project isolation&lt;/strong&gt; so one client doesn’t mess up another
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Instead of spending hours setting up a VPS, you paste your Compose file or point to a repo, click deploy, and it just works.  &lt;/p&gt;




&lt;h2&gt;
  
  
  Why I Switched (and Why I Built Hostim.dev)
&lt;/h2&gt;

&lt;p&gt;After doing the VPS setup dozens of times – for my own apps, side projects, and client work – I finally hit a wall.  &lt;/p&gt;

&lt;p&gt;Every project felt like déjà vu.&lt;br&gt;&lt;br&gt;
Spin up server, fight configs, add SSL, fix logging, repeat.  &lt;/p&gt;

&lt;p&gt;So I built something that skips all of that.  &lt;/p&gt;

&lt;p&gt;&lt;a href="https://hostim.dev" rel="noopener noreferrer"&gt;Hostim.dev&lt;/a&gt; is a &lt;strong&gt;developer-first PaaS&lt;/strong&gt;.&lt;br&gt;&lt;br&gt;
You paste your &lt;code&gt;docker-compose.yml&lt;/code&gt;, or deploy from Git or Docker Hub, and you’re live with HTTPS, metrics, databases, and volumes.  &lt;/p&gt;

&lt;p&gt;No YAML rewrites. No hidden cloud costs.&lt;br&gt;&lt;br&gt;
Just deploy and move on.  &lt;/p&gt;




&lt;h2&gt;
  
  
  Wrapping Up
&lt;/h2&gt;

&lt;p&gt;VPS hosting isn’t bad. It’s still a good choice if you want full control or you enjoy tweaking configs.  &lt;/p&gt;

&lt;p&gt;But if you’d rather spend time building apps instead of babysitting servers, a PaaS can save you both money and frustration.  &lt;/p&gt;

&lt;p&gt;And yes – I’ll still be babysitting servers.&lt;br&gt;&lt;br&gt;
But that’s my job now, not yours. 😉  &lt;/p&gt;

&lt;p&gt;👉 &lt;a href="https://hostim.dev" rel="noopener noreferrer"&gt;Hostim.dev&lt;/a&gt; is opening soon with a free trial and always-free database tiers.&lt;br&gt;&lt;br&gt;
If you’re tired of fighting servers, &lt;a href="https://hostim.dev" rel="noopener noreferrer"&gt;join the waitlist&lt;/a&gt; – and let’s bring hosting back to earth.  &lt;/p&gt;

</description>
      <category>vps</category>
      <category>paas</category>
      <category>docker</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
