<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: DanielG</title>
    <description>The latest articles on DEV Community by DanielG (@danielgdk).</description>
    <link>https://dev.to/danielgdk</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3870255%2F308a48df-7818-4c54-93a2-66d8182f5051.png</url>
      <title>DEV Community: DanielG</title>
      <link>https://dev.to/danielgdk</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/danielgdk"/>
    <language>en</language>
    <item>
      <title>Docker and Ansible: Setting Up a Reproducible On-Prem Stack in a Weekend</title>
      <dc:creator>DanielG</dc:creator>
      <pubDate>Thu, 09 Apr 2026 17:13:08 +0000</pubDate>
      <link>https://dev.to/danielgdk/docker-and-ansible-setting-up-a-reproducible-on-prem-stack-in-a-weekend-1h0d</link>
      <guid>https://dev.to/danielgdk/docker-and-ansible-setting-up-a-reproducible-on-prem-stack-in-a-weekend-1h0d</guid>
      <description>&lt;h1&gt;
  
  
  Docker and Ansible: Setting Up a Reproducible On-Prem Stack in a Weekend
&lt;/h1&gt;

&lt;p&gt;You have decided to move off the cloud. The spreadsheets convinced your CTO, the timeline is approved, and now someone — probably you — has to actually build the infrastructure. The question is not whether Docker and Ansible are the right tools. For the vast majority of Nordic SMBs running steady-state workloads, they are. The question is how to set them up so that your on-prem stack is reproducible, maintainable, and not a snowflake that only one person understands.&lt;/p&gt;

&lt;p&gt;I recently stood up exactly this kind of Docker-plus-Ansible on-prem setup for a client migrating off Azure — a .NET backend API, Angular frontends, PostgreSQL, and a full monitoring stack. Four VMs, zero cloud dependencies, and within a weekend the whole thing was reproducible from a single &lt;code&gt;ansible-playbook&lt;/code&gt; command. Here is how.&lt;/p&gt;

&lt;h2&gt;
  
  
  Reference Architecture: The 4-VM Layout
&lt;/h2&gt;

&lt;p&gt;Before writing a single line of YAML, you need a target architecture. Here is the layout I use for small-to-mid workloads — and it is the same architecture in my &lt;a href="https://znowman.gumroad.com/l/cloud-exit-starter-kit" rel="noopener noreferrer"&gt;Cloud Exit Starter Kit&lt;/a&gt;:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;VM&lt;/th&gt;
&lt;th&gt;Role&lt;/th&gt;
&lt;th&gt;What Runs Here&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;DEV&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Development environment&lt;/td&gt;
&lt;td&gt;App containers (dev config), dev database&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;TEST&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Staging / QA&lt;/td&gt;
&lt;td&gt;App containers (staging config), test database, automated test runners&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;PROD&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Production&lt;/td&gt;
&lt;td&gt;App containers (prod config), production database, Nginx reverse proxy with SSL&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;TOOLS&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Shared tooling&lt;/td&gt;
&lt;td&gt;Harbor (container registry), SonarQube, Grafana + Loki + Promtail, CI/CD agents, Vaultwarden&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;This separation is deliberate. DEV and TEST can break without touching production. TOOLS is isolated so that a misbehaving SonarQube scan does not eat your production server's RAM. And every VM is configured identically at the OS level — same packages, same users, same SSH hardening — because Ansible makes that trivial.&lt;/p&gt;

&lt;p&gt;Why not one big server? Because isolation is cheap and debugging resource contention on a shared host is not. Four modest machines (or VMs on a hypervisor) give you clear boundaries and simpler troubleshooting.&lt;/p&gt;

&lt;h2&gt;
  
  
  Hardware and OS Selection
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;OS: Rocky Linux 9.&lt;/strong&gt; It is a community successor to CentOS with a 10-year support lifecycle. Ubuntu 22.04 LTS is also fine — pick whichever your team already knows. I chose Rocky because the client's sysadmin had RHEL experience, and the ecosystem (SELinux policies, RPM packaging) matched their existing tooling.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Hardware:&lt;/strong&gt; For a typical Nordic SMB, each VM needs 4–8 CPU cores, 16–32 GB RAM, and 500 GB SSD. Budget around €5,000 per server if buying physical hardware, or use an existing hypervisor. Four Dell PowerEdge T350s or equivalent run about €20,000 total and will last 5+ years.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Networking:&lt;/strong&gt; A 1 Gbps internal switch at minimum. All four VMs should be on the same subnet with a dedicated VLAN for inter-service traffic. Nginx on the PROD VM handles external traffic with SSL termination.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step-by-Step: Provisioning with Ansible
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Directory Structure
&lt;/h3&gt;

&lt;p&gt;A clean Ansible layout prevents the "where did I put that playbook" problem at 2 AM:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;infrastructure/
├── ansible.cfg
├── inventory/
│   ├── hosts.yml
│   └── group_vars/
│       ├── all.yml
│       ├── dev.yml
│       ├── test.yml
│       ├── prod.yml
│       └── tools.yml
├── playbooks/
│   ├── site.yml          # runs everything
│   ├── common.yml        # base OS config
│   ├── docker.yml        # Docker + Compose install
│   ├── monitoring.yml    # Grafana/Loki/Promtail
│   ├── registry.yml      # Harbor setup
│   └── app-deploy.yml    # application deployment
└── roles/
    ├── base/             # SSH hardening, firewall, packages
    ├── docker/           # Docker CE + Compose plugin
    ├── nginx/            # reverse proxy + SSL
    ├── monitoring/       # Grafana stack
    └── harbor/           # container registry
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
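&lt;p&gt;The &lt;code&gt;site.yml&lt;/code&gt; entry point can be a thin wrapper that imports the other playbooks in order; a minimal sketch based on the layout above:&lt;/p&gt;

```yaml
# playbooks/site.yml (sketch): run everything in order
- import_playbook: common.yml
- import_playbook: docker.yml
- import_playbook: registry.yml
- import_playbook: monitoring.yml
- import_playbook: app-deploy.yml
```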



&lt;h3&gt;
  
  
  Inventory
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# inventory/hosts.yml&lt;/span&gt;
&lt;span class="na"&gt;all&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;children&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;dev&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;dev-01&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;ansible_host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;10.0.1.10&lt;/span&gt;
    &lt;span class="na"&gt;test&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;test-01&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;ansible_host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;10.0.1.20&lt;/span&gt;
    &lt;span class="na"&gt;prod&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;prod-01&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;ansible_host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;10.0.1.30&lt;/span&gt;
    &lt;span class="na"&gt;tools&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;tools-01&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;ansible_host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;10.0.1.40&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Base Role: Making Every Server Identical
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;base&lt;/code&gt; role handles everything that should be the same on every VM:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# roles/base/tasks/main.yml&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Set timezone&lt;/span&gt;
  &lt;span class="na"&gt;community.general.timezone&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Europe/Copenhagen&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Install base packages&lt;/span&gt;
  &lt;span class="na"&gt;ansible.builtin.dnf&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;vim&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;curl&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;wget&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;htop&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;git&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;firewalld&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;fail2ban&lt;/span&gt;
    &lt;span class="na"&gt;state&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;present&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Harden SSH - disable password auth&lt;/span&gt;
  &lt;span class="na"&gt;ansible.builtin.lineinfile&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/etc/ssh/sshd_config&lt;/span&gt;
    &lt;span class="na"&gt;regexp&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;^#?PasswordAuthentication"&lt;/span&gt;
    &lt;span class="na"&gt;line&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;PasswordAuthentication&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;no"&lt;/span&gt;
  &lt;span class="na"&gt;notify&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;restart sshd&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Harden SSH - disable root login&lt;/span&gt;
  &lt;span class="na"&gt;ansible.builtin.lineinfile&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/etc/ssh/sshd_config&lt;/span&gt;
    &lt;span class="na"&gt;regexp&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;^#?PermitRootLogin"&lt;/span&gt;
    &lt;span class="na"&gt;line&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;PermitRootLogin&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;no"&lt;/span&gt;
  &lt;span class="na"&gt;notify&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;restart sshd&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Enable and start firewalld&lt;/span&gt;
  &lt;span class="na"&gt;ansible.builtin.systemd&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;firewalld&lt;/span&gt;
    &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="na"&gt;state&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;started&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Nothing clever here. That is the point — it should be boring and obvious.&lt;/p&gt;
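&lt;p&gt;Note that the role enables firewalld but never opens any ports; that is left to the per-host roles. On PROD, for example, the &lt;code&gt;nginx&lt;/code&gt; role would open HTTP and HTTPS. A sketch, assuming the &lt;code&gt;ansible.posix&lt;/code&gt; collection is installed:&lt;/p&gt;

```yaml
# roles/nginx/tasks/firewall.yml (sketch): open web ports on the PROD VM
- name: Allow HTTP and HTTPS through firewalld
  ansible.posix.firewalld:
    service: "{{ item }}"
    permanent: true
    immediate: true
    state: enabled
  loop:
    - http
    - https
```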

&lt;h3&gt;
  
  
  Docker Role: Install and Configure
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# roles/docker/tasks/main.yml&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Add Docker CE repository&lt;/span&gt;
  &lt;span class="na"&gt;ansible.builtin.command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;cmd&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo&lt;/span&gt;
    &lt;span class="na"&gt;creates&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/etc/yum.repos.d/docker-ce.repo&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Install Docker CE and Compose plugin&lt;/span&gt;
  &lt;span class="na"&gt;ansible.builtin.dnf&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;docker-ce&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;docker-ce-cli&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;containerd.io&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;docker-compose-plugin&lt;/span&gt;
    &lt;span class="na"&gt;state&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;present&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Add deploy user to docker group&lt;/span&gt;
  &lt;span class="na"&gt;ansible.builtin.user&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;deploy_user&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
    &lt;span class="na"&gt;groups&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker&lt;/span&gt;
    &lt;span class="na"&gt;append&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Enable and start Docker&lt;/span&gt;
  &lt;span class="na"&gt;ansible.builtin.systemd&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker&lt;/span&gt;
    &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="na"&gt;state&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;started&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After running this across all four VMs, every server has Docker and the Compose plugin. No manual SSH-ing, no "I forgot to install Compose on the test server" at 11 PM.&lt;/p&gt;
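&lt;p&gt;A quick ad-hoc check confirms the rollout before moving on:&lt;/p&gt;

```shell
# verify Docker and the Compose plugin on every host in the inventory
ansible all -i inventory/hosts.yml -m ansible.builtin.command -a "docker compose version"
```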

&lt;h2&gt;
  
  
  Deploying with Docker Compose
&lt;/h2&gt;

&lt;p&gt;Each environment gets its own &lt;code&gt;docker-compose.yml&lt;/code&gt;, but they share a common structure. Here is a simplified PROD example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# docker-compose.prod.yml&lt;/span&gt;
&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;api&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;harbor.internal/myapp/api:${APP_VERSION}&lt;/span&gt;
    &lt;span class="na"&gt;restart&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;unless-stopped&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ASPNETCORE_ENVIRONMENT=Production&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ConnectionStrings__Default=${DB_CONNECTION_STRING}&lt;/span&gt;
    &lt;span class="na"&gt;networks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;app-network&lt;/span&gt;
    &lt;span class="na"&gt;deploy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;limits&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;2G&lt;/span&gt;

  &lt;span class="na"&gt;frontend&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;harbor.internal/myapp/frontend:${APP_VERSION}&lt;/span&gt;
    &lt;span class="na"&gt;restart&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;unless-stopped&lt;/span&gt;
    &lt;span class="na"&gt;networks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;app-network&lt;/span&gt;

  &lt;span class="na"&gt;db&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres:16&lt;/span&gt;
    &lt;span class="na"&gt;restart&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;unless-stopped&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;pgdata:/var/lib/postgresql/data&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;POSTGRES_DB=${DB_NAME}&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;POSTGRES_USER=${DB_USER}&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;POSTGRES_PASSWORD=${DB_PASSWORD}&lt;/span&gt;
    &lt;span class="na"&gt;networks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;app-network&lt;/span&gt;

  &lt;span class="na"&gt;nginx&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;harbor.internal/myapp/nginx:${APP_VERSION}&lt;/span&gt;
    &lt;span class="na"&gt;restart&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;unless-stopped&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;443:443"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;80:80"&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./nginx/conf.d:/etc/nginx/conf.d:ro&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;/etc/letsencrypt:/etc/letsencrypt:ro&lt;/span&gt;
    &lt;span class="na"&gt;networks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;app-network&lt;/span&gt;

&lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;pgdata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;

&lt;span class="na"&gt;networks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;app-network&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;driver&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bridge&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The DEV and TEST variants differ only in environment variables and resource limits. The &lt;a href="https://znowman.gumroad.com/l/cloud-exit-starter-kit" rel="noopener noreferrer"&gt;Cloud Exit Starter Kit&lt;/a&gt; includes production-ready versions of these Compose files with health checks, logging drivers configured for the Loki stack, and proper volume backup hooks — the kind of details you only remember after losing data once.&lt;/p&gt;
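&lt;p&gt;Even in a simplified file, a health check on the &lt;code&gt;api&lt;/code&gt; service is cheap insurance. A sketch, assuming the API exposes a &lt;code&gt;/health&lt;/code&gt; endpoint on port 8080 and the image contains &lt;code&gt;curl&lt;/code&gt;:&lt;/p&gt;

```yaml
# fragment of docker-compose.prod.yml; the endpoint path and port are assumptions
api:
  healthcheck:
    test: ["CMD", "curl", "-fsS", "http://localhost:8080/health"]
    interval: 30s
    timeout: 5s
    retries: 3
    start_period: 15s
```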

&lt;p&gt;Ansible deploys the Compose stack with a straightforward playbook:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# playbooks/app-deploy.yml&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deploy application&lt;/span&gt;
  &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;target_env&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
  &lt;span class="na"&gt;vars_files&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;../inventory/group_vars/{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;target_env&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}.yml"&lt;/span&gt;
  &lt;span class="na"&gt;tasks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Copy docker-compose file&lt;/span&gt;
      &lt;span class="na"&gt;ansible.builtin.template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;src&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;docker-compose.{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;target_env&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}.yml.j2"&lt;/span&gt;
        &lt;span class="na"&gt;dest&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/opt/myapp/docker-compose.yml"&lt;/span&gt;

    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pull latest images&lt;/span&gt;
      &lt;span class="na"&gt;ansible.builtin.command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;cmd&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker compose pull&lt;/span&gt;
        &lt;span class="na"&gt;chdir&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/opt/myapp&lt;/span&gt;

    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deploy with zero downtime&lt;/span&gt;
      &lt;span class="na"&gt;ansible.builtin.command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;cmd&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker compose up -d --remove-orphans&lt;/span&gt;
        &lt;span class="na"&gt;chdir&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/opt/myapp&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Deploy to any environment with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ansible-playbook playbooks/app-deploy.yml &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;target_env&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;prod
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Secrets Management Without Cloud KMS
&lt;/h2&gt;

&lt;p&gt;This is where people overcomplicate things. You do not need HashiCorp Vault for a 4-VM setup. Here is what works:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ansible Vault&lt;/strong&gt; for infrastructure secrets. Encrypt your &lt;code&gt;group_vars&lt;/code&gt; files that contain passwords and API keys:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ansible-vault encrypt inventory/group_vars/prod.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now your database passwords, registry credentials, and API keys are encrypted at rest. Decrypt at deploy time with a password file or prompt.&lt;/p&gt;
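&lt;p&gt;Both decryption modes look like this at deploy time:&lt;/p&gt;

```shell
# interactive prompt
ansible-playbook playbooks/site.yml --ask-vault-pass

# non-interactive, e.g. from a CI agent
ansible-playbook playbooks/site.yml --vault-password-file /opt/secrets/vault-pass
```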

&lt;p&gt;&lt;strong&gt;&lt;code&gt;.env&lt;/code&gt; files on each host&lt;/strong&gt;, deployed by Ansible and readable only by the deploy user:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deploy environment file&lt;/span&gt;
  &lt;span class="na"&gt;ansible.builtin.template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;src&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;env.j2&lt;/span&gt;
    &lt;span class="na"&gt;dest&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/opt/myapp/.env&lt;/span&gt;
    &lt;span class="na"&gt;owner&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;deploy_user&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
    &lt;span class="na"&gt;mode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;0600"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
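&lt;p&gt;The &lt;code&gt;env.j2&lt;/code&gt; template itself just maps Ansible variables, including vault-encrypted ones, to the keys Compose expects. A hypothetical sketch; all variable names here are illustrative:&lt;/p&gt;

```jinja
# templates/env.j2 (sketch): variable names are illustrative
APP_VERSION={{ app_version }}
DB_NAME={{ db_name }}
DB_USER={{ db_user }}
DB_PASSWORD={{ vault_db_password }}
DB_CONNECTION_STRING=Host=db;Database={{ db_name }};Username={{ db_user }};Password={{ vault_db_password }}
```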



&lt;p&gt;&lt;strong&gt;Vaultwarden on the TOOLS VM&lt;/strong&gt; for shared team credentials (not application secrets). This gives your team a self-hosted Bitwarden-compatible password manager.&lt;/p&gt;

&lt;p&gt;When NOT to do this: if you have compliance requirements for secret rotation and audit trails, invest in HashiCorp Vault. For most Nordic SMBs running internal applications, Ansible Vault plus locked-down &lt;code&gt;.env&lt;/code&gt; files is sufficient and dramatically simpler.&lt;/p&gt;

&lt;h2&gt;
  
  
  CI/CD Pipeline Changes When Your Infra Is Local
&lt;/h2&gt;

&lt;p&gt;Your CI/CD pipeline needs three changes when moving from cloud to on-prem:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Self-Hosted Build Agents
&lt;/h3&gt;

&lt;p&gt;Cloud CI/CD (Azure DevOps hosted agents, GitHub Actions runners) cannot reach your internal servers. Install self-hosted agents on the TOOLS VM:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# roles/ci-agents/tasks/main.yml&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Create agent directory&lt;/span&gt;
  &lt;span class="na"&gt;ansible.builtin.file&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/opt/azdevops-agent&lt;/span&gt;
    &lt;span class="na"&gt;state&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;directory&lt;/span&gt;
    &lt;span class="na"&gt;owner&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;deploy_user&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Download and configure Azure DevOps agent&lt;/span&gt;
  &lt;span class="na"&gt;ansible.builtin.shell&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;curl -fsSL https://vstsagentpackage.azureedge.net/agent/{{ agent_version }}/vsts-agent-linux-x64-{{ agent_version }}.tar.gz | tar xz&lt;/span&gt;
    &lt;span class="s"&gt;./config.sh --unattended \&lt;/span&gt;
      &lt;span class="s"&gt;--url https://dev.azure.com/{{ azdo_org }} \&lt;/span&gt;
      &lt;span class="s"&gt;--auth pat --token {{ azdo_pat }} \&lt;/span&gt;
      &lt;span class="s"&gt;--pool {{ agent_pool }} \&lt;/span&gt;
      &lt;span class="s"&gt;--agent tools-01&lt;/span&gt;
  &lt;span class="na"&gt;args&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;chdir&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/opt/azdevops-agent&lt;/span&gt;
    &lt;span class="na"&gt;creates&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/opt/azdevops-agent/.agent&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
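&lt;p&gt;After &lt;code&gt;config.sh&lt;/code&gt;, the agent should run as a systemd service so it survives reboots; the agent tarball ships &lt;code&gt;svc.sh&lt;/code&gt; for this:&lt;/p&gt;

```shell
# run from /opt/azdevops-agent after configuration
sudo ./svc.sh install   # registers a systemd unit for the agent
sudo ./svc.sh start
```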



&lt;h3&gt;
  
  
  2. Push to Your Private Registry
&lt;/h3&gt;

&lt;p&gt;Replace &lt;code&gt;docker push myregistry.azurecr.io/...&lt;/code&gt; with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker tag myapp/api:latest harbor.internal/myapp/api:&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;BUILD_NUMBER&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;
docker push harbor.internal/myapp/api:&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;BUILD_NUMBER&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Harbor on the TOOLS VM gives you vulnerability scanning, access control, and image replication — features that Azure Container Registry charges extra for.&lt;/p&gt;
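&lt;p&gt;The build agent also needs to authenticate against the registry before it can push. With Harbor, a robot account is the usual choice for CI; the account name and token path here are illustrative:&lt;/p&gt;

```shell
# log the CI agent in to Harbor once; credentials come from your secret store
cat /opt/secrets/harbor-robot-token | docker login harbor.internal -u 'robot$ci' --password-stdin
```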

&lt;h3&gt;
  
  
  3. Deploy via Ansible from the Pipeline
&lt;/h3&gt;

&lt;p&gt;Your pipeline's deploy step becomes an Ansible call instead of &lt;code&gt;az webapp deploy&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# azure-pipelines.yml (deploy stage)&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;stage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deploy&lt;/span&gt;
  &lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;job&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;DeployProd&lt;/span&gt;
      &lt;span class="na"&gt;pool&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;self-hosted-pool'&lt;/span&gt;
      &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
            &lt;span class="s"&gt;ansible-playbook playbooks/app-deploy.yml \&lt;/span&gt;
              &lt;span class="s"&gt;-e target_env=prod \&lt;/span&gt;
              &lt;span class="s"&gt;-e app_version=$(Build.BuildNumber) \&lt;/span&gt;
              &lt;span class="s"&gt;--vault-password-file /opt/secrets/vault-pass&lt;/span&gt;
          &lt;span class="na"&gt;displayName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Deploy&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;to&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;production'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The pipeline still runs in Azure DevOps — only the agents and targets are local. You keep the familiar interface, PR triggers, and approval gates while deploying to your own hardware.&lt;/p&gt;

&lt;h2&gt;
  
  
  The TOOLS VM: Your On-Prem Control Plane
&lt;/h2&gt;

&lt;p&gt;The TOOLS VM deserves special attention because it runs everything that supports your development workflow:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Harbor&lt;/strong&gt; — private container registry with vulnerability scanning&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SonarQube&lt;/strong&gt; — code quality and security analysis&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Grafana + Loki + Promtail&lt;/strong&gt; — monitoring, log aggregation, and dashboards&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Vaultwarden&lt;/strong&gt; — team password management&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Self-hosted CI/CD agents&lt;/strong&gt; — Azure DevOps or Gitea runners&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All of these run as Docker Compose services on a single VM with 32 GB RAM and 8 cores. The Ansible playbook for TOOLS is the most complex one in the stack, but it is also the one you run least often — set it up once and it hums along.&lt;/p&gt;

&lt;p&gt;The monitoring stack in particular is worth getting right on day one. Grafana dashboards showing container health, Loki ingesting logs from every service via Promtail — this is your replacement for Azure Application Insights, and it costs exactly nothing beyond the hardware it runs on.&lt;/p&gt;
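&lt;p&gt;As a rough illustration, the whole monitoring trio fits in a few lines of Compose. This is a minimal sketch, not a hardened config: image versions are unpinned, there are no persistent volumes or auth settings, and the Promtail mount assumes Docker's default log location.&lt;/p&gt;

```yaml
# Minimal sketch of the TOOLS VM monitoring stack (illustrative only --
# pin image versions and add volumes, auth, and a Promtail config for real use).
services:
  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
    restart: unless-stopped
  loki:
    image: grafana/loki:latest
    ports:
      - "3100:3100"
    restart: unless-stopped
  promtail:
    image: grafana/promtail:latest
    volumes:
      # Read container logs so every service's output lands in Loki.
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
    restart: unless-stopped
```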

&lt;h2&gt;
  
  
  When This Approach Falls Short
&lt;/h2&gt;

&lt;p&gt;Docker Compose and Ansible are not the answer to everything.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If you need auto-scaling&lt;/strong&gt;, this setup does not scale horizontally. You would need Kubernetes or Docker Swarm — and at that point, you are adding significant operational complexity. For most Nordic SMBs with predictable load, fixed capacity with headroom is simpler and cheaper than auto-scaling.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If you have 50+ microservices&lt;/strong&gt;, managing individual Compose files and Ansible playbooks for each one becomes painful. At that scale, Kubernetes earns its complexity tax. But if you are reading this article, you probably have 5–15 services, and Compose handles that without breaking a sweat.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If your team has zero Linux experience&lt;/strong&gt;, the learning curve for Ansible, Docker, SSH key management, and firewall configuration is real. Budget 2–4 weeks for the team to get comfortable, or bring in someone who has done it before.&lt;/p&gt;

&lt;h2&gt;
  
  
  From Zero to Running in a Weekend
&lt;/h2&gt;

&lt;p&gt;Here is the realistic timeline — assuming you have the hardware racked and Rocky Linux installed on all four VMs:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Saturday morning:&lt;/strong&gt; Run the base Ansible playbook across all VMs. SSH hardening, packages, firewall rules, Docker installation. Two hours if your playbooks are solid. The Ansible playbooks in the &lt;a href="https://znowman.gumroad.com/l/cloud-exit-starter-kit" rel="noopener noreferrer"&gt;Cloud Exit Starter Kit&lt;/a&gt; are tested against Rocky Linux 9 and cover this entire base layer, so you are not starting from a blank file.&lt;/p&gt;
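&lt;p&gt;For a sense of what "base playbook" means here, the layer is a handful of idempotent tasks. A minimal sketch, assuming Rocky Linux 9 and that the Docker CE repository is already configured; the task names and package lists are illustrative, not the kit's actual playbooks:&lt;/p&gt;

```yaml
# Illustrative base-layer playbook: SSH hardening, packages, Docker.
- hosts: all
  become: true
  tasks:
    - name: Harden SSH - disable password authentication
      ansible.builtin.lineinfile:
        path: /etc/ssh/sshd_config
        regexp: '^#?PasswordAuthentication'
        line: 'PasswordAuthentication no'
      notify: restart sshd

    - name: Install base packages
      ansible.builtin.dnf:
        name: [firewalld, chrony, dnf-automatic]
        state: present

    - name: Install Docker CE (assumes the Docker repo is already added)
      ansible.builtin.dnf:
        name: [docker-ce, docker-ce-cli, containerd.io]
        state: present

  handlers:
    - name: restart sshd
      ansible.builtin.service:
        name: sshd
        state: restarted
```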

&lt;p&gt;&lt;strong&gt;Saturday afternoon:&lt;/strong&gt; Stand up the TOOLS VM — Harbor, Grafana stack, CI/CD agents. Push your first container image to Harbor. Three hours.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Sunday morning:&lt;/strong&gt; Deploy the application stack to DEV and TEST. Verify everything works, fix the inevitable environment variable typo. Two hours.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Sunday afternoon:&lt;/strong&gt; Deploy to PROD. Configure Nginx with SSL. Point DNS. Run your smoke tests. Two hours, plus whatever time you spend staring at Grafana dashboards making sure the metrics look right.&lt;/p&gt;

&lt;p&gt;Is it actually a weekend? For someone who has done it before, yes. For a first-timer with well-structured playbooks, add another day or two for troubleshooting and learning. The point is that this is not a multi-month infrastructure project — it is a focused build with clear milestones.&lt;/p&gt;




&lt;h2&gt;Ready to migrate off the cloud?&lt;/h2&gt;

&lt;p&gt;I put together a &lt;strong&gt;&lt;a href="https://znowman.gumroad.com/l/cloud-exit-starter-kit" rel="noopener noreferrer"&gt;Cloud Exit Starter Kit&lt;/a&gt;&lt;/strong&gt; ($49) — Ansible playbooks, Docker Compose production templates, and the migration checklist I use on real projects. Everything you need to go from Azure/AWS to your own hardware.&lt;/p&gt;

&lt;p&gt;Or if you just want to talk it through: &lt;strong&gt;&lt;a href="https://calendly.com/znowm4n/free-30-min-cloud-exit-assessment" rel="noopener noreferrer"&gt;book a free 30-minute cloud exit assessment&lt;/a&gt;&lt;/strong&gt;. No sales pitch — just an honest look at whether on-prem makes sense for your situation.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>ansible</category>
      <category>devops</category>
      <category>linux</category>
    </item>
    <item>
      <title>Building DocProof: Proving Documents Exist Without Sharing Them</title>
      <dc:creator>DanielG</dc:creator>
      <pubDate>Thu, 09 Apr 2026 17:13:05 +0000</pubDate>
      <link>https://dev.to/danielgdk/building-docproof-proving-documents-exist-without-sharing-them-5d0m</link>
      <guid>https://dev.to/danielgdk/building-docproof-proving-documents-exist-without-sharing-them-5d0m</guid>
      <description>&lt;p&gt;There's a problem that's been bugging me for a while. How do you prove a document existed at a specific point in time—without handing it over to someone else?&lt;/p&gt;

&lt;p&gt;Think about it: contracts, creative work, research notes, legal agreements. Sometimes you need proof that something existed &lt;em&gt;before&lt;/em&gt; a certain date. Traditional solutions? Upload it to a notary service. Email it to yourself. Store it with a third party who pinky-promises to keep timestamps honest.&lt;/p&gt;

&lt;p&gt;None of that felt right to me. Why should I trust a company to hold my documents? Why should proving &lt;em&gt;when&lt;/em&gt; something existed require giving up &lt;em&gt;what&lt;/em&gt; it contains?&lt;/p&gt;

&lt;p&gt;So I built DocProof, with a Go backend (API and worker) and an Angular frontend.&lt;/p&gt;


&lt;h2&gt;
  
  
  The idea
&lt;/h2&gt;

&lt;p&gt;DocProof lets you create a timestamped, verifiable proof that a document existed—without ever uploading the document itself.&lt;/p&gt;

&lt;p&gt;Here's how it works:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; You select a file on your device&lt;/li&gt;
&lt;li&gt; Your browser computes a SHA-256 hash (a cryptographic fingerprint) of that file—locally, on your machine&lt;/li&gt;
&lt;li&gt; That fingerprint gets anchored on the blockchain with a timestamp&lt;/li&gt;
&lt;li&gt; Done. Your document never left your device. Only the fingerprint did.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Anyone can later verify the proof independently by hashing the same document and checking it against the blockchain record. No need to trust me, no need to trust DocProof to stay in business, no need to share sensitive content.&lt;/p&gt;
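&lt;p&gt;The fingerprint itself is nothing exotic; any SHA-256 implementation produces the same digest for the same bytes, which is what makes independent verification possible. A toy sketch with standard command-line tools (the file name and contents are made up; in DocProof the hash is computed in the browser):&lt;/p&gt;

```shell
# Hash a document locally. Anyone hashing the same bytes gets the same
# fingerprint, so verifying a proof needs only the file and the on-chain record.
printf 'abc' > contract.txt          # stand-in for a real document
sha256sum contract.txt | cut -d' ' -f1
# prints ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad
```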

&lt;h2&gt;
  
  
  Why blockchain?
&lt;/h2&gt;

&lt;p&gt;Blockchain carries some baggage. NFT speculation, crypto hype, environmental concerns.&lt;/p&gt;

&lt;p&gt;But blockchain is genuinely good at one specific job: creating immutable, timestamped records that don't depend on any single party. That's exactly what document verification needs.&lt;/p&gt;

&lt;p&gt;DocProof uses Base, a Layer 2 network on Ethereum. It's proof-of-stake (no massive energy consumption), cost-effective, and permanent. The proof exists independently of whether DocProof as a service continues to exist.&lt;/p&gt;

&lt;p&gt;There's no token, no NFT, no financial element, no crypto speculation. Just a timestamp you can trust.&lt;/p&gt;

&lt;h2&gt;
  
  
  Is this like an NFT?
&lt;/h2&gt;

&lt;p&gt;I get this question, and it's fair. There's technical overlap: both use blockchain to create timestamped, immutable records. Both involve cryptographic proof of something existing on-chain.&lt;/p&gt;

&lt;p&gt;But the purpose is completely different.&lt;/p&gt;

&lt;p&gt;NFTs are about &lt;strong&gt;ownership and transferability&lt;/strong&gt;. You mint something, you can sell it, trade it, speculate on it. The whole ecosystem is built around digital assets changing hands—often with significant financial speculation attached.&lt;/p&gt;

&lt;p&gt;DocProof is about &lt;strong&gt;proving existence at a point in time&lt;/strong&gt;. That's it. No trading. No marketplace. No speculation. You're not buying or selling anything. You're creating a verifiable timestamp.&lt;/p&gt;

&lt;p&gt;There's also a key privacy difference: NFTs typically store or link to the actual content (an image, a video, metadata). DocProof only stores the hash—a cryptographic fingerprint. The document itself never leaves your device, and the hash reveals nothing about the content. You can't reverse-engineer a document from its hash.&lt;/p&gt;

&lt;p&gt;Think of it like the difference between selling a painting and getting a document notarized. Same underlying tools, completely different purpose.&lt;/p&gt;

&lt;h2&gt;
  
  
  Who is this for?
&lt;/h2&gt;

&lt;p&gt;I've been thinking a lot about use cases:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Freelancers and contractors&lt;/strong&gt; — Prove when you delivered work or signed an agreement. Useful if disputes arise later about timelines.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Creators and inventors&lt;/strong&gt; — Establish proof of creation for designs, music, writing, code, patents. Show that your work existed before someone else claims it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Researchers and academics&lt;/strong&gt; — Timestamp research findings, papers, or data before publication. Protect priority claims.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Legal and compliance&lt;/strong&gt; — Contracts, wills, insurance policies. Proof that a version existed at a specific moment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Anyone who values privacy&lt;/strong&gt; — You might just want a personal record without trusting a third party with your files.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's next
&lt;/h2&gt;

&lt;p&gt;DocProof is live now, and I'm actively working on it. The core functionality works: create proofs, verify them, all without uploading your documents anywhere.&lt;/p&gt;

&lt;p&gt;I'm building this because I think it should exist. Privacy-first and practical.&lt;/p&gt;

&lt;p&gt;If you want to try it out, head over to &lt;a href="https://docproof.org/" rel="noopener noreferrer"&gt;docproof.org&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I would love feedback—what works, what doesn't, what use cases I haven't thought of.&lt;/p&gt;

&lt;p&gt;And if you're curious about the technical details or want to follow along as I build, I'll be posting updates here and on Mastodon.&lt;/p&gt;

&lt;p&gt;Let's see where this goes.  &lt;/p&gt;

&lt;p&gt;/Daniel&lt;/p&gt;

</description>
      <category>angular</category>
      <category>go</category>
      <category>privacy</category>
      <category>showdev</category>
    </item>
    <item>
      <title>Building DocProof: A Solo Dev Journey with AI as My Co-Pilot</title>
      <dc:creator>DanielG</dc:creator>
      <pubDate>Thu, 09 Apr 2026 17:13:03 +0000</pubDate>
      <link>https://dev.to/danielgdk/building-docproof-a-solo-dev-journey-with-ai-as-my-co-pilot-1221</link>
      <guid>https://dev.to/danielgdk/building-docproof-a-solo-dev-journey-with-ai-as-my-co-pilot-1221</guid>
      <description>&lt;p&gt;I shipped DocProof recently—a privacy-first document verification service. But this post isn't about what it does. It's about how it got built, and what it's like to develop a product in 2026 with AI assistance.&lt;/p&gt;

&lt;p&gt;Short version: I couldn't have done it this fast without Claude for code assistance and ChatGPT for ideation.&lt;/p&gt;

&lt;p&gt;And I want to be honest about what that actually looked like.&lt;/p&gt;

&lt;h2&gt;
  
  
  The starting point
&lt;/h2&gt;

&lt;p&gt;I'm a fullstack software engineer. I know my way around code.&lt;/p&gt;

&lt;p&gt;But DocProof touched areas where I'm not an expert: blockchain integration, cryptographic hashing in the browser, smart contract deployment, Stripe payment flows, and a dozen other things I'd never done exactly this way before.&lt;/p&gt;

&lt;p&gt;A few years ago, this would've meant weeks of documentation rabbit holes, Stack Overflow threads, and trial and error. I'd still have built it—but slower, with more frustration, and probably with more architectural mistakes baked in.&lt;/p&gt;

&lt;p&gt;This time, I had a different approach.&lt;/p&gt;

&lt;h2&gt;
  
  
  Claude as a thinking partner
&lt;/h2&gt;

&lt;p&gt;I started using Claude not just for code snippets, but as a genuine collaborator. And that changed how I work.&lt;/p&gt;

&lt;p&gt;Here's what that looked like in practice:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Architectural decisions.&lt;/strong&gt; Early on, I wasn't sure how to structure the blockchain interaction. Should the hash go directly on-chain? Should I batch transactions? What's the cost tradeoff? I talked through the options with Claude, weighing pros and cons until the right approach became clear. It wasn't Claude telling me what to do—it was a conversation that helped me think better.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Learning on demand.&lt;/strong&gt; I'd never worked with Base or deployed a smart contract to mainnet with real money on the line. Instead of spending days reading documentation, I could ask targeted questions: "What's the difference between Base Sepolia and mainnet deployment?" or "How do I estimate gas costs for this transaction?" Instant, contextual answers. Then I'd verify and implement.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Code review and debugging.&lt;/strong&gt; When something didn't work, I'd paste the error and the relevant code. Claude would spot the issue faster than I could—often something obvious I'd been staring at for too long. Fresh eyes, even artificial ones, help.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Writing and copy.&lt;/strong&gt; The landing page, the explanations, the documentation—writing clear copy for a technical product is hard. I'd draft something, Claude would suggest improvements, we'd iterate. The result was clearer than what I'd have written alone.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Thinking through edge cases.&lt;/strong&gt; "What happens if someone tries to verify a document that was never registered?" "How should I handle failed blockchain transactions?" These conversations caught problems before they became bugs.&lt;/p&gt;

&lt;h2&gt;
  
  
  What AI assistance actually feels like
&lt;/h2&gt;

&lt;p&gt;There's a misconception that using AI means "the AI builds it for you." That's not how it works—at least not for anything meaningful.&lt;/p&gt;

&lt;p&gt;It's more like having a knowledgeable colleague available 24/7 who never gets tired of questions, doesn't judge you for forgetting syntax, and can context-switch instantly between frontend, backend, blockchain, and marketing copy.&lt;/p&gt;

&lt;p&gt;I still made every decision. I still wrote and reviewed every line of code. I still own the architecture, the bugs, and the tradeoffs. But I got there faster, with fewer dead ends, and with more confidence.&lt;/p&gt;

&lt;p&gt;The best analogy I have: it's like pair programming with someone who's read all the documentation you haven't.&lt;/p&gt;

&lt;h2&gt;
  
  
  The productivity shift
&lt;/h2&gt;

&lt;p&gt;I don't want to overstate this, but I also don't want to understate it.&lt;/p&gt;

&lt;p&gt;DocProof would have taken me significantly longer without AI assistance. Not because Claude wrote the code for me—but because it removed friction at every step. Less time stuck. Less time context-switching to Google. Less time second-guessing architectural choices.&lt;/p&gt;

&lt;p&gt;That time adds up. For a solo developer working on a side project, it's the difference between shipping and abandoning.&lt;/p&gt;

&lt;h2&gt;
  
  
  Some honest caveats
&lt;/h2&gt;

&lt;p&gt;It's not magic. Claude gets things wrong sometimes—especially with newer APIs or very specific library versions. I learned to verify, not just trust. The answers are a starting point, not gospel.&lt;/p&gt;

&lt;p&gt;And there's a skill to using it well. Vague questions get vague answers. The better I got at explaining context and asking precise questions, the more useful the responses became. It's a tool that rewards intentional use.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where this leaves me
&lt;/h2&gt;

&lt;p&gt;DocProof is live. It works. Real users can create real proofs.&lt;/p&gt;

&lt;p&gt;Building it taught me something about how software development is changing. The gap between "I have an idea" and "I shipped a product" is shrinking—not because AI does the work for you, but because it removes the friction that used to slow everything down.&lt;/p&gt;

&lt;p&gt;I'm still the developer. I still need to understand what I'm building. But I have a collaborator now that makes the whole process feel less lonely and more efficient.&lt;/p&gt;

&lt;p&gt;If you're a solo dev or indie hacker hesitating to start something because it feels too big—try working this way. You might surprise yourself with what you can ship.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;DocProof is at [docproof.dk]. If you want to follow along with what I'm building, I'm on Mastodon at [handle] and posting updates here.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>blockchain</category>
      <category>showdev</category>
      <category>sideprojects</category>
    </item>
    <item>
      <title>Moving Off the Cloud: A Practical Guide to On-Premises Migration for Nordic SMBs</title>
      <dc:creator>DanielG</dc:creator>
      <pubDate>Thu, 09 Apr 2026 17:13:02 +0000</pubDate>
      <link>https://dev.to/danielgdk/moving-off-the-cloud-a-practical-guide-to-on-premises-migration-for-nordic-smbs-57ci</link>
      <guid>https://dev.to/danielgdk/moving-off-the-cloud-a-practical-guide-to-on-premises-migration-for-nordic-smbs-57ci</guid>
      <description>&lt;p&gt;Your Azure bill hit €12,000 last month — again — and half your containers are sitting idle at 8% CPU. You are not alone. Across the Nordics, small and mid-sized companies that moved to the cloud five or six years ago are doing the same math and arriving at the same uncomfortable conclusion: the cloud is costing more than it should, and the value proposition has shifted.&lt;/p&gt;

&lt;p&gt;This is not an anti-cloud manifesto. Some workloads genuinely belong there. But if you are running predictable, steady-state applications on managed services you barely use, on-premises infrastructure deserves a serious look. I recently migrated a full .NET backend API and Angular frontend stack from Azure to on-premise Rocky Linux servers for a Nordic client — and the numbers told a clear story.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Nordic SMBs Are Re-Evaluating Cloud
&lt;/h2&gt;

&lt;p&gt;Three forces are driving the conversation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cost.&lt;/strong&gt; Cloud pricing was designed for elastic workloads — bursty traffic, unpredictable demand, fast experimentation. Most Nordic SMBs run the opposite: stable line-of-business applications with predictable load. You are paying a premium for elasticity you never use. When I ran the numbers for a 20-person development team's infrastructure, the cloud bill was roughly 3x what the same capacity cost on owned hardware over a three-year horizon.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Data sovereignty and GDPR.&lt;/strong&gt; Nordic companies handle data subject to GDPR, and increasingly face questions about where that data physically lives. Azure's &lt;code&gt;Norway East&lt;/code&gt; and &lt;code&gt;Sweden Central&lt;/code&gt; regions help, but they do not eliminate the compliance overhead of proving your data stays within the right jurisdiction. On-premise gives you a simple answer: it is in the server room down the hall.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Vendor lock-in.&lt;/strong&gt; Every managed service you adopt — Azure Service Bus, AWS Lambda, Google Cloud Run — adds a dependency that makes leaving harder. After a few years, "multi-cloud" is a fantasy and "exit" is a project nobody wants to scope. Getting ahead of this is cheaper than getting out of it later.&lt;/p&gt;

&lt;h2&gt;
  
  
  Decision Framework: What Belongs On-Prem vs. What Stays in the Cloud
&lt;/h2&gt;

&lt;p&gt;Not everything should come back. Here is how I think about it:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Move on-prem when:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  The workload has predictable, steady resource consumption&lt;/li&gt;
&lt;li&gt;  You have (or can hire) someone who can maintain Linux servers and Docker&lt;/li&gt;
&lt;li&gt;  The application handles sensitive data that benefits from physical control&lt;/li&gt;
&lt;li&gt;  You are paying for managed services (e.g., Azure SQL, App Service) that a self-hosted equivalent covers at a fraction of the cost&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Keep in the cloud when:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Traffic is genuinely spiky or seasonal (e-commerce flash sales, campaign-driven SaaS)&lt;/li&gt;
&lt;li&gt;  You need global distribution and edge presence&lt;/li&gt;
&lt;li&gt;  The team has zero infrastructure experience and cannot invest in building it&lt;/li&gt;
&lt;li&gt;  You are using cloud-native services that have no practical self-hosted equivalent (e.g., machine learning inference APIs, global CDN)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;The grey zone:&lt;/strong&gt; Many workloads fall in between. For these, run the TCO comparison below before deciding. Do not guess — calculate.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cloud to On-Premise Migration Checklist
&lt;/h2&gt;

&lt;p&gt;I have done this migration twice in production. Here is the checklist I wish I had the first time.&lt;/p&gt;

&lt;h3&gt;
  
  
  Phase 1: Inventory and Assessment (2–4 weeks)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;List every service and resource&lt;/strong&gt; in your cloud account. Not just compute — storage, DNS, secrets, queues, scheduled jobs, monitoring. Export your cloud provider's resource list. Azure: &lt;code&gt;az resource list&lt;/code&gt;. AWS: &lt;code&gt;aws resourcegroupstaggingapi get-resources&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Map dependencies.&lt;/strong&gt; Which services talk to which? Draw the arrows. Every managed service you use (e.g., Azure Service Bus) needs a self-hosted replacement (e.g., RabbitMQ) or a redesign.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Identify stateful components.&lt;/strong&gt; Databases, file storage, caches. These are your hardest migration targets. Plan them first.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Baseline current performance.&lt;/strong&gt; Document response times, throughput, and error rates before migration. You need this to validate after cutover.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Audit cloud-specific SDK usage.&lt;/strong&gt; Search your codebase for cloud provider SDKs. Each call is a potential migration task.&lt;/li&gt;
&lt;/ul&gt;
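&lt;p&gt;The SDK audit in the last bullet can start as nothing more than a recursive grep over your source tree. A toy sketch with made-up directory and file names:&lt;/p&gt;

```shell
# Toy audit: list source files that reference a cloud SDK namespace.
# Extend the pattern for AWS/GCP SDKs as needed; names here are illustrative.
mkdir -p demo-src
printf 'using Azure.Storage.Blobs;\n' > demo-src/Uploader.cs
printf 'using System;\n'              > demo-src/Program.cs
grep -rl 'Azure\.' demo-src
# prints demo-src/Uploader.cs -- each hit is a potential migration task
```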

&lt;h3&gt;
  
  
  Phase 2: Infrastructure Preparation (2–6 weeks)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Provision hardware.&lt;/strong&gt; For a typical Nordic SMB, four VMs or bare-metal servers cover most needs: DEV, TEST, PROD, and a TOOLS server (for CI/CD agents, container registry, monitoring). Budget €15,000–€30,000 for three-year hardware, depending on spec.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Choose your OS.&lt;/strong&gt; Rocky Linux 9 or Ubuntu 22.04 LTS. Both are solid. I use Rocky Linux for production servers — it is the CentOS successor and has a 10-year support cycle.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Set up configuration management.&lt;/strong&gt; Ansible is the right choice for teams under 200 people. It is agentless, readable, and does not require a PhD to operate. Write playbooks for every server role.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Deploy your container stack.&lt;/strong&gt; Docker Compose for orchestration at SMB scale. Kubernetes is overkill unless you are running 50+ services — and even then, think twice.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Stand up supporting services.&lt;/strong&gt; Container registry (Harbor), monitoring (Grafana + Loki + Promtail), secrets management (Vaultwarden or HashiCorp Vault), reverse proxy (Nginx with SSL termination).&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Phase 3: Application Migration (4–8 weeks)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Containerize everything&lt;/strong&gt; that is not already containerized. If your .NET app runs on Azure App Service, it needs a Dockerfile.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Replace managed services&lt;/strong&gt; one by one. Azure SQL → PostgreSQL. Azure Service Bus → RabbitMQ. Azure Blob Storage → MinIO. Test each replacement in isolation.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Update CI/CD pipelines.&lt;/strong&gt; Your build agents need to push to your private registry and deploy to your servers, not to Azure. Self-hosted Azure DevOps agents or Gitea + Drone CI both work.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Migrate data.&lt;/strong&gt; Schedule a maintenance window. For databases, use &lt;code&gt;pg_dump&lt;/code&gt;/&lt;code&gt;pg_restore&lt;/code&gt; or the equivalent. Test the restore on your target environment before the real cutover.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Validate.&lt;/strong&gt; Compare against your Phase 1 baselines. Response times, error rates, throughput. If something regressed, fix it before going live.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Phase 4: Cutover and DNS Switch (1 day)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Final data sync&lt;/strong&gt; during a planned maintenance window.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Switch DNS&lt;/strong&gt; to point to the on-prem servers. Keep the cloud environment running in parallel for 1–2 weeks as a rollback option.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Monitor aggressively&lt;/strong&gt; for the first 72 hours. Grafana dashboards and alerts should be in place before cutover, not after.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Common Pitfalls
&lt;/h2&gt;

&lt;p&gt;I have hit all of these. Learn from my mistakes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Underestimating bandwidth requirements.&lt;/strong&gt; Cloud providers give you fast, free inter-service networking. On-prem, your network is your own problem. Make sure your internal network can handle the traffic between services. A 1 Gbps switch is the minimum; 10 Gbps is cheap insurance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Licensing traps.&lt;/strong&gt; Some software (looking at you, SQL Server) has on-premise licensing that costs more than the cloud version. Check every license before committing. PostgreSQL, Redis, and RabbitMQ are free — and for most Nordic SMBs, they are more than enough.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The "we will figure out backups later" mistake.&lt;/strong&gt; On Azure, backups are mostly automatic. On-prem, they are your responsibility from day one. Set up automated backups to an offsite location before you migrate production data. Not after. Before.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ignoring the human factor.&lt;/strong&gt; Someone needs to be on call for hardware failures. In the cloud, you do not think about disk failures or power outages. On-prem, you do. If your team does not want this responsibility, either hire for it or use a colocation facility with managed hardware.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Trying to replicate the cloud on-prem.&lt;/strong&gt; Do not install Kubernetes, a service mesh, and a cloud-native API gateway just because you had them in Azure. Start simple: Docker Compose, Nginx, PostgreSQL. Add complexity only when the workload demands it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Real Cost Comparison: 3-Year TCO
&lt;/h2&gt;

&lt;p&gt;Here is a simplified but realistic comparison for a Nordic SMB running a .NET backend API, Angular frontend, PostgreSQL database, message queue, and monitoring — supporting a 20-person development team.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cloud (Azure)
&lt;/h3&gt;

&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;&lt;th&gt;Item&lt;/th&gt;&lt;th&gt;Monthly Cost&lt;/th&gt;&lt;th&gt;3-Year Total&lt;/th&gt;&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;App Service (2x B2, prod + staging)&lt;/td&gt;&lt;td&gt;€400&lt;/td&gt;&lt;td&gt;€14,400&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Azure SQL (S3)&lt;/td&gt;&lt;td&gt;€500&lt;/td&gt;&lt;td&gt;€18,000&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Azure Service Bus (Standard)&lt;/td&gt;&lt;td&gt;€50&lt;/td&gt;&lt;td&gt;€1,800&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Blob Storage (500 GB)&lt;/td&gt;&lt;td&gt;€30&lt;/td&gt;&lt;td&gt;€1,080&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Azure DevOps (5 parallel jobs)&lt;/td&gt;&lt;td&gt;€200&lt;/td&gt;&lt;td&gt;€7,200&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Monitoring (Application Insights, Log Analytics)&lt;/td&gt;&lt;td&gt;€250&lt;/td&gt;&lt;td&gt;€9,000&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Networking (bandwidth, DNS, load balancer)&lt;/td&gt;&lt;td&gt;€150&lt;/td&gt;&lt;td&gt;€5,400&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;strong&gt;Total&lt;/strong&gt;&lt;/td&gt;&lt;td&gt;&lt;strong&gt;€1,580/mo&lt;/strong&gt;&lt;/td&gt;&lt;td&gt;&lt;strong&gt;€56,880&lt;/strong&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;h3&gt;
  
  
  On-Premises
&lt;/h3&gt;

&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;&lt;th&gt;Item&lt;/th&gt;&lt;th&gt;Cost&lt;/th&gt;&lt;th&gt;3-Year Total&lt;/th&gt;&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;4x servers (Dell PowerEdge T350 or equivalent)&lt;/td&gt;&lt;td&gt;one-time €20,000&lt;/td&gt;&lt;td&gt;€20,000&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Colocation or server room power/cooling&lt;/td&gt;&lt;td&gt;€200/mo&lt;/td&gt;&lt;td&gt;€7,200&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Internet (dedicated 1 Gbps, Nordic ISP)&lt;/td&gt;&lt;td&gt;€300/mo&lt;/td&gt;&lt;td&gt;€10,800&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Software licenses (all open source)&lt;/td&gt;&lt;td&gt;€0&lt;/td&gt;&lt;td&gt;€0&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Offsite backup storage (Backblaze B2, 1 TB)&lt;/td&gt;&lt;td&gt;€5/mo&lt;/td&gt;&lt;td&gt;€180&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Additional sysadmin time (~4 hrs/mo at €100/hr)&lt;/td&gt;&lt;td&gt;€400/mo&lt;/td&gt;&lt;td&gt;€14,400&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;strong&gt;Total&lt;/strong&gt;&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;td&gt;&lt;strong&gt;€52,580&lt;/strong&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;&lt;strong&gt;3-year savings: approximately €4,300.&lt;/strong&gt; That is not dramatic — but it compounds. The on-prem cost is mostly front-loaded (hardware), while cloud costs only go up. By year five, the gap widens to roughly €25,000–€35,000 because the hardware is already paid for.&lt;/p&gt;

&lt;p&gt;And this comparison is conservative. Many Nordic SMBs I talk to are spending €3,000–€8,000/month on cloud — at which point the on-prem savings are significant from year one.&lt;/p&gt;
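&lt;p&gt;The arithmetic is easy to reproduce with your own figures. A minimal sketch, using the illustrative numbers from the tables above (swap in your real invoices before drawing conclusions):&lt;/p&gt;

```python
# Rough 3-year TCO comparison, cloud vs. on-prem.
# All figures are the illustrative ones from the tables above, in EUR.

MONTHS = 36  # 3-year horizon

# Cloud: pure monthly spend (App Service, SQL, Service Bus, Blob,
# DevOps, monitoring, networking)
cloud_monthly = 400 + 500 + 50 + 30 + 200 + 250 + 150   # €1,580/mo
cloud_total = cloud_monthly * MONTHS

# On-prem: front-loaded hardware plus a smaller monthly run rate
# (colo, internet, backups, sysadmin time)
onprem_upfront = 20_000
onprem_monthly = 200 + 300 + 0 + 5 + 400                # €905/mo
onprem_total = onprem_upfront + onprem_monthly * MONTHS

print(f"Cloud 3-year total:   €{cloud_total:,}")
print(f"On-prem 3-year total: €{onprem_total:,}")
print(f"3-year savings:       €{cloud_total - onprem_total:,}")
```

&lt;p&gt;Extending &lt;code&gt;MONTHS&lt;/code&gt; past 36 shows the front-loading effect: the on-prem run rate stays at €905/mo while the cloud bill continues at €1,580/mo, so the gap widens every month the hardware stays in service.&lt;/p&gt;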

&lt;h2&gt;
  
  
  When NOT to Do This
&lt;/h2&gt;

&lt;p&gt;I would not recommend on-prem migration if:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Your team has fewer than three developers and zero ops experience. The learning curve will eat your savings.&lt;/li&gt;
&lt;li&gt;  Your application is genuinely elastic — scaling from 2 to 200 instances based on demand.&lt;/li&gt;
&lt;li&gt;  You are a startup that might pivot next quarter. Do not buy hardware for a product that might not exist in six months.&lt;/li&gt;
&lt;li&gt;  You are in a regulated industry that requires specific cloud certifications you cannot replicate on-prem (rare in the Nordics, but worth checking).&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Next Steps
&lt;/h2&gt;

&lt;p&gt;If the numbers look interesting for your situation, start with the inventory in Phase 1. You do not need to commit to anything — just map what you have and run your own TCO comparison with real figures.&lt;/p&gt;

&lt;p&gt;I have put together a &lt;a href="https://calendly.com/znowm4n" rel="noopener noreferrer"&gt;cloud migration checklist template&lt;/a&gt; you can download, along with production-ready Ansible playbook templates for the base server setup.&lt;/p&gt;

&lt;p&gt;Or if you want to talk through your specific situation — whether migration makes sense, what it would cost, and how long it would take — &lt;a href="https://calendly.com/znowm4n" rel="noopener noreferrer"&gt;book a free 30-minute call&lt;/a&gt;. No pitch, just an honest assessment from someone who has done it.&lt;/p&gt;

</description>
      <category>cloudexit</category>
      <category>devops</category>
      <category>onpremises</category>
      <category>nordic</category>
    </item>
    <item>
      <title>When to Modernize Your Legacy .NET Application (And When Not To)</title>
      <dc:creator>DanielG</dc:creator>
      <pubDate>Thu, 09 Apr 2026 17:13:01 +0000</pubDate>
      <link>https://dev.to/danielgdk/when-to-modernize-your-legacy-net-application-and-when-not-to-4bib</link>
      <guid>https://dev.to/danielgdk/when-to-modernize-your-legacy-net-application-and-when-not-to-4bib</guid>
      <description>&lt;h1&gt;
  
  
  When to Modernize Your Legacy .NET Application (And When Not To)
&lt;/h1&gt;

&lt;p&gt;Your .NET Framework 4.6 application works. It processes orders, generates reports, handles the business logic that keeps the company running. Nobody touches it unless something breaks. Deployments happen once a quarter — if you are lucky — and they involve a nervous developer, a checklist from 2018, and a prayer.&lt;/p&gt;

&lt;p&gt;Sound familiar? You are sitting on a legacy .NET application, and someone in leadership is asking whether it is time to modernize. The answer is not always yes. But when it is yes, doing it right saves you years of pain. Doing it wrong costs you the same.&lt;/p&gt;

&lt;p&gt;I have modernized .NET applications for Nordic SMBs for the past decade — from small internal tools to full line-of-business platforms with 50+ developers. Here is how I think about when to pull the trigger and when to leave things alone.&lt;/p&gt;

&lt;h2&gt;
  
  
  Five Signs Your .NET App Needs Modernization
&lt;/h2&gt;

&lt;p&gt;Not every old application is a problem. Some .NET Framework apps will run fine for another decade. But these symptoms mean the clock is ticking.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Releases take weeks, not hours.&lt;/strong&gt; If deploying a bug fix requires a full regression cycle, manual IIS configuration, and a maintenance window at 02:00 on a Saturday, your delivery pipeline is the bottleneck. Modern .NET with containerized deployments and CI/CD can get that down to minutes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. You cannot hire developers who want to work on it.&lt;/strong&gt; Try posting a job ad for a .NET Framework 4.5 developer in Copenhagen or Stockholm in 2026. The candidates who respond will either be expensive specialists or junior developers who will struggle with the ancient toolchain. New graduates learn .NET 10, not Web Forms.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Security patches are getting harder — or impossible.&lt;/strong&gt; .NET Framework 4.8 still receives security patches, but it is in maintenance mode: Microsoft is not adding new features or backporting the security hardening that lands in .NET 10+. If your application depends on libraries that have dropped .NET Framework support, you are accumulating risk every month.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Your hosting costs are climbing for no reason.&lt;/strong&gt; Legacy .NET Framework apps typically run on Windows Server with IIS. Windows Server licensing, especially in on-prem or hybrid setups, is significantly more expensive than running .NET 10 on Linux. One client I worked with cut their hosting costs by 40% just by moving from Windows VMs to Linux containers — before optimizing anything else.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Integration with modern tools is painful.&lt;/strong&gt; Need to add OpenTelemetry for observability? A modern authentication provider like Keycloak? A message queue for async processing? In .NET Framework, each of these is a fight. In .NET 10, they are NuGet packages and a few lines of configuration.&lt;/p&gt;

&lt;p&gt;If you are nodding at three or more of these, modernization is worth scoping. If you are only seeing one, you might be fine for now — read the "when not to modernize" section before making any decisions.&lt;/p&gt;

&lt;h2&gt;
  
  
  The .NET Application Modernization Spectrum
&lt;/h2&gt;

&lt;p&gt;Modernization is not a binary choice. There is a spectrum, and where you land depends on your budget, timeline, and how far gone your current architecture is.&lt;/p&gt;

&lt;h3&gt;
  
  
  Rehost (Lift and Shift)
&lt;/h3&gt;

&lt;p&gt;Move the application to new infrastructure without changing the code. Put it in a container or a new VM, update the deployment scripts, done.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Best for:&lt;/strong&gt; Applications that work fine but are stuck on expensive or end-of-life infrastructure. A .NET Framework app running on a Windows Server 2012 VM is a candidate.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cost:&lt;/strong&gt; Low. Days to weeks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What you get:&lt;/strong&gt; Lower hosting costs, better infrastructure automation. The application itself is unchanged.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What you do not get:&lt;/strong&gt; Any of the developer experience or performance benefits of modern .NET.&lt;/p&gt;

&lt;h3&gt;
  
  
  Re-Platform
&lt;/h3&gt;

&lt;p&gt;Upgrade the runtime and framework version while keeping the overall architecture. Migrate from .NET Framework 4.x to .NET 10, replace deprecated libraries, update the build pipeline.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Best for:&lt;/strong&gt; Applications with decent architecture that are held back by the framework version. If the code is reasonably structured — controllers, services, repositories — re-platforming is often the sweet spot.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cost:&lt;/strong&gt; Medium. Weeks to a few months, depending on size.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What you get:&lt;/strong&gt; Modern runtime performance (often 2–5x faster for API workloads), Linux hosting, current library ecosystem, easier hiring.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What you do not get:&lt;/strong&gt; A fix for fundamental architecture problems. If the codebase is a tangled mess of static classes and God objects, re-platforming puts a new engine in a car with no steering wheel.&lt;/p&gt;

&lt;h3&gt;
  
  
  Re-Architect
&lt;/h3&gt;

&lt;p&gt;Restructure the application while migrating. Break a monolith into bounded contexts or services. Introduce proper domain modeling. Fix the sins of the past.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Best for:&lt;/strong&gt; Applications where the business logic is valuable but the architecture prevents the team from moving fast. Typically systems that have grown organically for 8+ years.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cost:&lt;/strong&gt; High. Months. Requires experienced architects.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What you get:&lt;/strong&gt; A maintainable, testable system that the team can extend without fear. Plus all the re-platform benefits.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What you do not get:&lt;/strong&gt; Speed. This is the slowest option, and the risk of scope creep is real.&lt;/p&gt;

&lt;h3&gt;
  
  
  Rewrite
&lt;/h3&gt;

&lt;p&gt;Start from scratch. New codebase, new architecture, modern stack from day one.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Best for:&lt;/strong&gt; Almost nothing. I am serious. Rewrites are the most expensive, highest-risk option and should be a last resort. The only time I recommend a rewrite is when the existing code is so unmaintainable that understanding it takes longer than rebuilding it — and that is rarer than most people think.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cost:&lt;/strong&gt; Very high. 6–18 months. Budget overruns are the norm, not the exception.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What you get:&lt;/strong&gt; A clean codebase — if you get it right.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What you do not get:&lt;/strong&gt; Guarantees. Joel Spolsky wrote about this in 2000 and the advice still holds: rewrites kill companies. The old system encodes years of business rules, edge cases, and institutional knowledge that no specification document captures.&lt;/p&gt;

&lt;h2&gt;
  
  
  .NET Framework to .NET 10: What Actually Changes
&lt;/h2&gt;

&lt;p&gt;If you are going the re-platform route — and for most Nordic SMBs, this is where I start — here is what the migration involves in practice.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Big Shifts
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Project and solution file format.&lt;/strong&gt; Old-style &lt;code&gt;.csproj&lt;/code&gt; files with hundreds of lines of XML become the new SDK-style format. Solutions get the same treatment — the new &lt;code&gt;.slnx&lt;/code&gt; format replaces the legacy &lt;code&gt;.sln&lt;/code&gt; with a clean, readable XML file that is easy to diff and merge. This alone makes the project manageable.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight xml"&gt;&lt;code&gt;&lt;span class="c"&gt;&amp;lt;!-- Before: .NET Framework csproj (abbreviated — the real ones are 200+ lines) --&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;Project&lt;/span&gt; &lt;span class="na"&gt;ToolsVersion=&lt;/span&gt;&lt;span class="s"&gt;"15.0"&lt;/span&gt; &lt;span class="na"&gt;xmlns=&lt;/span&gt;&lt;span class="s"&gt;"http://schemas.microsoft.com/developer/msbuild/2003"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;Import&lt;/span&gt; &lt;span class="na"&gt;Project=&lt;/span&gt;&lt;span class="s"&gt;"$(MSBuildExtensionsPath)\...\Microsoft.CSharp.targets"&lt;/span&gt; &lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;PropertyGroup&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;TargetFrameworkVersion&amp;gt;&lt;/span&gt;v4.6.2&lt;span class="nt"&gt;&amp;lt;/TargetFrameworkVersion&amp;gt;&lt;/span&gt;
    &lt;span class="c"&gt;&amp;lt;!-- ... 40 more properties ... --&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;/PropertyGroup&amp;gt;&lt;/span&gt;
  &lt;span class="c"&gt;&amp;lt;!-- ... ItemGroups with explicit file references ... --&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/Project&amp;gt;&lt;/span&gt;

&lt;span class="c"&gt;&amp;lt;!-- After: .NET 10 csproj --&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;Project&lt;/span&gt; &lt;span class="na"&gt;Sdk=&lt;/span&gt;&lt;span class="s"&gt;"Microsoft.NET.Sdk.Web"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;PropertyGroup&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;TargetFramework&amp;gt;&lt;/span&gt;net10.0&lt;span class="nt"&gt;&amp;lt;/TargetFramework&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;/PropertyGroup&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/Project&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Dependency injection is built in.&lt;/strong&gt; If your .NET Framework app uses Autofac, Ninject, or Unity, the built-in DI container in .NET 10 handles most scenarios. One less dependency to maintain.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Configuration moves from &lt;code&gt;web.config&lt;/code&gt; to &lt;code&gt;appsettings.json&lt;/code&gt;.&lt;/strong&gt; The new configuration system supports environment-specific overrides, user secrets, and environment variables out of the box. No more XML transforms.&lt;/p&gt;
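&lt;p&gt;A minimal sketch of the layered setup, following the ASP.NET Core file-naming convention (the keys and values here are illustrative, not from any real project): a base &lt;code&gt;appsettings.json&lt;/code&gt;, overridden per environment by &lt;code&gt;appsettings.Production.json&lt;/code&gt;, with environment variables taking precedence over both.&lt;/p&gt;

```json
// appsettings.json — base configuration
// (the .NET JSON configuration provider permits comments)
{
  "ConnectionStrings": {
    "Default": "Host=localhost;Database=app;Username=app"
  },
  "Logging": {
    "LogLevel": { "Default": "Information" }
  }
}

// appsettings.Production.json — only the keys that differ
{
  "ConnectionStrings": {
    "Default": "Host=db.internal;Database=app;Username=app"
  },
  "Logging": {
    "LogLevel": { "Default": "Warning" }
  }
}
```

&lt;p&gt;The override file contains only the keys that change per environment, which keeps the diff between environments visible at a glance.&lt;/p&gt;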

&lt;p&gt;&lt;strong&gt;Entity Framework → EF Core.&lt;/strong&gt; The migration path is well-documented but not trivial. Lazy loading works differently, some LINQ queries need rewriting, and the migration tooling has changed. Budget a week for a medium-sized data layer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Authentication and Identity.&lt;/strong&gt; If you are using ASP.NET Identity or Windows Authentication, the migration requires attention. ASP.NET Core Identity is a different library with a different API. If you are planning to introduce Keycloak or another OpenID Connect provider, modernization is a good time to make that switch.&lt;/p&gt;

&lt;h3&gt;
  
  
  What Catches Teams Off Guard
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;System.Web&lt;/code&gt; is gone.&lt;/strong&gt; Completely. If your code references &lt;code&gt;HttpContext.Current&lt;/code&gt;, &lt;code&gt;System.Web.Mvc&lt;/code&gt;, or any &lt;code&gt;System.Web&lt;/code&gt; namespace, every one of those calls needs to be replaced. For large codebases, this is the single biggest migration task.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;SOAP/WCF services.&lt;/strong&gt; On modern .NET, WCF client calls are covered by the &lt;code&gt;System.ServiceModel&lt;/code&gt; packages, and the community-maintained &lt;code&gt;CoreWCF&lt;/code&gt; project can host a subset of WCF services. Coverage is partial, though: if your hosted services rely on anything beyond the common bindings, plan to rewrite them as REST or gRPC endpoints. There is no clean, direct migration path.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Global.asax and HTTP Modules.&lt;/strong&gt; The request pipeline is completely different. &lt;code&gt;Startup.cs&lt;/code&gt; (or the new minimal hosting model) replaces &lt;code&gt;Global.asax&lt;/code&gt;. HTTP modules become middleware. The concepts map cleanly, but the code does not copy-paste.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Third-party library compatibility.&lt;/strong&gt; Check every NuGet package you depend on. Many have .NET 10 versions, but some are abandoned or have been replaced by alternatives. The .NET Upgrade Assistant tool flags these, and it is worth running early.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Tool That Helps
&lt;/h3&gt;

&lt;p&gt;Microsoft's &lt;a href="https://dotnet.microsoft.com/en-us/platform/upgrade-assistant" rel="noopener noreferrer"&gt;.NET Upgrade Assistant&lt;/a&gt; automates the mechanical parts: project file conversion, namespace changes, known API replacements. It will not fix your architecture, but it handles the tedious work. Run it first, then fix what it cannot handle.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Install and run the Upgrade Assistant&lt;/span&gt;
dotnet tool &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt; upgrade-assistant
upgrade-assistant upgrade ./YourSolution.sln
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In a recent re-platform project, the Upgrade Assistant handled about 60% of the changes automatically. The remaining 40% was manual work — mostly replacing &lt;code&gt;System.Web&lt;/code&gt; references and updating EF queries.&lt;/p&gt;

&lt;h2&gt;
  
  
  When NOT to Modernize
&lt;/h2&gt;

&lt;p&gt;Here is the part most consultants skip: sometimes the right answer is to leave your legacy .NET application alone.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The application is stable and requirements are frozen.&lt;/strong&gt; If the system processes invoices the same way it has for five years, nobody is asking for new features, and it runs without issues — why touch it? Modernization has real risk. If the risk is not justified by a real need, the smart move is to leave it running.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The business case does not add up.&lt;/strong&gt; Modernization costs money. If the application will be decommissioned in two years because the business is switching to a SaaS product, spending six months re-platforming it is a waste. Ask the hard question: how long will this application need to exist?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;You do not have the team for it.&lt;/strong&gt; A .NET Framework-to-.NET 10 migration requires developers who understand both. If your team only knows .NET Framework and you cannot bring in experienced help, the migration will take 3x longer and introduce bugs you did not have before.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The application is a small internal tool.&lt;/strong&gt; A small WinForms app used by five people in accounting does not need to be on .NET 10. If it works and the maintenance cost is near zero, let it be.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;You are doing it for the resume.&lt;/strong&gt; This sounds absurd, but I have seen it. A team pushes for modernization because they want to work with the new tech, not because the business needs it. Modernization should be driven by business outcomes — reduced costs, faster delivery, lower risk — not by developer boredom.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to Scope and Budget a .NET Modernization Project
&lt;/h2&gt;

&lt;p&gt;If you have decided to move forward, here is how I scope these projects for Nordic SMBs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Audit the Codebase
&lt;/h3&gt;

&lt;p&gt;Before estimating anything, you need to understand what you are working with.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Lines of code (rough indicator of size)&lt;/li&gt;
&lt;li&gt;  Number of projects in the solution&lt;/li&gt;
&lt;li&gt;  .NET Framework version(s) in use&lt;/li&gt;
&lt;li&gt;  Third-party dependencies and their .NET 10 compatibility&lt;/li&gt;
&lt;li&gt;  Database access layer (raw ADO.NET, EF6, Dapper, stored procedures)&lt;/li&gt;
&lt;li&gt;  External integrations (SOAP services, file shares, third-party APIs)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Run the .NET Upgrade Assistant in analysis mode to get a compatibility report.&lt;/p&gt;
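&lt;p&gt;Two of the audit numbers are cheap to gather yourself before reaching for any tooling. A rough helper sketch (the heuristics and function name are my own, not part of the Upgrade Assistant): old-style projects declare a &lt;code&gt;TargetFrameworkVersion&lt;/code&gt; property, and every remaining &lt;code&gt;System.Web&lt;/code&gt; occurrence is future manual work.&lt;/p&gt;

```python
from pathlib import Path

def audit(root: str) -> tuple[int, int]:
    """Rough scoping heuristics for a .NET solution tree.

    Returns (projects still on .NET Framework, 'System.Web' occurrences).
    Old-style project files use the TargetFrameworkVersion property;
    SDK-style files use TargetFramework instead, so the substring check
    below only matches legacy projects.
    """
    base = Path(root)
    old_projects = sum(
        1
        for proj in base.rglob("*.csproj")
        if "TargetFrameworkVersion" in proj.read_text(errors="ignore")
    )
    web_refs = sum(
        src.read_text(errors="ignore").count("System.Web")
        for src in base.rglob("*.cs")
    )
    return old_projects, web_refs

# Example: run audit(".") from the solution root to get a first
# read on migration size before any detailed estimation.
```

&lt;p&gt;It is deliberately crude, but the two numbers it produces correlate well with migration effort and take seconds to collect.&lt;/p&gt;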

&lt;h3&gt;
  
  
  Step 2: Pick Your Strategy
&lt;/h3&gt;

&lt;p&gt;Based on the audit, choose your point on the spectrum. Most Nordic SMBs I work with land on re-platform for the core application, with selective re-architecting of the worst modules.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: Estimate in Phases
&lt;/h3&gt;

&lt;p&gt;Break the work into phases that deliver value independently. Do not plan a six-month migration with value only at the end — that is how projects get cancelled.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Phase 1 — Core API migration.&lt;/strong&gt; Get the backend running on .NET 10 with existing functionality. 4–8 weeks for a medium-sized application.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Phase 2 — Infrastructure modernization.&lt;/strong&gt; Containerize, set up CI/CD, deploy to Linux. 2–4 weeks, can overlap with Phase 1.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Phase 3 — Data layer migration.&lt;/strong&gt; EF6 to EF Core, database schema cleanup. 2–6 weeks depending on complexity.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Phase 4 — Frontend and integration cleanup.&lt;/strong&gt; Update any tightly coupled frontend code, replace deprecated integrations. 2–4 weeks.&lt;/p&gt;

&lt;h3&gt;
  
  
  Ballpark Budget
&lt;/h3&gt;

&lt;p&gt;For a typical Nordic SMB application (50–200k lines of code, 10–30 projects in the solution, one database):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Re-platform only:&lt;/strong&gt; DKK 300,000–600,000 (€40,000–€80,000)&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Re-platform with selective re-architecture:&lt;/strong&gt; DKK 500,000–1,200,000 (€67,000–€160,000)&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Full rewrite:&lt;/strong&gt; DKK 1,500,000+ (€200,000+) — and it will go over budget&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These numbers assume a senior .NET developer or a small team working focused hours. Your actual cost depends on codebase size, architectural debt, and how many &lt;code&gt;System.Web&lt;/code&gt; references your search turns up.&lt;/p&gt;

&lt;h2&gt;
  
  
  Next Steps
&lt;/h2&gt;

&lt;p&gt;If your .NET Framework application is showing the symptoms I described above, start with the audit. Install the .NET Upgrade Assistant, run the analysis, and get a clear picture of what a migration would involve. That costs you an afternoon, not a budget approval.&lt;/p&gt;

&lt;p&gt;If you want an experienced set of eyes on the analysis — someone who has done this migration multiple times for Nordic companies — &lt;a href="https://calendly.com/znowm4n" rel="noopener noreferrer"&gt;book a free 30-minute assessment call&lt;/a&gt;. I will tell you honestly whether modernization makes sense for your situation, and what it would realistically cost.&lt;/p&gt;

</description>
      <category>dotnet</category>
      <category>modernization</category>
      <category>legacy</category>
      <category>nordic</category>
    </item>
  </channel>
</rss>
