<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Maksym Trofimenko</title>
    <description>The latest articles on DEV Community by Maksym Trofimenko (@maksym_trofimenko_c203c72).</description>
    <link>https://dev.to/maksym_trofimenko_c203c72</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3846066%2Fbabddffd-9060-4615-9187-f95927df7331.png</url>
      <title>DEV Community: Maksym Trofimenko</title>
      <link>https://dev.to/maksym_trofimenko_c203c72</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/maksym_trofimenko_c203c72"/>
    <language>en</language>
    <item>
      <title>We Built a Self-Healing Registry Mirror (Because Docker Hub Rate Limits Are No Fun)</title>
      <dc:creator>Maksym Trofimenko</dc:creator>
      <pubDate>Mon, 30 Mar 2026 16:11:35 +0000</pubDate>
      <link>https://dev.to/maksym_trofimenko_c203c72/we-built-a-self-healing-registry-mirror-because-docker-hub-rate-limits-are-no-fun-77b</link>
      <guid>https://dev.to/maksym_trofimenko_c203c72/we-built-a-self-healing-registry-mirror-because-docker-hub-rate-limits-are-no-fun-77b</guid>
      <description>&lt;p&gt;If you've ever stared at &lt;code&gt;ImagePullBackOff&lt;/code&gt; in your cluster at 2 PM on a Tuesday, you know the pain. Docker Hub rate limits hit, your pods can't pull, and suddenly your perfectly fine deployment is stuck.&lt;/p&gt;

&lt;p&gt;We decided to fix this properly — a local registry mirror that automatically copies images from remote registries and patches deployments to use local copies. No more rate limits. No more surprise outages.&lt;/p&gt;

&lt;p&gt;Here's how we did it.&lt;/p&gt;

&lt;h2&gt;The Setup: Zot on GKE&lt;/h2&gt;

&lt;p&gt;We went with &lt;a href="https://zotregistry.dev/" rel="noopener noreferrer"&gt;zot&lt;/a&gt; — a lightweight, OCI-native registry that runs nicely as a single StatefulSet. Install it with Helm:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm repo add zot https://zotregistry.dev/helm-charts
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here's our &lt;code&gt;values.yaml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;persistence&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="na"&gt;pvc&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;storage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;20Gi&lt;/span&gt;
&lt;span class="na"&gt;mountConfig&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="na"&gt;configFiles&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;config.json&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;{&lt;/span&gt;
      &lt;span class="s"&gt;"storage": {&lt;/span&gt;
        &lt;span class="s"&gt;"rootDirectory": "/var/lib/registry",&lt;/span&gt;
        &lt;span class="s"&gt;"dedupe": false,&lt;/span&gt;
        &lt;span class="s"&gt;"gc": true,&lt;/span&gt;
        &lt;span class="s"&gt;"gcDelay": "1h",&lt;/span&gt;
        &lt;span class="s"&gt;"gcInterval": "6h"&lt;/span&gt;
      &lt;span class="s"&gt;},&lt;/span&gt;
      &lt;span class="s"&gt;"http": {&lt;/span&gt;
        &lt;span class="s"&gt;"address": "0.0.0.0",&lt;/span&gt;
        &lt;span class="s"&gt;"port": "5000",&lt;/span&gt;
        &lt;span class="s"&gt;"compat": ["docker2s2"]&lt;/span&gt;
      &lt;span class="s"&gt;},&lt;/span&gt;
      &lt;span class="s"&gt;"log": { "level": "info" },&lt;/span&gt;
      &lt;span class="s"&gt;"extensions": {&lt;/span&gt;
        &lt;span class="s"&gt;"search": { "enable": true },&lt;/span&gt;
        &lt;span class="s"&gt;"scrub": { "enable": true, "interval": "24h" }&lt;/span&gt;
      &lt;span class="s"&gt;}&lt;/span&gt;
    &lt;span class="s"&gt;}&lt;/span&gt;
&lt;span class="na"&gt;ingress&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;className&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
  &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;cert-manager.io/cluster-issuer&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;letsencrypt-prod&lt;/span&gt;
    &lt;span class="na"&gt;nginx.ingress.kubernetes.io/whitelist-source-range&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;10.0.0.0/8"&lt;/span&gt;
    &lt;span class="na"&gt;nginx.ingress.kubernetes.io/proxy-body-size&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;0"&lt;/span&gt;
    &lt;span class="na"&gt;nginx.ingress.kubernetes.io/proxy-read-timeout&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;600"&lt;/span&gt;
    &lt;span class="na"&gt;nginx.ingress.kubernetes.io/proxy-send-timeout&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;600"&lt;/span&gt;
  &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;registry-mirror.example.com&lt;/span&gt;
      &lt;span class="na"&gt;paths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/&lt;/span&gt;
          &lt;span class="na"&gt;pathType&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Prefix&lt;/span&gt;
  &lt;span class="na"&gt;tls&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;secretName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;registry-mirror-tls&lt;/span&gt;
      &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;registry-mirror.example.com&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm &lt;span class="nb"&gt;install &lt;/span&gt;zot zot/zot &lt;span class="nt"&gt;-n&lt;/span&gt; zot &lt;span class="nt"&gt;--create-namespace&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; values.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Two things to watch for.&lt;/p&gt;

&lt;h3&gt;The docker2s2 gotcha&lt;/h3&gt;

&lt;p&gt;Zot is OCI-first, which means it rejects Docker V2 Schema 2 manifests by default. You'll see &lt;code&gt;MANIFEST_INVALID&lt;/code&gt; or HTTP 415 when pushing images from Docker Hub.&lt;/p&gt;

&lt;p&gt;That &lt;code&gt;"compat": ["docker2s2"]&lt;/code&gt; in the http section is the fix. Without it, multi-arch images like &lt;code&gt;minio/minio:latest&lt;/code&gt; will fail every time. Took us a few rounds to find this — it's under &lt;code&gt;http&lt;/code&gt;, not &lt;code&gt;storage&lt;/code&gt;.&lt;/p&gt;
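&lt;p&gt;A quick way to sanity-check the fix is to force the Docker media type with skopeo (the mirror hostname here is our example one — swap in yours):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Force Docker V2 Schema 2 on the destination.
# Fails with MANIFEST_INVALID on a stock zot; succeeds once docker2s2 compat is on.
skopeo copy --format v2s2 \
  docker://docker.io/minio/minio:latest \
  docker://registry-mirror.example.com/minio/minio:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;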

&lt;h3&gt;Keep it internal&lt;/h3&gt;

&lt;p&gt;Since this mirror sits inside the cluster, there's no reason to expose it to the internet. The &lt;code&gt;whitelist-source-range&lt;/code&gt; annotation locks access to your pod CIDR. Adjust the range to match your cluster. No auth needed — pods pull freely, nobody else gets in.&lt;/p&gt;

&lt;h2&gt;The Automation: Tiny Systems Flow&lt;/h2&gt;

&lt;p&gt;A registry without automation is just a fancy disk. We built a &lt;a href="https://tinysystems.io" rel="noopener noreferrer"&gt;Tiny Systems&lt;/a&gt; flow that runs every 5 minutes and does this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Lists all deployments&lt;/strong&gt; (filterable by namespace and labels)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Skips&lt;/strong&gt; anything already using the local registry, or scaled to zero&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reads each deployment's own &lt;code&gt;imagePullSecrets&lt;/code&gt;&lt;/strong&gt; for source registry auth&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Copies the image&lt;/strong&gt; from the source registry to the local mirror&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Patches the deployment&lt;/strong&gt; to use the local copy&lt;/li&gt;
&lt;/ol&gt;
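
&lt;p&gt;If you did steps 4 and 5 by hand, they'd boil down to something like this (image and deployment names are illustrative; the flow does this per container, with error handling):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Step 4: copy the image into the local mirror
skopeo copy docker://ghcr.io/acme/api:v1.2.3 \
  docker://registry-mirror.example.com/ghcr.io/acme/api:v1.2.3

# Step 5: point the deployment at the local copy
kubectl -n prod set image deployment/api \
  api=registry-mirror.example.com/ghcr.io/acme/api:v1.2.3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The flow replaces exactly this kind of loop, plus the skipping, secret lookup, and retries around it.&lt;/p&gt;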

&lt;p&gt;The whole thing is 9 nodes, no code to deploy, no CronJob YAML to maintain.&lt;/p&gt;

&lt;p&gt;The flow handles edge cases you'd forget about in a script:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deployments without pull secrets (public images) skip the secret read entirely&lt;/li&gt;
&lt;li&gt;Multi-container pods get each container mirrored individually&lt;/li&gt;
&lt;li&gt;Failed copies don't block other images — errors go to a debug sink and the loop continues&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;The Flow&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Ticker (5min) -&amp;gt; Deployment List -&amp;gt; JS (plan) -&amp;gt; Split -&amp;gt; Router -&amp;gt; HAS_SECRET -&amp;gt; Secret Get -&amp;gt; Registry Copy -&amp;gt; Update
                                                                 -&amp;gt; NO_SECRET  -&amp;gt; Registry Copy -&amp;gt; Update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Router splits based on whether the deployment has &lt;code&gt;imagePullSecrets&lt;/code&gt;. Private images (ghcr.io, etc.) go through Secret Get first to grab credentials. Public images (Docker Hub) go straight to copy.&lt;/p&gt;
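
&lt;p&gt;The Secret Get step reads the same credentials the kubelet would use. Done manually (secret name illustrative), that's:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Decode the dockerconfigjson from a deployment's pull secret
kubectl -n prod get secret ghcr-pull-secret \
  -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;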

&lt;h2&gt;Results&lt;/h2&gt;

&lt;p&gt;After starting the ticker, every deployment in our namespace got mirrored within a couple of minutes. Images from Docker Hub, ghcr.io, quay.io — all living locally now.&lt;/p&gt;

&lt;p&gt;Next time Docker Hub has a bad day, our cluster won't even notice.&lt;/p&gt;
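
&lt;p&gt;You can check what actually landed in the mirror through the standard OCI Distribution API (again, swap in your hostname):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# List mirrored repositories
curl -s https://registry-mirror.example.com/v2/_catalog

# List tags for one of them
curl -s https://registry-mirror.example.com/v2/minio/minio/tags/list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;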

&lt;h2&gt;Try It&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://tinysystems.io/solutions/image-mirror-0" rel="noopener noreferrer"&gt;Image Mirror solution&lt;/a&gt; is ready to install. Set your local registry address in the Ticker settings, click Start. That's it.&lt;/p&gt;

&lt;p&gt;You'll need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A zot registry (or any OCI registry) accessible from your cluster&lt;/li&gt;
&lt;li&gt;These modules installed in your Tiny Systems workspace:

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;common-module&lt;/code&gt; — ticker, router, array split, debug&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;kubernetes-module&lt;/code&gt; — deployment list/update, secret get&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;js-module&lt;/code&gt; — image planning logic&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;distribution-module&lt;/code&gt; — registry copy&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;Happy mirroring.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>docker</category>
      <category>automation</category>
    </item>
    <item>
      <title>Cloud-Native Workflow Engine for Kubernetes - What Would You Use It For?</title>
      <dc:creator>Maksym Trofimenko</dc:creator>
      <pubDate>Fri, 27 Mar 2026 11:14:01 +0000</pubDate>
      <link>https://dev.to/maksym_trofimenko_c203c72/i-built-a-cloud-native-workflow-engine-for-kubernetes-what-would-you-use-it-for-3m89</link>
      <guid>https://dev.to/maksym_trofimenko_c203c72/i-built-a-cloud-native-workflow-engine-for-kubernetes-what-would-you-use-it-for-3m89</guid>
      <description>&lt;p&gt;Hey everyone, I've been building an open-source workflow engine that runs natively on Kubernetes called Tiny Systems (still working on the name). Not "deployed on Kubernetes" - actually native. Flows are CRDs. Nodes are CRDs. So entire state lives in etcd. No external database.&lt;/p&gt;

&lt;p&gt;Think n8n or Node-RED, but designed for people who already run Kubernetes and want automation that feels like part of their cluster.&lt;/p&gt;

&lt;h3&gt;Problems I'm solving right now&lt;/h3&gt;

&lt;p&gt;These are ready-to-use solutions you can clone and deploy:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Cluster Cost Saver — scales down your deployments every evening, brings them back every morning. Dead simple, saves real money. No Lambda functions, no cronjobs, no external schedulers - just a flow inside your cluster.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Pod Failure Alerts — watches pods in real time, catches CrashLoopBackOff, OOMKilled, and image pull errors, and sends a Slack message before your users start complaining. Yes, Alertmanager exists, but I get my Slack message a second after the crash, not the 1-5 minutes the Prometheus chain takes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;GitHub Deploy Bot — receives a webhook after CI builds an image, updates the Kubernetes deployment, and posts the result to Slack. A lightweight GitOps pipeline for simpler setups, without the ArgoCD/FluxCD overhead. I'm using it for really tiny (hence the name!) clusters.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Firestore to ConfigMap Sync - listens to a Firestore collection and syncs changes to a Kubernetes ConfigMap in real time. Useful for feature flags. I got the idea from app management platforms and thought: why not use the same pattern for K8s?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Service Status Page — a self-hosted status page that pings your endpoints and serves a page. No Statuspage.io subscription, no cron jobs, just a flow.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;What the workflow engine does today&lt;/h3&gt;

&lt;p&gt;You build flows visually - drag nodes, connect them, configure, deploy. Each component module runs as its own pod. Nodes in the same module talk over Go channels; across modules, over gRPC. Everything stays inside your cluster.&lt;/p&gt;
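
&lt;p&gt;Since flows are plain CRDs, they behave like any other Kubernetes object - you can kubectl them, back them up with etcd, and diff them in Git. A purely illustrative sketch (the real group, kind, and field names in Tiny Systems differ):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;# Hypothetical shape, for intuition only - not the actual schema
apiVersion: tinysystems.example/v1alpha1
kind: Flow
metadata:
  name: pod-failure-alerts
spec:
  nodes:
    - name: pod-watcher
    - name: slack-sender
  edges:
    - from: pod-watcher
      to: slack-sender
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;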

&lt;p&gt;There's also an AI assistant that can build flows from a prompt, but that's a story for another post.&lt;/p&gt;

&lt;h3&gt;Where I'm headed next&lt;/h3&gt;

&lt;p&gt;These are the solutions I'm planning to build:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CronJob failure alerter — Kubernetes CronJobs fail silently. This would watch them and alert on failure via Slack or PagerDuty.&lt;/li&gt;
&lt;li&gt;TLS certificate expiry watcher — scan ingresses across namespaces and alert before certs expire (sometimes I miss the expiry emails from ACME)&lt;/li&gt;
&lt;li&gt;Slack command bot — slash commands that trigger Kubernetes actions: restart a pod, get logs, scale a deployment. ChatOps without writing a bot from scratch.&lt;/li&gt;
&lt;li&gt;Webhook relay and transformer — receive webhooks from external services, reshape the payload, forward to internal services. A lightweight alternative to writing one-off HTTP handlers.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Why I'm writing this&lt;/h3&gt;

&lt;p&gt;I've been building in isolation for too long. The engine works. The solutions work. But I keep guessing what people actually need instead of asking.&lt;/p&gt;

&lt;p&gt;So - if you run Kubernetes at work:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Which of the solutions above would you actually use?&lt;/li&gt;
&lt;li&gt;What's a recurring ops task you've been meaning to automate but haven't?&lt;/li&gt;
&lt;li&gt;What's keeping you on n8n / Airflow / bash scripts?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Thanks!&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>containers</category>
      <category>nocode</category>
      <category>automation</category>
    </item>
  </channel>
</rss>
